Historical and recent data trends - storage and efficiency

First post to the forum. Hello everyone. Thanks in advance for any advice, and sorry if this is a noob question.

This pertains to optimizing efficiency for storage and review of a number of tags. Here is the situation:
Let’s say I have 1000 tags whose data I want to store at 1- or 10-minute (or longer) intervals so they can be reviewed long term. For those same 1000 tags, I would also like operators to be able to view a real-time trend (or maybe just a sparkline) with 1-second resolution. However, it would only need to go back a short period of time: 5 or 10 minutes for some tags, an hour at most for others.

So, what is the most efficient way to set this up?

Currently I have a transaction group at a 1-second interval that deletes records after x time, plus a separate historical tag with a longer scan class. This seems to be working well so far.
I worry that constantly deleting the transaction group history as records age past x time takes up unnecessary resources. Perhaps an automatic periodic delete would be better? Or perhaps something completely different?

The other thought I had was to just go ahead and save the high-resolution data, even though I don’t need it. However, it seems pointless to store many times as much data in SQL long term and take up the space unnecessarily.

As my system grows, I wonder what is best and most efficient. I foresee many more items like this. Suggestions?


One option would be to leave the transaction group running forever, then have a Gateway timer script that runs periodically to delete data older than a certain time range.
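A minimal sketch of that pruning logic, shown against an in-memory SQLite database so it runs standalone (the table and column names `realtime_data`, `t_stamp`, and `tag_value` are hypothetical placeholders, not the poster's actual schema). In Ignition itself, the same DELETE would typically be issued from a Gateway timer script via `system.db.runPrepUpdate` against the project's database connection:

```python
import sqlite3
import time

RETENTION_SECONDS = 3600  # keep the last hour of 1-second data

def prune_old_rows(conn, table, retention_seconds):
    """Delete rows whose timestamp is older than the retention window."""
    cutoff = time.time() - retention_seconds
    cur = conn.execute(
        "DELETE FROM %s WHERE t_stamp < ?" % table, (cutoff,)
    )
    conn.commit()
    return cur.rowcount  # number of rows pruned

# Demonstration: one row inside the retention window, one well outside it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE realtime_data (t_stamp REAL, tag_value REAL)")
now = time.time()
conn.executemany(
    "INSERT INTO realtime_data VALUES (?, ?)",
    [(now - 10, 1.0), (now - 7200, 2.0)],
)
pruned = prune_old_rows(conn, "realtime_data", RETENTION_SECONDS)
print(pruned)  # the single stale row is removed
```

Running one bulk DELETE on a timer (say, every few minutes) is generally cheaper than deleting row-by-row on every insert, and an index on the timestamp column keeps the WHERE clause fast as the table grows.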

Thanks for the prompt response PGriffith. That makes sense and from what I can tell so far, that seems to be the most efficient way to accomplish my goals.

A follow up question. So let’s say I use this setup, wouldn’t the PLC and server be ‘handling’ the same tag twice?

What I mean is: let’s say I have a transaction group with 10,000 tags set up for a 1-second update, pulling from the PLC and sending data to SQL for my ‘short term’ realtime trends. I would also need the same 10,000 tags defined on a 1-second standard scan class to update the actual values for screens, scripts, and such.
Correct me if I’m wrong, but from what I can tell, the system does not take advantage of the fact that they are both looking at the same thing and at the same rate from the PLC. Essentially it would be as if the server and PLC were handling 20,000 tags, even though they are the same 10,000 tags twice. Is this correct? If it is, is there a way to avoid that and still accomplish my goals?
Trying to get my head around how to proceed. I’m just really getting rolling with developing my first production project, and I want to make good planning decisions up front.

For the above setup, I understand I would additionally configure one or more longer-period historical scan classes to keep my long-term history at the lower resolution.

Thanks again,

Ignition (unless you specifically work around it) will only communicate to your PLCs from one source - the Ignition OPC-UA server. Any tag reads/subscriptions/writes from Ignition clients, or transaction groups, or any other source in Ignition will just be transactions against the OPC-UA server, which will be as efficient as possible and avoid ‘doubling’ information requested from the PLC.

Excellent! That helps immensely.
Just for clarity, the attached screenshot is why I thought otherwise. What you are saying is that this is not truly indicative of the OPC-UA server ‘doubling up’?