Ignition Edge - Buffer limitations

Hi,

The current Ignition Edge can buffer data for up to 35 days or 10 million data points. I'm seeking clarification on the definition of a "data point".

Specifically, if we were to use Ignition Edge as a buffer during network interruptions, and we have 500 tags updating every second, this would generate 500 data points per second. Based on this rate, the 10 million data point buffer would be exhausted in approximately 5.5 hours.
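As a quick sanity check, the arithmetic behind that estimate (pure back-of-the-envelope math, nothing Ignition-specific):

```python
# Back-of-the-envelope check of how long the Edge buffer lasts.
# The figures mirror the scenario in the post above; they are not
# read from any Ignition API.
BUFFER_ROWS = 10_000_000   # Edge historian row limit
TAGS = 500                 # tags being historized
RATE_HZ = 1                # updates per tag per second

rows_per_second = TAGS * RATE_HZ              # 500 rows/s
seconds_to_full = BUFFER_ROWS / rows_per_second
hours_to_full = seconds_to_full / 3600

print(f"{rows_per_second} rows/s -> buffer full in {hours_to_full:.1f} hours")
# -> 500 rows/s -> buffer full in 5.6 hours
```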

Can you confirm whether this interpretation is correct, or whether there are any optimization strategies to extend the effective buffer time in this scenario?

Thank you,
André

Your assessment is correct, in that the Edge historian can store up to 10M rows of data (in your case, one row per tag per second). If you're not careful (and are syncing data to a full gateway w/ a historian), this amount of data can result in >100GB of disk space required each month.
Spend time setting appropriate storage settings for each tag to minimize the data you're storing: use an appropriate Tag History Deadband Style, and store only significant value changes.


When you use the "Analog" deadband style on your per-tag history settings, with a non-zero deadband, the Ignition historian will check new values against the tag's trend line, not against the most recently stored value, and will suppress extra stored points. The short-term trend lines are recomputed on query to reconstruct values with interpolation. (Do read the docs on how sample recording is delayed in this mode.)

When you use "Discrete" deadband style, new values are checked against just the prior data point.

When the actual data source behaves like a typical analog sensor, with a little noise (within the deadband), the analog deadband style is a big space saver.

The default deadband style is "Auto", which means "Analog" for floating-point tags and "Discrete" for all others.
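For intuition, here is a rough sketch of how a Discrete-style deadband suppresses noisy samples. This is plain Python, not Ignition's actual implementation, and the Analog style's trend-line check is more involved than this:

```python
import random

def discrete_deadband(samples, deadband):
    """Keep a sample only if it differs from the last *stored* value
    by more than the deadband -- a sketch of the 'Discrete' style."""
    stored = [samples[0]]                 # the first value is always stored
    for v in samples[1:]:
        if abs(v - stored[-1]) > deadband:
            stored.append(v)
    return stored

# Simulated analog reading: a flat signal with +/-0.05 of sensor noise.
random.seed(0)
raw = [20.0 + random.uniform(-0.05, 0.05) for _ in range(1000)]

# With a deadband wider than the noise band, almost nothing is stored.
kept = discrete_deadband(raw, deadband=0.2)
print(f"stored {len(kept)} of {len(raw)} samples")
# -> stored 1 of 1000 samples
```

The same idea is why a noisy-but-flat sensor consumes almost no buffer space once a sensible deadband is configured.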

Further note: the 10 million row limit does not trigger pruning. Buffering simply stops. Pruning only happens at the 35-day boundary. You must make sure your application will not produce 10 million rows in any 35-day period, or you will lose data. If you cannot achieve this, you cannot use Ignition Edge.
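Putting numbers on that constraint (simple arithmetic from the limits stated above, not an Ignition API call):

```python
# Maximum steady ingress rate that stays under the 10M-row limit
# within a 35-day pruning window (figures from the post above).
BUFFER_ROWS = 10_000_000
WINDOW_SECONDS = 35 * 24 * 3600      # 35-day pruning boundary

max_rows_per_second = BUFFER_ROWS / WINDOW_SECONDS
print(f"max sustainable ingress: {max_rows_per_second:.2f} rows/s")
# -> max sustainable ingress: 3.31 rows/s
```

So at a steady 500 rows/s, the 10M-row limit is the binding constraint by a wide margin, not the 35-day window.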

Or, utilize Edge Sync services to a full gateway, where all data will be forwarded based on sync period defined in the Edge gateway.

I don't think Sync Services triggers any pruning on the Edge side, so I wouldn't expect this to help. You have experience otherwise?

Correct. But sync services are agnostic of pruning. All historical data still in the Edge gateway (that has not already been received by the full gateway, as confirmed by an acknowledgement sent back from the full gateway) is sent while comms are up (it's reasonable to expect a <30 sec delay for all real-time data from Edge when trending data from a full gateway).
As long as the sync settings (payload size and period) exceed the data ingress rate, no data will be lost, even though it is no longer directly accessible on Edge after the 10M-row limit is hit or pruning passes.
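That "sync settings exceed ingress" condition can be checked with trivial arithmetic. The parameter names below stand in for Edge's max batch size and sync period settings and are illustrative, not Ignition's own identifiers:

```python
# Sketch: does a given sync configuration keep up with data ingress?
# payload_rows / period_s approximates sustained sync throughput;
# real throughput also depends on network and gateway load.
def sync_keeps_up(ingress_rows_per_s, payload_rows, period_s):
    sync_rows_per_s = payload_rows / period_s
    return sync_rows_per_s > ingress_rows_per_s

# 500 rows/s ingress; syncing 15,000-row batches every 10 s = 1,500 rows/s
print(sync_keeps_up(500, payload_rows=15_000, period_s=10))   # -> True
print(sync_keeps_up(500, payload_rows=1_000, period_s=10))    # -> False
```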

I'm referring to incoming data points being dropped up front because the row limit has been reached. Those data points will never be seen by Sync Services.

Ergo, let me repeat:

I'd like to be shown to be wrong.

Ahhh, I understand your comment now.
Data is stored FIFO (the oldest data is dropped to store the newest).
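A scaled-down model of that FIFO eviction, using a bounded deque in place of the 10M-row historian (an analogy, not how Edge stores rows internally):

```python
from collections import deque

# FIFO eviction sketch: a bounded deque drops the oldest entry when a
# new one arrives at capacity. The 10M-row limit is scaled down to 5.
buffer = deque(maxlen=5)
for row in range(8):          # insert rows 0..7
    buffer.append(row)

print(list(buffer))           # -> [3, 4, 5, 6, 7]  (rows 0-2 evicted)
```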

Here is a trend from a client who opted for high-res data at an Edge Panel site: a screenshot trending 3 weeks of data from the Edge project (which is pushed via EAM from the full GW):

Note that not a single tag has direct access to history older than approximately 1 week at this site.

And another with the same trend from the full gateway (running the same project, except all queries are executed against the SQL DB when viewed on Full GW):

These use the PowerChart (in all its glory) with aggregation (500 pt w/ MinMax), so disregard any small differences you might spot between the trends. I assure you that all points on Edge exist in the remote DB.

Here are two more, trending only last 10 hrs of data:
Edge:

Full:

Ah, good to see. Has the FIFO behavior changed? There were posts some weeks ago suggesting it wasn't FIFO.

Surprising. I'm unsure of other behaviors. LIFO would be a dealbreaker for me.
I believe that site is still on 8.1.28 (deployed mid 2023).

Thank you for your feedback.
We can see a similar behaviour to the one Chris described: 10 million rows are visualised on an Edge project with a time frame of 6-7 hours, but you can see the full timeframe on a standard gateway without any data gaps. So there must be FIFO eviction behind the visualisation on the Edge.

But does this also count for the sync service in case of a disconnect to a standard gateway?

The doc says:

Sync is unique in that if the connection to the remote Ignition Gateway is severed, the Edge Gateway will use its 35 days of storage as a Store and Forward buffer, allowing you to store up to 35 days of data before data is lost. Once the connection is restored, the Edge Gateway will send data over based on the max batch size and data frequency until all previous data is sent.

It does not mention any row limitations.

This might help:

Note that Edge Sync Services are drawing from the Edge historian, so the row limit does apply to it.


Yes. As @pturmel mentioned, Edge sync draws from the Edge historian, starting with oldest record still in the DB that has not already been sync'd.
In your case (10M / 7hrs), if your connection to Full goes down for 8hrs (say, starting at noon), expect to lose 1hr of data (from noon-1p), as all data older than 1p will be evicted from Edge by the time it is able to S&F to Full.