Data loss and the Store and Forward system

Is it true that the Store and Forward system does not guarantee that no data will be lost?

The Ignition online help says:
By setting the write size and write time of both the local cache and forwarder to low values, the data will spend less time in the memory buffer. While the memory buffer can be set to 0 in order to bypass it completely, this is not usually recommended, as the buffer is used to create a loose coupling between the history system and other parts of Ignition that report history. This disconnect improves performance and protects against temporary system slowdowns. In fact, it is recommended that for reliable logging this value be set to a high value, in order to allow the maximum possible amount of data to enter the system in the case of a storage slowdown.

I reach two conclusions:

  1. There is a chance that data will be lost: it never makes it to the database or to quarantine.
  2. The entity that deposits data in the store has no way of knowing that the system is having trouble, and therefore will continue depositing, and losing, data.



Yes, that is correct. There is a limit to how much can be cached, and when that limit is reached, new values will be dropped. Most sources of data don’t watch for this, because there’s not much they would do differently if they knew data was being dropped. From a module author’s point of view, though, they could theoretically monitor the state of the pipeline by looking at the DataSinkInformation returned by HistoryManager#getStatusInfo, most notably isAvailable(). The information is a list because it represents the various stages of the pipeline; as each stage fills up, it reports false for isAvailable().
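A rough sketch of what such a check could look like. Note the DataSinkInformation interface below is a hand-rolled stand-in with only the isAvailable() behavior described above (the real one comes from HistoryManager#getStatusInfo in the gateway SDK and carries more detail); the getName() accessor and the stage names are assumptions for illustration:

```java
import java.util.List;

public class PipelineCheck {

    // Stand-in for the SDK's DataSinkInformation; only the
    // isAvailable() semantics described in the answer are modeled.
    interface DataSinkInformation {
        String getName();       // assumed accessor: stage name
        boolean isAvailable();  // false once this stage's buffer is full
    }

    // Walk the pipeline stages in order and report the first one
    // that can no longer accept data, or null if all have room.
    static String firstUnavailableStage(List<DataSinkInformation> stages) {
        for (DataSinkInformation stage : stages) {
            if (!stage.isAvailable()) {
                return stage.getName();
            }
        }
        return null;
    }

    // Helper to build a fake stage for the demonstration.
    static DataSinkInformation stage(String name, boolean available) {
        return new DataSinkInformation() {
            public String getName() { return name; }
            public boolean isAvailable() { return available; }
        };
    }

    public static void main(String[] args) {
        // Hypothetical status: the memory buffer still has room,
        // but downstream stages have filled up.
        List<DataSinkInformation> status = List.of(
                stage("memory buffer", true),
                stage("disk cache", false),
                stage("database", false));
        System.out.println(firstUnavailableStage(status)); // prints "disk cache"
    }
}
```

A module could poll this periodically and, for example, stop generating non-critical history once any stage reports unavailable, rather than silently losing data.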