Hello,
I need to historize a large number of tags on each server acting as a backend in a scale-out architecture with one frontend server and two backend servers. Each backend server historizes 6,000 tags per second. When the database connection is lost, I've found that after 13-14 hours both the disk and memory buffers fill up and data loss begins. I'm following Ignition's recommendation to not configure the maximum local cache size above 50,000 records. My client doesn't want to use the option to archive data for later loading, because it requires manual actions. The backend servers historize to a single external server dedicated to the database. What options does Ignition offer when the store-and-forward buffer fills up?
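For anyone sanity-checking the numbers above, here is a back-of-envelope sketch. The cache size and tag rate come from the post; the "tag values per record" figure is a hypothetical assumption, since Ignition batches historian data and one store-and-forward record can carry many tag values:

```python
# Back-of-envelope: how long a store-and-forward local cache lasts during a DB outage.
# cache_records and tags_per_second come from the post above;
# tag_values_per_record is an ASSUMED batching factor, not a documented value.

def buffer_runtime_hours(cache_records, tags_per_second, tag_values_per_record):
    """Hours until the local cache fills, assuming a constant ingest rate."""
    records_per_second = tags_per_second / tag_values_per_record
    return cache_records / records_per_second / 3600

# With the recommended 50,000-record ceiling at 6,000 tags/s, the observed
# 13-14 hour runway implies each record carries on the order of 6,000 values
# (again, an inferred figure, not a spec):
print(round(buffer_runtime_hours(50_000, 6_000, 6_000), 1))  # ≈ 13.9 hours
```

The same arithmetic shows why raising the cache is only a linear fix: doubling the cache roughly doubles the runway, it doesn't remove the cliff.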
You can increase these values to give yourself more offline capacity before the buffer fills up. I'm not sure what else you're hoping it will do for you.
When you say "external", how external is it? Is it on the same fast local Ethernet network, i.e. just running on another server on the same network? Or is the DB in the cloud or behind some other higher-latency connection?
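If you want to put a number on that, a quick way is to time a plain TCP connect to the DB server from the backend gateway's host. This is a minimal sketch using only the standard library; the host and port are placeholders (5432 assumes PostgreSQL, adjust for your database):

```python
# Rough measure of network latency to the DB server, to gauge how "external" it is.
# Host/port below are HYPOTHETICAL examples; substitute your own DB server.
import socket
import time

def tcp_connect_ms(host, port, samples=5):
    """Average TCP connect time in milliseconds over several attempts."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        # A successful three-way handshake is a decent proxy for round-trip latency.
        with socket.create_connection((host, port), timeout=5):
            pass
        total += time.perf_counter() - start
    return total / samples * 1000

# Example (hypothetical host):
# print(tcp_connect_ms("db.example.local", 5432))
```

Single-digit milliseconds suggests a local LAN; tens to hundreds of milliseconds suggests a WAN or cloud link, which changes how hard store-and-forward has to work to catch up.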
I can personally attest to cache corruption when you increase it far above the recommended max.
I observed it during a planned outage (DB maintenance) that did not go as planned. Just before the local cache filled up, I opted to ignore the recommended limit and temporarily increase it to 2M. After all, I was about to lose the data anyway, and I didn't feel like babysitting each time the archive needed to be saved off.
After the DB came back online, the S&F engine locked up. I had to archive the store to kickstart S&F again, then tinker with a backup S&F for hours to save the data.