Both the local cache and memory buffer are overflowing in Store & Forward. See screenshots below.
Increasing the memory size just delays the inevitable.
Need help ASAP
Ignition Version: 8.1.0 (b2020110211)
You are asking your database to do more than it can. Or it is possibly mis-optimized (missing indices, etc).
General advice (in the absence of any of your installation details):
1. Don’t put Ignition and your DB on the same machine. Fine for testing, not fine for production. Separate VMs or containers are fine, as long as the hypervisor/host is not overcommitted.
2. Tune your database for its hardware. Many databases come “out of the box” with settings for wimpy hardware, especially with regard to memory buffers.
3. Tune your database for your collected data. Many databases use statistics from your actual data to generate efficient query execution plans. These statistics often need to be refreshed manually or by a timed script.
4. Look for slow queries and use your database’s “explain” functionality to find missing indices or inefficient joins. You may need to turn on query logging in your DB to obtain the query text to explore. (A sketch follows this list.)
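To make point 4 concrete, here is a minimal sketch, assuming MySQL and the stock historian partition schema; the index name and literal values are placeholders, so verify everything against your own database:

```sql
-- Capture slow query text via the slow-query log (MySQL):
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 2;  -- log anything slower than 2 seconds

-- Ask the optimizer how it executes a captured query:
EXPLAIN
SELECT tagid, floatvalue, t_stamp
FROM sqlt_data_1_2021_01
WHERE tagid = 42
  AND t_stamp BETWEEN 1610000000000 AND 1610086400000;

-- If the plan shows a full table scan and no suitable index exists,
-- a composite index on the filter columns usually helps:
CREATE INDEX tagid_tstamp_ndx
    ON sqlt_data_1_2021_01 (tagid, t_stamp);
```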
Here’s our guess:
Data is arriving faster than the database can write it to the table.
I’m wondering if it has to do with the default history table being too large. We are 18 days into this month, and we already have 77 million rows in the table sqlt_data_1_2021_01. Could it be that querying and updating this table is taking too long?
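A quick way to gauge a partition’s size and growth, assuming MySQL (information_schema row counts are estimates):

```sql
-- Exact row count (can itself be slow on a table this large):
SELECT COUNT(*) FROM sqlt_data_1_2021_01;

-- Approximate row counts and on-disk size from the catalog:
SELECT table_name,
       table_rows,
       ROUND((data_length + index_length) / 1024 / 1024) AS size_mb
FROM information_schema.tables
WHERE table_schema = DATABASE()
  AND table_name LIKE 'sqlt_data_1_%';
```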
- When making a tag historical, the only option is which database is used. My understanding is that the data goes to the default sqlt_data_xxx table. Is there a way to designate the table the data will be stored in?
- If we’re limited to the generic table, what’s the best way to break it up?
- Is there a way to optimize this table?
Thank you.
Almost certainly. Especially if you're seeing the issue escalate as we move further into the month.
On the gateway, for each historical provider you can configure the data partitioning. Just a caution that being too granular here can be as much of a headache as being too broad, albeit in a different way.
If you're going to use the historian, then you can't really do much. If you can potentially move over to transaction groups, then you have more control over how and where things are stored.
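For illustration, a hypothetical table a standard transaction group might write to: one row per execution and one column per tag, in a table you name yourself (all names below are made up):

```sql
-- Hypothetical target table for a standard transaction group.
CREATE TABLE line1_history (
    line1_history_ndx INT AUTO_INCREMENT PRIMARY KEY,
    t_stamp           DATETIME NOT NULL,
    motor_speed       FLOAT,
    oven_temp         FLOAT,
    good_parts        INT
);

-- Index the timestamp, since trend queries filter on it:
CREATE INDEX line1_tstamp_ndx ON line1_history (t_stamp);
```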
There are ways to optimize the queries; it depends a lot on the database you're using. @pturmel's suggestions 2, 3, and 4 are probably the best places to start.
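For example, point 3 (refreshing optimizer statistics) is often a single statement, assuming MySQL; PostgreSQL uses ANALYZE or VACUUM ANALYZE instead:

```sql
-- Rebuild index/optimizer statistics for the current partition;
-- run after large loads or on a schedule:
ANALYZE TABLE sqlt_data_1_2021_01;
```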
You may also find the discussion here interesting; it isn't about exactly the same thing, but it definitely has some insight into your issue.
Thank you, lrose, for the reply. This helps confirm what we’re thinking. We were already following the four points of general advice that pturmel suggested above. In addition to that, we are:
- Segregating data into four separate databases while still using the generic historian tables. These databases will be based on the departments in our organization that use the historian the most. This is tedious, since each tag’s history will be split between the old and new databases: I’ll need to duplicate charts, one to show history from yesterday and back, and one to show history from today forward (one workaround is sketched after this list).
- Using transaction groups to specify tables for specific data.
- Finally, we’re looking at ways to optimize the tables.
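One idea we’re exploring to stitch the split history back together for a single chart, assuming the old and new schemas end up on the same MySQL server (names and ids below are placeholders; the historian assigns tag ids per database, so the same tag can have a different id in each schema):

```sql
SELECT floatvalue, t_stamp
FROM history_old.sqlt_data_1_2021_01
WHERE tagid = 42                 -- this tag's id in the old schema
  AND t_stamp >= 1610000000000
UNION ALL
SELECT floatvalue, t_stamp
FROM history_new.sqlt_data_1_2021_01
WHERE tagid = 17                 -- the same tag's id in the new schema
  AND t_stamp >= 1610000000000
ORDER BY t_stamp;
```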
DN