For the past two weeks, the CPU usage of our Linux MySQL database server has been constantly at 99% or above (2 × 2.1 GHz, 8 GB RAM). We have made no changes to the system, which makes this problem all the more bizarre. Only the Ignition database runs on this server; the database and the gateway are on separate servers. The number of connections to the database has ranged from 3 to 6.
Yes, I restarted both of them. I'll assign 2 more vCPU cores to the server and then look at the performance again; maybe that will be enough for it to work normally. I just increased the InnoDB buffer pool size from 128 MB to 1 GB, and now the buffer usage is between 85-90% instead of constantly 100%. When I increased it further to 2 GB, the buffer usage went down to about 30%.
The query that starts every minute often runs longer than one minute, but it is not that complex: it aggregates the data for the last minute and writes it into a table, for 6 tags each. The query runs against the standard sqlt_data table, so it has no index it can use. Can that be the problem?
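For reference, I could check what indexes the partition table actually has from a gateway script, something like this (the partition table name and datasource name are just placeholders for my setup):

```python
# Quick check of what indexes one historian partition table actually has.
# "sqlt_data_1_2024_01" and the datasource name "MySQL" are placeholders.
rows = system.db.runQuery("SHOW INDEX FROM sqlt_data_1_2024_01", "MySQL")
for row in rows:
    print row["Key_name"], row["Column_name"]
```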
Oh, yes. Consider not accessing the historian's tables yourself. Ignition provides scripting functions to perform queries for you. Post-process the results in your script if you need additional functionality.
Or don't use the historian (for these tags). Consider using transaction groups (the SQL Bridge module) or their equivalent in scripts to record your data to a traditional "wide" table in your database. Then your summary queries will be dramatically simpler.
I am using a Scheduled Gateway Event Script and I am reading the data via system.tag.queryTagHistory(...).
I assigned 8 cores (up from 2) and now the CPU load is a constant 50-60%, which is OK. But I am wondering why the script still runs for about 2 minutes. RAM and CPU are not fully loaded, but the query still takes that long.
Is there a way to dedicate more resources specifically to these queries?
I pass the tag paths to the queryTagHistory function and query the "Maximum" for the last minute. I also divide the count I get by the production time within that minute. In the end I get about 8 values and run a named query to write them into another table. Writing the data is pretty fast, but reading the internal database via queryTagHistory is pretty slow; maybe it is too much workload? It reads a product count, divides it by production time, adds the count to specific variables, and gets the bad count, for 11 tags every minute. Or should this be doable? I need per-minute values to build a count graph in a BI system.
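Roughly, the script looks like this; the tag paths and the named query are simplified placeholders, not the real ones:

```python
# Simplified sketch of the scheduled gateway script described above.
end = system.date.now()
start = system.date.addMinutes(end, -1)

# Placeholder tag paths standing in for the real 11 tags.
paths = ["[default]Line1/ProductCount", "[default]Line1/BadCount"]

data = system.tag.queryTagHistory(
    paths=paths,
    startDate=start,
    endDate=end,
    returnSize=1,
    aggregationMode="Maximum",
    returnFormat="Wide"
)

# Column 0 is the timestamp; tag columns follow in path order.
goodCount = data.getValueAt(0, 1)
badCount = data.getValueAt(0, 2)

# Hypothetical named query that inserts one per-minute row into the BI table.
# Depending on scope and version, runNamedQuery may need the project name as the first argument.
system.db.runNamedQuery("BI/InsertMinuteCounts", {
    "t_stamp": end,
    "goodCount": goodCount,
    "badCount": badCount
})
```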
What size are your partitions set for?
How many tags total?
We will need more information to help, but I bet you have the default 1-month partitions with millions of records in each. I have found a 3-day partition size to be the sweet spot for our setups.
If they are getting recorded on a schedule unconditionally, the tag historian is not an efficient storage method. Consider using transaction groups (SQL Bridge module) to store many tags together in traditional "wide" tables. This can also be scripted on scheduled events or timer events if you do not wish to purchase that module.
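A minimal sketch of that scripted approach, run from a timer or scheduled event; the table, columns, tag paths, and datasource name are all made up for the example:

```python
# Read a batch of tags and insert one "wide" row per execution,
# the scripted equivalent of a transaction group.
paths = [
    "[default]Line1/ProductCount",
    "[default]Line1/BadCount",
    "[default]Line1/RunTimeSeconds"
]
values = [qv.value for qv in system.tag.readBlocking(paths)]

system.db.runPrepUpdate(
    "INSERT INTO line1_minute_data (t_stamp, product_count, bad_count, runtime_seconds) "
    "VALUES (NOW(), ?, ?, ?)",
    values,
    "MySQL"  # placeholder datasource name
)
```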
I found that many expression tags were created with the historical deadband mode set to "off". These tags wrote at a fixed rate of 1000 ms. I now have to look up how to clean the database of all those redundant entries. I also changed a lot of deadbands and min/max time settings for historization. The amount of data stored per day went down by 60%.
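To see which tags are the worst offenders before cleaning up, I am planning to run something like this against one partition (the partition table name and datasource name are placeholders):

```python
# Count stored samples per tag path in one partition table,
# joining against the historian's sqlth_te tag table.
query = """
    SELECT te.tagpath, COUNT(*) AS samples
    FROM sqlt_data_1_2024_01 d
    JOIN sqlth_te te ON te.id = d.tagid
    GROUP BY te.tagpath
    ORDER BY samples DESC
    LIMIT 20
"""
for row in system.db.runQuery(query, "MySQL"):
    print row["tagpath"], row["samples"]
```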
I'll also shorten the partition size from one month to 1-2 weeks. I took over this system and had wondered why the reports took so long as well. No wonder, with 130+ million entries per month from only 9k tags.