High CPU load on database server and high number of connections to database

Hi everyone,

For the last two weeks, the CPU usage of our Linux MySQL database server has been constantly at 99% and above (2x 2.1 GHz and 8 GB RAM). We have made no changes to the system, which makes this problem all the more bizarre. Only the Ignition database is running on this server; the database and gateway have separate servers. The number of connections to the database has ranged from 3 up to 6.

I looked into the database server status:



There are about 20-230 selects per second, but the InnoDB buffer pool is always at 100% usage. I often get a warning about a query that runs every minute:

Does this mean there is already a queue of over 3300 queries waiting to be executed, or how should this message be interpreted?

Only thing I found suspicious so far: "innodb_buffer_pool_size" is only 128 MB, and "Innodb_buffer_pool_pages_free" is constantly 0.

Any tips on how to solve this problem?

Sounds like you have a scheduled script that is kicking off a large query that doesn't finish before the next scheduled time.

I suspect you've overloaded your database, while still hammering it so it cannot recover.

Have you restarted both gateway and database?

(You likely need a more powerful database long-term.)

Yes, I restarted both of them. I'll assign 2 more vCPU cores to the server and then look at the performance; maybe that is enough for it to work normally. I just increased the InnoDB buffer pool size from 128 MB to 1 GB, and now the buffer usage is between 85-90% instead of constantly 100%. Then I increased it again to 2 GB, and the buffer usage went down to 30%.
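For reference, on MySQL 5.7+ the buffer pool can be resized online without a restart; a sketch (the 2 GB value mirrors the test above, and the change should also go into my.cnf so it survives restarts):

```sql
-- Resize the InnoDB buffer pool online (MySQL 5.7+)
SET GLOBAL innodb_buffer_pool_size = 2147483648;  -- 2 GB in bytes

-- And persist it in my.cnf so a restart keeps the setting:
-- [mysqld]
-- innodb_buffer_pool_size = 2G
```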

The query that starts every minute often runs longer than one minute, but it is not that complex: it aggregates the data for the last minute and writes it into a table, for 6 tags each. The query runs against the standard sqlt_data table, so it has no index. Can that be the problem?

Oh, yes. Consider not accessing the historian's tables yourself. Ignition provides scripting functions to perform queries for you. Post-process the results in your script if you need additional functionality.

Or don't use the historian (for these tags). Consider using transaction groups (the SQL Bridge module) or their equivalent in scripts to record your data to a traditional "wide" table in your database. Then your summary queries will be dramatically simpler.
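A sketch of what such a "wide" table could look like for this use case (table and column names are hypothetical):

```sql
-- One row per minute, one column per value of interest,
-- instead of one historian row per tag change.
CREATE TABLE line_minute_summary (
    t_stamp    DATETIME NOT NULL,
    prod_count INT,
    bad_count  INT,
    run_time_s DOUBLE,
    PRIMARY KEY (t_stamp)
);
```

Summaries then become simple indexed range queries over t_stamp instead of scans of the historian tables.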

I am using a Scheduled Gateway Event Script and I am reading the data via system.tag.queryTagHistory(...).

I assigned 8 cores (2 cores before), and now the CPU load is a constant 50-60%, which is OK. But I am wondering why the script still runs for about 2 minutes. RAM and CPU are not fully loaded, but the query still takes this long.

Is there a way to dedicate more resources specifically to these queries?

I pass the tag path to the queryTagHistory function and query the "Maximum" from the last minute. I also divide the count I get by the time spent producing in that minute. So I end up with about 8 values and run a named query to write them into another table at the end. Writing the data is pretty fast, but reading the internal database via queryTagHistory is pretty slow. Maybe it is too much workload? Read a product count, divide it by producing time, add the count to specific variables, get the bad count, and do this for 11 tags every minute. Or should this be doable? I need minute values to build a graph of the count inside a BI system.
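The per-minute arithmetic itself is cheap; a minimal plain-Python sketch (function name and values are illustrative, not from the actual script), which suggests the cost is in the queryTagHistory reads, not the math:

```python
def minute_rate(count, seconds_producing):
    """Parts per second over the portion of the minute the line was producing."""
    if seconds_producing <= 0:
        return 0.0
    return count / float(seconds_producing)

# e.g. 120 parts made while producing for 48 of the 60 seconds
print(minute_rate(120, 48))  # 2.5
```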

That sounds like missing indices in the DB, causing full table scans for even the simplest queries.

Consider having support remote in to look over your shoulder.
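A minimal sketch of checking for and adding such an index; the partition table name below follows Ignition's sqlt_data_<driver>_<year>_<month> convention but is hypothetical, and recent Ignition versions normally create a (tagid, t_stamp) index themselves, so verify before adding:

```sql
-- Check which indexes the partition actually has
SHOW INDEX FROM sqlt_data_1_2023_01;

-- Without an index covering (tagid, t_stamp), a per-tag range query
-- becomes a full partition scan; adding one restores index lookups.
CREATE INDEX idx_tagid_tstamp ON sqlt_data_1_2023_01 (tagid, t_stamp);
```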

What size are your partitions set for?
How many tags total?
We will need more information to help, but I bet you have the default 1-month partitions with millions of records in each. I have found a 3-day partition size to be the sweet spot for our setups.

Yes, the partitions are set to 1 month (default).

Changing the partitions to 1 week for testing could be an idea. I'll look into the documentation for how to change it.

We have about 1.8k tags, and some of them get historized every second.

One month of data is about 120 - 160 million rows.
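That order of magnitude is plausible; a rough back-of-the-envelope check (the number of tags logging every second is an assumed figure for illustration, not from this thread):

```python
# If a subset of the ~1.8k tags logs unconditionally every second:
tags_logging_every_second = 50          # assumed value for illustration
seconds_per_month = 60 * 60 * 24 * 30   # 2,592,000
rows_per_month = tags_logging_every_second * seconds_per_month
print(rows_per_month)  # 129600000, within the reported 120-160 million range
```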

But is there a way to add indices automatically to the tag tables? They get auto-created by Ignition itself.

If they are getting recorded on a schedule unconditionally, the tag historian is not an efficient storage method. Consider using transaction groups (SQL Bridge module) to store many tags together in traditional "wide" tables. This can also be scripted on scheduled events or timer events if you do not wish to purchase that module.

Not sure. I rarely use the tag historian; most of my clients want to record gobs of data at high rates.

I found that many expression tags were created with historical deadband mode set to "off". These tags wrote at a fixed rate of 1000 ms. I now have to look up how to clean the database of all those redundant entries. I also changed a lot of deadbands and min/max time historization settings. The data load per day went down by 60%.

I'll also shorten the partition size from one month to 1-2 weeks. I took over this system and wondered why the reports took so long as well. No wonder, with 130+ million entries per month from only 9k tags.