Performance problems on SQL Server

A client of ours got an automatic warning this morning that their SQL Server was stuck at 100% CPU.

After some investigation, it looks like it comes from the Ignition Tag Historian. When we disable the historical tag provider via the web interface, CPU usage drops back to normal (peaks up to 60%, but with long stretches of low usage).

When we look at the most time-consuming queries, they are all instances of the same query:

SELECT TOP 1 "tagid","intvalue","floatvalue","stringvalue","datevalue","t_stamp","dataintegrity" FROM sqlt_data_1_2017_03 WHERE "tagid" IN (?,?) and "t_stamp"<? ORDER BY "t_stamp" DESC

Oddly, that query looks like it has a syntax error: I assume the question marks should be filled in with values, but they aren't.

The number of tags being logged didn't really change (I ran some queries to count the records logged over time, and got about 60 000 records per hour while they were in operation).
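For reference, the per-hour count was along these lines (a sketch, assuming the default Ignition historian schema where `t_stamp` is a bigint of milliseconds since the Unix epoch; the table name is the one from the query above):

```sql
-- Records logged per hour in the March partition table.
-- Assumes t_stamp is milliseconds since the Unix epoch
-- (the Ignition tag historian default).
SELECT t_stamp / 3600000 AS hour_bucket,
       COUNT(*)          AS records
FROM sqlt_data_1_2017_03
GROUP BY t_stamp / 3600000
ORDER BY hour_bucket;
```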

Some other data:

  • The overall read/write volume from Ignition isn't that big; other services (the MES, for example) generate about ten times as much
  • Ignition uses the pagefile heavily; reads and writes to the pagefile clearly drop when the tag historian is stopped, even though there's no shortage of RAM
  • The database is MS SQL Server, running on a dedicated machine separate from the Ignition server
  • Nothing in the server configuration has changed for a while, but this morning was the first time the sysadmins got a high-CPU notification, and it's easily reproducible by re-enabling the Tag Historian

For now, we've disabled the tag historian so that other processes can still use the SQL Server, but we'd really like to get history logging back up ASAP. Does anyone have an idea where this could come from?

All statements from the tag historian are prepared statements; some of the logged messages include the values that will be substituted and some don't, but there's nothing wrong with the query itself.

How many records are in that particular data table? How fragmented are the indexes (a common problem for tag historian tables in MSSQL)?

The partitioning is done on a per-month basis. There are currently 23 million records for March, and 32 million for February.

When I query the fragmentation, the indexes do look fragmented; however, there's no difference between the February and March tables (and there were no problems in February). See the two included screenshots.
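For anyone following along, the fragmentation check was roughly this (a sketch using `sys.dm_db_index_physical_stats`; the table name is from this thread, and `dbo` is an assumed schema):

```sql
-- Average fragmentation per index on the March table.
SELECT i.name AS index_name,
       ps.avg_fragmentation_in_percent,
       ps.page_count
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.sqlt_data_1_2017_03'),
         NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id
 AND i.index_id  = ps.index_id;
```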

Should I attempt an index rebuild, should the tables be made smaller (e.g. partitioned per week), or is there something else to look for?

Tens of millions of records is far too much for a single table. That's going to be extremely slow to index and read through, and very hard on your database. I would absolutely switch to more frequent partitioning as a first step.

Thanks a lot for your help so far.

We have now defragmented the database, and after letting it run for a while the load seems a lot lighter.
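For completeness, the defragmentation was along these lines (a sketch; Microsoft's usual guidance is to reorganize at moderate fragmentation and rebuild above roughly 30%):

```sql
-- Rebuild all indexes on the busy partition table
-- (heavier, but produces freshly ordered indexes):
ALTER INDEX ALL ON dbo.sqlt_data_1_2017_03 REBUILD;

-- For moderate fragmentation, a reorganize is lighter
-- and can run online:
ALTER INDEX ALL ON dbo.sqlt_data_1_2017_03 REORGANIZE;
```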

The historian settings of certain tags were also changed to log less (a bigger deadband and a longer minimum time between samples), so future tables should be smaller.

Do you have an estimate of how big the tables can get before running into performance issues?
