Ignition 8.3.2 queryHistory() latency

Hello,

I have a queryHistory() call that I was using in Ignition 8.3.1 to query 1000 tags, with a response time of approximately 2 s. After upgrading to Ignition 8.3.2, the same call now takes approximately 10 s or more.

I call this method with 1000 tags that are historized with metadata=true and set to discrete mode.

I am using the queryHistory() like this:

queryHistory(queryParams, tagHistoryDataset)

where queryParams is:

new BasicTagHistoryQueryParams(
        tagPaths,
        new Date(startTime),
        new Date(endTime),
        1,
        TimeUnits.SEC,
        AggregationMode.Average,
        ReturnFormat.Wide,
        alias,
        Collections.singletonList(AggregationMode.Average)
);

tagPaths is a List, the start and end times are long values spanning 10 minutes from the start time, and alias is a List containing the string form of each corresponding tag path.

I am providing fully qualified paths in tagPaths, in the format "histprov:TestHistorian:/sys:ignition-8dc6e43e15ba:/prov:default:/tag:testsine0".

The tagHistoryDataset is:

BasicStreamingDataset tagHistoryDataset = new BasicStreamingDataset();

I am wondering if I have this set up incorrectly for 8.3.2 (missing flags or similar), and would appreciate advice or pointers on getting the response time back to approximately 2 s.

Separately, but related to this: it seems some query calls are being deprecated on the Perspective scripting side (8.1 to 8.3 Upgrade Guide | Ignition User Manual), such as "system.tag.queryTagHistory" in favor of "system.historian.queryRawPoints". Is something similar occurring on the SDK side as well?

After discussing this with IA support staff, switching to an external database and setting it up as the historian solved my issue.


I am not sure why things were initially OK after I switched to an external historian, but the issue seems to have returned. The gateway is now also producing this output:

Has anyone else come across this warning before?

That pattern shows up when you query the historian for a large number of tags in one request, or when some of the tags in the request have experienced pathological reconfigurations (producing many tagids for one tag). If you have scripts regularly writing to tag configuration properties (enabling/disabling/changing alarm limits/whatever), this can happen.
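
If you want to check for that, a query along these lines against the historian's tag dictionary (sqlth_te in a default install) will show which paths have piled up many ids:

    -- Paths with an unusually large number of tag ids; the threshold is arbitrary
    SELECT tagpath, COUNT(*) AS id_count
    FROM sqlth_te
    GROUP BY tagpath
    HAVING COUNT(*) > 5
    ORDER BY id_count DESC;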

You would need to show query plans with actual tagids filled in to see why your DB is choking on these.

(The warning is generic for slow queries on any database for any subsystem.)

Do verify that your data partitions are indexed by both t_stamp and tagid.
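
In MySQL/MariaDB, for example, SHOW INDEX against one of the partition tables will show what is actually indexed (the table name below is just a typical default partition name); other databases have equivalents:

    -- Substitute one of your actual partition table names
    SHOW INDEX FROM sqlt_data_1_2024_01;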

Thank you very much for your quick response.

I have taken a look at the database table and the tagid and t_stamp are both indexed.

Are there any settings on my db that may be affecting it negatively?

Would you be able to give me some pointers on how to retrieve the execution plan?

From what I understand, the Ignition SDK handles turning the query parameters into an actual query after I call the queryHistory method.

Hmmm. I'd think the primary key should be t_stamp then tagid. You'd need some query plans to verify the performance either way, and that the DB is actually using the indices.

To run a query plan is DB-specific, but typically some form of "EXPLAIN" command.

You would run the query EXPLAIN SELECT tagid.... with actual tag IDs in place of the question marks. (You would have to look them up yourself in the DB.)
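
The shape will be roughly like the following; substitute the real partition table, column list, time range, and tag IDs from your own slow query (everything here is illustrative):

    -- Illustrative only; not the exact statement Ignition generates
    EXPLAIN
    SELECT tagid, intvalue, floatvalue, t_stamp, dataintegrity
    FROM sqlt_data_1_2024_01
    WHERE t_stamp >= 1700000000000
      AND t_stamp <= 1700000600000
      AND tagid IN (1, 2, 3);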

What is your DB? With some databases there are limits on the number of parameters.
We have a similar use case in a module; we limit the number of tags per query to avoid this kind of error, and we plan to manage multiple requests and merge the results. I am not sure we use wide mode, though.

@pturmel

Ok, thank you, I’ll see what I get.

@mazeyrat

At the moment I am using MariaDB.

The limit on the number of tags sounds like a good way to implement it.

In my use case I query in tall mode.
If we do several queries to limit the number of tag paths per query, we will need to concatenate the dataset results and sort them.

Another idea: perhaps check the max_allowed_packet parameter in MariaDB.
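
Something like this shows the current value and, if needed, raises it for the running server (make a permanent change in the server config file):

    -- Current value, in bytes
    SHOW VARIABLES LIKE 'max_allowed_packet';
    -- Raise for the running server only; persist it in the config file
    SET GLOBAL max_allowed_packet = 67108864;  -- 64 MB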

For EXPLAIN SELECT tagid…, I got the following result:

That first step looks wrong. Should be an index scan or the MariaDB equivalent. That it uses a temporary table suggests that is the culprit.

Consider adding a new index on (t_stamp, tagid) and seeing if your query will use it.
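
In MariaDB that is something like the following, run against each partition table you query (the index name is arbitrary):

    -- Repeat for each partition table involved
    CREATE INDEX t_stamp_tagid_ndx ON sqlt_data_1_2024_01 (t_stamp, tagid);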

I see. I will try what you suggested for the new index.

@mazeyrat

The max_allowed_packet is currently at the default of 16777216 (16 MB).

I have updated my index to be PRIMARY KEY (t_stamp, tagid).
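
In MariaDB terms, the change is along these lines per partition table (the table name below is only an example):

    -- Illustrative; run against each partition table involved in the query
    ALTER TABLE sqlt_data_1_2024_01
        DROP PRIMARY KEY,
        ADD PRIMARY KEY (t_stamp, tagid);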

The EXPLAIN SELECT ... result now looks like this:

The rows that had Using temporary; Using filesort are no longer there, and the response time has improved.

I will run some more tests to make sure I have not missed anything, but this does appear to have solved the issue that I was having.

Thank you @pturmel and @mazeyrat for the help and suggestions.


I am similarly having issues with queryHistory() response times in 8.1.50. I tried checking indexes and primary keys as suggested in this thread, but saw no improvement. I am using MSSQL and the query plan looks like this:

The primary key is set up as (tagid, t_stamp), but the query plan does not show the extra steps that @robert.t experienced. I tested switching the primary key to (t_stamp, tagid) but saw no improvement. Any other IgnitionHistorical configuration suggestions for an MSSQL user like myself?

Assuming some of those column values are seconds, then MSSQL is telling you the vast majority of time is spent on IO. Are you on mechanical hard disks/are they particularly fragmented/is the index itself fragmented on disk/are you in a virtualized environment that's overconstrained at the hypervisor level?
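
If you want to check the index fragmentation angle, a standard DMV query along these lines (run in the historian database) reports it per index:

    -- High avg_fragmentation_in_percent on the historian tables
    -- suggests a rebuild/reorganize is worth trying
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name AS index_name,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    ORDER BY ips.avg_fragmentation_in_percent DESC;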


Could you elaborate on the VM being overconstrained? Do you just mean CPU cores/memory allocation?

IO being (catastrophically) slow generally means either:

  1. CPU pressure (everything will be slower)
  2. Spinning rust platters that are simply too slow. If it's a VM host managing lots and lots of services that are all trying to do disk operations at once, there's only so much any hardware can do, but SSDs of any kind are significantly faster than mechanical hard disks.

It could be memory pressure (not enough memory leading to paging, which is significantly slower than RAM and bottlenecks IO), but that's not where I'd start.