Graph data for a long period is not shown; DB query drops by itself without notice or exception

Hello, dear IA and community.
We are using Ignition 7.8.4
We are storing historical data every 5 seconds. Sometimes we need to make a graph for a 2-month period using many tags simultaneously, and that is where we run into trouble. The graph data is not shown: no records in Ignition’s console, no error windows, no exceptions of any kind. The DB (PostgreSQL 9.5) log has no information about it either. In the Windows Resource Monitor I can see that the postgres process starts reading from the drive, but then it simply stops.
When the period is set to 1-2 weeks, there is no problem reading and displaying the data; only bigger queries that take a long time fail. We have also relocated the DB to an SSD drive, which helped a great deal, but the system still hangs on longer periods.

Windows 2012 R2
16GB of RAM
Xeon 1220 v5 CPU
Intel enterprise SSD drive

Some kind of timeout? Why no exceptions, then? The QuickChart just stays blank, as if there were no data at all.

Any opinions, please?
Thank you!

There’s a query time limit in the client project properties you may need to play with. The difficulty of handling long time-series queries is not new; it inspired me to write a module that both breaks such queries into smaller chunks and caches the chunks, so that scrolling and zooming in and out don’t hammer the database. Take a look.
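
To illustrate the chunking half of that idea (not my module’s actual implementation, just the naive version in a client script, assuming the pens come from the tag historian and that week-sized slices fit under the timeout):

    # Naive chunked history read: split a long range into week-sized slices
    # so no single query runs long enough to trip the client query timeout.
    # Sketch only -- the module additionally caches and reuses the chunks.
    from java.util import Calendar

    def queryHistoryChunked(paths, startDate, endDate, chunkDays=7):
        results = []
        cal = Calendar.getInstance()
        cal.setTime(startDate)
        while cal.getTime().before(endDate):
            chunkStart = cal.getTime()
            cal.add(Calendar.DAY_OF_MONTH, chunkDays)
            chunkEnd = cal.getTime()
            if chunkEnd.after(endDate):
                chunkEnd = endDate
            # Each slice is an ordinary bounded tag history query.
            results.append(system.tag.queryTagHistory(paths=paths,
                                                      startDate=chunkStart,
                                                      endDate=chunkEnd,
                                                      returnFormat='Wide'))
        return results  # caller stitches these together / feeds the chart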

Hi Phil. I ran across these canned time-slicing queries for MySQL. I’ve tested them out in the Query Browser, and they appear to be quite fast for surgically querying slices of time-series data.

I’m a SQL novice, but I’m learning. The query methods they’re using look legit, but I don’t have a trained eye.

I’ve considered your caching module for time-series data, but if I can reliably run these queries without a noticeable delay, then I will likely press them into service first to see how robust they are. I’m not crazy about a “solution looking for a problem” approach.

Would I be able to use a canned system.db script (i.e. system.db.runPrepUpdate()) to call those “TRUNC” functions, or would I need to approach it another way, in your opinion?
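
Something like this untested sketch is what I had in mind, with plain integer math on the historian’s millisecond t_stamp standing in for the canned TRUNC functions (I’m guessing runPrepQuery is the right call for a SELECT; the partition table, tag id, and datasource names are just guesses on my part):

    # Untested sketch: 5-minute time slices straight from a historian
    # partition table, bucketing the bigint millisecond t_stamp with
    # MySQL's integer-division operator instead of a TRUNC function.
    # Table name, tag id, and datasource name below are hypothetical.
    query = """
        SELECT (t_stamp DIV 300000) * 300000 AS bucket_ms,
               AVG(floatvalue) AS avg_val,
               MIN(floatvalue) AS min_val,
               MAX(floatvalue) AS max_val
        FROM sqlt_data_1_2016_06
        WHERE tagid = ? AND t_stamp BETWEEN ? AND ?
        GROUP BY bucket_ms
        ORDER BY bucket_ms
    """
    tagId = 42  # hypothetical tag id
    startMs = system.date.toMillis(system.date.parse('2016-06-01 00:00:00'))
    endMs = system.date.toMillis(system.date.parse('2016-07-01 00:00:00'))
    ds = system.db.runPrepQuery(query, [tagId, startMs, endMs], 'HistoryDB')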

Thanks.

These queries are grouping operations -- they boil lots of data down into summaries. My caching module doesn't summarize. It delivers large quantities of raw data. Apples and oranges. Summary queries don't yield detailed trend datasets.

Stored procedures and functions are defined in the database once and remain there, like part of the schema. I hardly ever use MySQL, so I can’t be more help there. I almost always use PostgreSQL, which has a built-in date_trunc() function that handles all of these cases when summaries by date subdivision are called for.
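
Something along these lines, for example (table and datasource names are illustrative):

    # PostgreSQL version of the same bucketing, via built-in date_trunc().
    # Swap 'hour' for 'day', 'week', 'month', etc. as the summary requires.
    query = """
        SELECT date_trunc('hour', to_timestamp(t_stamp / 1000.0)) AS bucket,
               AVG(floatvalue) AS avg_val
        FROM sqlt_data_1_2016_06
        WHERE tagid = ? AND t_stamp BETWEEN ? AND ?
        GROUP BY bucket
        ORDER BY bucket
    """
    ds = system.db.runPrepQuery(query, [tagId, startMs, endMs], 'HistoryDB')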

Ok. Thanks for clarifying the caching module’s function, Phil. Would the cache module be useful in a binning process that sources large quantities of raw data?

I’m in the early stages of binning sets of time-series data at various granularities, for benchmarking against a target and for comparative analysis from one period of time to another. Process values are generally stored at 1-sec, 5-sec, 5-min, 1-hr, and 3-hr sampling intervals, depending on their rate of change. Benchmark targets are for 5-min, hourly, 8-hr, 16-hr, and two daily intervals that are skewed by 10 hours.
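
For the skewed daily bins, the bucket arithmetic I have in mind looks something like this (just a sketch; the names are mine):

    # Sketch of the skew arithmetic: shift by the skew, bucket into whole
    # days, then shift the bucket start back. Treats a day as a fixed
    # 24 hours (epoch math), so DST boundaries are ignored.
    DAY_MS = 24 * 60 * 60 * 1000
    SKEW_MS = 10 * 60 * 60 * 1000  # daily bins start at 10:00, not midnight

    def dailyBinStart(tStampMs):
        return ((tStampMs - SKEW_MS) // DAY_MS) * DAY_MS + SKEW_MS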

I’m guessing I can get away without the aid of caching for sampling intervals of, say, 5 minutes and slower, depending on how much history I’m trying to process at once, and how often. I’ll have to try it and see.

But generally, given my use case above, would you conclude that the data stores with higher data density could benefit from the cache module if I’m looking to overlay the raw data onto the summary data on the same trend?

Like so:

[screenshot: raw data overlaid on binned summary data in a single trend]

The cache module could help in that case. What matters isn’t whether the data is summarized, but how dense the result sets are. I use the cache module in two primary cases:

  1. A single client charting one or more datasets with hundreds of thousands of rows or more, particularly in historical mode where the user will be doing a lot of panning, zooming, and sliding of the date range (see the arithmetic after this list). If your chart takes tens of seconds to get the data, or often times out, the chunking in the cache module can greatly improve the user experience.

  2. Multiple clients charting the same data with repeated requerying, like realtime charts. The cache module consolidates these overlapping queries into single DB queries and distributes the result data to all clients. For realtime in particular, the queries are optimized to pick up new data within a narrow time span, further reducing DB load.
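
For scale: the original question’s two-month window at 5-second storage works out to 86,400 / 5 = 17,280 samples per day, or roughly 1.04 million rows per tag over 60 days, squarely in the first case.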

The module documentation’s Operation section describes the cache behavior in more detail.


Right on. Thanks for the handy link. I’ll put that on my reading list.