Historian Table Partitioning

Looks like a Big Data project. You might want to look into Big Data solutions like Apache Kafka, which can store and serve large volumes of historical data. You could publish massive volumes of data from Ignition to a Kafka broker, though you'd probably have to build a "Kafka pub/sub client module" for Ignition, since there's no JDBC driver for Kafka. I couldn't find a better solution. This is a very good challenge, and you should take it head on.

Check the professional solutions and services provided by Confluent.

Which DB do you have?
Are you using a tag group for data logging? If so, can you provide the tag group configuration details?
At what rate is data updated? That is, how many rows per x seconds?

Thanks for your feedback, guys.

Phil, your idea might have been the right answer if it had occurred to me about a year ago. But at this point, I’m tied into using the historian because we have an entire “graphical replay” system with dozens and dozens of templates that is dependent on using system.tag.queryTagHistory() functions. I’m sure I could figure out a way around using the historian, but with the number of tags our system has, I’m not sure it’s feasible to recommend that path now.

R.Alamsha, the site already has a PI historian, and this information is duplicated there. PI functionality is fine, so we don’t want another Big Data solution. The issue I have is with the Ignition trending/charting: retrieving data over a one-month or longer period is almost infeasible. Yes, I’m logging using a historical tag group. It executes once per second, which is the customer requirement.

Way before I started this project, I ran this hypothetical architecture past IA, and was told that we’d have no trouble. Well, we have no trouble storing the data, but long queries are just terrible. The trending system is almost unusable. Any additional thoughts?

Thanks again.

Do you mean you need a better data visualization solution?

You don’t have your easy charts setup for raw data resolution, do you? Or some really big number in the fixed resolution mode? I don’t think either of these is the default for an easy chart, but it doesn’t hurt to ask…

No, the problem is not with the visualization, but with the slow query speeds when populating a trend of any significant duration. It doesn’t matter whether I’m using an Easy Chart or a system.tag.queryTagHistory() call: if I request data spanning much beyond a week or so, the speed of the query is not acceptable.

This issue, as @pturmel alluded, is one of database performance. That’s why I asked about table partitioning and for more information about “pre-processed partitions.” I think this is just a limitation of SQL Server or any other database server. When I have tens of millions of rows per table, it’s just going to be slow. But if I partition the tables further, then many of my queries will span more tables, which will also be slow. So I don’t know if there’s a sweet spot with the partitions, if pre-processing is helpful, or if I just need to tell my customer that they’re going to have to live with poor performance.


No, the issue isn’t with the Easy Chart, but thanks.

You might try asking on Stack Overflow; they have some really good DB forums, IMHO.

I haven’t had to optimize the historian, as I never put millions of rows in it. /-: But I’d start by studying all of the indices on the tables to make sure the queries are optimizable and optimization statistics are present. I would make sure there is a BRIN index on the t_stamp column, and a separate B-tree index on the tag id column. Recommended indexing varies with DB brand, of course. Larger partitions should be tolerable with proper indices.

The EXPLAIN ANALYZE … command will show you what the optimizer decides and how well it works. I wouldn’t use anything older than PostgreSQL 9.5.
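As a rough, runnable illustration of that kind of plan inspection (a sketch only: SQLite stands in for PostgreSQL here, so there is no BRIN index type and the plan command is EXPLAIN QUERY PLAN rather than EXPLAIN ANALYZE; the table and column names are assumptions modeled on Ignition's partition tables, not the exact schema):

```python
import sqlite3

# Toy model of a historian partition table, with SQLite standing in
# for PostgreSQL. Table/column names mimic Ignition's sqlt_data_*
# layout but are assumptions.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute(
    "CREATE TABLE sqlt_data_1_2018_10 "
    "(tagid INTEGER, floatvalue REAL, t_stamp INTEGER)"
)

# Separate indices on the tag id and timestamp columns, per the advice above.
cur.execute("CREATE INDEX idx_tagid ON sqlt_data_1_2018_10 (tagid)")
cur.execute("CREATE INDEX idx_tstamp ON sqlt_data_1_2018_10 (t_stamp)")

# Ask the planner how it would execute a typical trend query.
plan = cur.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT t_stamp, floatvalue FROM sqlt_data_1_2018_10 "
    "WHERE tagid = ? AND t_stamp BETWEEN ? AND ?",
    (42, 0, 1000),
).fetchall()
for row in plan:
    # a 'SEARCH ... USING INDEX ...' line means an index is being used
    print(row[-1])
```

On the real database you'd run EXPLAIN ANALYZE against an actual partition table and check for index scans rather than sequential scans.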

Check out this writeup…

If the width of your plot is 1000 pixels then there is no point in returning more than 1000 datapoints. If you are plotting a whole day you will have 86,400 samples but can only display 1000. If you set the pre-processed partition roll up time period to 60s then you will have another table per partition with one sample per minute or 1440 samples per day. If your query/plot doesn’t require more than 1 sample per minute then it should use the pre-processed table. I suppose if you are plotting a week it will have to hit 7 pre-processed tables with an individual query, but maybe 7 queries is faster than downsampling 500k samples for each series?
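The sample-count arithmetic above can be sketched as a toy downsampler in plain Python (not Ignition's actual implementation; the 60 s roll-up period and max-per-bucket aggregate are just the example's assumptions):

```python
# One day of 1 Hz samples: 86,400 raw points.
raw = [float(i % 300) for i in range(86400)]

def rollup(samples, period):
    """Aggregate fixed-size buckets (here: max per bucket), like a
    pre-processed partition with a roll-up period of `period` samples."""
    return [max(samples[i:i + period]) for i in range(0, len(samples), period)]

per_minute = rollup(raw, 60)       # one point per 60 s
print(len(raw), len(per_minute))   # 86400 1440

# A 1000-pixel-wide chart can only show ~1000 points, so for a 1-day
# span the 1,440-row pre-processed table is far closer to what the
# plot needs than the 86,400-row raw table.
```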

I ran into some funny business where the database faulted for an unknown reason, and then Ignition stopped populating the pre-processed table after the database connection was restored. Long-time-period plots then had no data, so the feature might not be fully baked.

It matters if you wish to use the built-in zooming and panning. Those operations do not requery for the changed resolution.


We have a historian performing well with 48 million rows per 12 hrs. Our partitioning is set to 12hr intervals. We’re using Debian OS with MariaDB and TokuDB storage engine. If changing OS and storage engines is an option for you then I think this would work.


Thank you, Joe. When you say that this performs well, does it perform well when querying data that spans a month or more? That would be 60+ joins, and I can’t imagine that being very speedy. But if it is, I’ll consider switching, or at least using your suggested architecture for future projects.
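For what it's worth, the table count is simple arithmetic (assuming a 31-day month):

```python
# 12-hour partitions mean 2 partition tables per day, so a month-long
# query must be stitched together across every partition in the range.
days = 31
tables = days * 24 // 12
print(tables)  # 62
```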

I did some trials using system.tag.queryTagHistory for 2 months of data, Aug 1st to Oct 1st.
Tags were a mix of bools, integers and reals
3 tags took 5.7s, 5 tags took 4.7s, 6 tags took 4.4s, 7 tags took 5.4s.

Here are the results and script.

Number of tags: 3
returnSize :10000
Query start time: 2018-10-10 12:21:22:022
Query end time: 2018-10-10 12:21:27:027
Query duration in millisecs: 5750

Number of tags: 5
returnSize :10000
Query start time: 2018-10-10 12:21:55:055
Query end time: 2018-10-10 12:22:00:000
Query duration in millisecs: 4703

Number of tags: 6
returnSize :10000
Query start time: 2018-10-10 12:23:23:023
Query end time: 2018-10-10 12:23:28:028
Query duration in millisecs: 4406

Number of tags: 7
returnSize :10000
Query start time: 2018-10-10 12:24:24:024
Query end time: 2018-10-10 12:24:29:029
Query duration in millisecs: 5453

startQueryTime = system.date.now()
tags = [mix of Bool, Integer, Real tags removed for IP reasons]
returnSize = 10000
dataset = system.tag.queryTagHistory(paths=tags, startDate='2018-08-01', endDate='2018-10-01', returnSize=returnSize, aggregationMode='Maximum')
endQueryTime = system.date.now()

print 'Number of tags: ' + str(len(tags))
print 'returnSize: ' + str(returnSize)
print 'Query start time: ' + system.date.format(startQueryTime, 'yyyy-MM-dd HH:mm:ss:SSS')
print 'Query end time: ' + system.date.format(endQueryTime, 'yyyy-MM-dd HH:mm:ss:SSS')
print 'Query duration in millisecs: ' + str(system.date.millisBetween(startQueryTime, endQueryTime))

(Note: uppercase 'SSS' is the correct milliseconds pattern; the lowercase 'sss' used when the results above were captured repeats the seconds field, which is why those timestamps end in e.g. ':022'.)

For direct history queries, our most common day-to-day use is Easy Charts with up to a week of data, and they respond well, loading in less than 1 second.

This might be off topic, but it might help. For reports that show aggregates of weeks or months of data, we archive history data into other tables to make the queries faster. We run gateway timer scripts every 5 minutes or 1 hour, etc. The scripts query history for the information we are interested in, say energy used per motor, then store the results in a different table, giving us energy used per motor per hour.
Maybe that approach could help?
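A minimal sketch of that roll-up idea in plain Python (in Ignition this would run in a gateway timer script, pulling data via system.tag.queryTagHistory and writing to the archive table with system.db.runPrepUpdate; the motor names, hours, and kWh numbers here are invented for illustration):

```python
from collections import defaultdict

# (motor, hour, kWh reading) samples as a history query might return them.
samples = [
    ("motor_1", 0, 1.5), ("motor_1", 0, 2.0),
    ("motor_1", 1, 0.5),
    ("motor_2", 0, 3.0), ("motor_2", 1, 1.0),
]

# Roll up to one row per motor per hour, as the archive table would store.
hourly = defaultdict(float)
for motor, hour, kwh in samples:
    hourly[(motor, hour)] += kwh

for key in sorted(hourly):
    print(key, hourly[key])

# A monthly report then reads ~720 archived rows per motor instead of
# millions of raw historian rows.
```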



I really appreciate you taking the time to do those tests. Honestly, these are remarkable results compared to what we’re seeing. My database is Microsoft SQL Server because it’s the corporate standard for this organization. But my Easy Charts will actually time out when trying to retrieve one month of data for just two or three tags, and you have twice as much data as I do. Do you really think it’s just the database engines that are the bottleneck? These are VMs managed by the customer’s IT staff. Any chance that resources on the database server could cause this sort of issue?

What were the primary reasons for using MariaDB instead of MySQL? Just trying to keep to more traditional open source, or were there actual performance reasons?

Thanks also for the recommendation to pre-aggregate the data for reporting purposes. We actually do that already, and it’s been very helpful in speeding up report generation.

Thanks again for your time.


Hi Jeff,

I don’t know if this could be related to your problem, but if so it might be worth a shot.

I had a problem with the historian and the store-and-forward system, where some of the data was dropped when forwarded to the historian.
To fix it, I set the ‘Max Active’ property in the DB connection setup on the gateway to 50 instead of 8.
I also adjusted the partition units and length for the historian: instead of a partition length of a year, I changed it to a week. I can see that you already have this set to a day. What happens if you lower it even more?
This fixed my problem.

FYI, I read somewhere that it is advisable to have only 1-2 million rows in each partition - can’t find it now though…

Hi Jeff,

MS SQL is known to have problems with index fragmentation on big tables.

Perhaps it’s worth reading this and checking whether it matches your problems.



We are using Linux and MariaDB with the TokuDB engine for performance reasons. On our first large Ignition job back in 2014 we originally planned for Windows Server and MS SQL Server, but Kyle Chase recommended this setup for performance reasons, so we went this direction.
The server is a VM on VMware, allocated 2 processors of 6 cores each. RAM is (um) 400GB, but only 100GB appears to be in use; from 2014-2017 it was 16GB and running okay.

I’m no expert on databases, but the fractal tree indexing TokuDB uses often gets credited with maintaining speed on large tables because it doesn’t fragment.


In this case, please consider using a purpose-built real-time historian, for example Canary Labs Historian or Kepware Local Historian.
