New Historian - QuestDB

Hi,

Might be too early, but anyway, some questions regarding the new historian based on QuestDB.

  1. Is there a way to set it up in a high-availability mode, e.g. with a redundant gateway? Or are there other ways to set up read replicas with the new historian?
  2. Is there a way to backup the database?
  3. There is native archiving functionality, but I was not able to find a recovery function. How would this work?
  4. Are there any plans to provide a native export function from the internal historian to a SQL-based database (similar to other SCADA systems)?
  5. Are there any preliminary benchmarks available?

Thank you,
André

1 Like
  1. We do have plans to offer this, but not at this time.
  2. It is possible to manually perform a backup of the database, but Ignition will need to be shut down for this. We do have plans for backup/restore functionality for the full database, but not at this time. Another option would be to use ZFS snapshots on Linux (see the snapshot sketch after this list).
  3. The initial offering for archiving is to allow for partitions to be restored on an external QuestDB instance. However, not to sound like a broken record, having an integrated solution within Ignition is on the roadmap.
  4. This is not on the roadmap, as this can be accomplished using scripting (see the export sketch after this list).
  5. Internally we are seeing a 7x throughput improvement for writes compared to a SQL Historian using MSSQL.
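
For point 2, here's a minimal sketch of the ZFS snapshot approach, assuming the Ignition install lives on a hypothetical ZFS dataset named `tank/ignition` and runs as a systemd service named `ignition` (both names are assumptions, not defaults; adjust to your environment):

```python
# Sketch: point-in-time backup of the historian files via a ZFS snapshot.
# Dataset "tank/ignition" and service name "ignition" are assumptions.
import subprocess
from datetime import datetime

DATASET = "tank/ignition"
snapshot = "%s@historian-%s" % (DATASET, datetime.now().strftime("%Y%m%d-%H%M%S"))

# Stopping the gateway first gives a fully quiesced copy; the ZFS snapshot
# itself is atomic and near-instant.
subprocess.check_call(["systemctl", "stop", "ignition"])
try:
    subprocess.check_call(["zfs", "snapshot", snapshot])
finally:
    subprocess.check_call(["systemctl", "start", "ignition"])
print("Created snapshot: " + snapshot)
```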
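
And for point 4, a rough sketch of what such an export script could look like using Ignition's `system.tag.queryTagHistory` and `system.db.runPrepUpdate`; the target table `history_export` and database connection `ExportDB` are hypothetical names for illustration:

```python
# Sketch (Ignition Jython): copy recent tag history to an external SQL database.
# Table "history_export" and connection "ExportDB" are hypothetical.
from java.util import Date

paths = ["[default]Line1/Temperature", "[default]Line1/Pressure"]  # example tags
end = Date()
start = Date(end.getTime() - 24 * 60 * 60 * 1000)  # last 24 hours

ds = system.tag.queryTagHistory(paths=paths, startDate=start, endDate=end)

# Default (wide) format: column 0 is the timestamp, one column per tag path.
for row in range(ds.getRowCount()):
    ts = ds.getValueAt(row, 0)
    for col in range(1, ds.getColumnCount()):
        system.db.runPrepUpdate(
            "INSERT INTO history_export (tag_path, t_stamp, float_value) VALUES (?, ?, ?)",
            [ds.getColumnName(col), ts, ds.getValueAt(row, col)],
            database="ExportDB",
        )
```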
3 Likes

Will this interfere with system-level backups of the files? Will the gateway need to be shut down during scheduled system backups?

How are historical records recovered in the case of disaster recovery then?

How about READ throughput?

This is the advice in the Ignition 8.1 user manual for databases:

For production systems, we recommend that your database is on its own server, not installed on the computer with Ignition. This is helpful for many reasons, but mostly because databases can potentially take up a lot of resources on a computer. If the database is on its own computer, you don't have to worry about other programs starving for memory or CPU.

My questions for QuestDB:

  1. Is it possible to install it on its own server, rather than on the Ignition server?
  2. Is this recommended?
  3. If not recommended, what has changed to prevent QuestDB taking up lots of resources that affect Ignition?
2 Likes

Appreciate the feedback!

The benchmarks are great. If robust backup and recovery mechanisms, especially for disaster recovery, can be provided, this new historian could become a good alternative to existing solutions on the market. Implementing high availability through read replicas would further differentiate it!
Until these features are in place, I'm not sure how it can really be used in production, but maybe there are still some use cases.

Thank you!

Here are some read performance results utilizing the current database schemas.
This is querying 2 weeks of data from 600 million points distributed evenly across 500 tags. The results are aggregates returning 300 points per query.

The server has an 11th gen i7 CPU, 32 GB RAM and NVMe Gen3 storage.

The MinMax aggregate is slower on QuestDB because it issues two queries using native aggregation, whereas the SQL-based historians use one query with Ignition-side aggregation.

Please also note that this chart is using a logarithmic scale.
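
For reference, here's the rough shape of one of these queries from Ignition scripting (a sketch, not the actual benchmark harness; the tag path is a placeholder):

```python
# Sketch of the benchmarked read pattern: 2 weeks of history, aggregated
# down to 300 returned points per query. Tag path is a placeholder.
from java.util import Date

end = Date()
start = Date(end.getTime() - 14 * 24 * 60 * 60 * 1000)  # 2 weeks back

ds = system.tag.queryTagHistory(
    paths=["[default]Sim/Tag001"],
    startDate=start,
    endDate=end,
    returnSize=300,              # aggregate down to 300 points
    aggregationMode="MinMax",    # the aggregate discussed above
)
```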

1 Like

What are the partition size settings in SQL?
We run 3-day partitions across 40+ historian databases through a single server, with a little over 50,000 historical tags configured for on-change storage. It stores to a SQL cluster on a separate server, so a local-storage-only option will kill us too.

Our throughput is good right now. The Historian upgrade is the top priority for us when it comes to 8.3, and it looks like we will have to wait for future 8.3.x releases to even consider the move. :frowning:

2 Likes

The tables utilize a partition per day in the above benchmark.

As I understand it, that data collection configuration would remain on an 8.3 upgrade. It just wouldn't be in the historian (the QuestDB part).

I think you're raising a very important point, though.

The comparison I'd like to see added is TimescaleDB. I may have to see if I can set up some tests with simulated data myself, but I've had great experiences with its responsiveness and storage efficiency. While it takes a few extra steps to set up partitioning and compression separately from Ignition, it's not very complex overall and has been working very well.
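
For anyone curious what those extra steps look like, here's a minimal sketch, assuming a plain PostgreSQL table named `tag_history` with `t_stamp` and `tagid` columns (placeholder names, not Ignition's actual schema):

```python
# Sketch: TimescaleDB partitioning + compression, set up outside of Ignition.
# Table/column names are placeholders; adjust to your schema.
import psycopg2

conn = psycopg2.connect("dbname=history user=postgres")  # adjust credentials
conn.autocommit = True
cur = conn.cursor()

# 1) Convert the plain table into a hypertable partitioned by time.
cur.execute("""
    SELECT create_hypertable('tag_history', 't_stamp',
                             chunk_time_interval => INTERVAL '1 day',
                             if_not_exists => TRUE);
""")

# 2) Enable native compression, segmented by tag for better ratios.
cur.execute("""
    ALTER TABLE tag_history SET (
        timescaledb.compress,
        timescaledb.compress_segmentby = 'tagid',
        timescaledb.compress_orderby = 't_stamp DESC'
    );
""")

# 3) Compress chunks automatically once they age past a week.
cur.execute("SELECT add_compression_policy('tag_history', INTERVAL '7 days');")
```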

1 Like

TigerData now :roll_eyes:

1 Like

Did they just change the name of the company but left the software with the TimescaleDB name? Wonder when that naming changed.

This is important for setting expectations: The local QuestDB will be comparable to a proper DB without the config headaches, and way faster than the SQLite-based internal DB.

I don't think a comparison to the internal SQLite option is even warranted.
We (as in all of us) are looking at scale, redundancy, backups, restores, storage requirements, throughput (in AND out), customizations from outside of the Ignition system, etc.

In manufacturing, the historians are the heart of the system, so some really detailed, documented points will need to be available to even justify looking at making a global move of this scale.
I actually just went and totaled up our history tags and we are over 88,000 now. The existing system is handling the input/output just fine but the storage requirements are getting large.

1 Like

You might look into TimescaleDB (by TigerData? now) with compression. I'm not storing quite the number of tags you're storing, but as some rough numbers: we're recording roughly 2.5M data points per day for just over 5k tags, and before compression it's using about 300MB/day, so about 107GB/year. But we have compression enabled for data over 1 week old, which shrinks those numbers down to about 15-17MB/day, or about 5.5-6.2GB/year. (Yes, we're getting about 94% compression reduction on our data.)
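
As a quick back-of-envelope check on those numbers (the per-point figures below are just derived from the quoted daily totals):

```python
# Derived from the figures quoted above.
points_per_day = 2.5e6
raw_mb_per_day = 300.0
compressed_mb_per_day = 16.0  # midpoint of the 15-17 MB/day range

print(raw_mb_per_day * 1e6 / points_per_day)         # ~120 bytes/point raw
print(compressed_mb_per_day * 1e6 / points_per_day)  # ~6.4 bytes/point compressed
print(100 * (1 - compressed_mb_per_day / raw_mb_per_day))  # ~94.7% reduction
```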

4 Likes

Redundancy, backup/restore, and moving partitions between storage locations are all planned for early 8.3.x versions.

2 Likes

Yeah, been looking at all kinds of options.
TimescaleDB runs on PostgreSQL, right?
We would need to get the DBAs up to speed on that before even thinking about it.

Just for reference
88,259 tags, each storing at an average of every 5 seconds, is 1,525,115,520 records per day.
We have an entire cluster just for historian storage.

In the internal QuestDB historian, 2.5M points would consume approximately 176 MB, uncompressed. Utilizing ZFS compression would bring that down to approximately 31 MB.

**This is all an approximation using the numbers we are seeing from our test datasets.**
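
A quick sanity check on what those approximations imply per data point:

```python
# Implied per-point storage from the approximate figures above.
points = 2.5e6
uncompressed_mb = 176.0
zfs_compressed_mb = 31.0

print(uncompressed_mb * 1e6 / points)    # ~70 bytes/point uncompressed
print(zfs_compressed_mb * 1e6 / points)  # ~12 bytes/point after ZFS compression
print(100 * (1 - zfs_compressed_mb / uncompressed_mb))  # ~82% reduction
```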

2 Likes

I wish more of my clients would switch to Linux, but unfortunately many want to stick with Windows. I'm assuming this is using the default LZ4 compression, which seems to be the most common I've seen.

Yes, the tests were performed using LZ4 compression.

1 Like