I’m working on an IIoT project using Ignition Edge plus a central Ignition MQTT broker. We are storing data in a PostgreSQL database using an extension called Timescale. This extension automatically breaks your tables into partitions and maintains performance up to about 1 billion rows (plain PostgreSQL starts to drop performance after roughly 50 million rows in a table). This is nice because you no longer have to use the Ignition Historian partitioning function, and all of your time-series data appears as if it’s stored in one table, which is really convenient for querying from external sources.
Check it out: https://www.timescale.com/
Also let me know if you would be interested in learning more about our application in a video.
I’ve heard very good things about Timescale, though I haven’t needed it (yet). FWIW, I have worked with hundreds of millions of rows in plain PostgreSQL without trouble. BRIN indices on timestamp columns are especially helpful.
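For reference, a BRIN index on the historian’s timestamp column is a one-liner. The table and column names below assume Ignition’s default sqlth_1_data layout; adjust to your schema:

```sql
-- BRIN indexes are tiny and work well when t_stamp correlates with
-- physical row order, as it does for an append-only history table.
CREATE INDEX sqlth_1_data_tstamp_brin
    ON sqlth_1_data USING BRIN (t_stamp);
```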
Thank you Phil! I appreciate that! That is reassuring.
Zack - I stumbled across your post. We are an Ignition shop and are also evaluating Timescale. I was wondering how your project is progressing. Are you still bullish on Timescale?
Hi Zack! Also curious how you like Timescale. Please post again about your experiences.
I’ve been playing with it lately. Installing TimescaleDB was simple enough because the instructions on the TimescaleDB website are good. The installer will want to modify the Postgres conf file, so make sure you back it up first; TimescaleDB modifies the conf file in order to optimize the database for insertions. For the most part the database will continue to behave normally, but you will notice a bunch of extra tables in the query browser. These tables are used by TimescaleDB to manage the data. The ideal scenario is for an Ignition transaction group to log to a Timescale hypertable. A hypertable is an ordinary time-series table that Timescale manages. It’s a little tricky to set the transaction group up to use a hypertable, but once you do, the transaction group logs away as fast as you configure it to. In a perfect world there would be a button on a historical transaction group that would create the hypertable for you.
I thought about this some more. If transaction groups will create the table automatically, then a great feature would be for the group to let the user preview and edit the table-create statement before starting the group. Alternatively, the user can create the table first and then point the group at it to achieve the same effect.
IIRC, the auto-create table functions of transaction groups take effect when saving edits on a group, not when the group runs.
Can someone tell me if this will work with the native Ignition Historian module?
Yes, it works. Just make sure you turn off Ignition’s features for managing the history tables: data partitioning, pruning, and pre-processing. You are left with a single history table. At that point, you need knowledge of Timescale functionality and how to turn that history table into a hypertable. It gets slightly complicated if you are not comfortable with Postgres or SQL.
Things you will need to set up on the table (some optional):
- Create the hypertable and define how big each “chunk” will be in terms of time. “Chunk” is Timescale’s terminology for a data partition.
- Create a function that translates the timestamp column to Timescale’s standard format.
- Define a “now” function that returns the current time in the same format.
- Set up the table for compression.
- Add a policy for compressing chunks.
- Add a policy for dropping chunks. This is Timescale’s version of data pruning.
Their website has some good documentation for performing these operations.
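As a rough sketch, the checklist above maps to SQL along these lines. All names and intervals here are illustrative: Ignition’s sqlth_1_data table stores t_stamp as a bigint of epoch milliseconds, so chunk sizes and policy intervals are given as millisecond integers rather than PostgreSQL intervals. Check the Timescale docs for the exact signatures in your version:

```sql
-- 1. Convert the history table to a hypertable with 1-day chunks
--    (86400000 ms). t_stamp is an integer column, so the chunk
--    interval is an integer, not an SQL interval.
SELECT create_hypertable('sqlth_1_data', 't_stamp',
                         chunk_time_interval => 86400000,
                         migrate_data => true);

-- 2./3. Integer time columns need a "now" function returning the
--       current time in the same units (epoch ms) so compression
--       and retention policies know what "recent" means.
CREATE OR REPLACE FUNCTION unix_now_ms() RETURNS bigint
LANGUAGE sql STABLE AS
$$ SELECT (extract(epoch FROM now()) * 1000)::bigint $$;

SELECT set_integer_now_func('sqlth_1_data', 'unix_now_ms');

-- 4. Enable compression, segmenting by tag id so each tag's
--    samples compress together.
ALTER TABLE sqlth_1_data SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'tagid'
);

-- 5. Compress chunks older than 7 days (604800000 ms).
SELECT add_compression_policy('sqlth_1_data', BIGINT '604800000');

-- 6. Drop chunks older than ~1 year — Timescale's data pruning.
SELECT add_retention_policy('sqlth_1_data', BIGINT '31536000000');
```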
Can TSDB be used without a Transaction Group license? I can do this with MS SQL using Python scripts, and I’m wondering if I can do the same with TSDB and Python scripts. Do I just configure it as a PostgreSQL DB?
Most of this is defaulted, but I wanted to share my progress setting up the Ignition tag historian and converting its sqlth_1_data table to a TimescaleDB hypertable. This thread was the kickstart I needed, so thank y’all!
Connect Ignition to your TimescaleDB database using the PostgreSQL driver.
Modify the Datasource History Provider that your Timescale connection auto-generated: disable all partitioning jobs, pruning, etc.
Make sure tags are configured to historize with your Timescale History Provider.
In pgAdmin, find the autogenerated ‘sqlth_1_data’ table, right-click > Query Tool.
SELECT create_hypertable('sqlth_1_data', 't_stamp', migrate_data => true, chunk_time_interval => 86400000);
The Data Output pane should display a result confirming creation of the hypertable.
An easy way to confirm the table has been converted is to run the SELECT statement again: if the conversion succeeded the first time, the second attempt will error and say the table is already a hypertable.
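Another way to verify the conversion (assuming TimescaleDB 2.x, where the informational views live under the timescaledb_information schema) is to query the catalog directly:

```sql
-- Lists the converted table along with how many chunks it holds.
SELECT hypertable_name, num_chunks
FROM timescaledb_information.hypertables
WHERE hypertable_name = 'sqlth_1_data';
```

If the row comes back, the table is a hypertable; an empty result means the conversion didn’t take.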
After these steps, my tags continued to populate the database, and the native Perspective Tag Browser and the history setup on the various charts worked as expected.
I’ve attached some photos to share more detail. Now to dig further into the list of items @alexraymond shared for compression and more!
Thanks a lot for your guidance.
How does performance compare between Ignition with TimescaleDB and a plain SQL database?
And for transaction groups as well, do we have to do the same steps to create the hypertable?
We have had issues where errant data got recorded to the partition data tables and we had to modify it. With this setup is it easy to modify specific rows of data?
What about the t_stamp format, as mentioned by @alexraymond in a previous reply in this thread:
“Create a function that translates the timestamp column for timescale standard”?
Any performance increase with TimescaleDB? Or are people just using TSDB to get the external illusion that all rows are in one table? (which is nice)
I thought that TSDB had an affinity for tags in separate columns? Meaning that we lose the performance increase that TSDB could give us. In other words: are we losing some functionality here? I mean, TSDB will look at the 3 value columns in the data table and assume these are 3 tags.
Further, if what I wrote above is correct, maybe the plain Ignition Historian (no TSDB) is faster?
I love both TSDB and the Ignition Historian, and would very much like to combine them if there are any advantages.
The instructions from @wbaker above are for converting the Tag Historian to use TSDB, keeping one tag value per row. I haven’t used it, but the implication is that it is faster.
I became interested in Timescale to offload some of the load/responsibility from Ignition and let the database do more work with the partitioning, etc. Timescale’s ability to compress data is also a benefit for systems without the storage capacity or the budget to increase it. After following @alexraymond’s guide, I haven’t experienced any negative impacts after 1.5 weeks of use, but I really still have a ways to go on my side to ever know if it’s better.
Currently the DB structure stays the same: it’s the tall sqlth_1_data table Ignition auto-creates, along with the various other tables Ignition auto-creates. So far this has allowed Ignition’s tag history bindings to function normally even with sqlth_1_data converted to a TimescaleDB hypertable.
I’ve watched a few videos on Timescale’s YouTube channel, and the way they claim to be faster is roughly the following. Say the sqlth_1_data table is 3 columns wide but a million rows long. A TimescaleDB hypertable will take a batch of 2,000 rows and compress/combine them into a single row in the same 3-column format, with the data now stored as arrays. So the t_stamp cell within that single row holds an array of the t_stamp values for the 2,000 rows that were batched. A policy configured on the hypertable does this for all rows, 2,000 rows at a time.
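If you do enable compression, Timescale ships a helper for checking how much space it actually saves. This assumes TimescaleDB 2.x and Ignition’s default table name:

```sql
-- Compare on-disk size of the hypertable before and after compression.
SELECT pg_size_pretty(before_compression_total_bytes) AS before,
       pg_size_pretty(after_compression_total_bytes)  AS after
FROM hypertable_compression_stats('sqlth_1_data');
```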
I still have a ton to learn to really ever provide advice on this, but it’s been fun setting it up and trying something other than MSSQL.
I can’t wait to try this…
While implementing a cloud-based Ignition server, I’ve been thinking this might be a great solution for exposing historian data to other cloud services.
I know there are multiple options for external services. Obviously the normal TSDB API, but also PostgREST.
What a time to be alive