Ignition Store and Forward Bottleneck

Hi All,

We are currently load testing our databases using the Tag Historian. Our default database is MSSQL; more recently we’ve been testing TimescaleDB.

We increase the number of tags historized per second and monitor Ignition, the database, and the servers we run everything on. With MSSQL we get to around 80k tags/second before seeing the Cache start to fill up. With TimescaleDB, despite all our monitoring indicating that Timescale is under less stress (database RAM, throughput between the database and the NAS, CPU, etc.), we see the Ignition Cache fill up at around 40k tags/second.

With MSSQL we are partitioning the historized data over multiple database connections, so, for example, for the 80k tag test there are 5 database connections and each Store and Forward engine handles 16k tags/second. With MSSQL we also observe the Cache Take, Storage, and Forward throughputs to be equal most of the time. With TimescaleDB, Cache Take and Forward are regularly 0, whilst the Cache Storage throughput matches the Memory Buffer throughput. There is just one Store and Forward engine when using TimescaleDB.

My question is whether the Store and Forward engine is a bottleneck in situations like this, above roughly 20k tags/second. Do we always need to create multiple database connections if we want to historize e.g. 200k tags/second? Does anyone have similar experiences? Our expectation was that we would be able to historize more data per second with TimescaleDB than with MSSQL.

Have you played with the pool size in your DB connections? (Ignition recycles connections in a pool instead of making a new connection for each query.)

Also, TimescaleDB is built on top of PostgreSQL, and many optimizations in postgresql.conf would still apply. (Particularly memory allowances.)
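
For illustration, a minimal sketch of checking and adjusting the usual memory-related settings from psql. The numbers are generic rules of thumb (roughly 25% of RAM for shared_buffers, ~75% for effective_cache_size), not recommendations for your specific server:

    -- Check the current values
    SHOW shared_buffers;
    SHOW effective_cache_size;
    SHOW work_mem;

    -- Illustrative values only; size these to your own RAM
    -- (ALTER SYSTEM writes to postgresql.auto.conf, which overrides postgresql.conf)
    ALTER SYSTEM SET shared_buffers = '8GB';         -- ~25% of RAM; needs a server restart
    ALTER SYSTEM SET effective_cache_size = '24GB';  -- ~75% of RAM
    ALTER SYSTEM SET work_mem = '64MB';
    SELECT pg_reload_conf();                         -- applies the reloadable settings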

  1. Do you have the following connection property set?
    reWriteBatchedInserts=true;
  2. How are you creating your hypertables?
  3. Have you run the following command (or the Windows equivalent) to help tune your database?
    timescaledb-tune
  4. What are your database server specs?

My impression (after investigating an issue long ago) was that each S&F engine always utilizes a single connection to the database.


Thanks, I’ve also had Chris’s experience where the connections rarely, if ever, go above 1, regardless of the pool size. The postgresql.conf file is standard, and all the memory settings seem reasonable.

  1. Is this a JDBC driver extra property? I am not using any extra connection properties, so this might be the issue.
  2. From the database shell, once the timescale extension is added, I create the hypertable, then enable compression on the table and add a compression policy to compress chunks older than 7 days (roughly the SQL sketched after this list). Using this guide as a reference.
  3. Yes, I ran timescaledb-tune.
  4. The database sits on a VM running Windows Server 2025 with 32GB RAM, 12 vCPUs, and a 300GB disk, hosted on an ESXi server.
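
Roughly, the sequence looks like this (the table and column names below are placeholders rather than the actual historian tables, and the exact arguments follow the guide):

    -- Placeholder table/column names; assumes t_stamp is a timestamptz column
    SELECT create_hypertable('tag_history', 't_stamp');

    -- Enable native compression on the table
    ALTER TABLE tag_history SET (timescaledb.compress);

    -- Compress chunks older than 7 days (TimescaleDB 2.x policy API)
    SELECT add_compression_policy('tag_history', INTERVAL '7 days');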

To confirm: ‘reWriteBatchedInserts’ refers to the JDBC driver property, and under Database Connection → Extra Connection Properties in Ignition I should add:

reWriteBatchedInserts=true;

Thanks!