Hi All,
We are currently load testing our databases using the Tag Historian. Our default database is MSSQL; more recently we've been testing TimescaleDB.
We increase the number of tags historized per second and monitor Ignition, the database, and the servers we run everything on. With MSSQL we get to around 80k tags/second before seeing the cache start to fill up. With TimescaleDB, despite all our monitoring indicating that Timescale is under less stress (database RAM, throughput between the database and the NAS, CPU, etc.), we see the Ignition cache fill up at around 40k tags/second.
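To rule out the database itself, we can run a raw-insert benchmark directly against TimescaleDB, bypassing Ignition entirely. Below is a minimal sketch of that kind of test in Python with psycopg2; the DSN, table name, and schema are simplified assumptions for illustration, not Ignition's actual historian partition tables. If a test like this sustains well above 40k rows/second, that would point at the Store and Forward path rather than the database:

```python
# Minimal raw-insert benchmark against TimescaleDB, bypassing Ignition.
# The DSN, table name, and schema below are illustrative assumptions.
import time

import psycopg2
from psycopg2.extras import execute_values

DSN = "dbname=historian user=ignition host=timescale-host"  # placeholder DSN
ROWS_PER_BATCH = 10_000
BATCHES = 100

conn = psycopg2.connect(DSN)
cur = conn.cursor()

# Simplified stand-in for a historian data table (hypothetical name/schema).
cur.execute("""
    CREATE TABLE IF NOT EXISTS bench_data (
        tagid      integer,
        floatvalue double precision,
        t_stamp    bigint
    )
""")
# Assumes the timescaledb extension is installed; chunk interval is one day in ms.
cur.execute(
    "SELECT create_hypertable('bench_data', 't_stamp',"
    " chunk_time_interval => 86400000, if_not_exists => TRUE)"
)
conn.commit()

start = time.monotonic()
for _ in range(BATCHES):
    now_ms = int(time.time() * 1000)
    rows = [(i % 16_000, float(i), now_ms) for i in range(ROWS_PER_BATCH)]
    execute_values(
        cur,
        "INSERT INTO bench_data (tagid, floatvalue, t_stamp) VALUES %s",
        rows,
        page_size=ROWS_PER_BATCH,  # send each batch as a single statement
    )
    conn.commit()
elapsed = time.monotonic() - start
print(f"Sustained {BATCHES * ROWS_PER_BATCH / elapsed:,.0f} rows/second")
```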
With MSSQL we partition the historized data across multiple database connections: for the 80k tags/second test, for example, there are 5 database connections and each Store and Forward engine handles 16k tags/second. With MSSQL we also observe the Cache Take, Storage, and Forward throughputs to be equal most of the time. With TimescaleDB, Cache Take and Forward are regularly 0, whilst the Cache Storage throughput matches the Memory Buffer throughput. There is just one Store and Forward engine when using TimescaleDB.
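To see whether fanning out over connections matters as much for TimescaleDB as it does for MSSQL, the same benchmark can be extended to write from 1 vs. 5 connections in parallel. Again this is only a sketch under the same assumptions, reusing the hypothetical bench_data table from above:

```python
# Sketch: compare sustained insert rate with 1 vs 5 parallel connections,
# mimicking the multi-connection fan-out we use with MSSQL.
# Reuses the hypothetical bench_data table and DSN from the sketch above.
import threading
import time

import psycopg2
from psycopg2.extras import execute_values

DSN = "dbname=historian user=ignition host=timescale-host"  # placeholder DSN
ROWS_PER_BATCH = 10_000
BATCHES_PER_WORKER = 50


def writer(worker_id: int) -> None:
    # Each worker gets its own connection and its own 16k block of tag ids.
    conn = psycopg2.connect(DSN)
    cur = conn.cursor()
    for _ in range(BATCHES_PER_WORKER):
        now_ms = int(time.time() * 1000)
        rows = [
            (worker_id * 16_000 + (i % 16_000), float(i), now_ms)
            for i in range(ROWS_PER_BATCH)
        ]
        execute_values(
            cur,
            "INSERT INTO bench_data (tagid, floatvalue, t_stamp) VALUES %s",
            rows,
            page_size=ROWS_PER_BATCH,
        )
        conn.commit()
    conn.close()


for n_workers in (1, 5):
    threads = [threading.Thread(target=writer, args=(w,)) for w in range(n_workers)]
    start = time.monotonic()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.monotonic() - start
    total_rows = n_workers * BATCHES_PER_WORKER * ROWS_PER_BATCH
    print(f"{n_workers} connection(s): {total_rows / elapsed:,.0f} rows/second")
```

If 5 raw connections scale roughly linearly while a single Store and Forward engine tops out around 40k tags/second, that would suggest the bottleneck is on the Ignition side rather than in TimescaleDB.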
My question is whether the Store and Forward engine becomes a bottleneck in situations like this, above 20k tags/second. Do we always need to create multiple database connections if we want to historize, e.g., 200k tags/second? Does anyone have similar experiences? Our expectation was that we would be able to historize more data per second with TimescaleDB.