We have an Ignition 8.2 project in the POC stage, logging to an MSSQL 2019 datasource.
It has roughly 200 tags set up for logging at 10-second intervals (no deadbanding), so we're dumping about 20 records/sec into the database.
I added reWriteBatchedInserts=true to the Extra Connection Properties box and then disabled and re-enabled the driver, but I'm still seeing the records go into the database one at a time in SQL Profiler. Any suggestions?
Thanks so much for your quick reply. Can you tell me a little more about this? I know reWriteBatchedInserts is more of a PostgreSQL thing, but... well, batched inserts have been in dusty HMIs like Cimplicity PE for maybe 20 years now. Is it a super heavy lift to add this feature? It would have a significant positive performance impact on heavily loaded SQL hosts, since you wouldn't have to spin up and close out a transaction for every single record.
Ignition wraps JDBC drivers in a vanilla JDBC connection pooling wrapper. The actual vendor-specific objects are not reachable without extensive reflection and behavioral work-arounds. Good luck.
As it happens, you can pass arrays into the vanilla wrapper to be handed to PostgreSQL's UNNEST() function. Doesn't seem to work with other brands.
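For anyone curious what that looks like in practice: the idea is to send one statement whose array parameters the server expands into rows with UNNEST(), instead of N single-row inserts. Here's a minimal sketch of the SQL-building side in plain Python (the function name is hypothetical; the actual array binding would go through JDBC's setArray, and the columns shown are just the historian's tagid/intvalue/t_stamp for illustration):

```python
def build_unnest_insert(table, columns, rows):
    """Build a single-statement PostgreSQL insert that expands
    one array parameter per column into rows via UNNEST().

    rows: list of tuples, one per record. Returns the SQL plus
    column-major arrays suitable for array binding.
    """
    placeholders = ", ".join("?" for _ in columns)
    sql = (
        f"INSERT INTO {table} ({', '.join(columns)}) "
        f"SELECT * FROM UNNEST({placeholders})"
    )
    # Transpose row-major records into one array per column.
    params = [list(col) for col in zip(*rows)]
    return sql, params

sql, params = build_unnest_insert(
    "sqlth_1_data",
    ["tagid", "intvalue", "t_stamp"],
    [(1, 10, 1700000000000), (2, 20, 1700000000000)],
)
```

The payoff is that however many rows you accumulate, it's still one statement execution and one round trip.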
I added the useBulkCopyForBatchInsert=true in the Extra Connection Properties box. It didn’t make any difference in the Profiler traces that I can tell. How would one explicitly create batch inserts with a collection of 200 tags being historized every 10 seconds?
If you're using tag history you wouldn't, this is something the history system would do.
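For context on what "batched" means at the wire level: instead of one round trip (and one transaction) per row, the client queues rows and flushes them in one go. A generic illustration, with Python's stdlib sqlite3 standing in for the real datasource (the historian's actual backend path goes through JDBC addBatch/executeBatch, not this API):

```python
import sqlite3

# In-memory database standing in for the historian datasource.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE sqlth_1_data (tagid INTEGER, intvalue INTEGER, t_stamp INTEGER)"
)

# 200 tags sampled once: queue all the rows, then flush them in a
# single executemany call instead of 200 individual INSERT round trips.
rows = [(tagid, tagid * 10, 1700000000000) for tagid in range(200)]
with conn:  # one transaction wraps the whole batch
    conn.executemany(
        "INSERT INTO sqlth_1_data (tagid, intvalue, t_stamp) VALUES (?, ?, ?)",
        rows,
    )

count = conn.execute("SELECT COUNT(*) FROM sqlth_1_data").fetchone()[0]
```

With tag history you never write this yourself; it's what the storage engine is supposed to be doing under the hood each flush interval.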
I could have sworn it already did that... and I'm having some weird deja vu from an old thread about this where the profiler just doesn't make it clear that a batch is happening, but I could be completely misremembering.
Apparently this is a known issue with MSSQL and the historian specifically. We do attempt to use batched queries on the backend, but (according to the ticket) the MSSQL JDBC driver rejects them because we use datetime instead of datetime2 as the column type.
Changing your translator settings might fix this particular issue (for future historian data tables we automatically create), though it might also cause any number of other issues.
Ignition's t_stamp column in sqlth_1_data is a bigint, not a datetime or datetime2. Where are the datetime and datetime2 columns being defined or referenced?
It has been a really long time since MS introduced the datetime2 column type, which doesn't lose the true point-in-time precision necessary to properly restore the correct java.util.Date in all connection timezones.
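This is also why the historian's bigint timestamp column sidesteps the whole problem: epoch milliseconds identify the same instant no matter what timezone the connection happens to use, unlike a zoneless datetime column. A quick illustration of the round trip (plain Python, not an Ignition API):

```python
from datetime import datetime, timezone

t_stamp = 1700000000000  # epoch milliseconds, as stored in sqlth_1_data

# The instant is unambiguous: interpret it in UTC (or any zone) and
# converting back yields the same epoch value, with no session-timezone
# dependence.
utc = datetime.fromtimestamp(t_stamp / 1000, tz=timezone.utc)
round_trip = int(utc.timestamp() * 1000)
```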
This default in the translator should be changed. (And for PostgreSQL, changed from timestamp to timestamptz.)
Ideally, the MariaDB translator would be changed to timestamp(3) too, but the docs would need to strongly recommend the explicit_defaults_for_timestamp setting, to avoid the unfortunate automatic default behavior. (That the setting exists is an admission that the original behavior was ridiculous.)