We have an Ignition 8.2 project in the POC stage logging to a MSSQL 2019 datasource.
It has roughly 200 tags set up for logging at 10-second intervals (no deadbanding), so we’re dumping 20 records/sec to the database.
I added reWriteBatchedInserts=true to the Extra Connection Properties box and then disabled and re-enabled the driver, but I’m still seeing the records go into the database one at a time in SQL Server Profiler. Any suggestions?
Thanks so much for your quick reply. Can you tell me a little more about this? I know reWriteBatchedInserts is more of a PostgreSQL thing, but, well, batched inserts have been in dusty HMIs like Cimplicity PE for maybe 20 years now. Is it a super heavy lift to add this feature? It would have a significant positive performance impact on heavily loaded SQL hosts, since you’re not spinning up and closing out a transaction for every single record.
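To illustrate what I mean, here’s a rough sketch in Ignition scripting (the history_demo table, MSSQL_DS datasource, and samples list are all made up for the example):

```python
# samples is a hypothetical list of (tagid, value) pairs for one scan
samples = [(1, 10.5), (2, 11.2), (3, 9.8)]

# What Profiler shows me now: one round trip per record.
for tagid, value in samples:
    system.db.runPrepUpdate(
        "INSERT INTO history_demo (tagid, floatvalue) VALUES (?, ?)",
        [tagid, value], "MSSQL_DS")

# What I'd hope for: one multi-row statement for the whole collection.
placeholders = ",".join(["(?, ?)"] * len(samples))
args = [v for row in samples for v in row]
system.db.runPrepUpdate(
    "INSERT INTO history_demo (tagid, floatvalue) VALUES " + placeholders,
    args, "MSSQL_DS")
```

With 200 tags that’s only 400 parameters per statement, comfortably under SQL Server’s 2100-parameter limit.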
Ignition wraps JDBC drivers in a vanilla JDBC connection pooling wrapper. The actual vendor-specific objects are not reachable without extensive reflection and behavioral work-arounds. Good luck.
As it happens, you can pass arrays into the vanilla wrapper to be handed to PostgreSQL's UNNEST() function. Doesn't seem to work with other brands.
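Roughly like this, from a gateway script (a minimal sketch; the table, column, and datasource names are placeholders, and it assumes a reasonably recent pgJDBC that maps Java primitive arrays to SQL arrays):

```python
from jarray import array

tagids = array([1, 2, 3], 'l')          # Java long[] -> bigint[]
values = array([10.5, 11.2, 9.8], 'd')  # Java double[] -> float8[]

# One prepared statement, one round trip; UNNEST() expands the
# arrays into rows on the PostgreSQL side.
system.db.runPrepUpdate(
    """INSERT INTO history_demo (tagid, floatvalue)
       SELECT * FROM UNNEST(?, ?)""",
    [tagids, values], "Postgres_DS")
```

This rides on the wrapper handing the arrays to the driver as-is, and on pgJDBC’s primitive-array support, which is why it doesn’t carry over to other brands.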
I added useBulkCopyForBatchInsert=true to the Extra Connection Properties box. It didn’t make any difference in the Profiler traces that I can tell. How would one explicitly create batch inserts with a collection of 200 tags being historized every 10 seconds?
If you're using tag history, you wouldn't; this is something the history system would do.
I could have sworn it already did that... and I'm having some weird déjà vu from an old thread about this, where the profiler just doesn't make it clear that a batch is happening. But I could be completely misremembering.
Apparently this is a known issue with MSSQL and the historian specifically. We do attempt to use batched queries on the backend, but (according to the ticket) the MSSQL JDBC driver rejects them because we use datetime instead of datetime2 as the column type.
Changing your translator settings might fix this particular issue (on future historian data tables we automatically create), though it might also cause any number of other issues.
Ignition’s tstamp column in sqlth_1_data is a bigint, not a datetime or datetime2. Where are the datetime & datetime2 columns being defined or referenced?
It has been a really long time since MS introduced the datetime2 and datetimeoffset column types, the latter of which doesn't lose the true point-in-time necessary to properly restore the correct java.util.Date in all connection time zones.
This default in the translator should be changed. (And for PostgreSQL, changed from timestamp to timestamptz.)
Ideally, the MariaDB translator would be changed to timestamp(3) too, but docs would need to strongly suggest the use of the explicit_defaults_for_timestamp setting, to avoid the unfortunate automatic default behavior. (That the setting exists is an admission that the original behavior was ridiculous.)
Thanks, Phil, for the pointers. I tried changing the datetime column type in the translator to datetime2, disabled the driver, then re-enabled it. No apparent change (i.e., no batched inserts) shown in the profiler.
The translator value would only take effect when creating new tables. For existing tables, you will need to alter the columns yourself. Depending on the connection time zone, you may need to (sketched below):

1. add a new column of datetime2,
2. populate it with values from the original column, applying the right time zone,
3. alter the original column to datetime2,
4. copy from the new column back to the original,
5. then drop the temporary column.

{ Be sure to disable the historian connection during steps 3 & 4. }
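A sketch of those steps with hypothetical names (history_demo, t_stamp, MSSQL_DS) and a hard-coded offset standing in for your actual time zone correction; test on a copy first:

```python
db = "MSSQL_DS"      # placeholder datasource name
t = "history_demo"   # placeholder table with a datetime column t_stamp

# 1. add a new column of datetime2
system.db.runUpdateQuery("ALTER TABLE %s ADD t_tmp datetime2(3)" % t, db)

# 2. populate it, applying the right time zone (offset is an example)
system.db.runUpdateQuery(
    "UPDATE %s SET t_tmp = DATEADD(HOUR, -5, t_stamp)" % t, db)

# -- disable the historian connection here --

# 3. alter the original column to datetime2
system.db.runUpdateQuery(
    "ALTER TABLE %s ALTER COLUMN t_stamp datetime2(3)" % t, db)

# 4. copy the corrected values back to the original column
system.db.runUpdateQuery("UPDATE %s SET t_stamp = t_tmp" % t, db)

# -- re-enable the historian connection here --

# 5. drop the temporary column
system.db.runUpdateQuery("ALTER TABLE %s DROP COLUMN t_tmp" % t, db)
```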
datetime2 column types in MSSQL aren’t timezone-aware; for that you use datetimeoffset.
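A quick script-console check of the difference (the datasource name is a placeholder):

```python
# datetime2 stores wall-clock time only; datetimeoffset also keeps
# the UTC offset, so the true point-in-time survives.
rs = system.db.runQuery("""
    SELECT CAST('2024-06-01 12:00:00' AS datetime2(3))             AS wall_clock,
           CAST('2024-06-01 12:00:00 +02:00' AS datetimeoffset(3)) AS with_offset
""", "MSSQL_DS")
for row in rs:
    print row["wall_clock"], row["with_offset"]
```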
Actually, sqlth_1_data in our POC database is a view with INSTEAD OF triggers, not a table. I can’t imagine this is complicating things. FWIW, I did change the column type in the view to datetime2.
I tried changing the datetime column type in the translator to datetimeoffset, changed the view’s return type to datetimeoffset as well, disabled the driver, then re-enabled it. No apparent change (i.e., no batched inserts) shown in the profiler.
That article does say something about logging an error before going to the fallback (which I assume is happening). Is this error log available somewhere? I’m not seeing anything in the Diagnostics → Logs dashboard.