[IGN-7967] Does reWriteBatchedInserts=true work on MSSQL?

We have an Ignition 8.2 project in the POC stage logging to an MSSQL 2019 datasource.

It has roughly 200 tags set up for logging at 10-second intervals (no deadbanding), so we’re dumping 20 records/sec to the database.

I added reWriteBatchedInserts=true to the Extra Connection Properties box and then disabled and re-enabled the driver, but I’m still seeing the records go into the database one at a time in SQL Server Profiler. Any suggestions?

I can't find any evidence or documentation that the MSSQL JDBC driver supports this parameter.

Thanks so much for your quick reply. Can you tell me a little more about this? I know reWriteBatchedInserts is more of a PostgreSQL thing, but... well... batched inserts have been in dusty HMIs like Cimplicity PE for maybe 20 years now. Is it a super heavy lift to add this feature? It would have a significant positive performance impact on heavily loaded SQL hosts, since you’re not spinning up and closing out a transaction for every single record.

Or maybe bulk inserts on MSSQL JDBC are enabled some other way? It looks like there is rudimentary support in the driver for it.

Ignition wraps JDBC drivers in a vanilla JDBC connection pooling wrapper. The actual vendor-specific objects are not reachable without extensive reflection and behavioral work-arounds. Good luck.

As it happens, you can pass arrays into the vanilla wrapper to be handed to PostgreSQL's UNNEST() function. Doesn't seem to work with other brands.
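For anyone unfamiliar with the UNNEST() pattern, the idea is one statement that takes one array parameter per column, which PostgreSQL expands row-wise. A minimal sketch in Python — the helper function is hypothetical, the table and column names are taken from this thread, and the commented-out psycopg2 usage is an assumption (psycopg2 adapts Python lists to PostgreSQL arrays), not Ignition's actual internals:

```python
def unnest_insert_sql(table, columns):
    """Build a multi-row INSERT that takes one array parameter per column.

    PostgreSQL expands unnest(a, b, c) row-wise, so three equal-length
    arrays become N inserted rows from a single statement.
    """
    arrays = ", ".join(["%s"] * len(columns))
    cols = ", ".join(columns)
    return f"INSERT INTO {table} ({cols}) SELECT * FROM unnest({arrays})"

sql = unnest_insert_sql("sqlth_1_data", ["tagid", "intvalue", "t_stamp"])
print(sql)
# With psycopg2, Python lists are sent as PostgreSQL arrays, so the whole
# batch travels as three parameters (hypothetical connection `conn`):
# with conn.cursor() as cur:
#     cur.execute(sql, ([1, 2, 3], [10, 20, 30], [t1, t2, t3]))
```

The win is the same as the thread describes: one round trip and one statement regardless of how many rows are in the batch.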

Try useBulkCopyForBatchInsert=true

Not sure if this works unless the caller is explicitly creating batches, though.
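By "explicitly creating batches" I mean the JDBC addBatch()/executeBatch() pattern: many parameter sets queued against one prepared statement and sent together. A minimal illustration of the same idea using Python's DB-API executemany against an in-memory SQLite database (the table shape and values are assumptions for the example, not the historian's real schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE sqlth_1_data (tagid INTEGER, intvalue INTEGER, t_stamp INTEGER)"
)

# 200 tags sampled in one scan -> one statement, 200 parameter sets.
rows = [(tagid, tagid * 10, 1_700_000_000_000) for tagid in range(200)]

# executemany plays the role of JDBC addBatch()/executeBatch(): the caller
# hands the driver the whole batch instead of looping execute() per row.
conn.executemany(
    "INSERT INTO sqlth_1_data (tagid, intvalue, t_stamp) VALUES (?, ?, ?)",
    rows,
)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM sqlth_1_data").fetchone()[0]
print(count)  # 200
```

Whether the driver then turns that batch into a true multi-row wire operation (as useBulkCopyForBatchInsert is supposed to) is up to the driver, which is exactly the question here.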

I added useBulkCopyForBatchInsert=true in the Extra Connection Properties box. It didn’t make any difference in the Profiler traces, as far as I can tell. How would one explicitly create batch inserts with a collection of 200 tags being historized every 10 seconds?

If you're using tag history you wouldn't; this is something the history system would do.

I could have sworn it already did that... and I'm having some weird déjà vu from an old thread about this where the profiler just doesn't make it clear that a batch is happening, but I could be completely misremembering.


Apparently this is a known issue with MSSQL and the historian specifically. We do attempt to use batched queries on the backend, but (according to the ticket) the MSSQL JDBC driver rejects them because we use datetime instead of datetime2 as the column type.

Changing your translator settings might fix this particular issue (on future historian data tables we automatically create), though it might also cause any number of other issues.


Ignition’s t_stamp column in sqlth_1_data is a bigint, not a datetime or datetime2. Where are the datetime & datetime2 columns being defined or referenced?


Not the t_stamp, the datevalue column.

Apparently even though it's null in the inserts, that's enough to break the optimization in the driver.

Again, according to the ticket :person_shrugging:

It has been a really long time since MS introduced the datetime2 column type, which doesn't lose the true point in time needed to properly restore the correct java.util.Date in all connection time zones.
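To make the "true point in time" property concrete: an instant stored as a UTC offset (which is what datetime2 with a known zone, or timestamptz, effectively preserves) round-trips exactly no matter what time zone the connection renders it in. A small illustration, with the epoch-millisecond value chosen arbitrarily:

```python
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)
millis = 1_700_000_000_123  # arbitrary sample t_stamp (UTC epoch milliseconds)

instant = EPOCH + timedelta(milliseconds=millis)

# Rendering the same instant in another connection time zone changes the
# display, not the instant itself; aware datetimes compare by instant:
rendered = instant.astimezone(timezone(timedelta(hours=-5)))
assert rendered == instant

# The original millisecond value is recovered exactly:
recovered = round((rendered - EPOCH) / timedelta(milliseconds=1))
print(recovered)  # 1700000000123
```

A zone-naive datetime column can't guarantee this round trip once clients in different time zones are involved, which is the argument for changing the translator defaults.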

This default in the translator should be changed. (And for PostgreSQL, changed from timestamp to timestamptz.)

Ideally, the MariaDB translator would be changed to timestamp(3) too, but docs would need to strongly suggest the use of the explicit_defaults_for_timestamp setting, to avoid the unfortunate automatic default behavior. (That the setting exists is an admission that the original behavior was ridiculous.)
