[IGN-7967] Does reWriteBatchedInserts=true work on MSSQL?

We have an Ignition 8.2 project in the POC stage logging to an MSSQL 2019 datasource.

It has roughly 200 tags set up for logging at 10-second intervals (no deadbanding), so we’re dumping 20 records/sec to the database.

I added reWriteBatchedInserts=true to the Extra Connection Properties box and then disabled and re-enabled the driver, but I’m still seeing the records go into the database one at a time with SQL profiler. Any suggestions?

I can't find any evidence or documentation that the MSSQL JDBC driver supports this parameter.

Thanks so much for your quick reply. Can you tell me a little more about this? I know reWriteBatchedInserts is more of a PostgreSQL thing, but... well... batched inserts have been in dusty HMIs like Cimplicity PE for maybe 20 years now. Is it a super heavy lift to add this feature? It would have a significant positive performance impact on heavily loaded SQL hosts, since you’re not having to spin up and close out a transaction for every single record.

Or maybe bulk inserts on MSSQL JDBC are enabled some other way? It looks like there is rudimentary support in the driver for it.

Ignition wraps JDBC drivers in a vanilla JDBC connection pooling wrapper. The actual vendor-specific objects are not reachable without extensive reflection and behavioral work-arounds. Good luck.

As it happens, you can pass arrays into the vanilla wrapper to be handed to PostgreSQL's UNNEST() function. Doesn't seem to work with other brands.
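For illustration, here's a minimal sketch of that pattern against a PostgreSQL datasource. The table and column names are hypothetical examples, not Ignition internals; the point is that createArrayOf() and setArray() are plain JDBC, so they pass through a vanilla pooling wrapper untouched:

```java
import java.sql.Array;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class UnnestInsert {
    // One statement, one round trip, N rows: PostgreSQL expands the three
    // parallel arrays into rows via unnest().
    public static void insertRows(Connection conn, Long[] tagIds,
                                  Double[] values, Long[] stamps) throws SQLException {
        Array idArr = conn.createArrayOf("bigint", tagIds);
        Array valArr = conn.createArrayOf("float8", values);
        Array tsArr = conn.createArrayOf("bigint", stamps);
        String sql = "INSERT INTO sample_data (tagid, floatvalue, t_stamp) "
                   + "SELECT * FROM unnest(?, ?, ?)";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setArray(1, idArr);
            ps.setArray(2, valArr);
            ps.setArray(3, tsArr);
            ps.executeUpdate();
        }
    }
}
```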

Try useBulkCopyForBatchInsert=true

Not sure if this works unless the caller is explicitly creating batches, though.
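For context, this is what "explicitly creating batches" means at the JDBC level; a sketch with hypothetical table and column names. Per Microsoft's documentation, useBulkCopyForBatchInsert=true lets the driver convert an executeBatch() of a simple parameterized INSERT into a bulk copy operation, subject to a list of restrictions:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class ExplicitBatch {
    public static void insertRows(Connection conn, long[] tagIds,
                                  double[] values, long[] stamps) throws SQLException {
        String sql = "INSERT INTO sample_data (tagid, floatvalue, t_stamp) VALUES (?, ?, ?)";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            for (int i = 0; i < tagIds.length; i++) {
                ps.setLong(1, tagIds[i]);
                ps.setDouble(2, values[i]);
                ps.setLong(3, stamps[i]);
                ps.addBatch();   // queue rows client-side...
            }
            ps.executeBatch();   // ...and send them all at once
        }
    }
}
```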

I added useBulkCopyForBatchInsert=true to the Extra Connection Properties box. It didn’t make any difference in the Profiler traces that I can tell. How would one explicitly create batch inserts with a collection of 200 tags being historized every 10 seconds?

If you're using tag history you wouldn't; this is something the history system would do.

I could have sworn it already did that... and I'm having some weird deja vu from an old thread about this where the profiler just doesn't make it clear that a batch is happening, but I could be completely misremembering.

Apparently this is a known issue with MSSQL and the historian specifically. We do attempt to use batched queries on the backend, but (according to the ticket) the MSSQL JDBC driver rejects them because we use datetime instead of datetime2 as the column type.

Changing your translator settings might fix this particular issue (on future historian data tables we automatically create), though it might also cause any number of other issues.

Ignition’s t_stamp column in sqlth_1_data is a bigint, not a datetime or datetime2. Where are the datetime and datetime2 columns being defined or referenced?

Not the t_stamp, the datevalue column.

Apparently even though it's null in the inserts, that's enough to break the optimization in the driver.

Again, according to the ticket :person_shrugging:
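As a hypothetical illustration of the insert shape being described (none of these names are the historian's actual internals): the same batching pattern as the earlier sketch, but with datevalue always bound as NULL. Per the ticket, the datetime type of that target column is reportedly enough to disqualify the batch from the driver's bulk-copy path:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Types;

public class NullDatevalueBatch {
    public static void insertRows(Connection conn, long[] tagIds,
                                  double[] values, long[] stamps) throws SQLException {
        String sql = "INSERT INTO sqlt_data_1_2024_01 "
                   + "(tagid, floatvalue, datevalue, t_stamp) VALUES (?, ?, ?, ?)";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            for (int i = 0; i < tagIds.length; i++) {
                ps.setLong(1, tagIds[i]);
                ps.setDouble(2, values[i]);
                ps.setNull(3, Types.TIMESTAMP);  // always null, but still typed
                ps.setLong(4, stamps[i]);
                ps.addBatch();
            }
            ps.executeBatch();  // reportedly falls back to row-by-row inserts
        }
    }
}
```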

It has been a really long time since MS introduced the ~~datetime2~~ datetimeoffset column type, which doesn't lose the true point in time necessary to properly restore the correct java.util.Date in all connection timezones.

This default in the translator should be changed. (And for PostgreSQL, changed from timestamp to timestamptz.)

Ideally, the MariaDB translator would be changed to timestamp(3) too, but docs would need to strongly suggest the use of the explicit_defaults_for_timestamp setting, to avoid the unfortunate automatic default behavior. (That the setting exists is an admission that the original behavior was ridiculous.)
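A small sketch of the point-in-time argument, assuming a hypothetical table t(ts datetimeoffset). With the JDBC 4.2 java.time mappings (which mssql-jdbc supports), the same absolute instant comes back regardless of the connection's time zone; a plain datetime column can't make that guarantee:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.time.OffsetDateTime;
import java.time.temporal.ChronoUnit;

public class OffsetRoundTrip {
    public static void roundTrip(Connection conn) throws SQLException {
        // truncate to what the column can faithfully store
        OffsetDateTime written = OffsetDateTime.now().truncatedTo(ChronoUnit.MILLIS);
        try (PreparedStatement ps =
                 conn.prepareStatement("INSERT INTO t (ts) VALUES (?)")) {
            ps.setObject(1, written);  // maps to datetimeoffset
            ps.executeUpdate();
        }
        try (PreparedStatement ps = conn.prepareStatement("SELECT ts FROM t");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                OffsetDateTime read = rs.getObject(1, OffsetDateTime.class);
                // The offset is preserved, so the absolute instant survives.
                System.out.println(read.toInstant().equals(written.toInstant()));
            }
        }
    }
}
```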

Thanks, Phil, for the pointers. I tried changing the datetime column in the translator to datetime2, disabled the driver, then re-enabled it. No apparent change (i.e. no batched inserts) shown in the profiler.

The translator value would only take effect when creating new tables. For existing tables, you will need to alter the columns yourself. Depending on the connection time zone, you may need to (see the sketch after this list):

  1. add a new column of datetime2,
  2. populate it with values from the original column, applying the right time zone,
  3. alter the original column to datetime2,
  4. copy from the new column to the original,
  5. then drop the temporary column.

{ Be sure to disable the historian connection during steps 3 & 4. }
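A hedged sketch of those five steps as plain JDBC DDL/DML, assuming a hypothetical partition table sqlt_data_1_2024_01. Treat it as an outline to adapt and test, not a ready-made migration (and note the follow-up below about datetimeoffset possibly being the type you actually want):

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class DatevalueMigration {
    public static void migrate(Connection conn) throws SQLException {
        try (Statement st = conn.createStatement()) {
            // 1. add a temporary column of the new type
            st.execute("ALTER TABLE sqlt_data_1_2024_01 ADD datevalue_tmp datetime2 NULL");
            // 2. populate it from the original column; if the target type were
            //    datetimeoffset, apply the zone here, e.g. datevalue AT TIME ZONE 'UTC'
            st.execute("UPDATE sqlt_data_1_2024_01 SET datevalue_tmp = datevalue");
            // -- the historian connection should be disabled for steps 3 & 4 --
            // 3. retype the original column
            st.execute("ALTER TABLE sqlt_data_1_2024_01 ALTER COLUMN datevalue datetime2 NULL");
            // 4. copy the preserved values back
            st.execute("UPDATE sqlt_data_1_2024_01 SET datevalue = datevalue_tmp");
            // 5. drop the temporary column
            st.execute("ALTER TABLE sqlt_data_1_2024_01 DROP COLUMN datevalue_tmp");
        }
    }
}
```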

datetime2 column types in MSSQL aren’t timezone-aware; for that you use datetimeoffset.

Actually, sqlth_1_data in our POC database is a view with instead-of triggers, not a table. I can’t imagine this is complicating things :grinning_face: FWIW I did change the column type in the view to datetime2.

Hmm. Yes, datetimeoffset is the correct type. I think I used to know that, but don't use MS SQL enough. (Not unless a client firmly demands it.....)

I tried changing the datetime column in the translator to datetimeoffset, changed the view return type to datetimeoffset as well, disabled the driver, then re-enabled it. No apparent change (i.e. no batched inserts) shown in the profiler.

If you haven't altered the existing tables yourself, then they are still datetime, and you won't see a difference for inserts to those.

If you've redefined sqlt_data_.... to be a view, and are relying on insert triggers to actually move the data to real tables, then all bets are off.

Per this, any bulk copy can only target actual tables, not views:

Also, if this note applies to SQL Server and not just "Azure SQL DW" (which I suspect it does): Bulk copy API for batch insert in JDBC - JDBC Driver for SQL Server | Microsoft Learn.

Then datetime2 is the only one that has a chance?

That article does say something about logging an error before going to the fallback (which I assume is happening). Is this error log available somewhere? I’m not seeing anything in the Diagnostics→Logs dashboard.

FWIW, we’re running an 8.3 project, not an 8.2.