I am currently getting an error in my Store and Forward on Ignition 8.1.36.
Error
Violation of PRIMARY KEY constraint 'PK__sqlt_dat__BE126DD1EA310A01'. Cannot insert duplicate key in object 'dbo.sqlt_data_1_2025_01'. The duplicate key value is (6367, 1738359922827).
I am currently trying to correlate the tagID to a tag path so I can troubleshoot the offending tags.
I am able to query the DB and get the related info for the tag, but I can't seem to correlate the SQL tagID to a historian path in Ignition.
Can anyone help with this? Running MSSQL. Currently I delete the quarantined records and they return shortly after.
I have the same problem. I get this error: java.sql.BatchUpdateException: Violation of PRIMARY KEY constraint 'PK__sqlt_dat. The tags that cause problems are OPC tags collected via a KEPServerEX.
The duplicate key will not magically go away. Use the S&F retry to make sure anything that can succeed in a batch does so, then discard anything left.
(There is a longstanding race condition that can cause this on connection breakage/re-establishment, and the resulting duplicates have to be discarded.)
I’ve seen this happen regularly in cases of intermittent comms between Ignition gateways. A remote (Edge?) gateway is able to send records to the Historian, but the receipt confirmation does not make it back to the remote gateway, so it sends the data again. Luckily, the primary key constraint prevents duplicate entries in the DB.
This error means that something is trying to insert duplicate records into the historian: There's already a record for that combination of tagid/timestamp.
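For example, using the tagid and t_stamp from the error above, a quick check like this should return the row that already exists (a sketch assuming the default historian partition-table layout):

SELECT tagid, intvalue, floatvalue, stringvalue, datevalue, dataintegrity, t_stamp
FROM sqlt_data_1_2025_01
WHERE tagid = 6367 AND t_stamp = 1738359922827;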
Now, what do you want us to help you with?
Finding where the duplicates come from?
Getting rid of them?
Finding where the error is coming from.
The tags that cause problems are OPC tags collected via a KEPServerEX.
The database is on another machine.
The history is not configured manually; there is no script or any other mechanism doing the historization.
So ultimately I believe the errors I was receiving were caused by the race condition that pturmel was describing. I have done two things to resolve the problem and it has not yet re-occurred.
The first was to improve the reliability of one of my OPC servers, KepServerEX. I had some ongoing network and licensing issues with it that were causing instability. This was likely the actual fix, as the issue would appear whenever the unstable KepServer came back online.
I also used the following SQL query to look up the offending tag in the database.
SELECT * FROM sqlth_te
WHERE id = (first set of numbers from the error);
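If you also need the provider portion of the full path (sqlth_te only stores the relative tag path), a join like this should work; this is a sketch assuming the default historian schema, where the sqlth_scinfo and sqlth_drv table and column names may vary slightly by version:

-- Map a tagid to its gateway, tag provider, and tag path
SELECT te.id, drv.name AS gateway, drv.provider AS tag_provider, te.tagpath, te.retired
FROM sqlth_te te
JOIN sqlth_scinfo sc ON te.scid = sc.id
JOIN sqlth_drv drv ON sc.drvid = drv.id
WHERE te.id = (first set of numbers from the error);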
Be careful with the SQL query if you haven't done it before.
Once I found the offending tag, I verified its configuration and also renamed it so that it would start new history on that tag.
I can't rename the tags. However, I think I see the same behavior with KEP whether or not it is licensed: I don't have a KEP license in the dev environment, while in production I do have the license, and the behavior is the same in both.
pturmel, just to confirm something I got from one of your previous posts.
If I have a failed batch in store and forward and there is even one duplicate ID in the batch, it will fail the entire batch, including records for tags that do not have duplicates. But if I use the retry feature, it will insert everything it can and leave only the records that have duplicate conflicts? I noticed this issue because it was affecting other tags in the provider that did not have conflicts. It seems they were getting caught up in the failed S&F batch.
Ha99, I would recommend that you use the SQL query I shared to determine which tag is causing you problems.
SELECT * FROM sqlth_te
WHERE id = (first set of numbers from the error);
Verify that this tag's configuration is good, then right-click the tag and restart its history. Once you have verified it, go to the Store and Forward page, retry the quarantined batch, and then delete whatever remains.
The query can be run either through the Named Queries section in your Designer or through whatever SQL management tool you have. Note that this query was written against MSSQL; it may need tweaks for a different DB.
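For example, if you set it up as a named query in the Designer, a parameterized version might look like this (the :tagId parameter name is just an example; define it as an integer Value parameter):

SELECT id, tagpath, scid, created, retired
FROM sqlth_te
WHERE id = :tagId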
I checked the id in sqlth_te, deleted the matching records from the sqlt_data_ table using the XML export file and a specific script, and tried again. It was OK, but after two hours or more I get other errors.
I'm pretty sure the "one part of the batch failed, the whole batch goes to quarantine" behavior was a bug that we fixed at some point in the 8.1.x lifecycle, but I'm not up to date on all things historian.
Thanks, I just want to understand how the store and forward quarantine works. I have some Kepservers that will always have an unstable connection as they are part of mobile machinery and are often disconnected and powered off and on. Currently running 8.1.36 but plan on upgrading to the latest stable branch in the next few months.