This is a new setup and we are not dependent on the data yet, so I have exported examples of this data and then deleted it, and it comes back in a few hours. It is always the same tagid, "1474". I have not paid attention to whether the t_stamp is the same.
How do I correlate tagid 1474 to a tag name?
Am I right in thinking that I can look for the row number in the sqlth_te "id" column and find that same row number in the column named "tagpath"?
If so, I will disable history logging on that tag temporarily to see if that clears up my issue.
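For anyone finding this later: the correlation is a single lookup against sqlth_te, since the tagid stored in the data/quarantine rows is the id column of sqlth_te, which also holds the tag path. A minimal runnable sketch using an in-memory sqlite3 mock (the tag path value is made up, and the real table has more columns than shown):

```python
import sqlite3

# In-memory mock of Ignition's sqlth_te tag definition table.
# Assumption: only the two columns needed for the lookup are modeled;
# the real table has more (scid, datatype, created, retired, ...).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sqlth_te (id INTEGER PRIMARY KEY, tagpath TEXT)")
conn.execute("INSERT INTO sqlth_te VALUES (1474, 'plc1/line2/motor_speed')")

# tagid in the data/quarantine rows is sqlth_te.id, so one WHERE
# clause resolves it to a human-readable tag path.
row = conn.execute("SELECT tagpath FROM sqlth_te WHERE id = ?", (1474,)).fetchone()
print(row[0])
```

Run the equivalent SELECT against your real historian database to resolve tagid 1474.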
Okay, I stopped logging that tag yesterday afternoon and there are no more quarantined items. Now, I guess I will turn it back on long enough to see what I may have wrong in my settings that created the issue.
I really need to take a class on SQL, because the way I correlated the tagpath to the tagid was not nearly as simple as your SELECT statement. Thanks Kevin!
Okay, I think this is solved. There was nothing wrong with any of my logging settings. I turned it back on 3 hours ago, and there have been no more quarantined items. By now, it would have happened based on my experience over the last several days. I am going to assume the root cause was all the other tag changes I was making that must have corrupted something.
How would a tag change more than once within a given millisecond (in a way that Ignition can be expected to know about)?
Ignition doesn't run on realtime hardware. Optimistically, your best scan rate for most OPC tags in Ignition is going to be in the hundred millisecond range. Very optimistically, tens of milliseconds.
I don't know, and I would like to investigate, but I need my production up. The facts are:
the postgres table is set up by Ignition with a unique constraint (as it should be)
Ignition is trying to insert a record which breaks that constraint
postgres generates an error
over 3 million records in the queue (many lost due to overflow) affects production (and yes, we are looking at generating an alert when this happens again).
I'd rather clean up duplicate records afterwards than lose data.
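Before deciding, it is cheap to measure how many key collisions you actually have: group by the primary key columns and count. A sqlite3 sketch of the idea (the partition name and values are illustrative; Ignition's real partitions are named like sqlt_data_1_<year>_<month>, so run the equivalent GROUP BY against your PostgreSQL partition):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Mock of one historian data partition, deliberately created WITHOUT the
# primary key so duplicate rows can exist, matching the proposal above.
conn.execute("CREATE TABLE sqlt_data (tagid INTEGER, floatvalue REAL, t_stamp INTEGER)")
rows = [(1474, 1.0, 1000), (1474, 1.0, 1000), (1474, 2.0, 2000), (99, 5.0, 1000)]
conn.executemany("INSERT INTO sqlt_data VALUES (?, ?, ?)", rows)

# Key pairs that would violate the (tagid, t_stamp) primary key.
dupes = conn.execute(
    """SELECT tagid, t_stamp, COUNT(*) AS n
       FROM sqlt_data
       GROUP BY tagid, t_stamp
       HAVING COUNT(*) > 1"""
).fetchall()
print(dupes)  # [(1474, 1000, 2)]
```

If the count is small and clustered on a few tags, that points back at the tag configuration rather than the table design.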
You can try it. But don't be surprised if Ignition's history queries are borked by the duplicate rows. Anything analog that depends on interpolation/compression will be negatively impacted.
I have extremely low confidence that you will be able to continue to use the historian after making such changes to your table.
You might be able to drop the primary key constraint altogether, but performance will be abysmal as a result. And, next month, Ignition will just make a new partition with a primary key again anyways.
Well, that's what I want Ignition to do. Is there a way to force Ignition to create the table as I suggested?
I don't think performance will be impacted @PGriffith, as in my proposal we still have the same index; the only difference is the pkey unique constraint.
Secondly, we see duplicates only intermittently, but when it happens the records back up. So again, I doubt any major impact on the historian.
You've been answered: no, you're likely wrong, and you're likely wrong, respectively.
Alternatives:
Consider finding the source of your duplicates, as the OP did in this topic.
Consider adding an On Insert Trigger to your tables to intercept the conflicts and discard the duplicates. You might even schedule an event to add the trigger programmatically on the day after a new partition is created.