PostgreSQL Quarantined Items

Running Ignition 8.1.3 with PostgreSQL 13.2.1, and I am repeatedly seeing what I think is one tag causing historical data to be quarantined:

ERROR: duplicate key value violates unique constraint "sqlth_1_data_pkey" Detail: Key (tagid, t_stamp)=(1474, 1618781907679) already exists.

This is a new setup and we are not dependent on the data yet, so I have exported examples of this data and then deleted it, but it comes back within a few hours. It is always the same tagid, "1474". I have not paid attention to whether the t_stamp is the same.

How do I correlate tagid 1474 to a tag name?

Am I right in thinking that I can look for the row number in the sqlth_te table's "id" column and find the tag name in that same row's "tagpath" column?

If so, I will disable history logging on that tag temporarily to see if that clears up my issue.

Yes, you should be able to SELECT tagpath FROM sqlth_te WHERE id=1474
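For reference, the full statement would look something like this (column and table names taken from the standard Ignition historian schema; the tagid is the one from the quarantine error):

```sql
-- Map the tagid reported in the quarantine error to its tag path
SELECT id, tagpath
FROM sqlth_te
WHERE id = 1474;
```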


Okay, I stopped logging that tag yesterday afternoon and there are no more quarantined items. Now, I guess I will turn it back on long enough to see what I may have wrong in my settings that created the issue.

I really need to take a class on SQL, because the way I correlated the tagpath to the tagid was not nearly as simple as your SELECT statement. Thanks Kevin!

Okay, I think this is solved. There was nothing wrong with any of my logging settings. I turned it back on 3 hours ago, and there have been no more quarantined items. By now, it would have happened based on my experience over the last several days. I am going to assume the root cause was all the other tag changes I was making that must have corrupted something.

Is there a way to have Ignition create the pkey on tagid & t_stamp without it being unique?

It IS possible that a tag may change twice within the same timestamp (unless I go down the rabbit hole of finding out why; theoretically it should not).

How would a tag change more than once within a given millisecond (in a way that Ignition can be expected to know about)?

Ignition doesn't run on realtime hardware. Optimistically, your best scan rate for most OPC tags in Ignition is going to be in the hundred millisecond range. Very optimistically, tens of milliseconds.


No, primary keys are unique by definition in all SQL brands. That is the whole point of a primary key: to uniquely identify the row.

Sorry you are right, let me rephrase my question.

Is there a way I can tell Ignition to create an id column, use that as the primary key, and put an index on tagid and t_stamp?

e.g. instead of:

CREATE TABLE public.sqlt_data_3_2022_10 (
	tagid int4 NOT NULL,
	intvalue int8 NULL,
	floatvalue float8 NULL,
	stringvalue varchar(255) NULL,
	datevalue timestamp NULL,
	dataintegrity int4 NULL,
	t_stamp int8 NOT NULL,
	CONSTRAINT sqlt_data_3_2022_10_pkey PRIMARY KEY (tagid, t_stamp)
);
CREATE INDEX sqlt_data_3_2022_10t_stampndx ON public.sqlt_data_3_2022_10 USING btree (t_stamp);

use this:

CREATE TABLE public.sqlt_data_3_2022_10 (
	id int4 NOT NULL DEFAULT nextval('sqlt_data_3_2022_10_id_seq'::regclass),
	tagid int4 NOT NULL,
	intvalue int8 NULL,
	floatvalue float8 NULL,
	stringvalue varchar(255) NULL,
	datevalue timestamp NULL,
	dataintegrity int4 NULL,
	t_stamp int8 NOT NULL,
	CONSTRAINT sqlt_data_3_2022_10_pkey PRIMARY KEY (id)
);
CREATE INDEX sqlt_data_3_2022_10_tagid_t_stamp_idx ON public.sqlt_data_3_2022_10 USING btree (tagid, t_stamp);
CREATE INDEX sqlt_data_3_2022_10t_stampndx ON public.sqlt_data_3_2022_10 USING btree (t_stamp);

I don't know, and I would like to investigate, but I need my production up. The facts are:

  • the postgres table is set up by Ignition with a unique constraint (as it should be)

  • Ignition is trying to insert a record which breaks the constraint

  • postgres generates an error

Over 3 million records in the queue (many lost due to overflow) affects production (and yes, we are looking at generating an alert when this happens again).

I'd rather clean up duplicate records than lose data.

You can try it. But don't be surprised if Ignition's history queries are borked by the duplicate rows. Anything analog that depends on interpolation/compression will be negatively impacted.

I have extremely low confidence that you will be able to continue to use the historian after making such changes to your table.

You might be able to drop the primary key constraint altogether, but performance will be abysmal as a result. And, next month, Ignition will just make a new partition with a primary key again anyway.
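If you do try that route, the statements would look roughly like this (a sketch only, using the partition table name from the earlier example; the cleanup query keeps one physical copy of each duplicate pair via Postgres's ctid system column):

```sql
-- Drop the unique primary key (Ignition will recreate it on new partitions)
ALTER TABLE public.sqlt_data_3_2022_10
    DROP CONSTRAINT sqlt_data_3_2022_10_pkey;

-- Later, remove duplicates, keeping one row per (tagid, t_stamp)
DELETE FROM public.sqlt_data_3_2022_10 a
USING public.sqlt_data_3_2022_10 b
WHERE a.ctid < b.ctid
  AND a.tagid = b.tagid
  AND a.t_stamp = b.t_stamp;
```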

Well, that's what I want Ignition to do. Is there a way to force Ignition to create the table as I suggested?

I don't think performance will be impacted, @PGriffith, as my proposal still has the same index. The only difference is the pkey unique constraint.

Secondly, we only see intermittent duplicates, but when it happens, the records back up. So again, I doubt any major impact on the historian.

You've been answered: no, you're likely wrong, and you're likely wrong, respectively.

Alternatives:

  • Consider finding the source of your duplicates, as the OP did in this topic.

  • Consider adding an On Insert Trigger to your tables to intercept the conflicts and discard the duplicates. You might even schedule an event to add the trigger programmatically on the day after a new partition is created.
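A sketch of such a trigger in PL/pgSQL (untested, with the partition table name borrowed from the earlier example; you would need one trigger per partition table):

```sql
-- Silently discard rows whose (tagid, t_stamp) already exists
CREATE OR REPLACE FUNCTION discard_duplicate_history()
RETURNS trigger AS $$
BEGIN
    IF EXISTS (
        SELECT 1 FROM public.sqlt_data_3_2022_10
        WHERE tagid = NEW.tagid AND t_stamp = NEW.t_stamp
    ) THEN
        RETURN NULL;  -- drop the duplicate instead of raising an error
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER discard_duplicates
    BEFORE INSERT ON public.sqlt_data_3_2022_10
    FOR EACH ROW EXECUTE FUNCTION discard_duplicate_history();
```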