Writing Historical Data to Database - Problem with 7.3

I have a script which imports data from CSV files and inserts it directly into the History Partition tables.
Since we upgraded to 7.3.3 at the start of February, this functionality has stopped working.
My script is still inserting the data into the partitions correctly, but the trend displays are not showing it.
Trend displays for historical data generated by Ignition itself are working fine.

I know there were some changes to how historical data is stored as part of a recent release, in relation to pre-processing and so on. We haven’t enabled pre-processing, but has the handling of historical data changed behind the scenes?

One thing I did notice that may be of interest: in the sqlth_partition table, the partition for February has a blocksize of “0”, whereas all the other partitions have a blocksize of NULL.
It also happens that the last data my trends display is the data timestamped before the beginning of the February partition.

Update:

It appears that tags can now have two different ids in the “sqlth_te” table.
I had assumed that they could only appear once in this table.
This table now also has two new columns, “created” and “retired”.
Presumably, when adding data, I need to use the “id” relating to my “tagpath” where retired is NULL.
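
To make that concrete, this is roughly how I plan to look up the id to insert against. Just a sketch: the DB-API connection and the “%s” placeholder style are assumptions about my script; the table and column names are the ones above.

```python
# Sketch: look up the active (non-retired) id for a tag path in sqlth_te.
# Assumes a Python DB-API connection to the history database; the "%s"
# placeholder style depends on the driver in use.

def active_tag_id(conn, tag_path):
    """Return the sqlth_te id for tag_path where retired is NULL, or None."""
    cur = conn.cursor()
    cur.execute(
        "SELECT id FROM sqlth_te WHERE tagpath = %s AND retired IS NULL",
        (tag_path,),
    )
    row = cur.fetchone()
    cur.close()
    return row[0] if row else None
```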

Hi,

That’s right, 7.3 introduced the “created” and “retired” columns in order to improve performance with tags that have been modified over time (and in theory, deleted). The system has always supported multiple ids for a tag path, because some properties stored in the sqlth_te table dictate how the data should be read, and can’t just be updated (data type, for example). However, with no way of knowing which tag id was valid for a given period of time, it had to use all of them when querying.

Anyhow, as it relates to you: I suspect that the system marked your tags as retired as part of the upgrade procedure. You should just be able to set the retired values back to NULL, or, if you prefer, insert new entries for your paths.
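
As a rough illustration (not an official procedure, and back up the table before touching it), clearing the flag for a path would look something like this. I'm assuming a Python DB-API connection and “%s” placeholders here:

```python
# Sketch: clear the "retired" flag for a tag path so its existing id is
# treated as active again. If the upgrade also created a second row for the
# same path, you'd want to remove or retire that duplicate rather than end
# up with two active rows.

def unretire_tag(conn, tag_path):
    cur = conn.cursor()
    cur.execute(
        "UPDATE sqlth_te SET retired = NULL WHERE tagpath = %s",
        (tag_path,),
    )
    conn.commit()
    cur.close()
```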

The only other change was the “querymode”, which used to be 0 = discrete, 1 = analog. The new definition is 1 = analog, normal deadband, and 3 = analog, compressed (2 is the same as 1, but is skipped for internal reasons).

The implication for your data is that 1 means the data should only be interpolated for one scan class cycle before the new value, and 3 means interpolate fully between one value and the next. Since you’re driving it yourself, these might be equivalent, depending on the “scan class” the tag is in (they are if the scan class rate is 0).
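
To put numbers on that for your import script (the constant names here are my own; only the values come from the definition above):

```python
# Sketch: querymode values for rows inserted into sqlth_te. Constant names
# are my own; the numeric values are the ones described above.
QUERYMODE_ANALOG = 1             # normal deadband: interpolate one scan class cycle
QUERYMODE_ANALOG_COMPRESSED = 3  # compressed: interpolate fully between samples

def querymode_for_import(interpolate_across_gaps):
    """Pick a querymode for externally driven data.

    With a scan class rate of 0 the two behave the same, as noted above.
    """
    return QUERYMODE_ANALOG_COMPRESSED if interpolate_across_gaps else QUERYMODE_ANALOG
```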

For the initial implementation of SQLTags History, we didn’t expect third parties to drive their own data, which is why we didn’t publish a spec. However, since the database provides a great way to offer extensibility, we will publish one soon. We’ll likely add the spec for both realtime and history to an appendix in the user manual.

Regards,

I have another issue relating to this previous question.
Tags are having their “retired” column set to a timestamp value, and a new tag id is being created for them in the sqlth_te table, seemingly at random.
This is happening to many tags simultaneously, and again is causing problems for my imported historical data.
I can go in and reset all the “retired” values to NULL, but this has happened at least three times recently, and is becoming a nuisance.
What could cause this to happen? We haven’t upgraded recently.

Hi,

Look closely at the columns in the te table for both the old row and the new row: are they identical?

The retired column should only be set if there is a change in data type, scan class, or interpolation mode. If all of those are the same, then something else must be going on. As far as I can tell, the only possibility would be that when the system reads the retired time value from the db, it gets something other than null. However, in that case, it seems like it would happen more often.
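
If it helps, a query along these lines would line up the old and new rows for each affected path so you can compare them. A sketch only: the datatype, scan class, and querymode column names are my guesses at your schema.

```python
# Sketch: list tag paths that have more than one row in sqlth_te, ordered so
# the older and newer entries sit next to each other for comparison. Column
# names other than id/tagpath/created/retired are assumptions.

def duplicated_tag_rows(conn):
    cur = conn.cursor()
    cur.execute(
        "SELECT tagpath, id, datatype, scid, querymode, created, retired "
        "FROM sqlth_te "
        "WHERE tagpath IN ("
        "  SELECT tagpath FROM sqlth_te GROUP BY tagpath HAVING COUNT(*) > 1"
        ") "
        "ORDER BY tagpath, created"
    )
    rows = cur.fetchall()
    cur.close()
    return rows
```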

Also, can you identify any other events that happen around the same time (redundancy events, errors in the gateway, etc)?

Regards,