History data being quarantined as duplicate incorrectly


I was just wondering if anyone has encountered this issue with the scripting function system.tag.storeTagHistory.

We call it twice, to write both to the local provider and to a remote provider:

system.tag.storeTagHistory("localprovider", "default", HistoryPathsArray, HistoryValueArray, HistoryQualityArray, HistoryTimestampArray)

system.tag.storeTagHistory("remoteprovider", "default", HistoryPathsArray, HistoryValueArray, HistoryQualityArray, HistoryTimestampArray)

What we are seeing is some odd behavior with quarantined data:

The store-and-forward quarantine message says we have duplicate data. However, when we look in the historical database we cannot find this data anywhere. Retrying the quarantined data also fails with the same message.

Is it possible that running the function twice is causing the problem?

Did you get any help on this? I've built a script that imports CSV data into tag history using this function, and I'm seeing odd behavior: for some files only part of the data is logged successfully, while for others all of it makes it in. No errors, either way. The only thing I see is that some data is being quarantined as duplicates while the script runs.

My first thought was that this is a performance issue and I'm demanding too much of the historical system. My import loops through a wide-format CSV file and builds a Python list for each tag in the file (1,440 records per tag), so each call to the function is asked to store 1,440 records at once. I don't feel like that should overload it, but maybe I'm too optimistic. I also have lots of other activity against this database and historical provider from other processes, so maybe the combination is too much.
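For reference, a minimal sketch of the kind of wide-format import I'm describing. The column names, tag-path prefix, and chunk size here are made up for illustration, not my actual script; only the commented system.tag.storeTagHistory call at the end is the real Ignition API, and it would of course only run inside a gateway/client scope.

```python
# Hypothetical wide-format CSV import: one timestamp column ("t_stamp")
# plus one column per tag. Builds the parallel lists that
# system.tag.storeTagHistory expects (paths, values, timestamps).
import csv
from datetime import datetime

def build_history_lists(csv_path, tag_prefix="[default]"):
    paths, values, timestamps = [], [], []
    with open(csv_path) as f:
        reader = csv.DictReader(f)
        # Every column except the timestamp column is a tag.
        tag_columns = [c for c in reader.fieldnames if c != "t_stamp"]
        for row in reader:
            ts = datetime.strptime(row["t_stamp"], "%Y-%m-%d %H:%M:%S")
            for col in tag_columns:
                paths.append(tag_prefix + col)
                values.append(float(row[col]))
                timestamps.append(ts)
    return paths, values, timestamps

# Inside Ignition you would then store the lists, possibly in chunks
# (1440 rows per call in my case, e.g. one day of minute data):
# paths, values, timestamps = build_history_lists("import.csv")
# for i in range(0, len(paths), 1440):
#     system.tag.storeTagHistory(
#         "localprovider", "default",
#         paths[i:i+1440], values[i:i+1440],
#         None, timestamps[i:i+1440])
```

Chunking the calls like this was one thing I considered, in case a single huge call was what tripped the store-and-forward engine.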

Curious to know where you go with this.


Update: since the release of 7.9.7, this is no longer happening.