system.tag.storeTagHistory Quality

Hi,
system.tag.storeTagHistory doesn't work with a quality value different from 192.
Does anyone have the same problem?
data:

{   "timestamps": [
      {
        "$": [
          "ts",
          192,
          1738604325278
        ],
        "$ts": 1727128832000
      },
      {
        "$": [
          "ts",
          192,
          1738604325278
        ],
        "$ts": 1727128892000
      },
      {
        "$": [
          "ts",
          192,
          1738604325278
        ],
        "$ts": 1727128952000
      },
      {
        "$": [
          "ts",
          192,
          1738604325278
        ],
        "$ts": 1727129012000
      },
      {
        "$": [
          "ts",
          192,
          1738604325278
        ],
        "$ts": 1727129072000
      }
    ],
    "paths": [
      "folder2/path2/present_value",
      "folder2/path2/present_value",
      "folder2/path2/present_value",
      "folder2/path2/present_value",
      "folder2/path2/present_value"
    ],
    "values": [
      36.5,
      36.6,
      36.5,
      36.5,
      36.4
    ],
    "qualities": [
      203,
      203,
      203,
      203,
      203
    ]
}

I don't see what the chunk of JSON you posted has to do with the storeTagHistory function.
What's the script you're using, and if the JSON is your evidence of it not working, where did it come from?

Sorry, let me explain better.
I use system.tag.storeTagHistory this way:

mydict = { "timestamps": 
	[{"$": ["ts", 192, 1738604325278 ], "$ts": 1727128832000 }
	, { "$": [ "ts", 192, 1738604325278 ], "$ts": 1727128892000 }
	, { "$": [ "ts", 192, 1738604325278 ], "$ts": 1727128952000 }
	, { "$": [ "ts", 192, 1738604325278 ], "$ts": 1727129012000 }
	, { "$": [ "ts", 192, 1738604325278 ], "$ts": 1727129072000 } ]
, "paths": [ "folder2/path2/present_value"
	, "folder2/path2/present_value"
	, "folder2/path2/present_value"
	, "folder2/path2/present_value"
	, "folder2/path2/present_value" ]
, "values": [ 36.5, 36.6, 36.5, 36.5, 36.4 ]
, "qualities": [ 203, 203, 203, 203, 203 ] 
}

system.tag.storeTagHistory("My History Provider"
                                              , "My Tag Provider"
                                              , mydict['paths']
                                              , mydict['values']
                                              , mydict['qualities']
                                              , mydict['timestamps']
)

The data is inserted correctly into the DB table, but the quality column stays at 0. It changes only if I set the quality to 192, but I want to use a different good-quality value:

Good_Provisional	200	Good data that should not be considered valid long-term.
Good_Initial    	201	Indicates that the value is an initial/seed value for a system that is starting up.
Good_Overload   	202	Represents good data that is being sampled slower than requested due to a resource limitation.
Good_Backfill   	203	Used to indicate good-quality values that have arrived out of order. Different systems can choose to process them accordingly.

Those aren't valid timestamps, so the function might be ignoring your extra inputs besides the paths and values.

I rewrote your code; this should be equivalent to what you were intending to do; does this give you the expected quality on output?

values = [36.5, 36.6, 36.5, 36.5, 36.4]
timestamps = [1727128832000, 1727128892000, 1727128952000, 1727129012000, 1727129072000]

dates = [system.date.fromMillis(ts) for ts in timestamps]
paths = ["folder2/path2/present_value"] * len(values)
qualities = [203] * len(values)

system.tag.storeTagHistory(
	"My History Provider", 
	"My Tag Provider", 
	paths,
	values, 
	qualities, 
	dates
)

Hi Paul,
you didn't understand the problem. My code works and stores data in the DB. The problem is in the quality-code management: after running the script the data is written to the DB, but if you look at the table you'll find that the quality column is 0 when the script specified 203.
In other words, the quality parameter does not work.
If I specify a quality code, I should find it in the table.

The timestamp format in the JSON comes from the fact that I copied the field from a Perspective custom property.
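For what it's worth, pulling usable values out of those Perspective-serialized date entries is just a matter of reading the `$ts` key, which holds the epoch milliseconds (a sketch; the sample entries below are copied from the JSON above, and the final conversion line is Ignition-only so it's left as a comment):

```python
# Two entries copied from the Perspective custom property above.
perspective_timestamps = [
    {"$": ["ts", 192, 1738604325278], "$ts": 1727128832000},
    {"$": ["ts", 192, 1738604325278], "$ts": 1727128892000},
]

# Each entry stores the epoch milliseconds under the "$ts" key.
millis = [entry["$ts"] for entry in perspective_timestamps]

# In an Ignition script you would then convert these to java.util.Date,
# which is what storeTagHistory actually expects:
# dates = [system.date.fromMillis(ts) for ts in millis]
```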

Another problem is that the function returns nothing if the storage fails.

I ran my code, which, again, is sending timestamps the way the function expects them [1], and I get the expected dataintegrity value in the stored data.

Unfortunately, this is far from a useful test, because this was on an in development 8.3 with a variety of changes to the historian from where you're testing. I don't have time to set up the test in 8.1 to compare against; I would recommend you contact our support department officially - they can investigate properly and determine if this is in fact a bug, or something wrong with your script, or some expected behavior.

This is expected. The storeTagHistory function passes off data storage to the "store and forward" engine, which asynchronously stores data in the target historian. There's no way for it to communicate whether that storage failed, because the failure could happen an indeterminate amount of time after the function was called.
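To illustrate the general pattern (this is a generic Python sketch of a fire-and-forget queue, not Ignition code): the caller enqueues a record and returns immediately, and any failure happens later on the worker thread, where the caller can no longer see it.

```python
import queue
import threading

store_queue = queue.Queue()
failures = []  # only the worker ever sees these

def worker():
    while True:
        item = store_queue.get()
        if item is None:  # sentinel: shut down
            break
        try:
            if item < 0:  # pretend negative records fail to store
                raise ValueError("bad record")
        except ValueError:
            failures.append(item)  # logged asynchronously; caller never knows
        finally:
            store_queue.task_done()

t = threading.Thread(target=worker)
t.start()

def store(value):
    """Enqueue and return immediately, like storeTagHistory."""
    store_queue.put(value)
    return None  # nothing useful to return; storage hasn't happened yet

store(5)    # will succeed, eventually
store(-1)   # will fail, eventually -- but store() already returned None
store_queue.join()
store_queue.put(None)
t.join()
```

By the time the failure is recorded, both `store()` calls have long since returned, which is exactly why the function can't report storage errors.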


  1. as java.util.Date objects, not the not-a-real-timestamp format Perspective has to use because the frontend doesn't have an equivalent to the Date type natively ↩︎

Actually, another note: the list of quality codes is different than that supported elsewhere in the software, for backwards compatibility reasons.

This is the list of values/codes that I would expect to work:

            case 0:
                return OPC_BAD_DATA;
            case 4:
                return OPC_CONFIG_ERROR;
            case 8:
                return OPC_NOT_CONNECTED;
            case 12:
                return OPC_DEVICE_FAILURE;
            case 16:
                return OPC_SENSOR_FAILURE;
            case 20:
                return OPC_BAD_SHOWING_LAST;
            case 24:
                return OPC_COMM_FAIL;
            case 28:
                return OPC_OUT_OF_SERVICE;
            case 32:
                return OPC_WAITING;
            case 64:
                return OPC_UNCERTAIN;
            case 68:
                return OPC_UNCERTAIN_SHOWING_LAST;
            case 80:
                return OPC_SENSOR_BAD;
            case 84:
                return OPC_LIMIT_EXCEEDED;
            case 88:
                return OPC_SUB_NORMAL;
            case 256:
                return OPC_UNKNOWN;
            case 192:
                return (flags & 0x01) > 0 ? GOOD_BACKFILL : GOOD_DATA;
            case 216:
                return OPC_GOOD_WITH_LOCAL_OVERRIDE;
            case 300:
                return CONFIG_ERROR;
            case 301:
                return COMM_ERROR;
            case 310:
                return EXPRESSION_EVAL_ERROR;
            case 311:
                return SQL_QUERY_ERROR;
            case 312:
                return DB_CONN_ERROR;
            case 320:
                return GOOD_PROVISIONAL;
            case 330:
                return TAG_EXEC_ERROR;
            case 340:
                return TYPE_CONVERSION_ERROR;
            case 403:
                return ACCESS_DENIED;
            case 404:
                return NOT_FOUND;
            case 405:
                return REFERENCE_NOT_FOUND;
            case 410:
                return DISABLED;
            case 500:
                return STALE;
            case 700:
                return WRITE_PENDING;
            case 900:
                return DEMO_EXPIRED;
            case 901:
                return GW_COMM_OFF;
            case 902:
                return TAG_LIMIT_EXCEEDED;
            case 1000:
                return AGGREGATE_NOT_FOUND;
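If it helps, you can guard a script against documented-but-unmapped codes by checking the qualities against the set of case values from that switch before calling storeTagHistory (a sketch; the set is transcribed from the mapping above, and `unsupported_qualities` is just a hypothetical helper name):

```python
# Quality codes the legacy mapping above recognizes. Documented codes
# like 200-203 are absent, which is why they collapse to 0 in the table.
SUPPORTED_QUALITIES = {
    0, 4, 8, 12, 16, 20, 24, 28, 32, 64, 68, 80, 84, 88,
    192, 216, 256, 300, 301, 310, 311, 312, 320, 330, 340,
    403, 404, 405, 410, 500, 700, 900, 901, 902, 1000,
}

def unsupported_qualities(qualities):
    """Return the quality codes the legacy mapping will not recognize."""
    return [q for q in qualities if q not in SUPPORTED_QUALITIES]

# The qualities from the original script: 203 is not in the legacy set.
print(unsupported_qualities([203, 203, 192]))  # -> [203, 203]
```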

Thanks, this is exactly what I found: there is a different mapping of the quality codes compared to the one in the current manual.
With the map you shared it works as expected.

Thanks again
