Tag History binding "AsStored" with multiple tags contains wrong data

Hi, I have a question/bug report when using a Tag History binding with the following settings:

  • Return Format: Wide
  • Query Mode: AsStored
  • Time Range: Historical
  • Tags: Direct with multiple tag paths
  • Value Format: Document


This results in the following sample data:

[
  {     "t_stamp": 1685620800000,     "cod_sol": null,    "cod_tot": 965    },
  {     "t_stamp": 1685620800000,     "cod_sol": 333,     "cod_tot": 965    },
  {     "t_stamp": 1685707200000,     "cod_sol": 333,     "cod_tot": 506    },
  {     "t_stamp": 1685707200000,     "cod_sol": 262,     "cod_tot": 506    },
  {     "t_stamp": 1685966400000,     "cod_sol": 262,     "cod_tot": 519    },
  {     "t_stamp": 1685966400000,     "cod_sol": 239,     "cod_tot": 519    },
  {     "t_stamp": 1686052800000,     "cod_sol": 239,     "cod_tot": 521    },
  {     "t_stamp": 1686052800000,     "cod_sol": 180,     "cod_tot": 521    },
  {     "t_stamp": 1686139200000,     "cod_sol": 180,     "cod_tot": 956    },
  {     "t_stamp": 1686139200000,     "cod_sol": 271,     "cod_tot": 956    },
  {     "t_stamp": 1686225600000,     "cod_sol": 271,     "cod_tot": 709    },
  {     "t_stamp": 1686225600000,     "cod_sol": 290,     "cod_tot": 709    },
  {     "t_stamp": 1686312000000,     "cod_sol": 290,     "cod_tot": 100    },
  {     "t_stamp": 1686312000000,     "cod_sol": 267,     "cod_tot": 100    },
  {     "t_stamp": 1686312001000,     "cod_sol": 267,     "cod_tot": 459    },
  {     "t_stamp": 1686571200000,     "cod_sol": 267,     "cod_tot": 483    },
  {     "t_stamp": 1686571200000,     "cod_sol": 220,     "cod_tot": 483    },
  {     "t_stamp": 1686657600000,     "cod_sol": 220,     "cod_tot": 966    },
  {     "t_stamp": 1686657600000,     "cod_sol": 427,     "cod_tot": 966    },
  {     "t_stamp": 1686744000000,     "cod_sol": 427,     "cod_tot": 268    },
  {     "t_stamp": 1686744000000,     "cod_sol": 200,     "cod_tot": 268    },
  {     "t_stamp": 1686830400000,     "cod_sol": 200,     "cod_tot": 468    },
  {     "t_stamp": 1686830400000,     "cod_sol": 275,     "cod_tot": 468    },
  {     "t_stamp": 1686916800000,     "cod_sol": 275,     "cod_tot": 421    },
  {     "t_stamp": 1686916800000,     "cod_sol": 354,     "cod_tot": 421    },
  {     "t_stamp": 1687176000000,     "cod_sol": 354,     "cod_tot": 249    },
  {     "t_stamp": 1687176000000,     "cod_sol": 225,     "cod_tot": 249    },
  {     "t_stamp": 1687262400000,     "cod_sol": 225,     "cod_tot": 0      },
  {     "t_stamp": 1687262400000,     "cod_sol": 0,       "cod_tot": 0      }
]

As you can see, for each t_stamp there are two rows. This is expected, and documented behavior: "AsStored: Will return the records as stored in the database. While querying data with this mode, multiple value changes at the same timestamp will result in multiple rows, one row for each unique value."

The problem lies in the way that Ignition carries over values from the previous timestamp into the next one. Let's take a look at the first four rows of the result set:

  {     "t_stamp": 1685620800000,     "cod_sol": null,    "cod_tot": 965    },
  {     "t_stamp": 1685620800000,     "cod_sol": 333,     "cod_tot": 965    },
  {     "t_stamp": 1685707200000,     "cod_sol": 333,     "cod_tot": 506    },
  {     "t_stamp": 1685707200000,     "cod_sol": 262,     "cod_tot": 506    },
  1. First row only contains a value for cod_tot for timestamp 1685620800000.
  2. Second row contains a value for cod_sol AND the cod_tot value from the previous row. Since the previous row represents the same timestamp in this case, this is ok.
  3. Third row represents a new timestamp, and contains a new value for cod_tot. It also carried over the old value from the previous row for cod_sol. This is not ok, since cod_sol does not have value "333" stored in the database for t_stamp "1685707200000"!
  4. Fourth row now contains the correct value for both cod_sol and cod_tot.

When this data is plotted as discrete points on a time series chart, we end up displaying incorrect values, as shown in the attached chart.

Am I missing some configuration or setting that can prevent this behaviour? Or is this simply a bug in the Tag History binding in this particular setup?


It is an impossible-to-solve constraint when you ask for actual sample timestamps (as-stored) in a wide format. Samples almost never share the exact same to-the-millisecond timestamp, so the historian must either sparsely populate the result or duplicate values. Sparse results would show nulls (missing data) where the real data isn't actually missing, so that isn't really an option.

Just don't use wide format. Reprocess the results of narrow with a transform to get the separate series.
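For what it's worth, the pivot itself is small. Below is a minimal sketch of such a transform, assuming the Tall result has already been unpacked into (tagPath, value, quality, t_stamp) rows — in a Perspective script transform you would first read these out of the returned Dataset. The helper name `pivot_tall` and the sample rows are my own, not from the thread:

```python
def pivot_tall(rows):
    """Pivot Tall-format tag history rows into one sparse series per tag.

    rows: iterable of (tag_path, value, quality, t_stamp) tuples, in the
    order the historian returned them.
    Returns {tag_path: [(t_stamp, value), ...]}, keeping each series
    separate so no value is ever carried over into another timestamp.
    """
    series = {}
    for tag_path, value, quality, t_stamp in rows:
        series.setdefault(tag_path, []).append((t_stamp, value))
    return series

# Hypothetical sample rows mirroring the first two timestamps above:
rows = [
    ("cod_tot", 965, 192, 1685620800000),
    ("cod_sol", 333, 192, 1685620800000),
    ("cod_tot", 506, 192, 1685707200000),
    ("cod_sol", 262, 192, 1685707200000),
]
series = pivot_tall(rows)
```

Each series can then be fed to the Time Series Chart (or rebuilt into per-tag datasets) without any cross-contamination between tags.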

Thanks for the quick reply @pturmel. I understand that for high-frequency sensors the timestamps will indeed almost never line up. But I don't understand why the way to deal with that is to produce incorrect data. I'd rather have sparsely populated correct data than densely populated incorrect data...
In my use case, I'm working with lab data that was manually sampled and recorded with a fixed timestamp for practical reasons, so all of these datapoints line up perfectly.

It also seems like the Time Series Chart is perfectly able to deal with this sparse data. I've manually reconstructed what the data set would look like in this way and updated my sample, and now the plot is 100% correct.

Are there other scenarios where having this sparse data would lead to issues?
I'm hesitant to go for the Tall return format with a transform, because it is yet another script that colleagues have to know about and will inevitably forget to use at some point in the future :slight_smile: .

It is the "least incorrect" data possible in the wide format. Your graphic shows values being duplicated into other points' timestamps before the point of interest has fresh data.

There's no other way to do this with wide format and as-stored. I mean, what do you think should be in the result set where you are showing "incorrect"?

Don't use wide format with as-stored. It is that simple.

Your graphic shows values being duplicated into other points' timestamps before the point of interest has fresh data.

This is not true though. Check out the first four rows again:

  {     "t_stamp": 1685620800000,     "cod_sol": null,    "cod_tot": 965    },
  {     "t_stamp": 1685620800000,     "cod_sol": 333,     "cod_tot": 965    },
  {     "t_stamp": 1685707200000,     "cod_sol": 333,     "cod_tot": 506    },
  {     "t_stamp": 1685707200000,     "cod_sol": 262,     "cod_tot": 506    },

Rows 1 & 2 share the same timestamp (T1) and rows 3 & 4 share the same timestamp (T2). So when the cod_sol value from row 2 (T1) gets duplicated into row 3 (T2), it is incorrect, because cod_sol does have fresh data at T2, as we can see in row 4.

There's no other way to do this with wide format and as-stored. I mean, what do you think should be in the result set where you are showing "incorrect"?

I would think that the sparse result set is the only correct way. If the data for different tags doesn't line up perfectly for a timestamp, so be it. I'm requesting the data as it is stored in the database, and that's currently not what I am receiving.

So using my example, I think there would be two correct types of "sparse" result sets:
Option 1: each row only contains one value

  {     "t_stamp": 1685620800000,     "cod_sol": null,    "cod_tot": 965    },
  {     "t_stamp": 1685620800000,     "cod_sol": 333,     "cod_tot": null    },
  {     "t_stamp": 1685707200000,     "cod_sol": null,     "cod_tot": 506    },
  {     "t_stamp": 1685707200000,     "cod_sol": 262,     "cod_tot": null    },

Option 2: values are only duplicated into the next row if the timestamp is equal

  {     "t_stamp": 1685620800000,     "cod_sol": null,    "cod_tot": 965    },
  {     "t_stamp": 1685620800000,     "cod_sol": 333,     "cod_tot": 965    },
  {     "t_stamp": 1685707200000,     "cod_sol": null,     "cod_tot": 506    },
  {     "t_stamp": 1685707200000,     "cod_sol": 262,     "cod_tot": 506    },

Perfect world option: single row per timestamp :wink:

  {     "t_stamp": 1685620800000,     "cod_sol": 333,     "cod_tot": 965    },
  {     "t_stamp": 1685707200000,     "cod_sol": 262,     "cod_tot": 506    },
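Incidentally, the "perfect world" shape can be recovered from the current AsStored wide result by keeping only the last row per timestamp, since within one timestamp each later row fills in more of that timestamp's fresh values. A minimal sketch, assuming the binding result arrives as a list of dicts like the sample above (the function name is my own):

```python
def last_row_per_timestamp(rows):
    """Collapse an AsStored wide result to one row per t_stamp.

    Keeps the last row seen for each timestamp, which is the row where
    all of that timestamp's fresh values have been filled in.
    """
    collapsed = {}
    for row in rows:  # rows arrive ordered by t_stamp
        collapsed[row["t_stamp"]] = row
    return [collapsed[t] for t in sorted(collapsed)]

# The first four rows of the sample data above:
rows = [
    {"t_stamp": 1685620800000, "cod_sol": None, "cod_tot": 965},
    {"t_stamp": 1685620800000, "cod_sol": 333,  "cod_tot": 965},
    {"t_stamp": 1685707200000, "cod_sol": 333,  "cod_tot": 506},
    {"t_stamp": 1685707200000, "cod_sol": 262,  "cod_tot": 506},
]
out = last_row_per_timestamp(rows)
# → [{'t_stamp': 1685620800000, 'cod_sol': 333, 'cod_tot': 965},
#    {'t_stamp': 1685707200000, 'cod_sol': 262, 'cod_tot': 506}]
```

Caveat: this only produces correct rows when every tag genuinely has a fresh value at every timestamp, as in the manually sampled lab data here; with the carry-over bug described above, a tag that recorded nothing at a timestamp would still show a stale value.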

I do see part of your problem, and that should be reported as a bug. But beyond that, my advice still stands, mainly because nulls from the tag historian have a long-established meaning: they signal bad quality/missing data at a point, which graphs use to break continuous lines. So, sparse is not an option.

Use narrow format and a transform.


Thanks for bearing with me. Is there another place where I should report this as a bug?

I understand that it is not possible to break this long-established behavior. So it seems like the only way out of this would be the "Perfect world option", where the AsStored mode doesn't return multiple rows for duplicate timestamps. It probably behaves like that due to some technical constraint, if I had to guess :) .

Support. https://support.inductiveautomation.com/

Yeah, that's the part I think is a bug.

Well, it is documented to work that way. The "Tag History Bindings in Perspective" page in the Ignition User Manual 8.1 says: "AsStored: Will return the records as stored in the database. While querying data with this mode, multiple value changes at the same timestamp will result in multiple rows, one row for each unique value."

But the implementation of it doesn't seem to be fully correct.
