Ignition Edge: displaying tag history for multiple tags in a table results in multiple rows for identical timestamps

I am currently working on implementing an Ignition Edge instance which will (hopefully) be used with our existing Ignition server in the future. However, I am experiencing a significant issue displaying tag history queries cleanly. Instead of displaying one row per timestamp across multiple tags, the query returns multiple rows for identical timestamps (see example below). I also noticed that the results sometimes show two different values for the same timestamp.

[image: query results showing multiple rows with identical timestamps]

With our Ignition server we currently use transaction groups, which don't experience this issue (unfortunately, Edge does not support transaction groups).

I have two scenarios that I need to account for:

  1. Reviewing sets of manual operator entries
  2. Reviewing periodic sample data from OPC tags (sampled say once per minute)

To accomplish these, I set up separate scan classes and tag groups. The scan class for scenario 1 is triggered by a boolean tag after an operator completes a set of manual entries, so all of the entries are recorded at the same time. The recording seems to work fine for both scenarios; it's just the display of the query results that is an issue.

Is there something that I am missing with the tag history query that is causing this issue?
Is there a better way to display this data in Edge?

My only other solution would be to store all of the values in a single string, separated by a delimiter, and store history on that string tag. However, this would require parsing and reformatting all of the data for display, and the same would have to be done at the Ignition server as well. It really feels like I'm missing something. Thanks for any help!

I ran across a similar issue when dumping chart data to an Excel file in full-blown Ignition. The fix came down to the aggregation mode, to wit:

  • On Change/Raw - An On Change query will return points as they were logged, and can be thought of as a "raw" query mode. This means that the results may not be evenly spaced. Also, it is important to note that every changed value will result in a row, and therefore if you are querying multiple tags at once, you may end up with more rows than you anticipated. For example, if tag A and tag B both change, you would end up with [[A0, B0], [A1, B0], [A1, B1]]. Note: If you want to essentially retrieve raw values, while coalescing them down into fewer rows, try using the Interval sample mode, with an interval set to your largest acceptable time between rows, and select "prevent interpolation" from the advanced settings.
    Sample Size and Aggregation Mode - Ignition User Manual 7.9 - Ignition Documentation
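
In scripting terms, that suggestion maps to something like the following minimal sketch using system.tag.queryTagHistory (the tag paths and the one-minute interval are assumptions for illustration, not from the thread):

    # Jython sketch -- run from a script console. Tag paths are placeholders.
    end = system.date.now()
    start = system.date.addHours(end, -8)

    ds = system.tag.queryTagHistory(
        paths=["[default]Line1/TagA", "[default]Line1/TagB"],
        startDate=start,
        endDate=end,
        intervalMinutes=1,        # largest acceptable time between rows
        noInterpolation=True,     # the "prevent interpolation" advanced setting
        aggregationMode="LastValue",
        returnFormat="Wide",      # one timestamp column plus one column per tag
    )

This coalesces the raw rows into one row per interval instead of one row per changed value.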

I realize that this is not really an answer, but it might help explain what's happened and put you on the right track...

Just an update to this. I spent a significant amount of time on the phone with IA about this issue yesterday. As it turns out, this is apparently the intended functionality. This would also apply to retrieving/rebuilding records from the store-and-forward data retrieved from each Edge device at a server.

The boxed values in the image below show the values that were expected to be recorded (aside from some rounding):
[image: query results with the expected values boxed]

The actual values entered were as follows:
Timestamp: 03/03 04:41:20:41 values: 1.3, 2.3, 3.3
Timestamp: 03/03 04:41:53:595 values: 1.4, 2.4, 3.4
Timestamp: 03/03 07:42:05:565 values: 1.6, 2.6, 3.6

The recommended solution was to query each tag individually, then match the entries up by timestamp and build a new dataset combining the results (sketched below).
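
A rough sketch of that per-tag approach in Jython (the tag paths and the one-second rounding bucket are assumptions; adjust to your project):

    # Query each tag on its own, then merge rows whose timestamps fall in
    # the same one-second bucket so entries triggered together share a row.
    paths = ["[default]Entries/Value1",
             "[default]Entries/Value2",
             "[default]Entries/Value3"]
    end = system.date.now()
    start = system.date.addDays(end, -1)

    rows = {}
    for col, path in enumerate(paths):
        ds = system.tag.queryTagHistory(paths=[path], startDate=start,
                                        endDate=end, returnSize=-1)
        for r in range(ds.getRowCount()):
            bucket = system.date.toMillis(ds.getValueAt(r, 0)) // 1000
            rows.setdefault(bucket, [None] * len(paths))[col] = ds.getValueAt(r, 1)

    headers = ["t_stamp"] + [p.split("/")[-1] for p in paths]
    data = [[system.date.fromMillis(b * 1000)] + vals
            for b, vals in sorted(rows.items())]
    combined = system.dataset.toDataSet(headers, data)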

I plan to try storing all of the values in a string tag (separated by a delimiter) with history enabled on that tag. I will then build a dataset from that history.
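
For what it's worth, unpacking that string history back into columns might look something like this (a sketch; the tag path, the "|" delimiter, and the column names are all assumptions):

    # Hypothetical packed string tag holding e.g. "1.3|2.3|3.3", with
    # history enabled on the tag.
    end = system.date.now()
    start = system.date.addDays(end, -1)

    hist = system.tag.queryTagHistory(paths=["[default]Entries/Packed"],
                                      startDate=start, endDate=end,
                                      returnSize=-1)

    headers = ["t_stamp", "Value1", "Value2", "Value3"]
    data = []
    for r in range(hist.getRowCount()):
        raw = hist.getValueAt(r, 1)
        if raw:  # skip rows where the string tag had no value yet
            data.append([hist.getValueAt(r, 0)] +
                        [float(v) for v in raw.split("|")])

    unpacked = system.dataset.toDataSet(headers, data)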

I feel for you. I use transaction groups or scripting to write anything important to my own history tables, which is part of why I don't have much (any?) use for Edge yet. Consider a minimalist install of full Ignition (OPC server & drivers) and scripting the transactions you need to your remote DBs.
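
As a rough illustration of that scripting approach, something like this in a gateway timer script (the table, columns, database connection name, and tag paths are all assumptions; this uses the 7.9-era system.tag.readAll, which became system.tag.readBlocking in 8.x):

    # Sample three tags on each timer execution and insert one row,
    # so every tag shares the same timestamp -- like a transaction group.
    paths = ["[default]Line1/TagA", "[default]Line1/TagB", "[default]Line1/TagC"]
    values = [qv.value for qv in system.tag.readAll(paths)]

    system.db.runPrepUpdate(
        "INSERT INTO line1_samples (t_stamp, tag_a, tag_b, tag_c) "
        "VALUES (?, ?, ?, ?)",
        [system.date.now()] + values,
        database="remoteDB",  # your remote database connection name
    )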
