How to get tag history to only return changes for one tag path, but return values for all tag paths at those times

I am trying to filter what information gets returned in a tag history lookup.

paths = [
	'[MQTT Engine]Edge Nodes/.../Calibration/Ph Calibrate PB',
	'[MQTT Engine]Edge Nodes/.../Calibration/Ph4 Captured Value',
	'[MQTT Engine]Edge Nodes/.../Calibration/Ph7 Captured Value'
]

startTime = system.date.parse('2023-02-01 22:10:00')

data = system.tag.queryTagHistory(paths = paths, startDate = startTime, columnNames = ["t_stamp", "pushButton", "pH4", "pH7"])

The script returns the following dataset:

|           t_stamp            | pushButton |   pH4   |   pH7   |
|------------------------------|------------|---------|---------|
| Tue Feb 07 22:42:35 UTC 2023 |    NULL    |  8546.0 |  NULL   |
| Tue Feb 07 22:42:35 UTC 2023 |    NULL    |  8546.0 | 12093.0 |
| Tue Feb 07 22:42:35 UTC 2023 |          0 |  8546.0 | 12093.0 |
| Wed Feb 08 22:10:21 UTC 2023 |          0 | 19113.0 | 12093.0 |
| Wed Feb 08 22:10:21 UTC 2023 |          0 | 19113.0 | 19115.0 |
| Wed Feb 08 22:10:33 UTC 2023 |          0 | 19115.0 | 19115.0 |
| Wed Feb 08 22:10:39 UTC 2023 |          1 | 19115.0 | 19115.0 |
| Wed Feb 08 22:10:40 UTC 2023 |          0 | 19115.0 | 19115.0 |
| Wed Feb 08 22:12:13 UTC 2023 |          1 | 19115.0 | 19115.0 |
| Wed Feb 08 22:12:13 UTC 2023 |          1 |  4000.0 | 19115.0 |
| Wed Feb 08 22:12:13 UTC 2023 |          1 |  4000.0 |  4000.0 |
| Wed Feb 08 22:12:14 UTC 2023 |          0 |  4000.0 |  4000.0 |
| Wed Feb 08 22:14:09 UTC 2023 |          0 | 17694.0 |  4000.0 |
| Wed Feb 08 22:14:09 UTC 2023 |          0 | 17694.0 | 17694.0 |
| Wed Feb 08 22:14:38 UTC 2023 |          0 |  4000.0 | 17694.0 |
| Wed Feb 08 22:14:38 UTC 2023 |          0 |  4000.0 |  4000.0 |
| Wed Feb 08 22:15:37 UTC 2023 |          0 | 20000.0 |  4000.0 |
.
.
.

But this returns thousands of rows, most of which I don't need.

If I query just the first tag path by itself, I get this dataset:

|           t_stamp            | pushButton |
|------------------------------|------------|
| Tue Feb 07 22:42:35 UTC 2023 |          0 |
| Wed Feb 08 22:10:39 UTC 2023 |          1 |
| Wed Feb 08 22:10:40 UTC 2023 |          0 |
| Wed Feb 08 22:12:13 UTC 2023 |          1 |
| Wed Feb 08 22:12:14 UTC 2023 |          0 |

What I want are the rows returned in the second table combined with the columns from the first.

Or better yet, I really only need the rows where there is a 1 in the pushButton column:

|           t_stamp            | pushButton |   pH4   |   pH7   |
|------------------------------|------------|---------|---------|
| Wed Feb 08 22:10:39 UTC 2023 |          1 | 19115.0 | 19115.0 |
| Wed Feb 08 22:12:13 UTC 2023 |          1 |  4000.0 |  4000.0 |

It cannot do that. You should be using transaction groups (or the scripted equivalent) to record a whole row of sampled data when that one boolean trigger fires. The tag historian is not designed to do what you are asking. { This isn't really a Perspective question. }


You might be able to achieve what you're looking for with a Custom Tag History Aggregate.

https://docs.inductiveautomation.com/display/DOC81/Custom+Tag+History+Aggregates

It won't be trivial.

I agree with @pturmel that using a transaction group (or the scripted equivalent) is most likely a better solution.

Custom tag history aggregates cannot correlate across multiple columns. So, no.

Theoretically you can script one query for the trigger bit, then make many queries for the other values in a tight window around the timestamps from each row of the first result. Ugly. (Probably necessary one time to backfill your wide table with the values from before you set it up.)
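A minimal sketch of that two-phase approach. The helper name `window_bounds` and the one-second half-width are my own assumptions, not anything from Ignition; the `system.*` calls only exist on a gateway, so they are shown as comments (tag paths elided as in the question):

```python
def window_bounds(trigger_ms, half_width_ms=1000):
    """Epoch-millisecond (start, end) bounds bracketing one trigger timestamp.

    half_width_ms is an assumed tolerance; tune it to how closely the
    captured-value updates trail the push button.
    """
    return (trigger_ms - half_width_ms, trigger_ms + half_width_ms)


# Inside Ignition (untested sketch):
#
# trig = system.dataset.toPyDataSet(system.tag.queryTagHistory(
#     paths=['[MQTT Engine]Edge Nodes/.../Calibration/Ph Calibrate PB'],
#     startDate=startTime,
#     columnNames=['t_stamp', 'pushButton']))
# hits = [row['t_stamp'] for row in trig if row['pushButton'] == 1]
#
# for ts in hits:
#     lo, hi = window_bounds(system.date.toMillis(ts))
#     vals = system.tag.queryTagHistory(
#         paths=ph_paths,  # the two captured-value paths
#         startDate=system.date.fromMillis(lo),
#         endDate=system.date.fromMillis(hi),
#         returnSize=1)  # one aggregated row per window
```

Note that this is one extra history query per trigger row, which is exactly the database hammering described below; it is probably only acceptable as a one-time backfill.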

Just record into a wide table instead of the historian (or in addition to the historian).
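A hedged sketch of that scripted wide-table recorder, meant as a gateway tag change script on the push button tag. The table name `calibration_log`, its column names, and the datasource name `process_db` are all assumptions for illustration:

```python
# Wide-row INSERT: one row per trigger, all values sampled together.
# Table, column, and datasource names are assumptions, not from the thread.
INSERT_SQL = ("INSERT INTO calibration_log (t_stamp, push_button, ph4, ph7) "
              "VALUES (?, ?, ?, ?)")

# Gateway tag change script on the Ph Calibrate PB tag (runs only inside
# Ignition, so it is commented out here):
#
# if currentValue.value == 1 and not initialChange:
#     ph4, ph7 = [qv.value for qv in system.tag.readBlocking([
#         '[MQTT Engine]Edge Nodes/.../Calibration/Ph4 Captured Value',
#         '[MQTT Engine]Edge Nodes/.../Calibration/Ph7 Captured Value'])]
#     system.db.runPrepUpdate(INSERT_SQL,
#                             [system.date.now(), 1, ph4, ph7],
#                             'process_db')
```

This is essentially what a transaction group with a boolean trigger does for you, just written out by hand.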

Thanks for your response. Unfortunately, I have no control over how the data is saved, just how to process it. I guess I will just have to filter it after pulling it.
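For the filter-after-pulling route, one plain-Python sketch: run the original wide query, convert it with `system.dataset.toPyDataSet`, keep only the pushButton == 1 rows, and collapse duplicate timestamps down to the last (most complete) row. The dict-based rows below just stand in for PyDataSet rows:

```python
def trigger_rows(rows, trigger='pushButton', ts='t_stamp'):
    """Keep rows where the trigger is 1; the last row per timestamp wins.

    The wide query emits several partially-updated rows with the same
    timestamp (see the 22:12:13 triple above); keeping the last one
    assumes rows arrive in query order, so it carries the final values.
    """
    latest = {}
    for row in rows:
        if row[trigger] == 1:
            latest[row[ts]] = row
    return [latest[t] for t in sorted(latest)]


# Stand-in for toPyDataSet output, mirroring rows from the question
# (integer t_stamp values substituted for the real dates):
rows = [
    {'t_stamp': 1, 'pushButton': 0, 'pH4': 19115.0, 'pH7': 19115.0},
    {'t_stamp': 2, 'pushButton': 1, 'pH4': 19115.0, 'pH7': 19115.0},
    {'t_stamp': 3, 'pushButton': 0, 'pH4': 19115.0, 'pH7': 19115.0},
    {'t_stamp': 4, 'pushButton': 1, 'pH4': 19115.0, 'pH7': 19115.0},
    {'t_stamp': 4, 'pushButton': 1, 'pH4': 4000.0,  'pH7': 19115.0},
    {'t_stamp': 4, 'pushButton': 1, 'pH4': 4000.0,  'pH7': 4000.0},
]
result = trigger_rows(rows)  # two rows, matching the desired table above
```

This only trims the result after the full wide query has already run, so it reduces what you process, not what the historian has to fetch.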

You should pass this on to whoever is in control of that, because the queries you are going to have to run in place of a proper solution are going to hammer that database.
