How to return MinMax aggregate results in custom historian?

I’m currently implementing the MinMax aggregation for a custom historian, and I’ve run into an odd issue. I can compute the min and max aggregations individually without a problem, but when I try to put them together, I’m only getting half of my results. I know that QueryEngine.doQueryRaw/doQueryAggregated requires me to call DatapointProcessor.onPointAvailable. In broad strokes, as I process the retrieved data, I call onPointAvailable with all the arguments; if there are 300 results, onPointAvailable gets called 300 times. For MinMax, each window is meant to hold two points, so I call onPointAvailable twice per window; if there are 300 windows' worth of min and max results, I'm calling onPointAvailable 600 times. What I'm finding is that I only get the mins back (probably because I’m adding them first). Is there something I’m missing?
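To make the call pattern concrete, here's a minimal sketch of what I'm doing. DatapointProcessor and the onPointAvailable signature here are simplified stand-ins so the example is self-contained, not the real SDK types:

```java
import java.util.*;

public class MinMaxCallbackDemo {
    // Simplified stand-in for the SDK's DatapointProcessor interface;
    // the real onPointAvailable signature differs.
    interface DatapointProcessor {
        void onPointAvailable(long timestamp, double value, int quality);
    }

    // For each MinMax window, report both points via the callback --
    // 300 windows means 600 onPointAvailable calls.
    static void reportWindows(int windowCount, DatapointProcessor processor) {
        for (int i = 0; i < windowCount; i++) {
            long windowStart = i * 1000L;
            // Synthetic min/max values; 192 is a placeholder "good" quality code.
            processor.onPointAvailable(windowStart, i - 0.5, 192); // min
            processor.onPointAvailable(windowStart, i + 0.5, 192); // max
        }
    }

    public static void main(String[] args) {
        int[] calls = {0};
        reportWindows(300, (ts, v, q) -> calls[0]++);
        System.out.println(calls[0]); // 600
    }
}
```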

@Kurt.Larson Can you provide some guidance to this issue I’ve run into?

So I believe this actually identified a bug in the platform's query infrastructure.

When your historian returns multiple values with the same timestamp (which is correct for MinMax - one min and one max per window), the NonCalculatingResultNode class that handles native aggregations only returns the first value at each timestamp. The second value (your max) is being silently discarded. This is a bug in how the platform processes results from native aggregations.
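To make the failure mode concrete, here's a simplified model of the behavior described above. These are not the actual platform classes, just a sketch of a "first match per timestamp wins" lookup:

```java
import java.util.*;

public class SameTimestampDemo {
    record Point(long timestamp, double value) {}

    // Mimics the described behavior of NonCalculatingResultNode.find():
    // only the first value at a given timestamp is ever returned.
    static Optional<Point> find(List<Point> points, long timestamp) {
        return points.stream()
                     .filter(p -> p.timestamp() == timestamp)
                     .findFirst();
    }

    public static void main(String[] args) {
        // A MinMax window at t=1000 correctly produces two points:
        List<Point> window = List.of(
            new Point(1000, 2.0),   // min
            new Point(1000, 98.0)); // max

        // find() only surfaces the first (the min); the max is silently dropped.
        Point result = find(window, 1000).orElseThrow();
        System.out.println(result.value()); // prints 2.0 -- the max never appears
    }
}
```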

To confirm and work around this, I have a couple of questions:

  1. Does your QueryEngine implementation include MinMax in its set of supported native aggregates? Specifically, are you returning LegacyAggregateAdapter.of(AggregationMode.MinMax) (or an aggregate with a matching ID) from your supported aggregates?
  2. If so, can you temporarily remove MinMax from your native aggregates list and test again?
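For step 2, the change amounts to returning a supported-aggregates set that omits MinMax. The types below are simplified stand-ins so the sketch compiles on its own; in your module you'd do this with the real SDK classes:

```java
import java.util.*;

public class SupportedAggregatesSketch {
    // Stand-ins for the SDK's aggregate types, not the real definitions.
    enum AggregationMode { Minimum, Maximum, MinMax }
    record Aggregate(AggregationMode mode) {
        static Aggregate of(AggregationMode mode) { return new Aggregate(mode); }
    }

    // Before: MinMax advertised as native -> the platform trusts the
    // historian's results and hits the same-timestamp bug.
    static Set<Aggregate> withMinMax() {
        return Set.of(Aggregate.of(AggregationMode.Minimum),
                      Aggregate.of(AggregationMode.Maximum),
                      Aggregate.of(AggregationMode.MinMax));
    }

    // After: MinMax omitted -> the platform falls back to doQueryRaw()
    // and performs the aggregation itself.
    static Set<Aggregate> withoutMinMax() {
        return Set.of(Aggregate.of(AggregationMode.Minimum),
                      Aggregate.of(AggregationMode.Maximum));
    }

    public static void main(String[] args) {
        System.out.println(withMinMax().size());    // 3
        System.out.println(withoutMinMax().size()); // 2
    }
}
```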

When an aggregate isn't listed as natively supported, the platform will:

  1. Query your historian for raw data via QueryEngine.doQueryRaw()
  2. Perform the MinMax aggregation itself on the platform side

This fallback path doesn't have the same-timestamp limitation and should return all your results correctly.
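Roughly, the fallback amounts to grouping raw points into windows and emitting a min and a max for each. This is just an illustration in plain Java, not the platform's actual aggregation code:

```java
import java.util.*;
import java.util.stream.*;

public class FallbackMinMax {
    record Point(long timestamp, double value) {}

    // Group raw points into fixed-width windows and emit min and max
    // for each window -- two points per window, every time.
    static List<Point> minMax(List<Point> raw, long windowMs) {
        Map<Long, List<Point>> windows = raw.stream()
            .collect(Collectors.groupingBy(p -> p.timestamp() / windowMs,
                                           TreeMap::new, Collectors.toList()));
        List<Point> out = new ArrayList<>();
        windows.forEach((bucket, pts) -> {
            long windowStart = bucket * windowMs;
            DoubleSummaryStatistics stats = pts.stream()
                .mapToDouble(Point::value).summaryStatistics();
            out.add(new Point(windowStart, stats.getMin()));
            out.add(new Point(windowStart, stats.getMax()));
        });
        return out;
    }

    public static void main(String[] args) {
        List<Point> raw = List.of(
            new Point(0, 5.0), new Point(400, 1.0), new Point(900, 9.0),
            new Point(1200, 3.0), new Point(1800, 7.0));
        // Two 1000 ms windows -> 4 result points (min and max per window).
        minMax(raw, 1000).forEach(p ->
            System.out.println(p.timestamp() + " " + p.value()));
    }
}
```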

For reference, the Core Historian (our new Historian implementation) doesn't support MinMax natively for this exact reason - it falls back to raw queries and lets the platform handle the aggregation.

So yeah, your implementation is correct. The bug is in how NonCalculatingResultNode.find() handles multiple values at the same timestamp. Until we address this in a future release, the workaround is to not advertise MinMax as a native aggregate, allowing the platform's aggregation logic to handle it instead.

I'll have this fixed for the 8.3.4 release. Let me know if removing the MinMax as a native aggregate resolves the issue (albeit temporarily).

Hi Kurt, thanks for getting back to me. For reference, I think this was also an issue in 8.1.

> Does your QueryEngine implementation include MinMax in its set of supported native aggregates? Specifically, are you returning LegacyAggregateAdapter.of(AggregationMode.MinMax) (or an aggregate with a matching ID) from your supported aggregates?

I’m currently using an enum that implements AggregationType and returning the list of supported aggregations to my QueryEngine. Is that what you mean?

The reason why I’m trying to handle aggregation is because in 8.1 Ignition would have a hard time handling the volume of data we were returning, and so we performed aggregation on the database side of things. It does appear that removing the MinMax from the aggregates list results in Ignition’s default aggregator returning all the values.

Yeah, that also works.

> The reason why I’m trying to handle aggregation is because in 8.1 Ignition would have a hard time handling the volume of data we were returning, and so we performed aggregation on the database side of things. It does appear that removing the MinMax from the aggregates list results in Ignition’s default aggregator returning all the values.

Yep, this has been around for a while. I've got a fix ready and am aiming to get it into a nightly build sometime next week. Will update here once it's available.