Historian and Sparkplug-B

I’m trying to understand the interaction between Ignition’s own data logging process and historical data delivered via Sparkplug MQTT.

Specifically, as I understand it, Ignition monitors the live value of a tag, looking for changes according to its deadband algorithm and the minimum and maximum sample times specified in the tag configuration. Any changes that make it through that processing are stored in SQL. That all makes sense. But suppose the MQTT provider drops offline and, upon recovery, dumps a huge series of historical values into Ignition via Sparkplug, flagging each one as historical via the is_historical property on the metric.

The question is: Does this historical data get plonked straight into the SQL database without going through the processing that Ignition applies to data that it itself samples, or are the deadband and sample rate properties applied?

In other words, suppose a tag is configured with a deadband and a minimum sample time of 10 seconds. I understand that if MQTT is delivering live data, only values that meet these criteria get put into SQL, so you’ll see a sample at most once every 10 seconds. But suppose MQTT then delivers 5 minutes’ worth of historical data that is sampled once per second and often does not exceed the deadband. Does this historical data go straight into SQL, or is it subject to the same sample processing as live data?
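For concreteness, here is roughly what I mean, using the Node.js sparkplug-client conventions (just a sketch; the metric name, type string and values are illustrative, and in that client the Sparkplug B is_historical field is written as isHistorical):

```js
// Illustrative only: a backfilled sample carries its original timestamp and
// an isHistorical flag, unlike a normal live metric.
var historicalMetric = {
    name      : 'Line1/Temperature',            // placeholder metric name
    value     : 72.4,
    type      : 'float',
    timestamp : Date.now() - 5 * 60 * 1000,     // when it was actually sampled
    isHistorical : true                         // Sparkplug B is_historical
};

var liveMetric = {
    name      : 'Line1/Temperature',
    value     : 73.1,
    type      : 'float',
    timestamp : Date.now()                      // sampled just now
};
```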

Perhaps @wes0johnson could answer your question.

On your questions:

Does this historical data get plonked straight into the SQL database without going through the processing that Ignition applies to data that it itself samples, or are the deadband and sample rate properties applied?

By default, that is how it works: the historical data is written directly to the database, bypassing tag change events. There is an alternate configuration which results in all data being flushed in order from the edge, with tag change events called/triggered as it arrives.

In other words, suppose a tag is configured with a deadband and a minimum sample time of 10 seconds. I understand that if MQTT is delivering live data, only values that meet these criteria get put into SQL, so you’ll see a sample at most once every 10 seconds. But suppose MQTT then delivers 5 minutes’ worth of historical data that is sampled once per second and often does not exceed the deadband. Does this historical data go straight into SQL, or is it subject to the same sample processing as live data?

There are a lot of factors to consider here... Generally we would recommend that deadbands be set on the OPC tag at the edge/Transmission side. In that scenario, if history is being stored at the edge, only values that cross the deadband limits are ever stored and forwarded (just as only those values trigger tag change events). By doing this, the data that would be thrown out never even gets transmitted over MQTT.
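To illustrate the idea, here is a toy sketch of absolute-deadband filtering in JavaScript (not Transmission’s actual implementation): only samples that move more than the deadband from the last kept value are stored and forwarded, so everything else never even reaches MQTT.

```js
// Toy absolute-deadband filter: keep a sample only if it differs from the
// last *kept* value by more than the deadband. Anything rejected here is
// never stored-and-forwarded, and therefore never transmitted over MQTT.
function applyDeadband(samples, deadband) {
    var kept = [];
    var last = null;
    samples.forEach(function (s) {
        if (last === null || Math.abs(s.value - last.value) > deadband) {
            kept.push(s);
            last = s;
        }
    });
    return kept;
}

// Example: 1 Hz samples that rarely move more than the 0.5 deadband
// publish far fewer values than are sampled.
var raw = [ { value: 10.0 }, { value: 10.1 }, { value: 10.2 }, { value: 10.8 } ];
console.log(applyDeadband(raw, 0.5).length);   // 2 -- only 10.0 and 10.8 survive
```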

Thanks for this. You say that there’s an alternative configuration. For completeness, can you explain how this mode is enabled? I agree BTW that the edge device should be applying the deadband etc. Just want to make sure I understand how it all fits together!

On the MQTT Transmission Transmitter configuration there is an ‘In-Order History’ boolean; change this to true. On the MQTT Engine side, under the General tab, there is a ‘Store Historical Events’ boolean; change this to false. This alternate configuration will trigger tag change scripts, alarms, transaction groups, etc. on the MQTT Engine side. Keep in mind there are only two valid combinations of these parameters: the defaults (In-Order History = false, Store Historical Events = true) and the settings outlined here (In-Order History = true, Store Historical Events = false). Any other combination doesn’t make sense.

Forgive me, but I need a clue on where to find the Transmission Transmitter…

Gateway configuration home page -> Configure -> MQTT Transmission -> Settings -> Transmitters tab

@wes0johnson, @Kevin.Herron

I don't think we can manage deadbands on OPC UA tags in Ignition 8.0?

No, but in 8.0 Tag Groups have an option to configure the Queue Size for all Monitored Items in the group as well as the UA Subscription Publishing Interval.

Interesting. Does this mean it is possible for a third-party OPC UA server to push lots of timestamped data to the Ignition client this way?
Some OPC UA servers are connected to RTUs over a protocol that transmits buffered historical data after a connection failure with the RTU. We want this real-time data, delayed in time, to go through Ignition's real-time alarming and historian features.

Yes, and that’s how we intend to support DNP3 event buffering. I think alarm and history systems still need some small changes to accommodate the arrival of this bulk data, though.

I am not able to record all Sparkplug historical data points to Ignition Historian.

This is my setup: a Node.js program sends 50 Sparkplug B messages (using sparkplug-client). Each message contains an incremental value from 1 to 50; the timestamps are 10 ms apart and metrics.isHistorical = true.
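For reference, the publisher looks roughly like this (a sketch only: the broker URL, credentials, group/edge/device IDs and the metric name are placeholders, and the metric and flag names follow the sparkplug-client README conventions):

```js
// Sketch of the test publisher: 50 values, 10 ms apart, flagged historical.
var sparkplug = require('sparkplug-client');

var client = sparkplug.newClient({
    serverUrl : 'tcp://localhost:1883',   // placeholder broker
    username  : 'user',
    password  : 'password',
    groupId   : 'Test Group',
    edgeNode  : 'Test Node',
    clientId  : 'historical-test',
    version   : 'spBv1.0'
});

var deviceId = 'Test Device';

client.on('birth', function () {
    // Publish node and device births so MQTT Engine creates the tags.
    client.publishNodeBirth({ timestamp : Date.now(), metrics : [] });
    client.publishDeviceBirth(deviceId, {
        timestamp : Date.now(),
        metrics : [ { name : 'counter', value : 0, type : 'int32' } ]
    });

    // Send 50 historical values, counting 1..50, timestamped 10 ms apart.
    var start = Date.now() - 50 * 10;
    for (var i = 1; i <= 50; i++) {
        var ts = start + i * 10;
        client.publishDeviceData(deviceId, {
            timestamp : ts,
            metrics : [ {
                name : 'counter',
                value : i,
                type : 'int32',
                timestamp : ts,
                isHistorical : true       // mark the sample as historical
            } ]
        });
    }
});
```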

On the Ignition Gateway, MQTT Engine’s “Store Historical Events” setting is enabled.

Tag History is set up for the metrics in question, with the Historical Scanclass set to “Evaluate on Change”.

After the Sparkplug messages are done sending, checking the sqlt_data_* database table shows that only one data point has been recorded.

Is there any other setting I am missing here?

Thanks for this info. Would this still work with a timer-based transaction group?

It would not in all cases - it really depends on how your timing parameters are set up, both the Transmission-side ‘flush rate’ and the Transaction Group’s poll rate. This is because during a flush, tags on the Engine side can be written very quickly as it ‘catches up’. As a result, time-based polling in a Transaction Group will miss tag events, because they arrive faster than they would at regular speed.
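As a toy illustration of the timing mismatch (plain JavaScript, not Ignition code): if a flush writes a new value every 10 ms but a poller only looks at the current value once per second, most of the flushed values are never observed.

```js
// Toy model: 300 flushed values arriving 10 ms apart, observed by a poller
// that reads only the latest value once per second.
var flushed = [];
for (var i = 1; i <= 300; i++) {
    flushed.push({ t : i * 10, value : i });   // 10 ms apart => 3 s of backlog
}

var seen = [];
for (var pollTime = 1000; pollTime <= 3000; pollTime += 1000) {
    // The poller only ever sees whatever the "current" value is at poll time.
    var latest = flushed.filter(function (s) { return s.t <= pollTime; }).pop();
    seen.push(latest.value);
}
console.log(seen);   // [ 100, 200, 300 ] -- 297 of the 300 values were missed
```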

For this reason, we created ‘latch tags’, which allow you to sync events between MQTT Engine and a Transaction Group or Tag Change Script. See here for details: https://docs.chariot.io/display/CLD80/MQTT+Engine+Tag+Latching
