I’m trying to understand the interaction between Ignition’s own data logging process and historical data delivered via Sparkplug MQTT.
Specifically, as I understand it, Ignition will sit and monitor the live value of a tag, looking for changes according to its deadband algorithm and the minimum and maximum sample times specified in the tag configuration. Any changes that make it through this processing are stored in SQL. That all makes sense. But suppose the MQTT provider drops offline and, upon recovery, dumps a huge series of historical values into Ignition via Sparkplug, tagging each one as historical via the is_historical property on the metric.
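To make the scenario concrete, here's a rough sketch of what that replayed backlog looks like. This uses plain Python dicts as a stand-in for the actual Sparkplug B protobuf wire format, and the helper name is mine, but the is_historical property is the real Sparkplug mechanism in question:

```python
# Conceptual sketch only: plain dicts standing in for the Sparkplug B
# protobuf payload. make_metric is a hypothetical helper, not a real API.
def make_metric(name, value, timestamp_ms, historical=False):
    """Build a metric; store-and-forward replay sets is_historical."""
    metric = {
        "name": name,
        "value": value,
        "timestamp": timestamp_ms,
        "properties": {},
    }
    if historical:
        # Marks this value as buffered/replayed data rather than a live sample.
        metric["properties"]["is_historical"] = {"type": "Boolean", "value": True}
    return metric

# On reconnect, the edge node replays its buffered once-per-second samples:
backlog = [
    make_metric("Line1/Temp", 72.4 + i * 0.01, 1_700_000_000_000 + i * 1000,
                historical=True)
    for i in range(5)
]
```

Each replayed metric carries its original timestamp plus the is_historical flag, which is what lets the receiving side distinguish this backlog from live data.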
The question is: Does this historical data get plonked straight into the SQL database without going through the processing that Ignition applies to data that it itself samples, or are the deadband and sample rate properties applied?
In other words, suppose a tag is configured with a deadband and a minimum sample time of 10 seconds. I understand that if MQTT is delivering live data, only values that meet these criteria will get put into SQL, so you'll see a sample at most once every 10 seconds. But suppose MQTT then delivers 5 minutes' worth of historical data that was sampled once per second and often does not exceed the deadband. Does this historical data go straight into SQL, or is it subject to the same sample processing as live data?
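To illustrate what's at stake, here's a minimal sketch of the kind of filtering I mean. This is my own toy model of an absolute deadband plus minimum-sample-time pass (the function name and exact semantics are hypothetical, not Ignition's actual implementation, which is precisely what I'm asking about):

```python
def filter_samples(samples, deadband, min_sample_s):
    """Toy model of live-data processing: absolute deadband plus a
    minimum time between stored samples.

    samples: list of (timestamp_seconds, value) in time order.
    Returns the subset that would be written to SQL under this model.
    """
    stored = []
    last_t = last_v = None
    for t, v in samples:
        if last_t is None:
            stored.append((t, v))           # always store the first value
            last_t, last_v = t, v
            continue
        if t - last_t < min_sample_s:       # too soon since last stored sample
            continue
        if abs(v - last_v) <= deadband:     # change too small to record
            continue
        stored.append((t, v))
        last_t, last_v = t, v
    return stored

# 5 minutes of once-per-second data that mostly sits inside the deadband,
# with a brief excursion once a minute:
raw = [(t, 100.0 + (0.6 if t % 60 == 0 else 0.05)) for t in range(300)]
kept = filter_samples(raw, deadband=0.5, min_sample_s=10)
```

Under this model the 300 raw samples collapse to a handful of stored rows. So the question is whether a historical backlog gets thinned like this on the way into SQL, or whether every one of the 300 replayed values lands in the database untouched.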