MQTT Store & Forward

Good morning,

Thank you to those who have answered my beginner questions so far; they have been quite helpful in building my confidence with the platform.

The current dilemma I am facing:

I am passing an analog value from Ignition Edge to the Ignition Gateway through MQTT. Store & Forward is enabled.

In the Designer, on the gateway side, the tag populates under "MQTT Engine", and I have created a reference tag monitoring the source tag path. History is enabled on this reference tag, and it records properly until there is a connection interruption to the edge node; the missed history does not populate when the connection is re-established.

The question is: would the reference tag "see" all the values coming in batches after the connection is re-established, and log them to the configured historian? Or would I have to set up history on the original tag that the MQTT system is publishing?

The latter option. Ref tags don't receive backfill data to log into history, which is a big issue! And MQTT tags disappear from the tag provider if a connection is down long enough... And when they come back, the history configuration on the tags is lost, so you'll need to set up a script to add the history config again (a rough sketch follows). It's a big PITA.
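For illustration, a maintenance script along these lines could reapply history configuration after the MQTT Engine tags reappear. This is only a sketch: the folder path, provider name, and property values are assumptions to adapt to your own setup.

```python
# Sketch: reapply history configuration to MQTT Engine tags after a
# reconnect recreates them. Paths and property values are placeholders;
# adjust for your own provider and folder structure.
base_path = "[MQTT Engine]Edge Nodes/MyGroup/MyEdgeNode/MyDevice"

# Read the current tag configurations under the device folder.
configs = system.tag.getConfiguration(base_path, True)

history_props = {
    "historyEnabled": True,
    "historyProvider": "MyHistorian",  # placeholder history provider name
}

def apply_history(cfg, parent_path):
    # Recurse into folders; merge history props into atomic tags.
    if str(cfg.get("tagType")) == "Folder":
        for child in cfg.get("tags", []):
            apply_history(child, parent_path + "/" + str(cfg["name"]))
    else:
        tag = {"name": str(cfg["name"])}
        tag.update(history_props)
        # Collision policy "m" merges these props into the existing tag.
        system.tag.configure(parent_path, [tag], "m")

for cfg in configs:
    apply_history(cfg, base_path)
```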

Thanks for your reply!

So I should be able to set up tag history on the original MQTT tag, disable it on the reference tag, and still be able to use the reference tag in components such as a Power Chart, and it will pull the history from the original tag?

Additionally, if I lose the tag history configuration and a script has to reconfigure it on its own, will the tag ID change in the DB?

:grimacing: Nope, you'll need to use the path to the MQTT tag.

I'm 99% sure the old tag will be retired and a new row will be created, so yes, the tag ID will change. But it's the tag path that is used to join history together, not the tag ID, so you will still see retired tags' history.
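You can see this in the historian's tag dictionary table. A sketch, assuming the default sqlth_te table name and a named database connection (both placeholders for your setup):

```python
# Sketch: inspect the historian's tag dictionary. When a tag is
# reconfigured, the old sqlth_te row gets a 'retired' timestamp and a
# new row (new id) is created for the same tagpath. History queries
# join on tagpath, so the retired rows' data remains visible.
rows = system.db.runPrepQuery(
    "SELECT id, tagpath, created, retired "
    "FROM sqlth_te WHERE tagpath LIKE ? ORDER BY created",
    ["%my/analog/tag%"],  # placeholder tag path fragment
    "MyHistorianDB",      # placeholder database connection name
)
for row in rows:
    print("%s %s %s %s" % (row["id"], row["tagpath"], row["created"], row["retired"]))
```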

I'm successfully using reference tags with MQTT data on store & forward, but the catch is that all data must be flushed in chronological order, and the history settings require Min Time Between Samples to be set to 0.
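As a sketch of that combination: a reference tag over the MQTT Engine tag with history enabled and Min Time Between Samples at 0 might be configured like this. The paths and provider name are placeholders, and I'm assuming the JSON key for Min Time Between Samples is historyTimeDeadband; verify against a JSON export of your own tag.

```python
# Sketch: reference tag over the MQTT Engine tag, with history enabled
# and Min Time Between Samples = 0 so backfilled bursts are not
# deadband-filtered. Paths/provider are placeholders; the JSON key for
# Min Time Between Samples is assumed to be historyTimeDeadband.
ref_tag = {
    "name": "AnalogValue_Ref",
    "tagType": "AtomicTag",
    "valueSource": "reference",
    "sourceTagPath": "[MQTT Engine]Edge Nodes/MyGroup/MyEdgeNode/MyDevice/AnalogValue",
    "historyEnabled": True,
    "historyProvider": "MyHistorian",
    "historyTimeDeadband": 0,
}
# Collision policy "o" overwrites an existing tag of the same name.
system.tag.configure("[default]MQTT/References", [ref_tag], "o")
```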

See this article for details (there are others in the menu as well for more info): MQTT History Back-Fill with Reference Tags - MQTT Modules for Ignition 8.x - Confluence

Here's another one that explains more as well: MQTT History - MQTT Modules for Ignition 8.x - Confluence

Have you tried the setting on the gateway to allow backfill out of order?

This is good to know, though; I speak from second-hand experience only.

Yeah, I tried it, and it didn't work for me, even though it says you can flush out of order on 8.1.4 and newer. I could be doing something wrong, but I still feel more work needs to be done in coordination between Ignition and Cirrus Link on the module and reference tags. With Min Time Between Samples set to 0, of course it's going to record everything as it's flushed, even though I really only want slower data for history compared to the live data. I couldn't find a way around it besides having MQTT Transmission on the edge poll the OPC tags directly at a slower rate, since I also can't use reference tags at a slower rate to reference the OPC tags: reference tags don't respect the polling rate of the tag group they're assigned to.

Just wanted to find out if there has been any new development on your side regarding history via ref tags? Thanks for the tip about ref tags not abiding by tag group timing; you saved me a few hours of pointless testing!

Nothing new. I believe I'm going to have to go back to the edge device and have my MQTT tags point directly to the source at the desired update rate in order to get them to push over MQTT at that rate as well.

I've successfully tested both out-of-order/direct-to-historian and in-order/flush (replay) S&F modes (on both MQTT Engine tags and reference tags) using 4.0.19 Transmission/Engine modules.

I'm now curious if the latter (replay) method will flush/replay the cached data all the way through Ignition's OPC UA server - and if an OPC client will successfully log those historical values (if it's writing to a CSV or DB, for example).

It seems it should work in theory, but testing thus far has failed to validate that.

Any thoughts on the matter @wes0johnson?

for reference: Historian and Sparkplug-B - #5 by wes0johnson

I would guess it wouldn't, because of the polling on the OPC UA side. Because Engine writes the values so fast, a polling application is almost surely going to miss some events. That same thread you linked included a comment from me about 'tag latching' at MQTT Engine. Could this work for you? MQTT Engine Tag Latching - MQTT Modules for Ignition 8.x - Confluence

I know it isn't OPC UA, but it might provide a mechanism to do what you want. Certainly it would allow you to write to a CSV or another DB.
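As a rough illustration of that last point (the latching handshake itself is covered in the Confluence page), a gateway tag change script could append each event to a CSV as Engine writes it. The tag path this script is attached to and the file path below are placeholders.

```python
# Sketch: gateway tag change script (attached to the latched MQTT Engine
# tag) that appends every value change to a CSV. newValue is the
# QualifiedValue the tag change event provides; the file path is a
# placeholder for a writable location on the gateway host.
if not initialChange:
    line = "%s,%s,%s\n" % (
        newValue.timestamp,  # event timestamp
        newValue.value,      # event value
        newValue.quality,    # event quality
    )
    f = open("/var/log/ignition/mqtt_history.csv", "a")
    try:
        f.write(line)
    finally:
        f.close()
```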

Thank you for the response!

Should the fact that the OPC UA client is subscribing (as opposed to polling) make a difference?

Perhaps I can mess with MQTT Engine tag group rate?

I’ll check into the Latched solution!

Bit more info:

I’m essentially trying to “tack on” an OPC UA HA (Historical Access) server (or similar functionality) to an MQTT+Ignition environment. In other words, I want to be able to ensure an OPC UA client will be able to receive/see all the historical data that might be stored and forwarded.

I’ve tested various scenarios and gotten close to what I need, but not quite all the way.

I'm not sure on this. OPC UA is not my area of expertise. Maybe @Kevin.Herron can answer this? I can say that without latching, MQTT Engine can/will write many historical tag events per millisecond.

Also curious if the third-party "DataHub" I'm trialing is at fault here/not following spec:

Though I have the Primary Host ID configured (C2C) and the topic subscription set to spBv1.0/#, it presents itself to Chariot as only STATE/C2C. Presumably it should be spBv1.0/STATE/C2C, because the Transmitter with the matching PHID refuses to connect to the broker.

I don't believe 'DataHub' has submitted a listing request to the Eclipse Sparkplug Working Group confirming they are compliant. The STATE topic was a fairly late change to the spec before it was ratified, so it is quite possible some vendors haven't updated yet. MQTT Engine and Transmission support both flavors.
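For concreteness, the two flavors are STATE/&lt;primary_host_id&gt; in pre-ratification drafts and spBv1.0/STATE/&lt;primary_host_id&gt; in the ratified spec. A minimal paho-mqtt sketch that watches both (the broker address is a placeholder; the host ID matches the C2C example above):

```python
# Sketch: watch both Sparkplug STATE topic flavors with paho-mqtt.
# Pre-ratification drafts used STATE/<primary_host_id>; the ratified
# Sparkplug 3.0 spec uses spBv1.0/STATE/<primary_host_id>.
import paho.mqtt.client as mqtt

HOST_ID = "C2C"

def on_message(client, userdata, msg):
    print("%s -> %r" % (msg.topic, msg.payload))

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.com", 1883)  # placeholder broker address
client.subscribe("STATE/" + HOST_ID)          # old flavor
client.subscribe("spBv1.0/STATE/" + HOST_ID)  # ratified flavor
client.loop_forever()
```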

On the client side, you need to make sure the Queue Size it requests for each Monitored Item is set to something large enough to accommodate a burst of values arriving.
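As one illustration, here is a minimal client sketch using the Python asyncua library; the endpoint URL and node ID are placeholders, and the key detail is the queuesize argument on the data-change subscription:

```python
# Sketch: OPC UA client that requests a large monitored-item queue so a
# burst of flushed values isn't dropped. Endpoint and node ID below are
# placeholders; uses the asyncua library (pip install asyncua).
import asyncio
from asyncua import Client

class Handler:
    def datachange_notification(self, node, val, data):
        # Each queued value arrives here, including bursts.
        print("%s = %s" % (node, val))

async def main():
    async with Client("opc.tcp://gateway.example.com:62541") as client:
        node = client.get_node("ns=2;s=[default]MQTT/References/AnalogValue_Ref")
        # 500 ms publishing interval; queue sized for a burst of values.
        sub = await client.create_subscription(500, Handler())
        await sub.subscribe_data_change(node, queuesize=1000)
        await asyncio.sleep(60)  # run for a while

asyncio.run(main())
```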