MQTT Transmission - Negative Data Latency Value

I am doing a proof of concept with Cirrus Link’s MQTT Modules.

I have a Beckhoff PC (PC 1) updating (changing all tag values of) 10k tags every 50ms on an Ignition Edge gateway. The data is passed from the Beckhoff OPC-UA server to the Ignition Edge OPC-UA server. Then MQTT Transmission publishes all data 50ms after a value change to a different Ignition gateway.

PC 1 is connected to PC 2 via an unmanaged switch with no other devices on the network.

My issue is that while testing this setup (I’d like to work up to 50k tags), I’m getting a negative value for “data latency” in the “Node Info” folder. I have searched the manual but haven’t found anything detailing what this value is. I’m assuming it is a measure of the latency (in ms) between when the Transmission module published the data and when it was received by the Distributor/Engine module?

From logging the data coming through and running a report on it, it appears I’m getting values at 250ms intervals. My general concern is how old the values being received are.

Any feedback would be appreciated.

I’m a little confused by the description of your setup. It sounds like you’re using OPC UA to get these 10k tags into Ignition and then also publishing the values with the MQTT Transmission module? And bringing them back in again via MQTT?

I apologize for not being clear. The tags are brought into an Ignition Edge server via OPC UA on PC 1 and then published to the MQTT Distributor module on PC 2’s full Ignition gateway.

Does this mean you set the "Tag Pacing Period" to 50ms?

Yes, sir. The Transmission module Tag Pacing Period is 50ms.

I don’t know how that works internally, but maybe by also being subscribed for data at 50ms you are continually tripping that reset, until some kind of internal buffer (either 5 values or 250ms of time) fills and publishes all the data it has; otherwise, if values kept changing too quickly, a value would never be published.

Maybe time for @wes0johnson to chime in :slight_smile:

I appreciate your insight, Kevin.

The ‘Data Latency’ is the difference between the MQTT Engine message receipt time and the MQTT Transmission transmit time. MQTT Engine takes the timestamp in the payload that represents the time Transmission sent the message and subtracts it from its own current system time when it receives the message. In order for this to show a reasonable value, the two system clocks must be synced. So, I’d recommend using a time server to ensure both systems’ clocks are valid.
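
As a rough illustration only (this is not the module’s actual code), the calculation boils down to:

    # Hypothetical sketch of how the 'Data Latency' value is derived.
    # tx_timestamp_ms: timestamp MQTT Transmission wrote into the payload when it published.
    # rx_timestamp_ms: MQTT Engine's local system time when the message arrived.
    def data_latency_ms(tx_timestamp_ms, rx_timestamp_ms):
        return rx_timestamp_ms - tx_timestamp_ms

    # If PC 2's clock is even slightly behind PC 1's, rx - tx can go negative,
    # which is why clock sync matters more here than the network itself.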

With regard to timing, the way Transmission’s tag pacing period works is as follows (a rough sketch follows the list):

  • If no ‘pending message’ exists for that device or Edge Node and a tag change event fires, it starts a pending message, puts the event in it, and starts a timer for whatever the tag pacing period is set to.
  • If a pending message exists and a tag change event fires, it puts the tag change event into the pending message.
  • When the tag pacing period timer expires, it finalizes message construction with all aggregated tag change events, publishes it, and clears the pending message so the process can start over.
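
To make that concrete, here is a minimal sketch of that buffering pattern in Python pseudocode of my own (not the actual Transmission implementation):

    import threading

    class PendingMessagePublisher:
        # Illustrative only: aggregate tag change events for one edge node/device
        # and publish them once per tag pacing period.
        def __init__(self, pacing_period_ms, publish_fn):
            self.pacing_period_ms = pacing_period_ms
            self.publish_fn = publish_fn      # e.g. wraps the MQTT publish call
            self.pending = None               # buffered tag change events
            self.lock = threading.Lock()

        def on_tag_change(self, event):
            with self.lock:
                if self.pending is None:
                    # First event since the last publish: open a pending message
                    # and arm the pacing timer.
                    self.pending = [event]
                    threading.Timer(self.pacing_period_ms / 1000.0, self._flush).start()
                else:
                    # Timer already running: just add the event to the pending message.
                    self.pending.append(event)

        def _flush(self):
            with self.lock:
                events, self.pending = self.pending, None
            self.publish_fn(events)           # one payload per pacing period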

The tag count you’re talking about and the number of events are pretty high. Timing can be influenced by various factors. We take advantage of multiple processors/cores where possible, but some of the things that take place must be synchronized and/or serialized to ensure ‘in-order message delivery’. But like with any application, at some point you may simply run out of CPU. We do have customers using tag counts higher than this, but those systems run on some beefy hardware.

If you are running all of those tags/change events through a single device or edge node, it forces more of that data to be run through a single thread more often. So, you may get better performance by splitting the tags up across multiple edge nodes and/or devices. More information on how this can be done (and what I mean by edge nodes and devices) is here: https://docs.chariot.io/display/CLD80/MQTT+Transmission+Transmitters+and+Tag+Trees
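
For illustration only (the group/edge node/device names below are hypothetical, not from your setup), a default-transmitter tag tree that splits the load might look something like:

    Edge Nodes/
      Group1/
        EdgeNode1/
          Device1/   <- ~2,500 tags
          Device2/   <- ~2,500 tags
        EdgeNode2/
          Device3/   <- ~2,500 tags
          Device4/   <- ~2,500 tags

Since the pending message and pacing timer are per device/edge node, each of those devices batches and publishes its own change events rather than funneling everything through one.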

@wes0johnson

Wes,

Thank you very much for the feedback. You bring up a great point about a time server, and I had a sneaking suspicion that might be the cause of the negative value. And, for our applications, I don’t foresee needing to push 10k tags through one device; I was just interested in what the system would produce. I will read through the information you provided.

Either way, I love the modules and their ease of use. Thank you.

Ok, a little off topic, but this made me curious about what kind of value change throughput I could get over OPC UA.

Current setup is 25,000 tags @ 50ms sampling interval. I tried one configuration that was completely un-buffered and another with a small buffer.

No buffer:

Subscription Publishing Interval = 25.0
Monitored Item Queue Size = 1

mean rate: 498129.0333332324
one minute rate: 500563.8860153531
mean rate: 498134.81755341234
one minute rate: 500563.8860153531
mean rate: 498154.19951455
one minute rate: 500648.77597849176
mean rate: 498155.01349632116
one minute rate: 500648.77597849176

Small buffer:

Subscription Publishing Interval = 100.0
Monitored Item Queue Size = 5

mean rate: 498184.33714154206
one minute rate: 500213.9809772647
mean rate: 498305.0405567225
one minute rate: 500213.9809772647
mean rate: 498414.2424244875
one minute rate: 500213.9809772647
mean rate: 498416.8305390383
one minute rate: 500406.9632990893

Each eventually settled at the expected throughput of 500,000 value changes per second, though. Not bad. I’ll have to try it with more tags and see what happens.
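
For reference, the expected figure is just the subscription arithmetic (assuming every tag changes on every sample):

    tags = 25_000
    sampling_interval_s = 0.050
    expected_changes_per_sec = tags / sampling_interval_s   # 25,000 / 0.05 = 500,000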


Very nice. I need to read up more on the protocols themselves.

I played with this a little more.

I can get 1M/sec in configurations where the ideal throughput is 1M/sec.

I can get ~1.25M/sec with a single client in a configuration that would ideally yield 2M/sec (or an ideal of 1.33M/sec… seems to be a bottleneck somewhere around 1.25M/sec).

I can get ~1.95M/sec using two clients both configured for an ideal of 1M/sec.
