I'm building a large IIoT project on Ignition using MQTT Engine for a client. The 4G router at each site sends data to Ignition as generic MQTT telegrams (not Sparkplug B), and I'd like to air some thoughts with you all.
When each router is configured with a unique topic, MQTT Engine creates a matching number of tags — one per router.
We use a script that browses the HiveMQ folder and creates a UDT instance for each of these topics. The data/message tag in the UDT references the MQTT tag and carries an event script that processes the telegram and writes to the UDT tags and SQL.
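A minimal sketch of that browse-and-create step, assuming the MQTT Engine folder layout described above; the tag provider, folder names, and the `RouterTelegram` UDT are placeholders, not our actual config:

```python
# Sketch: browse the MQTT Engine custom-namespace folder and create one UDT
# instance per topic folder. Provider/folder/UDT names are placeholders.
results = system.tag.browse("[MQTT Engine]HiveMQ")
instances = []
for result in results.getResults():
    if result["hasChildren"]:  # each topic shows up as a folder
        name = str(result["name"])
        instances.append({
            "name": name,
            "tagType": "UdtInstance",
            "typeId": "Routers/RouterTelegram",  # hypothetical UDT definition
            "parameters": {"topic": name},
        })
# collision policy "i" ignores tags that already exist, so this is safe to re-run
system.tag.configure("[default]Routers", instances, "i")
```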
One common configuration for all the routers would be preferable, but then the MQTT topic would be the same for every unit. We are able to add the router serial number to the telegram, giving each telegram a unique identifier.
My question is basically: will Ignition and MQTT Engine handle 2,000+ telegrams per second arriving on the same topic/tag any differently than 2,000+ MQTT tags each receiving one telegram per second?
I know a Gateway Tag Change Script is better than a UDT tag change script (the latter can miss events).
I don't know much about tags or MQTT (in Ignition), but I've read a bit on the forum about missing events, so I think you want the multiple tags, and you want the least amount of code possible in the onChange script, handing the work off (async) to a project script or something — see the sketch below.
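Something like this, maybe; `telegrams.process` is a made-up project script function, not a real API:

```python
# Tag "valueChanged" event sketch: keep the handler thin and push the real
# work onto another thread. telegrams.process is a hypothetical project script.
def valueChanged(tag, tagPath, previousValue, currentValue, initialChange, missedEvents):
    if initialChange or currentValue.value is None:
        return
    payload = currentValue.value
    path = str(tagPath)
    system.util.invokeAsynchronous(lambda: telegrams.process(path, payload))
```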
Part of the reason Sparkplug B exists is that plain MQTT doesn't carry enough information in a standard way to automate tag creation. Sparkplug was created to make that possible.
I'd be shopping for a new router brand, or pressuring the one you have to upgrade to the 21st century. That's where your "venting" should be directed.
I think the MQTT modules have an option to indicate the payload is JSON, and even when using a "Custom Namespace" instead of Sparkplug it will break that out into tags. Have you tried this, instead of rolling your own on the Ignition side?
Thanks for the replies. I will try the Cirrus Link forum as well.
All the (future) 2,000 devices are identical, meaning they all send data with the same structure, which is easy to process with UDTs and event scripts. This isn't about creating tags, but about how best to handle the large amount of traffic we can expect as the system grows.
A JSON payload does get broken out into tags, but it's still potentially one tag folder and the same amount of raw data per unit of time.
Okay, slightly off topic here, but why are you expecting a large amount of traffic?
The whole point of MQTT is efficient transfer of only the data that changes. If the vendor has assembled and is publishing a single large payload that contains both static and dynamic data points, they are doing MQTT wrong.
Instinctively I think you'd be better off using 2,000 separate tags, configuring each router to publish to its own distinct topic.
If you used a single tag, it would receive 2,000 JSON payloads per second (your words), and your script would have to execute in under 0.5 ms on average (1 s ÷ 2,000 messages) or you would start to fall behind.
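If you want to sanity-check that budget, a rough standalone timing loop would look something like this — just JSON parsing here, as a stand-in for whatever the real handler does, so treat the result as a lower bound:

```python
# Rough timing sketch for the 0.5 ms budget: parse a sample telegram N times
# and report the average cost. Tag writes and SQL would add to this.
import time, json

sample = '{"serialno": "RTR-0001", "time": "2024-05-01T12:00:00Z", "tag": "flow", "value": 42.7}'

N = 10000
start = time.time()
for _ in range(N):
    t = json.loads(sample)  # stand-in for the real telegram processing
avg_ms = (time.time() - start) * 1000.0 / N
print("avg %.4f ms per telegram (budget: 0.5 ms)" % avg_ms)
```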
We are only sending changes, so we are all good there.
Sure, 2,000 units would be some years down the road, and it's not expected at all that all 2,000 of them will have a new value to transmit in the same second. I'm just thinking ahead: could we possibly hit a limit?
We could always limit the number of messages to each topic by creating a new router configuration each year, quarter, etc.:
topic2024 (topic)
  serialno (unique identifier)
  time
  tag
  value
The next year we change the router config to topic2025 and just add it as a new trigger to the gateway event script that handles the telegram.
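For concreteness, a telegram under that scheme might look like this — only the field names come from the list above; the exact JSON shape is an assumption:

```python
# Hypothetical telegram published to "topic2024".
import json

raw = '{"serialno": "RTR-00123", "time": "2024-05-01T12:00:00Z", "tag": "flow", "value": 42.7}'
telegram = json.loads(raw)
print(telegram["serialno"])  # the unique identifier that replaces per-router topics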
I refer to a project script from the tag's event handler. The server will still need to run the script the same number of times; the difference is one event triggering the script 2,000 times, versus 2,000 events each triggering it once.
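As a sketch, the project script on the receiving end might look like this — the tag paths and field names are assumptions based on the telegram structure above:

```python
# Hypothetical project script (e.g. telegrams.py) called from the event handler.
import json

def process(tagPath, raw):
    t = json.loads(raw)
    base = "[default]Routers/%s" % t["serialno"]
    # write the reported value to the matching UDT member tag; writeAsync
    # keeps the calling thread from blocking on the tag write
    system.tag.writeAsync(["%s/%s" % (base, t["tag"])], [t["value"]])
```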
I suppose it might not matter as long as the rate of incoming changes never stays high enough that the script can't keep up... It seems like creating a simulated load test wouldn't be too difficult.
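A rough load generator could run from a separate machine with paho-mqtt; the broker address, topic, rate, and payload here are all assumptions:

```python
# Standalone load-test sketch using paho-mqtt (API shape shown is 1.x;
# paho-mqtt 2.x needs a callback_api_version argument to Client()).
import json, time
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("broker.example.com", 1883)
client.loop_start()

RATE = 2000  # simulated telegrams per second
for i in range(RATE * 60):  # roughly one minute of load
    payload = json.dumps({"serialno": "SIM-%04d" % (i % RATE),
                          "time": time.time(), "tag": "flow", "value": i})
    client.publish("topic2024", payload, qos=0)
    time.sleep(1.0 / RATE)  # sleep resolution limits accuracy at this rate

client.loop_stop()
client.disconnect()
```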
This whole setup is just making me itchy/nervous, because I recently helped support troubleshoot a "memory leak" that turned out to be a setup very similar to this, except they were publishing changes at some absurd rate like every 1 ms, and the gateway tag change script queue was just filling up faster than the scripts could execute.