I have already seen some posts on this topic, but none of them answered my questions.
I want to monitor tag changes and react to them. This monitoring has to be robust (it is a heartbeat; I added some margin and can handle a few lost events, but if I regularly lose events it will raise alarms all the time).
Now there are 2 ways to do this:
Tag Change scripts in Gateway Events
I really like this solution because all the scripts are in the project scope. They are all in one place,
easily manageable and can be reused for multiple tags. Script libraries from the project scope can
also be used. Value, quality and timestamp changes can be monitored with the same script.
Tag Event Scripts directly on the tag
Not really my preferred solution. Scripts are scattered all over the place, each tag has its own
script, not ideal for maintainability and documentation. Everything happens in the gateway scope.
To use custom script libraries, they need to be placed in the gateway scripting project. On the plus side, it has a way to detect lost events.
These are the differences I found so far. However, after implementing and testing both solutions, I have the feeling that the tag event scripts are far more reliable than the gateway event scripts.
I have to mention that my Ignition Gateway is running in a VM and sometimes resources get scarce. (dev. environment, not production grade system)
When monitoring the tag changes with gateway event scripts I constantly miss change events, whereas with tag event scripts everything seems to work fine.
So my question is: Is there any difference in the backend of Ignition how these events are handled / prioritized? Is it just a wrong conclusion or is the tag event system really more “robust” than the project scoped gateway event scripts?
There are differences, but your conclusions are the opposite of what actually happens.
Both mechanisms will never miss a tag change event delivered by the tag system.
A gateway event tag change script has an unbounded queue to hold and execute these change events. You won’t ever miss events, but it’s technically possible to run your memory up if your script execution blocks for a long time, or indefinitely, as events continue to queue up.
Tag event scripts have a bounded queue, per-tag, size 5, and if an execution of a script on that tag takes too long it’s possible for the events to queue up and for the oldest changes to start getting discarded. This is what the
missedEvents flag is for.
Most of the time people “miss events” it’s because they don’t fully grasp that data from PLCs is usually polled, and unless you’re using something like DNP3, which can buffer and deliver multiple changes per point, you can simply miss value changes in between polls.
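To make the missedEvents advice concrete, here is a minimal sketch of a tag event "Value Changed" script body. The parameter list matches the signature Ignition generates for tag event scripts; the stdlib logging module and the return values are my own substitutions so the sketch runs standalone (in a real gateway you would log via system.util.getLogger and not return anything):

```python
import logging

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger("HeartbeatTag")

def valueChanged(tag, tagPath, previousValue, currentValue, initialChange, missedEvents):
    if missedEvents:
        # The per-tag queue (size 5) overflowed and older events were dropped.
        logger.warning("Missed events on %s - queue overflowed", tagPath)
        return "missed"
    # ... normal heartbeat handling would go here ...
    return "ok"
```

Even if the flag is only logged, it turns silent event loss into something you can see during debugging.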
Those quote blocks look awfully familiar… (:
Thanks for this insight into how things work in the background, that was exactly what I was hoping for!
most of the time people “miss events” it’s because they don’t fully grasp that data from PLCs is usually polled
Actually my system is not yet connected to a PLC. For testing, I have two oscillating signals generated with expression tags, both running at the same frequency. One increases a counter, the other sets it back to 0. Theoretically the counter should never exceed 1; taking into account some desync / phase shift it can go higher. But when using gateway event scripts, and with a heavy load on my CPU and memory, the counter regularly runs up to 5 or 6. When I monitor the tags (visually) they keep on oscillating, but the reset script somehow does not get executed.
Do all of these tag change scripts share one queue or do they have individual queues? (this could explain this behaviour)
However, after your explanation I cannot imagine that I will encounter these problems on a dedicated IPC; running several VMs really pushes my laptop to its limit.
With gateway event tag change scripts, each has one queue, regardless of how many tags are assigned to it.
With tag event scripts, each tag has its own queue.
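The two queue behaviors described above can be illustrated with a quick standalone simulation (my illustration, not Ignition internals). A bounded queue silently discards the oldest entries once it is full, which is exactly the missedEvents situation:

```python
from collections import deque

# Unbounded queue (gateway tag change script): nothing is ever dropped,
# but it can grow without limit if the script blocks.
unbounded = deque()
for event in range(8):
    unbounded.append(event)

# Bounded queue of size 5 (tag event script): once full, the oldest
# events are discarded as new ones arrive - the missedEvents case.
bounded = deque(maxlen=5)
for event in range(8):
    bounded.append(event)

print(list(unbounded))  # [0, 1, 2, 3, 4, 5, 6, 7]
print(list(bounded))    # [3, 4, 5, 6, 7]
```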
Ok, makes sense. Then I assume that the order in which elements from different queues are executed is not guaranteed, and one queue can be emptied faster than another. That would match what I see, but it would really only be a problem on a heavily loaded system.
Thanks, that helped.
Yeah, no guarantees about that.
Really, you can use whichever one is most convenient, and if you only take away one piece of advice it would be to check the
missedEvents flag if you use tag event scripts, even if only during debug/development or to log a warning.
Something I forgot to mention (for later readers of this thread) when I marked this as resolved:
The final solution to my problem is to use ONE single gateway event script for both tags involved in this heartbeat.
This way, all events end up in the same queue and their relative processing order is preserved. It is much more stable than using an individual script per tag.
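The single-script idea can be sketched as a plain function so it runs standalone. In Ignition the gateway tag change script body would branch on its built-in variables (e.g. the event's tag path) rather than on parameters, and the tag path names here are made up for illustration:

```python
# Shared heartbeat state; in a real gateway script this might live in a
# project library or a memory tag instead of a module-level dict.
counter = {"value": 0}

def heartbeat_change(tagPath, value):
    """Single handler for both heartbeat tags, mimicking one gateway
    tag change script assigned to two tags (one shared queue)."""
    if tagPath.endswith("HeartbeatPulse"):
        counter["value"] += 1   # one signal increments the counter
    elif tagPath.endswith("HeartbeatReset"):
        counter["value"] = 0    # the other signal resets it
    return counter["value"]
```

Because both tags feed the same script (and thus the same queue), an increment is always followed by its reset before the next increment is processed, so the counter stays low even under load.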
Just to confirm, would the tag group (i.e. rate) have any bearing/impact on a tag and its "private" event queue to keep up with event bursts?
Only in that a tag group rate might influence how fast a tag is updated, e.g. with an OPC tag.