Does anyone know at what point the number of tag event scripts begins to affect the performance of the gateway? All of the alarms and many other events (Conveyor Start/Stop, etc.) will have a script that queries a database for additional information about the tag event and then writes that data into a second database. I am pushing to have the database installed on the same machine as the gateway so I don't have to factor in communication latency. It would be unlikely, but not impossible, for more than one tag to change state at the same time.
I vaguely recall an IA person mentioning that the tag system uses a fixed-size pool of three threads to handle all tag events. If you are doing anything more than a trivial internal calculation that could occur on more than three tags at a time, you risk blocking. I would use system.util.invokeAsynchronous() to push the time-consuming operations onto their own thread.
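A rough sketch of that pattern: inside Ignition you would simply pass your slow function to `system.util.invokeAsynchronous()`, but since that API only exists in a gateway, the helper below emulates it with a plain thread so the idea is runnable anywhere. The tag path, value, and function names are invented for illustration.

```python
import threading

results = []
done = threading.Event()

def slow_db_work(tag_path, value):
    # Placeholder for the "query one DB, write the result to another" logic.
    # Doing this inline in the tag event script would tie up one of the
    # (default) three tag script threads for the duration of the queries.
    results.append((tag_path, value))
    done.set()

def invoke_asynchronous(func, *args):
    # Stand-in for system.util.invokeAsynchronous: fire-and-forget thread.
    t = threading.Thread(target=func, args=args)
    t.daemon = True
    t.start()
    return t

# The tag event handler returns immediately; the DB work runs elsewhere.
invoke_asynchronous(slow_db_work, "[default]Conveyor/Start", 1)
done.wait(5)
print(results)
```

Note the trade-off raised later in this thread: once the work is on its own thread, you lose any ordering guarantee between events on the same tag.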
FWIW, most people I know using Ignition regret putting their database on the same server as Ignition, unless they are carefully isolated VMs on a really beefy hypervisor.
I am one of those people.
I am interested in this thread. I am currently designing a system with a bunch of Tag Event Scripts in my data types that call named queries through the store and forward system. I have timed my scripts, and getting a call into the S&F system takes about 3-10 ms, but I could have hundreds of these running at the same time during production. I did talk with IA and showed them my actual development system, and they did not see an issue with what I was doing, but @pturmel kinda scared me that I could run into a blocking issue.
I have a try:/except: around the named query call - should I be wrapping these in system.util.invokeAsynchronous()?
I wouldn’t go putting async invocations into your tag scripts until you’ve run into a real, measurable problem with the current system.
You can get yourself into real trouble doing this. It’s possible that your async invocations start to pile up and you lose any guarantees about order of execution.
Plus, the number of threads and the per-tag script execution queue size is tunable via system properties if there’s a real problem.
Kevin, can you please share what tunable parameters you are referring to? While I can't confirm it, I feel I have run into this problem before with tag event scripts during a flood of alarms.
The properties are:
ignition.tags.scriptthreads (default 3)
ignition.tags.scriptqueuemaxsize (default 5)
Tag event scripts also have a parameter you can check that indicates the queue has overflowed (missedEvents).
These would be set to custom values in ignition.conf under the “Java Additional Parameters” section. Setting it would look something like:
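Presumably something along these lines; the wrapper index numbers here are placeholders, so use the next unused `wrapper.java.additional.<n>` slots in your own ignition.conf:

```
# ignition.conf, "Java Additional Parameters" section
wrapper.java.additional.5=-Dignition.tags.scriptthreads=6
wrapper.java.additional.6=-Dignition.tags.scriptqueuemaxsize=10
```

The gateway reads these at startup, so a gateway restart is needed for changes to take effect.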
kind of piggybacking here,
Are there any concerns with putting lots of tags in Gateway Tag Change Scripts?
Is there a way to increase threads for Gateway Tag Change Scripts like you illustrated for Tag Event Scripts?
The “legacy” Tag Change Scripts, for better or worse, use a dynamically sized thread pool (per project), so there are no parameters to tune and you don’t need to worry about anything.
I find the “legacy” treatment of projects’ gateway tag change events to be a crime and a shame. They are superior in every important way to tag events. Yeah, you have to add tags to the subscription list. There could have been scripting methods exposed to automate that. Or allowed wildcards and/or filter criteria that could update via a TagStructureListener.
As for tags being outside the project system, I consider that a defect, not an advantage.