Each tag has a queue, size 5, that change events get put into and pulled out of for execution. If your script is taking too long to execute and changes are happening quickly, this queue will overflow and that flag will get set.
When an overflow happens the oldest event (the head of the queue) is discarded to make room.
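The drop-oldest behavior can be sketched with a bounded-queue analogy (a plain Python deque here; Ignition's internal structure is a Java queue, so this is only a model, not the actual implementation):

```python
from collections import deque

# A deque with maxlen=5 models the per-tag event queue: when a sixth
# event arrives, the oldest entry at the head is silently dropped.
event_queue = deque(maxlen=5)

for event_id in range(7):  # 7 change events arrive faster than they drain
    event_queue.append(event_id)

print(list(event_queue))  # only the newest 5 survive; events 0 and 1 are gone
```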
I hope this means a tag change event on the same tag, not on any of the tags where a tag change script is registered? If I have 1000 tags registered for tag change event triggers in the gateway, assuming they all call the same function for processing (e.g. writing to the DB), I hope the events will be queued (assuming the same tag doesn't change too fast or chatter).
I have UDP message OPC tags that run a script when they change. Originally I had that tag change script parse the message, but that took too long.
Now that script just writes the raw message to a memory tag, and a tag change script on the memory tag does the parsing.
When the OPC tag changes, I have a counter that indirectly writes to a different memory tag each time. I have a range of 1 to 1000 tags available; when the counter hits 1000, it starts over at 1. It is not wrapping around that fast.
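A 1-to-1000 wrap-around counter like that boils down to simple modular arithmetic. A hypothetical helper (`next_slot` is my name for it, not the OP's actual code):

```python
def next_slot(current, limit=1000):
    """Advance a 1-based counter, wrapping from limit back to 1."""
    return current % limit + 1

# e.g. 999 -> 1000, 1000 -> 1, 1 -> 2
print(next_slot(999), next_slot(1000), next_slot(1))
```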
The email about the missed events is on the memory tag, not the OPC tag. It's a bit strange that that's happening, because the counter would have to go around twice for it to happen.
The problem we have is that 100+ devices report over UDP, and sometimes those devices can report very fast and at the same time. Since I got the missed events flag, I figured that was what was happening.
Sounds like a job for a direct Jython UDP listener thread (using Java's networking classes) that pushes the raw packets into a thread pool work queue of your own. Careful use of script module top-level globals and function definitions can keep the parsing code efficient. (You really do not want your fast path to ever execute a def statement.)
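A minimal sketch of that pattern, written here with CPython's standard socket/threading/concurrent.futures modules for illustration (in Ignition's Jython you would likely reach for java.net.DatagramSocket and java.util.concurrent instead; the queue size, worker count, and parse_packet body are all placeholders):

```python
import queue
import socket
import threading
from concurrent.futures import ThreadPoolExecutor

results = queue.Queue()

def parse_packet(data, addr):
    # Placeholder for the real message parsing; here we just decode the
    # payload and hand the result off on a queue.
    results.put(data.decode("ascii", errors="replace"))

def run_listener(sock, pool, stop_event):
    """Fast path: receive datagrams and hand them straight to workers.

    Nothing heavy runs in this loop -- no def statements, no parsing."""
    sock.settimeout(0.5)  # short timeout so the loop can notice stop_event
    while not stop_event.is_set():
        try:
            data, addr = sock.recvfrom(65535)
        except socket.timeout:
            continue
        pool.submit(parse_packet, data, addr)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 0))               # port 0: let the OS pick a free port
pool = ThreadPoolExecutor(max_workers=4)  # your own work queue, sized as needed
stop = threading.Event()
listener = threading.Thread(target=run_listener, args=(sock, pool, stop))
listener.daemon = True
listener.start()
```

The listener thread does nothing but receive and enqueue, so it can keep up even when many devices report at once; all parsing happens on the worker pool, outside Ignition's size-3 tag event pool.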
I added the code below to check how long my tag change scripts take to parse the messages, and I am storing those timings in a dataset. This is running very fast, so I don't see why I'm getting missed events with the tag change scripts executing that quickly.
Unless I get 5+ messages at the exact same time.
import time

start_time = time.time()
# Parsing code here
elapsed = time.time() - start_time  # stored in the dataset
That’s interesting. So if a script is running, the updates are still queued according to the scan class/tag group settings. I thought a long running script just delayed the scan class timing until it was done running.
But if the updates happen faster than the scan class, Ignition will never know about the change, and also won’t queue the changes. Is that correct?
Another tag script performance piece that may be affecting OP is that there is a dedicated thread pool size 3 that all tag change scripts are executed on. So in addition to having a per-tag queue of size 5 that events will sit in, there’s only ever 3 tag change scripts executing at once.
This is where you can really get into trouble with long running scripts (as @pturmel hinted), because if you have 3 long running scripts then nothing else is executing until at least one of them finishes.
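As a rough illustration of why the size-3 pool matters, here is a plain-Python model (the 200 ms "script" duration and the task count are made up, not measured from Ignition): four long-running scripts on three workers take two rounds instead of one.

```python
import time
from concurrent.futures import ThreadPoolExecutor, wait

def long_script(n):
    time.sleep(0.2)  # pretend this tag change script runs for 200 ms
    return n

pool = ThreadPoolExecutor(max_workers=3)  # models the size-3 tag event pool
start = time.time()
futures = [pool.submit(long_script, n) for n in range(4)]
wait(futures)
elapsed = time.time() - start
# Only 3 scripts run concurrently, so the 4th waits for a free worker:
# total time is roughly two rounds of 0.2 s, not one.
pool.shutdown()
```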
Running only 3 scripts at the same time is unfortunate. My scripts are running fast, in the low milliseconds, and we will always have 50 to 100+ UDP devices sending messages all the time. The data comes in as a long string of text that I have to parse.
The chances of 3 tag change scripts running at the same time are probably very high. If there are events queued for multiple tags (30+) at a time, could each tag end up maxed out at 5 in its queue, waiting for the other 3 tags to finish executing their tag change scripts?
Any other feedback on handling this is much appreciated, even if we have to go about parsing this UDP message differently than we are.
Are you saying the scripts should be stored in the project library and not the shared script library?
I don't think that's what you mean, because tags are gateway scoped, and if I'm calling a project-scoped script the tag won't know where to find it.
I think this answers my question asked earlier. If I understand it correctly, the queue of 5 events is for a single tag, and there can be 3 threads (a number that can be increased with some configuration settings) handling 3 different tags' changes near-simultaneously. I guess the three threads (if any) will be switched only after the queue of 5 (if any) in each is completed. Is my understanding correct?
No. Go to project scripting, gateway events. The tag change events there are part of the project and can use both project and shared script modules. They also allow a single event script to subscribe to many tags. If you search the forum you can find many examples of me touting gateway tag change event scripts and criticizing tag events.