Tag Change Missed Events

Are you saying you're testing whether that flag is true or not?

If so, I get an email every time that flag is true (if missedEvents: send email),
and I have been getting a lot of emails.

That kind of suggests that your tag event script needs some tuning to run faster. What exactly is that script doing?

I have UDP message OPC tags that run a script when they change. Originally I had that tag change script parse the message, but that took too long.
Now that script just writes the message to a memory tag and lets the memory tag's own script do the parsing.
I use a counter to indirectly write each message to a different memory tag when the OPC tag changes. I have a range of 1 to 1000 tags available; when the counter hits 1000, it starts over. It does not wrap back around that fast.

The missed-events email is coming from the memory tag, not the OPC tag. It's a bit strange that it's happening there, because the counter would have to go around twice for that to happen.

The problem is that we have 100+ devices reporting over UDP, and sometimes those devices report very fast and at the same time. So when I got the missed-event flag, I assumed that was what was happening.
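The round-robin fan-out described above might look roughly like this. This is a hedged sketch: the tag paths (`[default]UDP/Counter`, `[default]UDP/Msg<N>`) and the helper name `nextSlot` are hypothetical, and the `system.tag.readBlocking`/`system.tag.writeBlocking` calls (Ignition 8.x API) are shown only in comments since they require a running gateway. The wrap logic itself is plain Python:

```python
# Round-robin slot selection for fanning messages out to Msg1..Msg1000.
# `nextSlot` is a hypothetical helper; limit matches the 1..1000 range above.
def nextSlot(counter, limit=1000):
    """Advance the counter, wrapping back to 1 after `limit`."""
    return 1 if counter >= limit else counter + 1

# Inside the OPC tag's valueChanged script it could be used like this
# (illustrative only -- paths are made up for this example):
#   slot = nextSlot(system.tag.readBlocking(['[default]UDP/Counter'])[0].value)
#   system.tag.writeBlocking(
#       ['[default]UDP/Counter', '[default]UDP/Msg%d' % slot],
#       [slot, currentValue.value])

print(nextSlot(999), nextSlot(1000))  # -> 1000 1
```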

Sounds like a job for a dedicated Jython UDP listener thread (using Java's networking classes) that pushes the raw packets into a thread-pool work queue of your own. Careful use of script-module top-level globals and function definitions can keep the parsing code efficient. (You really do not want your fast path to ever execute a def statement.)
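A minimal sketch of that shape: one thread does nothing but receive-and-enqueue, and a separate worker drains the queue and parses. This example uses Python's `socket` module instead of `java.net` so it runs anywhere; `parse()` is a comma-split placeholder for the real message parsing, and the shutdown sentinel is just one way to stop the worker cleanly:

```python
import socket
import threading
try:
    import queue              # Python 3
except ImportError:
    import Queue as queue     # Jython 2.7

work_queue = queue.Queue()
results = []

def parse(raw):
    # Placeholder for the real parsing of the long message string
    return raw.decode('ascii').split(',')

def worker():
    # Drain the work queue; a None sentinel shuts the worker down
    while True:
        raw = work_queue.get()
        if raw is None:
            break
        results.append(parse(raw))

def listen(sock, n_packets):
    # Fast path: receive and enqueue only -- no parsing on this thread
    for _ in range(n_packets):
        data, _addr = sock.recvfrom(4096)
        work_queue.put(data)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('127.0.0.1', 0))           # OS picks a free port for the demo
port = sock.getsockname()[1]

w = threading.Thread(target=worker)
w.start()
l = threading.Thread(target=listen, args=(sock, 2))
l.start()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b'dev1,42,OK', ('127.0.0.1', port))
sender.sendto(b'dev2,7,ALARM', ('127.0.0.1', port))

l.join()
work_queue.put(None)                  # stop the worker
w.join()
sender.close()
sock.close()
print(results)
```

In a gateway script module you would keep the socket, queue, and thread objects as top-level globals so the listener survives across events and the fast path never re-executes a `def`.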

I added the code below to check how long my tag change scripts are taking to parse the messages, and I am storing the timings in a dataset. This is running very fast; I don't see why I'm getting missed events with the tag change scripts running at that speed.

Unless I get 5+ messages at the exact same time.

import time
start_time = time.time()

# Parsing code here

elapsed = time.time() - start_time  # seconds; appended to a dataset tag for review

0.0090000629425
0.00899982452393
0.0149998664856
0.0160000324249
0.0090000629425
0.0160000324249

That's interesting. So if a script is running, the updates are still queued according to the scan class / tag group settings? I thought a long-running script just delayed the scan class timing until it was done running.

But if the updates happen faster than the scan class, Ignition will never know about the change, and also won't queue the changes. Is that correct?

The only thing that might get missed are script invocations for in between values, but Ignition will still have had the value pass through the tag.

Another tag script performance piece that may be affecting the OP: there is a dedicated thread pool, size 3, that all tag change scripts are executed on. So in addition to the per-tag queue of size 5 that events sit in, only 3 tag change scripts are ever executing at once.

This is where you can really get into trouble with long running scripts (as @pturmel hinted), because if you have 3 long running scripts then nothing else is executing until at least one of them finishes.
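To make the drop mechanics above concrete, here is a deterministic toy model, not Ignition's actual implementation: one busy tag receives a burst of 8 changes while all 3 script threads are tied up elsewhere, so nothing drains its size-5 queue and the overflow is marked missed. Class and constant names are invented for illustration:

```python
from collections import deque

QUEUE_LIMIT = 5   # per-tag event queue depth, as described above
THREADS = 3       # tag event thread pool size, as described above

class TagEventQueue:
    """Toy model of one tag's bounded event queue (names are hypothetical)."""
    def __init__(self):
        self.pending = deque()
        self.missed = 0

    def offer(self, event):
        # When the queue is full and no thread is draining it,
        # the event is dropped and the missedEvents flag would be set.
        if len(self.pending) >= QUEUE_LIMIT:
            self.missed += 1
        else:
            self.pending.append(event)

# Burst of 8 changes on one tag while all THREADS workers are busy:
q = TagEventQueue()
for i in range(8):
    q.offer(i)

print(q.missed)         # -> 3 (events 6, 7, 8 were dropped)
print(len(q.pending))   # -> 5
```

The takeaway: missed events are a function of burst size versus queue depth and drain rate, not of any single script being slow.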

Only 3 scripts running at the same time is unfortunate. My scripts run fast, in the low milliseconds, but we will always have 50 to 100+ UDP devices sending messages all the time. The data comes in as a long string of text that I have to parse.

The chance of 3 tag change scripts running at the same time is probably very high. If events queue up for many tags (30+) at a time, could each tag's queue max out at 5 while waiting for the 3 executing tag change scripts to finish?

Any other feedback on handling this is much appreciated, even if we have to go about parsing this UDP message differently than we are.

I think that could be possible, especially if you have any other tag change scripts that might be slower.

You can also influence this thread pool size with a setting:
-Dignition.tags.scriptthreads=N

You could also consider using the "legacy" project-based tag change scripts, which don't have any kind of queue or thread pool involved and don't drop events, AFAIK.

These are my first choice in all cases (so far).

Are you saying the scripts stored in the project library rather than the shared script library?
I don't think that's what you mean, because tags are gateway-scoped, and if I call a project-scoped script the tag won't know where to find it.

I think this answers my question from earlier. If I understand correctly, the queue of 5 events is per tag, and there can be 3 such queues in service (a number that can be increased with a configuration setting) for 3 different tags changing near-simultaneously. I guess the three threads (if any) switch only after the queue of 5 (if any) in each is completed. Is my understanding correct?

I would also like to know more about it.

No. Go to project scripting, gateway events. The tag change events there are part of the project and can use both project and shared script modules. They also allow a single event script to subscribe to many tags. If you search the forum you can find many examples of me touting gateway tag change event scripts and criticizing tag events.
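A gateway tag change event script of that kind could be sketched as below. The event-scoped variables (`event`, `newValue`, `initialChange`) are the ones Ignition exposes to these scripts per its documentation, but they only exist on a gateway, so the Ignition part is shown in comments; `handleUdp` is a hypothetical project-library function, and the comma-split parsing is a placeholder:

```python
# Gateway Events > Tag Change script body (sketch). One script can
# subscribe to many tag paths, so a single handler covers all devices:
#
#   if not initialChange:
#       MyParsers.handleUdp(str(event.getTagPath()), str(newValue.value))
#
# The project-library function it delegates to can be plain, testable code:

def handleUdp(tag_path, raw):
    """Parse one raw UDP payload string into (source tag, fields)."""
    fields = raw.split(',')   # stand-in for the real parsing
    return tag_path, fields

print(handleUdp('[default]UDP/Msg1', 'dev1,42,OK'))
```

Keeping the real logic in a library function like this also means the fast path never executes a `def`: the function is compiled once when the script module loads.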

I see what you’re talking about.

From the gateway tag change events, how do I detect missed events?

In fact, I have been talking about these events all along! So these events are not restricted to 3 threads and a 5-event queue each, and there are no missed events? Is that correct? How does it work?

One more question I had on these events: how do you register tags for them through scripts? As I understand it, we can only register them through the Designer. Is there a way to register tags for these events through scripts?

(Sorry for barging into this discussion, but it is an interesting topic for me as well.)

No.


As Kevin noted, there are no queued-event limits for those, so short of a gateway out-of-memory error, they don't miss. (And a gateway OOM breaks so much throughout the server that a missed event would be the least of your worries.)

Strictly speaking, project-based tag change events can miss when a project updates from a designer. You can tell that it updated, as scripting restarts and the event fires its “initialChange” again.

I may be late to the party, but no one has mentioned using system.util.invokeAsynchronous() to decouple data logging from the tag event queue. Even this approach will hit a wall if the transaction frequency is too "gregarious", though.
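The decoupling could look like the sketch below. `system.util.invokeAsynchronous` is the real Ignition call (it runs the given function on a gateway thread and returns immediately), but it only exists on a gateway, so it is shown in a comment; `parseAndLog` and its comma-split body are illustrative placeholders for the actual parsing and logging:

```python
# Hypothetical worker the event script hands off to. Keeping it at the
# top level of a script module means the event's fast path stays tiny.
def parseAndLog(raw):
    record = raw.split(',')   # stand-in for the real message parsing
    # The DB insert / system.tag.writeBlocking(...) would go here.
    return record

# Inside the tag's valueChanged event (illustrative):
#   system.util.invokeAsynchronous(lambda: parseAndLog(str(currentValue.value)))

print(parseAndLog('dev7,19,ALARM'))
```

The trade-off: the event returns to the queue quickly, but each call spawns work on another thread, so a sustained flood of messages just moves the backlog elsewhere instead of eliminating it.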

@Kevin.Herron, @pturmel I would be curious to hear your thoughts.