I am uncertain whether this is an issue with tag change events or tag groups, but I am seeing tag change events being missed.
My tag change events have multiple tags that each open one popup window. The tags are latched on for at least 1 second in the PLC. The tag group is set to Direct mode, Subscribed data mode, at a rate of 350ms.
For one of these tags to change, the operator has to hit a Next button, which they sometimes do very rapidly. Even though the PLC is latching those tags on for 1 second, somehow not all the events are firing and only some of the popups are being shown.
I can provide the change scripts and any other information if anyone here has an idea how to get to the bottom of my issue, thanks!
Are these Tag Value Changed events defined on the tags or are these Client Tag Change events (or Gateway Tag Change Events)? How many events do you have configured or how many tags are configured for the one event?
Also, take a look at the status page on the gateway for the device you are reading the tags from. What is your overload factor for the 350ms group? If it's anything over 0%, then you are not polling as fast as you think you are, and there is delay.
Are there any errors in the client logs? I'm assuming your change scripts are fairly simple, but to be able to rule them out, post an example one here, as preformatted text, see Wiki - how to post code on this forum.
The tag change events are Vision Client Tag Events and not defined on the tags themselves.
The status page has the overload at zero, though I have seen it up to 5%.
There are not any errors in the gateway Status tab > Logs. Is that what you mean, or is there somewhere else I should look?
Here is one of the tag change scripts:
if newValue.value:
    window = "Admin SP"
    system.nav.openWindow(window)
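One hypothetical hardening, in case the subscription's initial event could also pop the window: gate on the `initialChange` flag as well (verify it is in scope for your Vision client tag change scripts in your version). The gate logic, written as plain Python so it can be tested outside Ignition:

```python
def should_open_popup(new_value, initial_change):
    """Return True only for a genuine off-to-on transition.

    new_value: the value from newValue.value (0/1 or boolean)
    initial_change: True when the event fires because the
    subscription just started, not because the PLC latched the tag.
    """
    return bool(new_value) and not initial_change
```

In the event script itself this would just be `if newValue.value and not initialChange:`.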
Edit:
I found the Diagnostics tab (I always have the menu bar hidden and hadn't noticed it was there).
I don't see any errors in the logs, though I could have the log level incorrect for something.
I just did some more testing, and when I really spam the Next button it seems like it's preventing all of the tags from being read. Is it possible that sending that many tag writes out is swamping the reads?
Yes, and tag change events will not be sent from the gateway to Vision clients faster than the project poll rate (default 250ms). If your scripts are tying up the Vision foreground thread, you'll miss more.
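If you want to rule the foreground thread out, move any long-running work off it and only touch the GUI when the work is done. In a Vision client the calls for that are `system.util.invokeAsynchronous` for the work and `system.util.invokeLater` for the GUI update; a minimal sketch of the same pattern in plain Python (so it can be tested outside Ignition):

```python
import threading

def run_async(work, on_done=None):
    """Run `work` off the calling thread so an event script returns
    immediately.  In a Vision client the equivalent is
    system.util.invokeAsynchronous(work), with any GUI update pushed
    back onto the foreground thread via system.util.invokeLater."""
    def wrapper():
        result = work()
        if on_done is not None:
            on_done(result)
    t = threading.Thread(target=wrapper)
    t.start()
    return t
```

The key point is that the tag change script itself should do nothing slower than kicking off the background call.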
I have the poll rate at 100ms, and I can't imagine my scripts are tying up anything. Do you know how I could find out if it's my scripts tying up the foreground thread? Any ideas to help mitigate this would be much appreciated; asking them to stop spamming the button hasn't worked.
Thanks for that idea, it helps quite a bit. All of my tags are Modbus tags, and I suspect the addressing is not optimized. I have about 5100 tags, and under ideal conditions I can get 350ms using the Ignition Modbus driver. I know you have a Modbus driver; could I achieve faster rates with it? If I am going to have to restructure my tag addresses, do you have any tips for faster read rates? Thanks for your help!
After some more digging, I think this is an issue with the Ignition Modbus driver.
I had a Wireshark capture going while experiencing the issue, and the traffic looked normal, but Ignition didn't see any tags change during the spam writes. It wasn't just the Vision client tag change events not firing; the tags were not being read at all.
I tried adding a 2nd network card to the PLC and adding it as a 2nd device, then configuring the Next button's tag to use the 2nd device. Everything worked normally.
I then removed the 2nd network card and configured the 2nd device to use the original IP address. Everything worked as expected.
It looks like with a single Ignition device, the Modbus driver ignores or misses reads when heavy writes are occurring.
I did open a ticket, but I have not heard back after my latest findings. I will let everyone here know what support says.
I do want to try your driver, though the first time I tried to use it I had some problems with my existing connections. I'll make a better effort this time!
I'd use system.tag.query() to find all of them, and script the parsing and substitution with regexes. I'd use the same pass to add explicit unit numbers.
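A sketch of the substitution pass, as plain Python. The address pattern and unit number here are made-up examples, not your real addressing; adapt the regex to whatever scheme your tags actually use:

```python
import re

# Matches a bare Modbus holding-register address like "HR100" or
# "HRF2300" with no unit prefix.  Real tag configs will need a
# pattern matched to your own addressing scheme.
ADDR = re.compile(r'^(HR[A-Z]*)(\d+)$')

def add_unit(address, unit=1):
    """Prefix a bare Modbus address with an explicit unit number,
    e.g. "HR100" -> "1.HR100".  Addresses that are already prefixed
    or that don't match the pattern are returned untouched."""
    m = ADDR.match(address)
    if not m:
        return address
    return "%d.%s" % (unit, address)
```

The same loop that rewrites addresses is a natural place to regroup registers into contiguous blocks, since the driver can read a contiguous span in one request.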
Except for UDT instances that have parameterized offsets. Fix that definition to fix all the instances at one go.
Support had me set the concurrent requests in the device settings to something higher than the default, which is 1. Changing the concurrent requests to 3 appears to fix the issue I was seeing. It also seems to have improved performance; I can now drive my scan times down to 150ms without overload.
Apparently not all devices can handle more than one concurrent request so definitely test before changing the value from 1.