We have an issue where anytime we make a change in the PLC, all tag values lock up for a good 15-20 seconds. I read somewhere that this issue has been seen with controllers on firmware v32 and below, so we upgraded the firmware to v34, but we are still having the issue. We also recently upgraded our network so we could utilize the full 1Gb speed of the processor's Ethernet port, but that didn't seem to make a big difference either.
In Ignition, we have the device connection set up with 8 max concurrent requests. I have 95% of the tags on a 2000-60000ms leased scan rate, with the rest being historical tags at a direct 2000ms scan rate, as this was the only way to get the overload down to an acceptable level.
Is there anything else I can look at? This is a running facility and we have a lot of changes to make in the controller, so these constant interruptions are not ideal.
Does that mean that the only way to mitigate how long it is locked up is to reduce the number of tags we are subscribing to? This system has been in place for years, and we did not start noticing this pause until recently, when we expanded and tacked on probably 30% more tags than we had originally.
Thank you, this is helpful. There are a considerable number of unused tags we have racked up in the controller over the years. I was already planning on going through and cleaning all those out; hopefully that will help some.
You can set the loggers you find by searching for "LogixBrowse" to DEBUG, then edit/save the device connection, and it should log how long the browse operation took. Might be worth seeing if that's even significant before you set out to delete all the unused tags.
I started testing this out and so far it is working great. I have a quick question I hope you can answer: am I correct in my inference that Load Factor is different from the Overload in the original IA driver? To me it looks like a 100% Load Factor means it is sampling at the desired speed.
IA changed their stats reporting pattern in v8.1.6, IIRC. Prior to that, 100% Load == just barely keeping up, while less than 100% indicated some breathing room.
Since the change, IA's driver only reports OverLoad, where 0% is keeping up, possibly with unspecified breathing room, and any positive Overload is not keeping up.
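To make the difference between the two conventions concrete, here is a toy sketch. The formulas are my own illustration (actual cycle time divided by the requested interval), not IA's documented math; only the semantics described above are from the thread:

```python
def load_factor(actual_cycle_ms, requested_ms):
    """Deprecated-style stat: 100% == just barely keeping up,
    < 100% == breathing room, > 100% == falling behind.
    (Hypothetical formula, for illustration only.)"""
    return actual_cycle_ms / requested_ms * 100.0

def overload(actual_cycle_ms, requested_ms):
    """Post-8.1.6-style stat: 0% == keeping up (amount of breathing
    room unspecified), any positive value == not keeping up."""
    return max(0.0, (actual_cycle_ms - requested_ms) / requested_ms * 100.0)

# A 2000ms group that actually completes in 1500ms:
print(load_factor(1500, 2000))  # 75.0  -> visible breathing room
print(overload(1500, 2000))     # 0.0   -> "keeping up", margin hidden

# The same group taking 3000ms:
print(load_factor(3000, 2000))  # 150.0 -> falling behind
print(overload(3000, 2000))     # 50.0  -> falling behind
```

The key point is that Overload collapses everything below "falling behind" into a single 0%, while the older Load Factor still shows how much headroom remains.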
My driver support includes v8.1.0 through v8.1.5, so I use the deprecated stats pattern.
Another option: if you require regular updates to the PLC, but less-frequent updates to Ignition (to add any new tags), you can also disable the Automatic Rebrowse (an advanced property within the device config). If updates are required later on, you can enable this checkbox momentarily (~30 seconds) and disable it again after the browse completes.
Out of curiosity, since you know what goes on behind the scenes: are you essentially able to keep the existing associations of what I assume are tags to IDs, asynchronously do the re-browse and update your "lookup table", and then add/remove/update any subscribed references?
If so, it would be really great if the IA driver also did something like this to be less disruptive to comms/operations. I haven't noticed any issues, but I could see it causing problems on slower networks and/or larger projects.
Symbols are read by an instance ID assigned by the controller, including whole UDTs when possible, which means that to reliably decode the blob of data you get back, the UDT must not have changed from your prior understanding of its definition.
Once one of the "magic attributes" changes, the instance IDs can change, and so can the UDT definitions.
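One way to picture the hazard: a driver caches symbol-to-instance-ID mappings plus the UDT definitions needed to decode reads, and must treat both as suspect the moment the controller's change-detection attributes move. A toy sketch of that bookkeeping (all names hypothetical, not either driver's actual code):

```python
class SymbolCache:
    """Toy model of a Logix driver's browse cache (names hypothetical)."""

    def __init__(self):
        self.instance_ids = {}    # symbol name -> controller-assigned instance ID
        self.udt_defs = {}        # UDT name -> definition used to decode blobs
        self.change_attrs = None  # last-seen "magic attributes" snapshot

    def is_stale(self, current_attrs):
        # If the controller's change-detection attributes moved, any cached
        # instance ID or UDT definition may now be wrong, so reads decoded
        # against the cache can't be trusted until a re-browse completes.
        return self.change_attrs != current_attrs

    def refresh(self, current_attrs, browse_results, udt_defs):
        # Atomically replace the lookup tables after a successful re-browse.
        self.instance_ids = dict(browse_results)
        self.udt_defs = dict(udt_defs)
        self.change_attrs = current_attrs

cache = SymbolCache()
cache.refresh(("change_seq", 1), {"Foo": 0x1234}, {})
print(cache.is_stale(("change_seq", 1)))  # False: safe to decode with cached defs
print(cache.is_stale(("change_seq", 2)))  # True: must re-browse before decoding
```

Note this sketch does nothing about the race described next: between the attribute change and the re-browse, an in-flight read against a recycled instance ID can still return the wrong tag's data.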
I'm curious what he's doing also, or if he's just accepting the possible race condition. I think there's a CRC when reading UDTs somewhere that might be usable to throw away the response instead of trying to parse it, but not so for top-level symbols/arrays...
Nothing stopping tag "Foo" from disappearing or being assigned a new instance ID whilst "Bar" gains its old instance ID, though? If they're the same datatype, you've just applied Bar's value to Foo?
If things ever settle enough I'd love to revisit the Logix driver. It's pretty old at this point, and still written against the Driver API and adapted to the Device API.
I don't know if I'd consider this or not, we'll see. It'd be on the table along with trying to reverse-engineer the AOI encoding.
I think for most users it's not the pause and re-browse that is disruptive, it's the wholesale cancellation of the current request set, and the subsequent immediate execution of the entire new set. Especially disruptive when there are slower rate groups involved, because often those wouldn't have needed to execute yet.
My driver doesn't use request sets. It caches information it needs to perform optimization quickly, but builds new requests every time. That means adding or removing a few subscription items doesn't disrupt my polling. (Helpful for leased tags, too.)
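The contrast can be sketched roughly: a request-set driver cancels everything and immediately executes an entire new set on any subscription change, while building requests fresh each cycle lets items come and go without disturbing polling. A simplified illustration (my own toy code, not either driver's implementation; the batching "optimizer" is a trivial stand-in):

```python
def poll_once(subscriptions, optimize):
    """Build this cycle's requests from the current subscription set.
    'optimize' batches items into as few requests as possible; a real
    driver would use cached layout info so the batching itself is cheap."""
    return optimize(sorted(subscriptions))

# Trivial stand-in optimizer: batch up to 3 items per request.
def batch3(items):
    return [items[i:i + 3] for i in range(0, len(items), 3)]

subs = {"Foo", "Bar", "Baz"}
print(poll_once(subs, batch3))  # one request covering all three items

subs.add("Qux")                 # adding an item cancels nothing in flight;
print(poll_once(subs, batch3))  # the next cycle simply includes it
```

Because nothing persistent is cancelled, slower-rate groups whose turn hasn't come yet are left alone, which is exactly the disruption the request-set approach causes.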
If you have a program that takes information, logs it to a DB, and then sends back a 0, could that process stall and hang up if the PLC doesn't see the value change? Or would there be a change event when the browse finished?