Yes. I’ve been working on tracking down what is using that 1000ms rate. That’s why I changed the scan class I was intentionally using to a slightly odd value of 1037: some rogue tag is using a different rate. And I don’t think it’s a tag group (scan class), as I’ve changed them all slightly. It’s possibly a direct rate overriding the scan class, maybe on an expression tag or something.
Take a look at the OPC Connections → Client page under Status. It’s helped me before when trying to track down what groups use what rates. It won’t necessarily narrow it down to a particular tag, but at least to something you can search for.
Nope, not missing anything; they're all related. The calculation is actually (MeanQueueDuration / SamplingInterval) * 100, which works out to the same thing.
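A minimal sketch of that calculation, assuming both diagnostics are reported in milliseconds (the parameter names here just mirror the diagnostic names above; this isn't the driver's actual code):

```python
# Minimal sketch of the load factor calculation described above.
# Assumes MeanQueueDuration and SamplingInterval are both in milliseconds.

def load_factor(mean_queue_duration_ms, sampling_interval_ms):
    """Percentage of the sampling interval consumed servicing the request queue."""
    return (mean_queue_duration_ms / sampling_interval_ms) * 100.0

# e.g. a 1200 ms mean queue duration against a 1000 ms sampling interval:
print(load_factor(1200.0, 1000.0))  # 120.0 -> the driver can't keep up
```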
Thank You!!! How have I not ever looked there before? Yes, this lists several tags under my default tag group. Much easier method. Thanks again.
Still not sure what is being requested at the 1000ms interval as none of my groups here are using it.
I’d maybe check other providers outside of what you think should be used, in case a tag accidentally got added to a different tag group or provider. You could search for the device name under all providers in the Designer, maybe? Hard to say where it might be hiding, unfortunately.
This is sounding all too familiar for me, and it's how I ended up with a crappy, renamed tag provider: I simply couldn’t find the cause of some tags polling slower than they should, at 5000ms instead of 1000ms, after moving them from a test environment following some big changes (merged 3 gateways into 1). Creating a new tag provider and copying all tags fixed it. I couldn’t set it back to the same name. This was in 8.1.5, though.
Well, that’s not too comforting, but it might at least be some validation that I’m not totally crazy. @grietveld’s suggestion was still helpful for me, so I can at least get the tags from the default scan class under the same group and get rid of at least 1 of the 3 polling cycles.
If it all works after that, I don’t want to go too crazy searching for the one tag. I did check all the other providers in my gateway and didn’t see any tags pointing to that device, so good on that front at least.
Check transaction groups with direct OPC items.
So I don’t have anything in transaction groups going to that PLC. But your comment reminded me that I do have some SQL queries that write back to the PLC and are event-driven. Not exactly sure if those are what's showing up as that 1000ms request rate or not. You’ll have to excuse all the crazy things going on in this project. The requests from this client are beyond anything I’ve been asked to do before, so it’s a bit of a mess.
Psssst!
Wow. For real?? We just recently got done overhauling all of our AOIs to work around this. Ooof. I think we may still wait for it to pass the beta stage, but it may be interesting to bench test in a non-production system. Thanks for passing along the info nonetheless. It is still good news.
All testing is welcome.
The beta will end when neither I nor the peanut gallery finds any (more) show-stoppers.
Just a little better than the Logix driver for reading AOI tags directly...!
(Enip is Phil's driver)
Note that Phil's driver uses load factor, while the Logix driver uses overload, where load factor ≈ overload + 100.
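Purely to illustrate that rule of thumb (the +100 offset is the approximation stated above, not an exact driver formula):

```python
# Illustrative only: approximate mapping between the two metrics,
# based on the rule of thumb load_factor ~= overload + 100.

def overload_to_load_factor(overload_pct):
    # Logix driver "overload": 0% means exactly keeping up.
    return overload_pct + 100.0

def load_factor_to_overload(load_factor_pct):
    # EnIP driver "load factor": 100% means exactly keeping up.
    return load_factor_pct - 100.0

print(overload_to_load_factor(20.0))   # 120.0 -> overloaded by 20%
print(load_factor_to_overload(85.0))   # -15.0 -> 15% headroom
```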
I wouldn't mind if you reposted the original graphics of those on the announcement post...
Will do, just to confirm though, do you mean that screenshot exactly? (the "original" graphics threw me)
They are still present in the alpha topic. I can repost them if you don't mind.
Sure, feel free to post them!
(edit: ah I see what you meant now)
This is just for CIP connections though, right? Not OPC connections? Or does Ignition's native OPC server make CIP requests that get mapped into OPC? Sorry if this is a dumb question; I am just trying to understand this.
This thread veered off into specifics about accessing data in Logix processors via CIP connections, though most of our polling-based drivers will have the same kind of generic diagnostics about request load.
The data acquisition "system diagram" in Ignition most commonly looks like this:
Ignition OPC UA client <--> Ignition OPC UA server <--> PLC (via native protocol, implemented by driver modules)
Sometimes like this:
Ignition OPC UA client <--> 3rd Party OPC UA server (e.g. Kepware) <--> PLC (via native protocol)
and starting to become more common:
Ignition OPC UA client <--> PLC with embedded OPC UA server
The diagnostics discussed in this thread are provided by Ignition's driver modules and only relevant when they are being used.
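To make the first diagram concrete, here's a minimal Jython sketch using Ignition's scripting API, where the OPC UA client side is addressed via `system.opc` calls and the server/driver side is identified by the connection name and item path. "Ignition OPC UA Server" is the default internal connection name; the device name "MyPLC", the tag path, and the "Kepware" connection name are made-up examples:

```python
# Jython (Ignition scripting). Reads a value through the chain in the
# first diagram: Ignition OPC UA client -> Ignition OPC UA server
# -> PLC via a driver module.
# "MyPLC" and the tag path below are hypothetical examples.
qv = system.opc.readValue("Ignition OPC UA Server", "ns=1;s=[MyPLC]Program:MainProgram.SomeTag")
print(qv.value)
print(qv.quality)

# Pointing the same call at a 3rd-party server (second diagram) only
# changes the connection name and item path syntax, e.g.:
# qv = system.opc.readValue("Kepware", "ns=2;s=Channel1.Device1.SomeTag")
```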