I've got a FAT next week, but as soon as it's over I'll give this a shot with the customer's system. I'm seeing a little overload and was going to look through all my tags to see if I missed any optimizations, but I'm guessing a lot of it is from a bunch of standalone tags that I'm having to read. I'll try to grab some stats from before and after. Total tags per PLC is sitting around 50k.
@nminchin: With IA’s driver.
@Kevin.Herron: Indeed, this (CIP Connection Size) is likely the case.
There are several differences between our AOIs/UDTs, so I'm not claiming a direct results comparison. However, I'm monitoring about 800 AOIs, reading only about 2k combined tags at 1 sec from those AOIs. Regarding the low 'Monitored Item Count': one of those tags is a bit-encoded status word, where around 12-20 BOOLs are unpacked to their respective tags within Ignition. In general, all process variables and all alarms of every AOI are monitored at 1 sec. This is on an L7x processor.
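For anyone curious what the unpacking looks like, here's a minimal sketch of pulling individual booleans out of a bit-encoded status word, as described above. The bit positions and names here are purely illustrative assumptions, not Ignition APIs or anything from a real AOI:

```python
# Hypothetical sketch: unpacking a bit-encoded status word into named BOOLs,
# similar to mapping ~12-20 bits of one INT/DINT to individual tags in Ignition.
# STATUS_BITS and unpack_status are made-up names for illustration only.

STATUS_BITS = {
    0: "Running",
    1: "Faulted",
    2: "Interlocked",
    5: "InManual",
}

def unpack_status(word):
    """Return a dict of named booleans extracted from the status word."""
    return {name: bool((word >> bit) & 1) for bit, name in STATUS_BITS.items()}

print(unpack_status(0b100101))  # bits 0, 2, and 5 are set in this example
```

The win is that only the one status word has to be read from the PLC; the individual booleans are derived on the Ignition side.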
One of the obvious performance gains of @nminchin's setup comes with 'all of the other tags'. I've placed 'every other tag' of the AOI in a leased tag group, at much slower speeds. The request count for those tag groups is much higher. There are also a handful of tags which are 'Polled', which - I believe - are ignored in these subscription statistics. Combined aggregate stats:
In my case, these 'other tags' (analog scaling, motor/valve fail times, cause/effect logic, etc.) have no business being on 'fast scan' unless the popup is open, so I'm comfortable with the simplicity that this setup affords, and with the performance we're seeing.
In what cases would increasing the Ignition Device Connections make things slower? I have a very large number of AOIs that my tags are accessing, and I just stumbled upon this thread about how that is much slower for Ignition. I am working on modifying my AOIs to have an embedded UDT for Ignition to access. For now, I changed my tag groups from Direct to Leased, slowed down the polling, and increased the CIP Connection Size to 4000. All of these steps helped reduce the tag overload. However, increasing the Device Connections higher than 2 actually increased the overload; moving it back to 2 decreased it, and a setting of 1 increased it again. I can't think of any reason why this would be the case, since my PLC should have plenty of communication headroom. I am using a 1756-L83. I need to enable the webpage in order to check for sure.
From the user manual:
Max Concurrent Requests: The number of requests that Ignition will try to send to the device at the same time. Increasing this number can sometimes help with your request throughput, however increasing this too much can overwhelm the device and hurt your communications with the device.
As for why >2 overwhelms an L83, I'm not sure... lots of IO and other network traffic to the PLC?
I had the same experience. 2 connections seems to be the sweet spot. I think with 1, the overload is probably due to Ignition not being able to send/process enough in a single connection, and with more than 2, the PLC may be getting overloaded. I suspect that with a PLC that isn't overloaded, you could probably increase the connection count, but you wouldn't need to if you're not overloaded.
The L8x family uses a separate comms processor. You can certainly bog it down separately from the logic processor(s).
When accessing AOIs, Ignition has to make individual requests for every member, which makes the comms processor work extraordinarily hard. (My alternate driver doesn't have this problem, as long as the entire AOI data type is readable.)
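A rough back-of-envelope illustration of why this matters, using the 800 AOIs mentioned earlier in the thread and an assumed member count (20 is a guess for illustration, not a measured figure):

```python
# Hypothetical comparison: per-member reads vs whole-structure reads.
# aoi_count is taken from the earlier post; members_per_aoi is assumed.
aoi_count = 800          # AOI instances being monitored
members_per_aoi = 20     # assumed members per AOI, for illustration only

item_reads = aoi_count * members_per_aoi  # one request item per member
structure_reads = aoi_count               # one read per AOI if readable whole

print(item_reads, structure_reads)  # 16000 vs 800 request items per poll
```

Twenty times the request items per poll cycle is the kind of load difference the comms processor ends up absorbing when the AOI type can't be read as a single structure.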
{ There are also internal data types, particularly I/O data types, that are not completely readable. Both IA's driver and my driver have to access those member by member. PlantPax is notorious for including such nested types, making those AOIs especially bad. }
Thank you. Any idea why changing the Tag Group would not update all of the subscriptions? I had my tag group set to Leased with 1000/500ms rates. I changed it to Direct at 1000ms. I have no other tag groups set to 500ms. But, I still see requests on the Device Status screen for "Sampled at 500ms".
Go into the Status page of your OPC client, find the tag group showing 500ms in that list, and see which tags it's trying to read at 500ms.
Still curious how you get around the EnableIn/EnableOut parameters that cannot be given external access.
I always forget about this; I could have used it 6 years ago.
They already have external read only access. See a screenshot of a filtered list of parameters on one of my AOIs.
And here's the quick client subscribed to an instance of both:
That was a flaw in my old analysis; see the edits on that topic. Those booleans are locked to read-only access, not none.
Oh, so I'm guessing there's a bit more to it than just setting all params to have external access in order to read them within a single request?
You have to violate the rules Rockwell publishes in its Data Access document.
It appears that the “leased” tag group is still there. Two entries show up in the list for the tag group: one has the correct name, and the other has “-leased” appended to it and doesn't actually exist. I tried restarting the OPC-UA module, and that didn't help. I also tried deleting the “Default” tag group and creating a new one.
Tag Groups are defined per provider, so make sure you don't have other Tag Providers that still have a leased Tag Group.
Otherwise, restart Ignition.
It lists two for the same provider and group, but one has “-leased” added to the end of it. Evidently my pictures did not come through. I will restart Ignition.
If it's still showing up, look at the details of that tag group and it will show all the tags currently using it.
Please edit your comment to remove the signature block. Signature blocks in posts/comments are against this forum's rules. When replying to this forum via email, make sure you do not include a signature block or other footer. If your email system adds a footer automatically, don't reply via email.