Look at the device diagnostics page to see the difference between the number of tags in a scan class for a device and the resulting number of requests needed to satisfy that scan class.
In the Gateway Status > Logs area, search for a logger named
ReadStructuredTagsRequest and turn its level to DEBUG.
Do an edit/save on the device, and then look for debug-level log entries like this:
Privilege violation reading struct '%s'; re-optimizing.
@Kevin.Herron I did the test and received no errors. I’m inferring that an error would mean that UDTs could not be read as a whole and would instead be read an element at a time. So that is not happening here. Correct?
Also, here’s the total points and requests.
If I’m interpreting @pturmel’s comment correctly, 6060 tags are being read with 401 requests, which also implies that UDTs are being read as single units. Correct?
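Just to put a number on the ratio from the screenshot above (simple arithmetic, nothing driver-specific):

```python
# Average tags per request, from the diagnostics numbers above
tags = 6060
requests = 401
avg_tags_per_request = tags / requests
print(round(avg_tags_per_request, 1))  # roughly 15.1 tags per request
```

So each request is carrying about 15 tags on average, far more than one element per transaction.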
The issue remains that the PLC is swamped with comms requests.
PLC with device disabled.
PLC with device enabled.
When we were under the impression that elements were being read individually, we did a few things in an attempt to improve performance.
First, we removed any elements from the UDT that we did not need to display or historize. Based on my new understanding, this would not have any effect. Is that correct?
Second, we packed status bits into integers, again thinking fewer elements meant fewer transactions. Again the actual effect of this is probably negligible.
Lastly, we put elements that didn’t change often, such as configuration parameters, into different, slower scan classes. Would this actually make performance worse if those elements are now being polled separately?
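For anyone curious what the bit-packing in our second step looks like, here’s an illustrative Python sketch (in practice the packing happens in PLC logic and Ignition just reads the packed integer; function names here are my own, not anything from Ignition):

```python
# Pack 16 boolean status flags into a single integer, so the PLC exposes
# one INT tag instead of 16 separate BOOL tags.
def pack_bits(bits):
    word = 0
    for i, bit in enumerate(bits):
        if bit:
            word |= 1 << i
    return word

# Recover the individual flags from the packed word.
def unpack_bits(word, count=16):
    return [(word >> i) & 1 == 1 for i in range(count)]

status = [True, False, True] + [False] * 13
word = pack_bits(status)
print(word)                     # 5 (binary 101)
print(unpack_bits(word)[:3])    # [True, False, True]
```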
The current situation is that we are trying to get 213 UDTs from the PLC at a rate of 1.5 seconds, and the PLC is not able to deliver this.
Consider raising the maximum request size in your driver’s advanced settings. Somewhere between 2000 and 4000 has been found to work well. Native Logix messaging with “Large Connections” uses 4002 bytes, fwiw.
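To see roughly why a larger connection size cuts the request count, here’s a deliberately simplified back-of-the-envelope model (the 50-byte overhead and 4-byte tag size are assumptions for illustration, not the driver’s actual packing algorithm):

```python
import math

# Rough estimate: requests needed to move `total_bytes` of tag data when each
# packet payload is limited to `conn_size` bytes minus some protocol overhead.
# The overhead value is an assumption for illustration only.
def estimate_requests(total_bytes, conn_size, overhead=50):
    payload = conn_size - overhead
    return math.ceil(total_bytes / payload)

data_bytes = 6060 * 4  # e.g. 6060 DINT tags at 4 bytes each
print(estimate_requests(data_bytes, 500))   # default 500-byte connections -> 54
print(estimate_requests(data_bytes, 4000))  # large connections -> 7
```

The absolute numbers are fictional, but the shape is right: an ~8x larger connection size collapses the request count by roughly the same factor.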
If you still can’t get the performance you need, switching to the Ethernet/IP I/O protocol may make the difference. Your timing is good, as I have just completed release testing for my Ethernet/IP Class 1 Communications Module. Preview the module documentation here.
For those already interested in this module, you might be amused to hear that adding support for Jython events kicked me in the tail through the summer. /-:
6060 tags in 401 requests, yes, but you can’t infer anything about UDTs being read from that. The lack of the debug message does point towards it though.
That being said, there was a bug fix in 7.9.4 related to UDTs and optimization, so if you’re not already on 7.9.4 you might see a performance increase by upgrading.
For the most part, yes. Unless you’re talking about removing them from the UDT in the controller, in which case making it smaller is always better. There’s a practical limit to this, of course: the driver won’t read the whole UDT if you’re only subscribed to e.g. 1 tag out of 20. If you’re not subscribed to all the tags in the UDT it tries to figure out whether it would be better to read the whole thing or just the individual tags.
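The whole-UDT-vs-individual-tags decision described above can be sketched as a cost comparison. This is an illustrative model ONLY, with made-up sizes and overheads; it is not the actual driver code:

```python
# Assumed cost model (illustration, not the real driver logic): compare the
# estimated wire cost of one whole-UDT read against per-member reads.
def should_read_whole_udt(total_members, subscribed_members,
                          member_size=4, per_item_overhead=20):
    whole_cost = total_members * member_size + per_item_overhead
    individual_cost = subscribed_members * (member_size + per_item_overhead)
    return whole_cost <= individual_cost

print(should_read_whole_udt(20, 1))   # False: 1 of 20 subscribed -> read individually
print(should_read_whole_udt(20, 20))  # True: all subscribed -> read the whole UDT
```

With only 1 of 20 members subscribed, a single small read wins; once enough members are subscribed, one structured read of the whole UDT becomes cheaper, which matches the behavior described above.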
If they’re part of a UDT that has other parts being read faster… yeah, it could be worse, if each group had enough tags in it to make the entire UDT be read.
The first thing to do right now would be to upgrade to 7.9.4 if you haven’t already. From there, if these controllers support firmware v21 or higher, that would probably help as well.
Try playing around with the CPU Time Slice % in the ControlLogix. We use a stripped-down version of the PlantPAx templates, and we had one project where thousands of tags were read from a CompactLogix and response time was around 3 seconds when a button was pressed on the Ignition mimic screens. Upping the CPU time slice to 50% reduced the response time of a button press to less than 1 second.
The PlantPAx PLC structure is recommended to be implemented with no continuous task, i.e. periodic tasks only. If that is the case, then changing the time slice parameter may not improve PLC response time. However, it’s common to raise this parameter from the default 10% up to 45-50%; more than that starts to deteriorate performance.
Here’s a link that explains this topic in a bit more detail. Cheers!
I’ve been doing this for all my Logix projects for ages. Never use a continuous task.
Sorry for slow reply. I’m in the middle of a startup. (unrelated to this issue)
Thanks @Curlyandshemp and @vitor.dasilva for the support. I appreciate it. I’m pretty sure our PLC programmer is using periodic tasks, but I am confirming that. And we will definitely play around with the time slice.
@Kevin.Herron thanks for your answers. The more we understand about the internal logic of the driver, the better decisions we can make. And we are running 7.9.4.
@pturmel I don’t see “maximum request size” in the driver config. There is a “CIP connection size” but if I increase that from the default of 500, communications stop. Also, I’d like to understand more about your driver. Maybe we can chat next week, when I’m back in my home office. Ping me at email@example.com
Sorry, connection size is what I meant. Not all Logix products support large connections, unfortunately. And it has to be supported by every network device in the connection path. /-:
I’m more and more convinced that the L43 is the limiting factor here.
This is a bit off-topic, but could you tell us what was fixed? We’ve had performance issues using UDT properties on popups in a few installations (certainly on touchscreens with slow embedded CPUs), while using tag paths and indirect bindings worked a lot better.
To us, it seemed like for every property binding to a UDT parameter, the entire UDT structure was read, while with the indirect bindings only the needed parameters were read. Is this related to the fixed bug?
This is the way UDT bindings work. Not a bug, just not optimal. Using indirect tag bindings, and NOT having a UDT property is the recommended path forward.
Yesterday we used RSLinx as an OPC server to test its performance relative to the Ignition driver. Dramatic improvement.
It appears that the difference is optimized packets. When running the Ignition driver, this was always 0.
At this point we are considering purchasing an RSLinx gateway license. And we are going to test @pturmel’s driver for comparison. Does anyone have any other ideas?
The “optimized packets” are proprietary and undocumented services that only RSLinx can use. If you guys don’t have an issue buying and using RSLinx then I’d recommend that route.
Just like Rockwell to publish a protocol and then create their own super-wide back door.
This size is dependent on the Ethernet module though?
An ENBT can only do 500, but the EN2T and EN3T can do a lot more, as I understand it.
Where can I find an example of doing this?
How does it function then if you want to historise UDT elements?
It doesn’t matter that some ENET modules can handle higher connection sizes if the processor you’re connecting to behind it can’t. Like Phil said, every device in the path has to support the larger size.