CLX performance

Sorry for the slow reply. I’m in the middle of a startup (unrelated to this issue).

Thanks @Curlyandshemp and @vitor.dasilva for the support. I appreciate it. I’m pretty sure our PLC programmer is using periodic tasks, but I am confirming that. And we will definitely play around with the time slice.

@Kevin.Herron thanks for your answers. The more we understand about the internal logic of the driver, the better decisions we can make. And we are running 7.9.4.

@pturmel I don’t see “maximum request size” in the driver config. There is a “CIP connection size” but if I increase that from the default of 500, communications stop. Also, I’d like to understand more about your driver. Maybe we can chat next week, when I’m back in my home office. Ping me at hbales@stonetek.com

Sorry, connection size is what I meant. Not all Logix products support large connections, unfortunately. And it has to be supported by every network device in the connection path. /-:


I’m more and more convinced that the L43 is the limiting factor here.

This is a bit off-topic, but could you tell us what was fixed? We’ve had performance issues with UDT properties on popups in a few installations (certainly on touchscreens with slow embedded CPUs), while using tag paths and indirect bindings worked a lot better.

To us, it seemed like for every property binding to a UDT parameter, the entire UDT structure was read, while with indirect bindings only the needed parameters were read. Is this related to the fixed bug?

This is the way UDT bindings work. Not a bug, just not optimal. Using indirect tag bindings, and NOT having a UDT property, is the recommended path forward.
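A rough back-of-the-envelope model of the difference described above (the member and binding counts here are made-up numbers, not measurements from anyone's system): if every UDT property binding pulls the whole structure, the read volume scales with the full UDT size per binding, whereas indirect tag bindings only read the members they reference.

```python
# Hypothetical model, not the actual driver logic: compare how many member
# values get read when properties bind to a whole UDT versus binding
# indirectly to just the members they need.

UDT_MEMBER_COUNT = 20   # assumed size of the UDT structure
BOUND_PROPERTIES = 5    # assumed number of bound properties on the popup

# UDT property bindings: each binding pulls the entire structure.
udt_binding_reads = BOUND_PROPERTIES * UDT_MEMBER_COUNT

# Indirect tag bindings: each binding reads only its own member.
indirect_binding_reads = BOUND_PROPERTIES * 1

print(udt_binding_reads)       # 100 member reads
print(indirect_binding_reads)  # 5 member reads
```

Deeply nested UDTs make this worse, since the "entire structure" that each property binding drags along gets bigger.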


Yesterday we used RSLinx as an OPC server to test its performance relative to the Ignition driver. Dramatic improvement.

It appears that the difference is optimized packets. When running the Ignition driver, that count was always 0.
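One way to see why packet efficiency matters so much here is a toy model of batching: requests have to fit inside the CIP connection size, so a larger connection (or a smarter packing scheme) means fewer round trips for the same tag list. This is only a hedged sketch with made-up byte counts, not the driver's real packing code.

```python
# Hedged sketch: greedily pack tag-read request sizes (in bytes) into
# packets no larger than the CIP connection size, and count the packets
# (i.e., round trips) needed.

def packets_needed(request_sizes, connection_size):
    """Greedy first-fit packing of request byte-sizes into packets."""
    packets = 0
    used = connection_size  # forces a new packet for the first request
    for size in request_sizes:
        if used + size > connection_size:
            packets += 1
            used = 0
        used += size
    return packets

requests = [60] * 50  # 50 tag reads of ~60 bytes each (assumed numbers)
print(packets_needed(requests, 500))   # 7 packets at the default 500-byte size
print(packets_needed(requests, 4000))  # 1 packet if large connections work
```

Same tag load, seven times fewer round trips — which is the shape of the gap you tend to see when an OPC server can batch (or use proprietary services) and another cannot.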

At this point we are considering purchasing an RSLinx gateway license. And we are going to test @pturmel’s driver for comparison. Does anyone have any other ideas?

The "optimized packets" are proprietary and undocumented services that only RSLinx can use. If you guys don't have an issue buying and using RSLinx then I'd recommend that route.


Just like Rockwell to publish a protocol and then create their own super-wide back door.

This size is dependent on the Ethernet module though?
An ENBT can only do 500, but the EN2T and EN3T can do a lot more, as I understand it.

Where can I find an example of doing this?
How does it function, then, if you want to historise UDT elements?

It doesn't matter that some ENET modules can handle higher connection sizes if the processor you're connecting to behind it can't. Like Phil said, every device in the path has to support the larger size.

We fought this same battle for over a year. We have deeply nested UDTs, which compound the problem. Using an L8x controller with a 1 Gb port helped a little, but the L8x also gave us new diagnostics to view the comms usage. The difference between RSLinx and Ignition/Kepware/any other OPC server is dramatic.
If you’re looking for one step better, use the FactoryTalk Gateway OPC Service. It has better performance and also allows for GUI operation without stopping the service, like RSLinx Classic.

@ErikG Can you elaborate a little more on the GUI operation and the better performance you have seen with RSLinx Gateway? It is usually only used when it's on a different server than the Ignition gateway, right? So why would there be a performance bump compared to RSLinx OEM, which is the same thing but has to be installed on the local server?

EDIT: I just realized what you meant… FactoryTalk Gateway managed through possibly the FactoryTalk Admin Console? That would make sense.

Yep. I know Rockwell advertises ‘better throughput’ using the FTGateway/RSLinx Enterprise approach, but I’m not sure of the technical differences between it and RSLinx Classic.