I’m connecting to a very large V28 Logix Processor (30k+ tags) that we are trying to read relatively fast (ideally ~1s scan for most tags). I was wondering if there is a good way to optimize communications to a controller like this? What does the CIP connection size parameter do?
My condolences. I’ve done this with a v20 L72 using the old driver that could use the no-longer-supported physical memory access optimizations. I had to spread the tags among five driver instances and configure the L72 with no continuous task. (One should always delete the continuous task anyway, but it is critical if the system will be subject to high messaging load.) Some testing with the v21+ driver at the time showed a 10x performance hit. It might be better now, but it’ll never be as fast as the old driver.
Switch protocols? This particular case was the inspiration for my EtherNet/IP Communications Suite driver. With a beefy server, you can configure ~12,000 DINT-sized tags using I/O connections, with an RPI in the vicinity of 100ms. With the premium version of the module, Ignition can consume from producer tags for even more traffic.
For controllers and network bridges that support it, this parameter allows packet sizes close to 64k, which greatly helps the new driver’s optimizer, especially when reading lots of consecutive array elements in a large controller tag.
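To see why a bigger connection size helps, here is a back-of-the-envelope sketch of how many request/response round trips it takes to read a block of DINTs at different connection sizes. The per-packet overhead figure and the simple ceiling model are illustrative assumptions, not measured driver behavior:

```python
import math

def requests_needed(num_dints, connection_size, overhead=60):
    """Estimate round trips to read num_dints 4-byte values when each
    response payload is capped at (connection_size - overhead) bytes.
    The overhead figure is an assumption for illustration."""
    payload = connection_size - overhead
    return math.ceil(num_dints * 4 / payload)

# Reading 10,000 consecutive DINTs at a few connection sizes:
for size in (500, 2000, 4000):
    print(size, requests_needed(10_000, size))
```

The absolute numbers depend on the real protocol overhead, but the trend is the point: quadrupling the connection size cuts the request count (and the per-request latency cost) by roughly the same factor.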
How big have you set that connection size parameter? 64k? We’ve got an L85 that we’re communicating with directly so I’m confident we’ll have the supported hardware.
I use 65500 in my own stack when selecting a large connection. I don’t know the precise support matrix for the maximum; the CIP spec doesn’t say, other than that it is an unsigned 16-bit value.
None of our Logix devices supported a size greater than 2k during testing.
Interesting. Wireshark tells me a CompactLogix v20 initiating a large connection from a message instruction uses size=4002 (0x0fa2). Something to play with.
Could have been 4k. Memory is a fickle thing.
The connection failure response to an unsupported size is supposed to have the maximum supported size in the second word of the extended status. I’ll have to test against my kit here to see what I get.
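A hedged sketch of pulling that value out of a failed open, based on the behavior described above (second word of the extended status carries the maximum supported size). The word layout shown, and the 0x0109 extended status code used in the example, are assumptions to verify against a real capture:

```python
import struct

def max_size_from_extended_status(ext_status_bytes):
    """ext_status_bytes: the extended status as little-endian 16-bit words.
    Word 0 is the extended status code; word 1 (when present) is assumed
    to be the device's maximum supported connection size."""
    words = struct.unpack('<%dH' % (len(ext_status_bytes) // 2),
                          ext_status_bytes)
    if len(words) >= 2:
        return words[1]
    return None  # device didn't report a maximum

# Synthetic example: extended status 0x0109 followed by a max of 4002 (0x0FA2)
print(max_size_from_extended_status(struct.pack('<HH', 0x0109, 4002)))  # 4002
```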
Kevin was there a reason for the 512 default? What hardware are you guys testing against? CompactLogix with an EN2T? Have you done any testing on the L8 series?
The default of 500 is because that’s the (almost) max when using the regular Forward Open service. Anything larger requires using Large Forward Open, which in our testing we found was not supported on all controllers.
We tested an L8 briefly when they came out but we don’t currently have one in house.
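The default-vs-large split described above can be sketched as simple service-selection logic. The service codes (0x54 for Forward Open, 0x5B for Large Forward Open) come from the CIP spec; the 511-byte boundary matches the spec excerpt quoted later in this thread, and the rest is an illustrative assumption:

```python
FORWARD_OPEN = 0x54        # standard Forward Open service code
LARGE_FORWARD_OPEN = 0x5B  # Large Forward Open service code

def pick_service(connection_size):
    """Pick the connection-open service for a requested connection size."""
    if connection_size <= 511:      # fits the standard Forward Open size field
        return FORWARD_OPEN
    if connection_size <= 65535:    # Large Forward Open has a 16-bit size field
        return LARGE_FORWARD_OPEN
    raise ValueError("connection size exceeds 16-bit field")

print(hex(pick_service(500)))   # 0x54
print(hex(pick_service(4000)))  # 0x5b
```

This is why a default of 500 is the safe choice: it never forces the Large Forward Open path onto a controller that doesn’t support it.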
I’ve had some success here and I’ll share my settings:
- L85 PLC with 30k tags 1-2s scan.
- 3 scan classes staggered at 1300, 1700, 2300ms.
- 2 Concurrent Requests (PLC was being overloaded more easily than Ignition…)
- CIP Connection Size at 2000. 4000+ seemed to overload the PLC but did drop overall request count. This might just be specific to this controller?
Ignition comms look great but PLC is just managing it at 70-80% usage. I had high hopes for the separate communications processor in the L85 but I’m a little disappointed by it now…
Kevin, that CIP Connection Size is an awesome parameter for tuning.
Just a thought, depending on your application, you may only need some tags on demand.
I have an application where we have up to half a million DINTs in the PLC (L85) storing motion profiles, etc. For reading/writing I’m using the straight system.opc.* commands and addressing the memory in the PLC directly. I’ve had great results.
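A hedged sketch of that on-demand pattern: build OPC item paths for just the slice of the big array you need, then hand them to system.opc.readValues. The device and tag names here are hypothetical, and the item-path format shown is the usual Ignition Logix convention ("[DeviceName]TagName[index]"):

```python
def item_paths(device, array_tag, start, count):
    """Build OPC item paths for a contiguous slice of an array tag."""
    return ['[%s]%s[%d]' % (device, array_tag, i)
            for i in range(start, start + count)]

paths = item_paths('L85', 'MotionProfile', 100, 3)
print(paths)
# In an Ignition gateway script this would then be something like:
#   values = system.opc.readValues('Ignition OPC UA Server', paths)
```

Because the reads happen only when requested, none of those half-million elements have to sit in a subscribed scan class.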
From the EtherNet/IP Adaptation of CIP Specification, June 5, 2001:

3-4.5 Connection size
The CIP connection size shall be no larger than 65511 bytes.
NOTE: The Forward_Open request limits the connection size to 511 bytes; however, the optional Ex_Forward_Open allows larger connection sizes.
I’m trying to work through this now too. A few hundred thousand tags being read by Ignition and several other OPC servers. The device doesn’t seem super thrilled right now.
I have since learned that modern Ethernet devices for Logix (EN2T, etc.) support 4000.
That makes sense as to why Kepware caps CIP size at 4000 then.
Don’t do this… you’re piling on more and more load.
Connect one OPC server to it, and all other OPC clients to that one server.
Do you have any other information about what you were talking about before with the performance of the “old” driver vs the new driver? I was assuming that the new driver would provide improved performance.
I wish I didn’t need to do this, and I have simplified things as much as possible. Unfortunately there are some things going on which require their own personal private OPC server. Fortunately the additional load from those servers is comparatively minimal since they don’t poll quickly.
There are two drivers: the “legacy” driver that works with firmware v20.13-ish and prior, and the new “Logix Driver” that works with any firmware version but is optimized for v21+.
The new driver is not faster than the old driver. The old driver reads tag values directly out of memory. Rockwell disabled this functionality starting in 20.15? or something like that. In 21+ they introduced a new API for reading tags based on instance IDs (instead of via symbolic names).
The new API is certainly faster than reading symbolic names but it’s nowhere near as fast as direct memory access was.
Fun note: if you’re stuck on 20.xx where xx=15+ you have to use the new driver but there’s no new API so it uses symbolic access, which gives you the absolute worst performance.
I can’t remember the exact 20.xx cut-off where direct memory access stops working. I know 20.11 works with the legacy driver.
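The symbolic-vs-instance distinction above comes down to how the request path is encoded. A sketch, using the standard CIP segment encodings (ANSI extended symbol segment 0x91 for names; logical class/instance segments for the Logix Symbol Object, class 0x6B) — treat the exact bytes as illustrative, not as a wire-format reference:

```python
import struct

def symbolic_path(tag_name):
    """Request path addressing a tag by name (ANSI extended symbol segment)."""
    data = tag_name.encode('ascii')
    path = bytes([0x91, len(data)]) + data
    if len(path) % 2:
        path += b'\x00'  # pad to an even number of bytes
    return path

def instance_path(instance_id):
    """Request path addressing a tag by Symbol Object instance ID:
    8-bit class segment (0x20) for class 0x6B, 16-bit instance segment."""
    return bytes([0x20, 0x6B]) + struct.pack('<BBH', 0x25, 0x00, instance_id)

print(len(symbolic_path('MyLongishTagName')))  # grows with the name length
print(len(instance_path(0x1234)))              # fixed at 6 bytes
```

The instance path is short and fixed-size, and the controller can resolve it without a string lookup, which is part of why the v21+ instance-based API beats symbolic access even though neither matches direct memory reads.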
I’m currently using 20.55 connected with the old driver. I was working on cleaning up my tag structure to switch to the newer driver (and had just started testing) in an effort to improve read speed and performance. I guess that might be a bad idea…
I was seeing an occasional “red blink” on tags, as viewed from the Designer or on screens, due to bad quality. This was similar with both the old driver and my early testing with the new driver. I did notice an improvement in this condition with the new driver, though, when the CIP connection size was increased (tested at about 3,500 or so).