No, this particular fix hadn't been posted. But no reason not to:
v2.0.3.231012109
I suppose I should call that RC3.
I might have the structure member index fix done this afternoon, fwiw.
So, since our diagnostics are pretty different, and I was noticing a difference in the Class 3 MSG load reported by the controller diagnostics page, I hacked in some instrumentation I thought would be interesting.
I've installed a meter that captures every time a driver makes the DataItem::setValue or DataItem::setQuality callback to the server. Basically, it's value updates per second.
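In spirit, the meter is just a counter sampled on an interval. A minimal sketch of that idea; the class and method names here are my own illustration, not the actual instrumentation (DataItem itself is the server API type):

```java
import java.util.concurrent.atomic.LongAdder;

// Hypothetical "value updates per second" meter. recordUpdate() would be
// called from the DataItem::setValue / DataItem::setQuality callback sites.
public class UpdateRateMeter {
    private final LongAdder updates = new LongAdder();
    private volatile long lastSampleNanos = System.nanoTime();

    /** Hot path: just bump a contention-friendly counter. */
    public void recordUpdate() {
        updates.increment();
    }

    /** Sample and reset; returns updates per second since the last sample. */
    public synchronized double sample() {
        long now = System.nanoTime();
        double seconds = (now - lastSampleNanos) / 1_000_000_000.0;
        lastSampleNanos = now;
        long count = updates.sumThenReset();
        return seconds > 0 ? count / seconds : 0.0;
    }
}
```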
This is our driver, Connection Size 4000, Concurrent Requests 1:
This is yours, Connection Size 4000, Concurrent Requests 1:
This test set has no AOIs or other private structures. Our driver was measured with an unpublished change that improves array optimization based on last night's conversation.
Yes, makes sense, considering the incomplete optimization on my end.
Rather large request counts. Lots of individual tags?
I haven't looked that closely... this came from a customer setup.
Ok. Structure member indices implemented. I'm disappointed, as I got only modest improvement. For y'all's edification:
v2.0.3.231031928
In a wireshark capture, it is clear that the big multiple-request packets are executing faster, but something on my driver end is not reacting to the response packets in a timely manner. With some GC stalls mixed in. ):
Still screaming fast on AOIs, but regular stuff degrades at load more than I expected or hoped.
{ Edit: I woke up around 5am this morning having dreamed the cause of the problem. Stay tuned.... }
So, my dream-induced code change did have a noticeable impact, and was necessary for making further improvement. It was a pair of impossible-to-satisfy ordering constraints on my asynchronous response handlers, one for correctness and one for performance. The response handler call chain had to be reworked to allow both constraints.
However, the bigger problem was still there, and thanks to this change, was significantly more apparent under VisualVM's profiler. The culprit was my memory-stability SoftReference-based buffer cache, which was suffering from contended synchronization sections. Two changes have brought that under control:
Eliminated some stack-trace-based "origin" tracking except when the SoftCache logger is in Trace.
Eliminated caching entirely for byte arrays of 16 bytes or less, and for ByteBuffers of 24 bytes capacity or less. The cache lookup timing for these was substantially greater than Java's new instance creation time, negating the GC advantages. And those small byte arrays are in the hottest code path for response handling. Larger byte arrays and buffers were still wins.
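For anyone curious, the shape of the second change looks roughly like this. A minimal sketch only, assuming a lock-free pool; the class and threshold names are illustrative, not the actual SoftCache code:

```java
import java.lang.ref.SoftReference;
import java.util.concurrent.ConcurrentLinkedQueue;

// Sketch of a SoftReference-backed buffer cache with a small-allocation
// bypass: tiny byte arrays are cheaper to allocate fresh than to look up.
public class SoftBufferCache {
    private static final int MIN_CACHED_ARRAY = 17;  // byte[] of 16 or fewer bypasses the cache

    private final ConcurrentLinkedQueue<SoftReference<byte[]>> pool =
            new ConcurrentLinkedQueue<>();

    /** Caller must track how many bytes of the returned array it actually uses. */
    public byte[] acquire(int size) {
        if (size < MIN_CACHED_ARRAY) {
            return new byte[size];  // hottest path: skip the cache entirely
        }
        SoftReference<byte[]> ref;
        while ((ref = pool.poll()) != null) {
            byte[] buf = ref.get();
            if (buf != null && buf.length >= size) {
                return buf;
            }
            // Reference cleared by GC, or buffer too small; keep looking.
        }
        return new byte[size];
    }

    public void release(byte[] buf) {
        if (buf.length >= MIN_CACHED_ARRAY) {
            pool.offer(new SoftReference<>(buf));
        }
    }
}
```

The SoftReferences let the GC reclaim the pooled buffers under memory pressure, which is the "memory-stability" part; the bypass keeps the response-handling hot path off the cache.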
With this updated code, the test case posted on Tuesday afternoon improves to this:
That case was running in parallel with my L81E PlantPax test case, which has been pruned to eliminate bad tags, yielding about 90k tags (134k tags total, all at 5000ms). With both running, the VMs performance page looked like this:
Anyways, this is very close to the desired performance in general, and is still a huge win with readable AOIs. This is a release:
v2.0.4.231051738 See update below
My module sales page is already updated. Fresh forum announcement topic to follow. This topic's lead post has been updated with the new status.
Fresh JavaDoc has also been uploaded, for those interested in some of the details.
@Kevin.Herron, you might find this class interesting: AutoBatchMsgProc
It is the key to my efficient packing of multiple requests.
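For those who don't want to dig through the JavaDoc, the core idea is greedy packing of encoded requests into batches bounded by the connection size. A rough sketch of that general idea only, not the actual AutoBatchMsgProc implementation; the per-request overhead constant is illustrative, not the real CIP wire format:

```java
import java.util.ArrayList;
import java.util.List;

// Greedy packer: fill each batch with as many encoded requests as fit
// within the connection size, then start a new batch.
public class GreedyRequestPacker {
    private static final int PER_REQUEST_OVERHEAD = 2;  // e.g. an offset-table entry per packed request

    public static List<List<byte[]>> pack(List<byte[]> requests, int connectionSize) {
        List<List<byte[]>> batches = new ArrayList<>();
        List<byte[]> current = new ArrayList<>();
        int used = 0;
        for (byte[] req : requests) {
            int cost = req.length + PER_REQUEST_OVERHEAD;
            if (!current.isEmpty() && used + cost > connectionSize) {
                batches.add(current);
                current = new ArrayList<>();
                used = 0;
            }
            current.add(req);
            used += cost;
        }
        if (!current.isEmpty()) {
            batches.add(current);
        }
        return batches;
    }
}
```

With a 4000-byte connection size, packing like this is what produces the big multiple-request packets visible in the wireshark captures mentioned above.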
This is a great post, Phil, and one we can certainly use to push this with our customers.
Good things come in batches, I suppose. My subconscious obviously wasn't done with this....
I moved all of the read infrastructure's .setValue() actions into Runnables on the driver's subscription thread pool, instead of leaving them in Netty's receive callchain.
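Conceptually, the change is just a hand-off from the Netty handler to an executor. A minimal sketch, with illustrative names standing in for the driver's actual classes:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of moving value delivery off the Netty event loop.
public class ValueDeliveryExample {
    private final ExecutorService subscriptionPool = Executors.newFixedThreadPool(4);

    /** Called from the Netty channel handler when a response decodes. */
    void onDecodedValue(DataItemStub item, Object value) {
        // Before: item.setValue(...) ran inline here, stalling the
        // receive callchain while the server processed the update.
        // After: hand it off so Netty can keep draining response packets.
        subscriptionPool.execute(() -> item.setValue(value));
    }

    /** Stand-in for the server API's DataItem. */
    interface DataItemStub {
        void setValue(Object value);
    }
}
```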
Check out the resulting response time stat:
v2.0.4.231061846
My module sales page is already updated.
Bugfix for array writes (when composed of individual elements instead of the whole array):
v2.0.4.231142158
Sigh. Didn't catch all of the buggy cases yesterday.
This is much better:
v2.0.4.231151850
An early adopter has provided me with another good testcase (comms torture case), and I've noticed some improvement opportunities:
Improved driver start & restart times by reducing synchronization contention in the subscription add/remove code paths. (Quite noticeable when hundreds of thousands of items are subscribed.) I expect this to help leased tag transitions, too. (See the sketch after this list.)
Fixed a pathological case when downloading a new program to a PLC: the PLC briefly delivers a successful browse with no content, and the driver wouldn't recheck. (I suspect this is a firmware flaw.) This is now caught in the one-minute recheck.
Added an advanced driver setting that controls the use of CIP's "Multiple Request" service. Simplifies examination of small wireshark captures, and provides a way to compare performance with and without this optimization (dramatic).
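On the contention fix in the first item: one common way to cut that kind of contention is to swap a globally synchronized collection for a lock-free concurrent one. A hedged sketch of the general idea, not the driver's actual code:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative subscription registry: adds and removes from many threads
// proceed without serializing on a single monitor.
public class SubscriptionRegistry {
    private final Set<String> subscribedItems = ConcurrentHashMap.newKeySet();

    public boolean add(String itemPath) {
        // No global lock: hundreds of thousands of adds at startup
        // no longer queue up behind each other.
        return subscribedItems.add(itemPath);
    }

    public boolean remove(String itemPath) {
        return subscribedItems.remove(itemPath);
    }
}
```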
v2.0.5.231171552
Can this work with Logix Emulators at all by chance?
No idea. Does IA's driver work with the emulators? If so, mine should too.
Yeah it does, I'll give it a try. We have a pretty nasty load factor (> 1000%) going on with some emulations. Interested to see if it helps. Will post back here with results.
Mind sharing how you're getting Logix Emulators to work with IA drivers? Or are you doing OPC-DA?
I haven't been able to get either driver to work with Studio 5000 Emulate. If RSLinx Classic has gateway mode enabled, it will listen on port 44818, but will only identify itself, not expose the virtual backplane in any way. From some detailed research yesterday, this is to support unsolicited messaging to PLC topics, not any form of Emulate support.
When connecting from one RSLinx instance to another (separate VMs), the traffic does not use EtherNet/IP. If you don't need the logic to run, or are willing to write some jython to mimic the needed parts, my module's Host Driver is an option. Or you can use Rockwell's SoftLogix.
A bugfix and a couple minor enhancements:
Better fix for recovering from external disconnects,
Added a brief disconnect reason to the driver status string, and
Raised the limit on concurrency, per a user request.
v2.0.5.231191159
Hi Ryan,
Using OPC-UA with the Allen-Bradley Logix Driver (IA's version).
And simply pointing the IP address in the config to the PC IP address that the emulator is installed / running on.
Slot number refers to which slot you've downloaded to on the emulator.
I believe the code is on version 33 or thereabouts, which uses an entirely new emulator compared to the older one where you had to use DA. So on those older emulators, yes, you have to use DA. No way around that. Windows is pretty heavily against DA now, and it's always been hard getting comms between two machines over DA. We've always tried to install the emulator on the same VM / machine as Ignition so that it minimizes the DA issues.
New emulator is called "Factory Talk Logix Echo Dashboard".
These are the Ignition driver settings we use to connect to it.
Ahh, you're using the new Echo software. We haven't upgraded to that one yet, but it's nice to know it works with Ignition drivers. Thanks for the info.
Side note: Am I the only one that has issues with the reply button not actually replying to the person?