Automation Professionals' EtherNet/IP Communication Suite V2

Yeah, we have multiple devices configured on this DEV Gateway, and they all use standard UDT structures with our standard tag groups in them.
I will try it again tomorrow, is there anything specific you would want to see?
I can't send the PLC program for obvious reasons, but I can get logs and screenshots for you.

Note that if you are using the UDT definitions produced by my latest beta, they include a group parameter that propagates through the whole UDT, including nested types, except where overridden. That could help you test larger changes.
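To illustrate the propagation rule, here is a hypothetical Python sketch of the behavior described: a group set at the top of a UDT cascades into nested types unless a member overrides it. The dictionary shapes and the `resolve_group` name are invented for illustration, not the module's API.

```python
# Hypothetical sketch of group-parameter propagation through nested
# UDT definitions, with local overrides taking precedence.

def resolve_group(udt, inherited=None):
    """Return {member_path: effective_group} after propagation."""
    own = udt.get("group", inherited)
    result = {}
    for name, member in udt.get("members", {}).items():
        if isinstance(member, dict) and "members" in member:
            # Nested UDT: recurse, passing down the effective group.
            sub = resolve_group(member, own)
            result.update({f"{name}/{p}": g for p, g in sub.items()})
        else:
            # Leaf member: an explicit group overrides the inherited one.
            result[name] = (member or {}).get("group", own)
    return result

motor = {
    "group": "fast",
    "members": {
        "Speed": {},
        "Diag": {"group": "slow", "members": {"FaultCode": {}}},
    },
}
print(resolve_group(motor))  # {'Speed': 'fast', 'Diag/FaultCode': 'slow'}
```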

Well, I'm fine signing common NDAs, as long as they don't include outrageous demands. It is impossible to support this or any other module without working with actual cases. So, no, not an obvious reason.

{ For driver testing purposes, there doesn't need to be any program code in the PLC file. Just data and data types and AOI data types. }

1 Like

Another couple weeks, another beta:

v2.0.2.230611951

  • Fixed a command timeout issue that was crashing the connection at pseudo-random times. Request timeout in the advanced settings is currently non-functional and will likely go away.

  • Tweaked the soft memory caching for more stability and better diagnostics.

  • Tweaked the probe results export page to include more options. Includes "Live" LogixTypes.json options that disable member tags where a running connection has encountered non-readable leaf elements. May help tune behavior where not all non-readable AOIs can be fixed.

Testing results and sample files are very welcome.

1 Like

I tried out the latest beta version (I'm running Ignition 8.1.20) and have a couple questions:

  1. I created a host device and left the default tag configuration. I then created a generic client and pointed it at the host device, as well as an "Allen-Bradley Logix Driver". When I used the OPC Quick Client to browse the device tags, the generic client showed all of the items, but the Logix driver only showed "SlaveConfig" and "SlaveEcho". Is that the expected behavior?


  2. When I try to import an L5X file, I get log warnings that there isn't enough memory. I did some testing, and it seems that if I change the data format for the tag to the format in the example XML, it works. Am I doing something wrong when creating the L5X?

Tag from L5X file that doesn't work:

<Tag Name="Dint01" TagType="Base" DataType="DINT" Radix="Decimal" Constant="false" ExternalAccess="Read/Write">
<Data Format="L5K">
<![CDATA[0]]>
</Data>
<Data Format="Decorated">
<DataValue DataType="DINT" Radix="Decimal" Value="0"/>
</Data>
</Tag>

Edited version that does work:

<Tags>
<Tag Name="Dint01" TagType="Base" DataType="DINT" Radix="Decimal" Constant="false" ExternalAccess="Read/Write">
<Data>
    00 00 00 00
</Data>
</Tag>
</Tags>
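If it helps anyone scripting this kind of edit: assuming the raw <Data> form is just the value's bytes in little-endian order, rendered as space-separated hex pairs (which matches "00 00 00 00" for a DINT of 0 above), it can be generated like so. The byte layout is an assumption inferred from the example, not taken from Rockwell documentation.

```python
# Assumed format: a DINT initial value as four little-endian bytes,
# written as space-separated hex pairs (matches the working example).
import struct

def dint_raw_data(value):
    return " ".join(f"{b:02X}" for b in struct.pack("<i", value))

print(dint_raw_data(0))   # 00 00 00 00
print(dint_raw_data(1))   # 01 00 00 00
```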

  1. The host driver has always had some capabilities that the native Logix driver doesn't understand, and there are now more of them. My module's client driver does understand more, but it isn't really 1:1. (Yet, and it's not a high priority.)

  2. There are L5X items that aren't fully specified, and those are expected to yield bogus results or be discarded. But that snip looks like it should have worked. If you can share, I can take a look. (Trim it down if necessary.)

DevTest.L5X.xml (3.2 KB)

Attached is a very simple program that just includes three tags. The DINT tag doesn't seem to work, but the other two do.

Here is a screenshot of the error:

Hmm. It should be using the decorated format but isn't. Will fix.

Another ~~couple weeks~~ month, another ~~beta~~ release candidate:

v2.0.3.230882157

{ Can you tell I spent time on vacation? }

Key items:

  • Boolean breakage fix.
  • Large packet slow response handling fix.
  • Further memory usage optimizations.
  • Regression fixes for Class 1 drivers.
  • Updated the manual, including additional usage notes for Rockwell AOIs.

One of the rabbit holes I went down was some Function Block diagram code in a sample AOI from a tester. It turns out that some function block instructions have unreadable members--which makes it impossible to optimize the AOI they are embedded within. Ugh. Just say no to Function Block Diagrams (in AOIs).

The slow response handling issue was triggered by stalls on my L81E's class 3 comms processor under very high load. A couple seconds at a time on all concurrent buffers. Somewhat improved with firmware v33+ over v32, but still there. Prior to this fix, it would trigger a pathological disconnect/reconnect cycle.

If installing over prior betas, be sure to restart the gateway. There are changes to the localization files, and those don't update until restarted.

3 Likes

Actually, the decorated format is just tossed, and the data is left as zeros. I tweaked the beta to handle the other formats the same way. You won't get the initial values, but the tag will be there.

Another week, another release candidate.

v2.0.3.230942043

I was hoping this would be a release, but the previously reported boolean problem was not entirely fixed by the week-old RC1. Common boolean aliases worked, but not all cases did.

Please test.

1 Like

I've created a PLC compatibility matrix from the lab equipment I have, with some extrapolation to related models based on public documentation. This will be in the module user manual's "Overview" section, to help users evaluate the options for their situation.

I'm considering making a similar chart for I/O device families.

3 Likes

Twiddling and tweaking this week, as there are no blocking bugs to work on. I've been poking at one of my archived projects to quantify the kinds of performance gains to be had. This particular project had 23 different AOI definitions, all of which were developed before I had the idea to develop this driver. Pretty much all of their local tags were set to ExternalAccess="None".

Started by simply pointing my new driver at this project file, and using the Types and Tags JSON files to subscribe to every single tag exposed in the processor. I got this:


The tag report tool looking for all Bad/Error/Uncertain tags reported just over 5,800 out of the 46,290 shown.

I then took the Logix processor file and exported it as *.L5X, making sure the "Encode Protected Content" checkbox was OFF and that no AOI was sealed. I then used a text editor to replace all instances of ExternalAccess="None" with ExternalAccess="Read Only". I imported this modified file back into Logix Studio 5000 and downloaded to the processor.
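The text-editor step can also be scripted. A minimal sketch (the function name and file names are mine, for illustration):

```python
# Flip every ExternalAccess="None" in an exported .L5X to "Read Only".
# A plain text substitution works because the attribute string is
# unambiguous in the export.
import re

def open_external_access(l5x_text):
    """Return (patched_text, replacement_count)."""
    return re.subn(r'ExternalAccess="None"',
                   'ExternalAccess="Read Only"', l5x_text)

# Usage (file names are examples only):
# from pathlib import Path
# text, n = open_external_access(Path("Project.L5X").read_text())
# Path("Project_patched.L5X").write_text(text)
```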

This yielded the following:


I repeated the search with the tag report tool and found just over 1,000 Bad/Error/Uncertain tags out of the 46,290.

I then examined the remaining bad tags. Quite a few were caused by unreadable members of various I/O module data types, including nearly all configuration data types and various data types for selected Point I/O modules. I edited the Ignition UDTs to omit the unreadable members.

A small number were instance tags for MESSAGE instructions. I deleted these instances. A bunch more were DEADTIME and LEAD_LAG instruction instance tags (Logix built-in instructions for function block diagrams). I entirely deleted these instances, too.

This eliminated all of the bad tags and yielded this:

I stopped optimizing at this point, though many of the remaining tags didn't have any particular purpose for being in Ignition. (Don't subscribe to everything from the OPC browser!)

Just for kicks, I disabled the device connection, renamed it, and configured IA's native driver in its place (same name), to pick up all of the tags and data types used at this point. It yielded this:


The tag report tool showed no bad tags, other than some extra diagnostics my driver offers.

Do keep in mind that 1% Overload is approximately the same as 101% Load.

5 Likes

{ Throughout the above tests, concurrency was 2 and connection size was 4000. Target was a CompactLogix 5380 with firmware 32.11. }

1 Like

Follow up:

I noticed that IA's driver's response times and request quantity didn't add up to any overload. Far from it, in fact. A comparison of Wireshark captures shows that my driver was running through its requests in ~500 milliseconds, close to the stats shown in the capture. Of particular interest were the inter-request delays slowing my driver. Meanwhile, the actual time to complete the samples with IA's driver was ~1100 millis, close to double what a naive calculation with the shown stats would yield. But not even close to overloaded. (@Kevin.Herron?) And much lower, per request, than my driver. Hmm.

After some review, I determined I was using the wrong Java concurrent queue and hopping threads too much. After some refactoring, my driver was completing its requests in ~400 ms (and showing a correspondingly reduced mean response time). A rather nice pickup.

But the overall response times were still not making sense. Closer review of the captures highlighted that IA's driver is doing a better job with the instance-based optimizations. Specifically, by using structure member indices instead of member names throughout.

Which means there's more performance to be squeezed out of my driver, at least falling back to accessing individual members of structures. This isn't going to block release, but does mean my to-do list has gotten longer.
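For a sense of scale on the member-index point: in a CIP request path, an ANSI extended symbolic segment (code 0x91) carries the whole member name padded to a word boundary, while an 8-bit Member ID logical segment (code 0x28) is two bytes total. A rough sketch, with invented tag names:

```python
# Segment codes 0x91 (ANSI extended symbolic) and 0x28 (8-bit Member ID)
# are from the CIP specification; the name and index here are invented.

def symbolic_segment(name):
    data = name.encode("ascii")
    seg = bytes([0x91, len(data)]) + data
    return seg + (b"\x00" if len(seg) % 2 else b"")  # pad to 16-bit word

def member_segment(index):
    return bytes([0x28, index])

print(len(symbolic_segment("MotorSpeedSetpoint")))  # 20 bytes
print(len(member_segment(7)))                       # 2 bytes
```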

Our driver under-reports the number of requests compared to what you'd see in Wireshark. It's reporting the number of Request objects the subscription set was optimized into, but that doesn't take into account the times when MultipleServicePacketService receives a partial response error and has to keep making round trips to the PLC to get the rest of the data. It does correctly measure the elapsed time of these requests, so the histogram should be accurate.
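In other words, one optimized Request object can cost several wire round trips when the controller answers with partial-response status. A toy model of the counting difference (not the driver's actual code; the limit and sizes are invented):

```python
def round_trips(reply_sizes, plc_limit):
    """Round trips needed when each response packet holds plc_limit bytes."""
    trips, used = 1, 0
    for size in reply_sizes:
        if used + size > plc_limit:
            trips += 1    # partial response: go back for the rest
            used = 0
        used += size
    return trips

# One "request" in the stats, four packets on the wire:
print(round_trips([300] * 10, plc_limit=1000))  # 4
```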

Other than that, and other than AOI support, it's possible your optimization strategy is better. Ours weighs both the expected request size and the expected response size, including data, when doing its bin-packing routine to figure out which tags go into which request. It's possible this isn't the best strategy, and that the data shouldn't be considered when weighing response size; perhaps it's enough to reserve room for the per-tag overhead and lean more heavily on the partial-response capability.
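The weighing scheme described reads roughly like next-fit-decreasing bin packing over two budgets at once. A toy sketch (the budgets, tuple shapes, and the `pack` name are invented for illustration, not the driver's code):

```python
REQ_BUDGET = 4000    # e.g. the connection size
RESP_BUDGET = 4000

def pack(tags):
    """tags: (name, est_request_bytes, est_response_bytes) triples."""
    bins, cur, req, resp = [], [], 0, 0
    # Largest combined weight first; close a bin when either budget fills.
    for name, r, p in sorted(tags, key=lambda t: -(t[1] + t[2])):
        if cur and (req + r > REQ_BUDGET or resp + p > RESP_BUDGET):
            bins.append(cur)
            cur, req, resp = [], 0, 0
        cur.append(name)
        req, resp = req + r, resp + p
    if cur:
        bins.append(cur)
    return bins

print(pack([("a", 3000, 100), ("b", 2000, 100), ("c", 500, 100)]))
# [['a'], ['b', 'c']]
```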

1 Like

I do, too. Oh, and yes, I count the continuation requests in my stats, where you don't. :man_shrugging:

Anyways, consider the last pair of captures I took this afternoon (pruned to one five-second cycle per capture):

My driver: L320_20230412_1530_pruned.pcapng (143.2 KB)

Your driver:
L320_20230412_1532_pruned.pcapng (376.8 KB)

{Use a display filter of cip to get just the requests and responses...}

Note how my driver is assembling significantly larger request/response pairs, more than can be explained by the AOI issue. Your response-time advantage is making up a good bit of the resulting difference, but poor packing is leaving a lot of performance on the table. My failure to use member indices is likewise leaving a lot of performance on my side of the table.

Something for both of us. /:

1 Like

Hmm, does your test dataset have a lot of arrays?

It looks like we're really phoning it in when it comes to arrays.

The driver was written in a hurry in response to the surprise of v21 firmware rendering the legacy driver non-functional. It's seen other features and bug fixes over the years, but the optimization hasn't really been touched.

1 Like

It does. And a large fraction are conventional UDTs that your driver shouldn't struggle with. (This particular old project was the poster child for UDTs passed by InOut reference as AOI parameters.)

Does the last driver you linked here have this update? I'm doing some measurements and I think your driver is hurting from this.