We're working on performance benchmarking between Ignition's native Logix driver and the Automation Professionals EtherNet/IP Communications Suite drivers. We've been noticing some apparent differences in the available metrics that are making it difficult to make an apples-to-apples comparison between the two.
The first issue is the actual sampling interval. Ignition seems to include the queue duration in the actual sampling interval, while this does not appear to be the case for the Automation Professionals driver. (There are also some inconsistencies between the expected item count and the actual count: all sampling groups should have 7000 items, but for some reason group 1002 has 1400.)
E.g., we have an expected rate of 1001ms and a queue duration of 500ms, but still see an actual sampling interval of only 1200ms. We thought of a couple of potential reasons for this: either the queue duration isn't included in the metric, or queuing behavior is different with the Automation Professionals driver such that it doesn't wait the full expected rate before rejoining the queue.
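To make that arithmetic concrete, here's a minimal sketch of the check we're applying (the inequality is our assumption about how the metric would compose if queue time were included; it isn't from the docs):

```python
# Our reported values for one Automation Professionals sampling group (ms):
expected_rate = 1001    # scheduled rate of the sampling group
queue_duration = 500    # reported queue duration
actual_interval = 1200  # reported actual sampling interval

# Assumption: if queue time were folded into the metric, the actual
# interval should be at least the expected rate plus the queue duration.
queue_included = actual_interval >= expected_rate + queue_duration
print(queue_included)  # False: 1200 < 1501, so queue time looks excluded
```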
With Ignition, it seems like the queue duration is included in the actual sampling interval:
The other metric we're seeing differences in is the overload factor. If we follow Inductive's formula for overload, 100 * (Queue Duration / Ideal Sampling Interval) (the manual says actual sampling interval, but that seems incorrect), we end up with 44%, while the tag reports 21.31 for sampling group 1002.
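For reference, here's that formula as a tiny sketch (the input values below are placeholders for illustration, not the actual numbers from our gateway):

```python
def overload_pct(queue_duration_ms, ideal_interval_ms):
    """Overload per the manual's formula, using the *ideal* sampling
    interval in the denominator (the manual's 'actual sampling interval'
    wording seems incorrect based on what we observe)."""
    return 100.0 * queue_duration_ms / ideal_interval_ms

# Placeholder example: a 500ms queue against a 1001ms ideal interval.
print(overload_pct(500.0, 1001.0))  # ~49.95
```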
That's at least an objective measurement, though you'll want to factor in the tag/item quantities and rates when considering it.
Phil's driver will yield better performance if your test set includes AOIs. If not, probably mostly even? Might depend on array and structure size and quantities or something. There are implementation differences in how we "optimize" or package requests, but we're both operating against essentially no advice or documentation from Rockwell on what might perform best.
Yes, my driver supports all the way back to v8.1.0, so it only reports on the Web UI through the older metrics system, showing "Load" instead of the new "Overload". Where my stats collection makes it possible, I try to generate the same diagnostic tags as the IA driver, but queue duration doesn't behave the same.
I also generate stats for non-subscription reads and all writes, on a one-minute basis.
For actualSamplingInterval, I report the time from the start of a poll cycle until the last response is received. My driver does not spread polling out across the entire interval. The old "Load" UI handles the stats I give it in some opaque way that I don't understand. I recommend dividing my actualSamplingInterval by the scheduled interval to obtain a "load" estimate for my driver.
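Something like this (a sketch of that division with made-up numbers; use whatever your sampling group actually reports):

```python
def load_estimate(actual_sampling_interval_ms, scheduled_interval_ms):
    # Values above 1.0 mean the poll cycle can't finish within its schedule.
    return actual_sampling_interval_ms / scheduled_interval_ms

# Made-up example: a 1200ms poll cycle against a 1001ms schedule.
print(load_estimate(1200.0, 1001.0))  # ~1.20, i.e. roughly 120% loaded
```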
Your driver (IA's) doesn't provide any metric that can be used to determine free bandwidth.
I couldn't help noticing the multiple paces differing by only a few milliseconds. That produces pathologically slow performance if any tag has fragments at more than one pace. The best performance will always come when any given PLC top-level tag (global or program) is polled at a single pace.
If I have two Device connections to the same PLC, do Tag Groups with the same Rate go into the same Sample Group? Or are requests always kept separate per Device, even if the Devices are accessing the same PLC?
With my driver, using multiple device connections offers no advantage over simply increasing the concurrency settings.
I gather that there's an advantage with the IA driver when trying to isolate direct tags from leased tags, to avoid interactions as leased tags move in and out of polling groups. I tend to avoid leased tags, so have insufficient direct experience with that to be sure.
I think there are some fundamental differences in how my driver optimizes versus the IA driver. My driver caches optimization information per item, while I suspect IA's driver caches optimization per pace. @Kevin.Herron ?