High CPU usage on 1756-ENBT

Okay, I meant the one in Ignition under Status > Connections > Devices, but actually one from the ENBT itself would also be good.

Also - what driver are you using and what firmware version is the PLC?

Allen Bradley Logix 21+. Processor is an L72 running firmware version 32.011.

That’s not 50,000 tags… right PLC, wrong system, or what?

This is a test system, we only have 2 of the 7 total PLCs connected. The rest of the device connections are disabled.

So did your original post mean 50,000 tags across 7 devices all going through the same ENBT?

They are all separate ethernet cards on the live system. The number of tags is still considerable though for the currently connected test PLC.

So do all the ENBTs on the live system have 100% cpu? Does this test ENBT have 100% cpu?

Yes, both the live system and test are at 100% CPU.

Well, at least for the test system, the Ignition-side diagnostics are as good as it gets. Your 5000 tags are updating totally on time.

Those stats say you have 2 TCP connections and 4 MSG connections - what else is connected besides Ignition? Or do you have 2 device connections going through it?

Yes, there are 2 device connections through it. I have already tried disabling the other connection, but the result is the same. For context, this is an upgrade of a Citect system which had about 8000 tags total, except it made heavy use of bit addressing to save on the point count, so my system is at 50,000 as it is referencing the booleans directly.

Sorry, I forgot to attach the other device screenshot

Well, that’s a total of 112 + 415 = 527 requests/s on the surface, not counting “compound” requests that don’t show up in the status, but happen any time the data being read is larger than ~500 bytes (so… a lot, if you’re reading structures or arrays).

The docs I can find for the ENBT say it’s only rated for 900 HMI/MSG/s, so… I’d say this is about what you can expect without getting a better comm module.

@pturmel how does this line up with your real world experience?
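As a rough sanity check on the numbers above, the surface request rate can be compared against the ENBT's rated 900 messages/s. This is a back-of-envelope sketch only; the compound-request multiplier is a made-up illustration value, since requests split on the ~500-byte boundary don't appear in the Ignition status page:

```python
# Rough ENBT load estimate against Rockwell's rated HMI/MSG capacity.
ENBT_RATED_MSGS_PER_SEC = 900

def enbt_load(surface_requests_per_sec, compound_multiplier=1.0):
    """Return the estimated fraction of the ENBT's rated capacity in use.

    compound_multiplier is a guess at how many extra hidden requests are
    generated when reads exceed ~500 bytes; the real factor depends on
    how many structures/arrays are being read.
    """
    effective = surface_requests_per_sec * compound_multiplier
    return effective / ENBT_RATED_MSGS_PER_SEC

surface = 112 + 415  # requests/s reported by the two device connections
print(f"surface load: {enbt_load(surface):.0%}")
print(f"with a hypothetical 1.7x compound factor: {enbt_load(surface, 1.7):.0%}")
```

With 527 visible requests/s you are already near 60% of the rating before any hidden compound requests are counted, which is consistent with the card sitting at 100% CPU.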

Pretty much everything in that 20000 monitored item device is in AOIs, but reading the individual items. Would it be beneficial to switch my tag group to leased 5000/1000? That should reduce the request rate correct?

Ooph, it will help a little, yes, but AOIs are worst-case scenario for driver performance because they don’t actually get read as a whole, even if you have made all the members externally accessible.

There’s some info about why we can’t access AOIs efficiently in this thread.
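To get a feel for why a leased 5000/1000 group reduces the request rate, here is a toy model. The tags-per-request packing factor is entirely made up for illustration; real optimization depends on tag layout, and AOI members pack much worse than plain tags:

```python
def request_rate(tag_count, period_ms, tags_per_request=20):
    """Approximate requests/s for a tag group.

    Assumes the driver packs ~tags_per_request tags into each request
    (a hypothetical average, not a driver guarantee).
    """
    requests_per_poll = -(-tag_count // tags_per_request)  # ceiling division
    return requests_per_poll * (1000 / period_ms)

# All 20,000 monitored items polled directly at 1,000 ms:
direct = request_rate(20_000, 1000)
# The same items in a leased group idling at the 5,000 ms slow rate:
leased_idle = request_rate(20_000, 5000)
print(direct, leased_idle)  # 1000.0 200.0
```

Tags only jump back to the 1,000 ms fast rate while a client is actually displaying them, so most of the time the card sees the slow-rate load.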

It seems that has improved things drastically. 12% CPU on the ENBT now.
Thanks for looking into it, much appreciated.

Leased tag groups can have a slightly disruptive effect that you should watch out for. Basically, any time the leases change, the subscriptions change, and the driver ends up having to re-optimize based on the new set of tags/rates. This causes a minor disruption in communications. Depending on how many tags you have and how good your communication stats are, you may or may not notice this delay on tags that belong to a direct tag group.

If it becomes a problem one thing you can do is make 2 separate device connections to the PLC and use one for direct tags and the other for leased tags.

Thanks, I’ll keep an eye out for those issues.

Meh. I don’t trust that 100% stat in the ENBT. Flaky junk. I have to admit that I jerk them out at first opportunity. The EN2T is much better.

Things to try in the meantime:

  • Run your concurrency up on your drivers. You have ~128 CIP connections to work with. Consider 12-ish for each of the seven CPUs.

  • Split UDTs out of your AOIs as described in the linked topic, if practical.

  • Anything else in the AOIs that is display-only or configuration or tuning, move to a leased group or scan class.
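The connection budget in the first bullet works out like this (a quick arithmetic sketch using the ~128-connection pool and the suggested per-device concurrency from above):

```python
# CIP connection budgeting across the seven PLCs.
TOTAL_CIP_CONNECTIONS = 128  # approximate pool available
PLCS = 7
CONCURRENCY_PER_PLC = 12     # suggested concurrent requests per device

used = PLCS * CONCURRENCY_PER_PLC
spare = TOTAL_CIP_CONNECTIONS - used
print(f"{used} connections used, {spare} spare")  # 84 connections used, 44 spare
```

That leaves headroom for MSG instructions, other HMIs, and anything else sharing the cards.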


And I would be remiss if I failed to mention the option to use class one connections–I/O or producer/consumer–via my module:

https://inductiveautomation.com/moduleshowcase/module/automation-professionals-llc-ethernetip-class-1-communications

Class one CIP connections seem to be a much lighter load on the PLC and comm cards even with blistering RPI settings.


The only reason I went looking into this is that the Citect driver that was also polling the same PLC kept crashing. AFAIK this hasn’t occurred on the live system, but I am wary of it. Thanks for your suggestions on how to improve things. Making the polling a bit smarter is probably the most practical option moving forward.