High CPU usage on 1756-ENBT

Has anyone experienced an issue with Ignition and the 1756-ENBT running at 100% CPU usage? I have a system with 50,000 tags polling everything once per second, and the ENBT is at 100%. I tried changing the poll rate to 10,000 ms and still got the same result. A Wireshark capture between the card and Ignition showed about 1,000 packets per second.

What version of Ignition are you using? Can you share a screenshot of the diagnostics page for that device?

8.1.9. I will try to get a screenshot, but the page has some trouble loading with the CPU usage this high.

Okay, I meant the one in Ignition under Status > Connections > Devices, but actually one from the ENBT itself would also be good.

Also - what driver are you using and what firmware version is the PLC?

Allen-Bradley Logix 21+. The processor is an L72 running firmware version 32.011.

That’s not 50,000 tags… right PLC, wrong system, or what?

This is a test system, we only have 2 of the 7 total PLCs connected. The rest of the device connections are disabled.

So did your original post mean 50,000 tags across 7 devices all going through the same ENBT?

They are all separate Ethernet cards on the live system. The tag count is still considerable for the currently connected test PLC, though.

So do all the ENBTs on the live system have 100% CPU? Does this test ENBT have 100% CPU?

Yes, both the live system and test are at 100% CPU.

Well, at least for the test system, the Ignition-side diagnostics are as good as it gets. Your 5,000 tags are updating completely on time.

Those stats say you have 2 TCP connections and 4 MSG connections - what else is connected besides Ignition? Or do you have 2 device connections going through it?

Yes, there are 2 device connections through it. I have already tried disabling the other connection, but the result is the same. For context, this is an upgrade of a Citect system which had about 8,000 tags total, but it made heavy use of bit addressing to save on the point count, so my system is at 50,000 because it references the booleans directly.

Sorry, I forgot to attach the other device screenshot.

Well, that’s a total of 112 + 415 = 527 requests/s on the surface, not counting “compound” requests that don’t show up in the status page, but happen any time the data being read is larger than ~500 bytes (so… a lot, if you’re reading structures or arrays).

The docs I can find for the ENBT say it’s only rated for 900 HMI/MSG packets per second, so… I’d say this is about what you can expect without getting a better comm module.
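
To put rough numbers on it (just a sketch; the 2x compound-request multiplier is an assumption, not a measured figure, since those splits don’t show up in the stats):

```python
# Back-of-envelope ENBT load estimate from the two device diagnostics pages.
visible_requests = 112 + 415        # 527 req/s shown in the status pages
enbt_rating = 900                   # ENBT HMI/MSG packets-per-second rating

# Assumed placeholder: large structure/array reads split into ~2 packets each.
compound_multiplier = 2.0
estimated = visible_requests * compound_multiplier

print("~%d packets/s vs. a %d packets/s rating (%.0f%%)"
      % (estimated, enbt_rating, 100.0 * estimated / enbt_rating))
# 1054/900 = ~117%: anything much past the rating and the CPU pegs at 100%.
```

A 2x split also lands right around the ~1,000 packets/s you saw in Wireshark, which fits.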

@pturmel how does this line up with your real-world experience?

Pretty much everything in that 20,000-monitored-item device is in AOIs, but we’re reading the individual members. Would it be beneficial to switch my tag group to leased 5000/1000? That should reduce the request rate, correct?

Ooph, it will help a little, yes, but AOIs are a worst-case scenario for driver performance because they don’t actually get read as a whole, even if you have made all the members externally accessible.

There’s some info about why we can’t access AOIs efficiently in this thread.
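
Side note: if you end up moving a lot of tags into a leased group, you can reassign the tag group from a script rather than editing each tag by hand. A minimal sketch, assuming a leased tag group named Leased_5000_1000 already exists and using a placeholder folder path:

```python
# Reassign existing tags to a leased tag group. The group name and folder
# path are placeholders; collisionPolicy "m" merges, so only the tagGroup
# property of each listed tag is changed.
tags = [
    {"name": "MotorSpeed", "tagGroup": "Leased_5000_1000"},
    {"name": "MotorAmps",  "tagGroup": "Leased_5000_1000"},
]
system.tag.configure("[default]Line1", tags, "m")
```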

That seems to have improved things drastically: 12% CPU on the ENBT now.
Thanks for looking into it, much appreciated.

Leased tag groups can have a slightly disruptive effect that you should watch out for. Basically, any time the leases change, the subscriptions change, and the driver ends up having to re-optimize based on the new set of tags/rates. This causes a minor disruption in communications. Depending on how many tags you have and how good your communication stats are, you may or may not notice this delay on the tags that belong to a direct tag group.

If it becomes a problem one thing you can do is make 2 separate device connections to the PLC and use one for direct tags and the other for leased tags.
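
If you’d rather script the second connection, something like this sketches the idea, though the deviceType string and property names here are from memory, so verify them against an existing device on your gateway first:

```python
# Add a second device connection to the same PLC for the leased tags.
# "LogixDriver" and the property names are assumptions to verify; the IP
# is a placeholder for the ENBT's address.
system.device.addDevice(
    "LogixDriver",                 # assumed driver type ID
    "PLC1_Leased",                 # new device name
    {"Hostname": "10.0.0.10"},     # placeholder IP of the ENBT
)
```

The leased tags would then use OPC item paths referencing [PLC1_Leased] instead of the original device name.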

Thanks, I’ll keep an eye out for those issues.