High CPU usage and Tag Group Configuration

I am new to working with Ignition. I haven't been to any classes in person, but I have gone through the videos as they apply to the work I need to do; a very OTJ situation. We have seen high CPU usage, over 90%, on the system I took over.

I have found that all the tags are assigned to the default tag group, so they are all polled at a rate of 200 milliseconds. This was more or less known by the existing controls engineers, but they are not Ignition experts; they just know that everything is polled that way.

Our transactions are fairly standard: there is a handshake tag that, when set to a specific value, triggers the transaction. The data tags map to database columns. One possibility I see for lowering CPU is to move the data tags to a new Driven tag group that one-shots based on the handshake value.
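For reference, the driven one-shot idea would look something like this in the tag group editor (field names as I recall them from 8.1; the handshake path and trigger value below are placeholders for ours):

```
Mode:                Driven
Rate:                0          (no execution while the comparison is false)
Leased/Driven Rate:  1000       (rate used while the comparison is true)
Driving Expression:  {[default]Path/To/HandshakeTag}   (placeholder path)
Driven Comparison:   =
Comparison Value:    1          (placeholder trigger value)
One-shot:            true       (execute once per trigger, then stop)
```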

Is there a good way to alleviate CPU usage? And is there anything else to look for? I did find in other posts to check the version; we are at 8.1.

This is an aggressively fast update rate. There is a reason the default for the default tag group is 1000ms. Visit Status=>Devices=>Details for your PLC connections and show us what you have.

The tags that actually need that fast update rate should be in their own tag group for that purpose.
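To see why the rate matters so much: subscription load scales inversely with the group period, so dropping from 200 ms to the 1000 ms default cuts the worst-case poll traffic fivefold. A back-of-envelope sketch (the tag count here is hypothetical; substitute your own from the Gateway's Status page):

```python
def updates_per_second(tag_count, poll_ms):
    """Worst-case tag value polls per second for one tag group."""
    return tag_count * 1000.0 / poll_ms

tags = 20000  # hypothetical total tag count in the default group
print(updates_per_second(tags, 200))   # -> 100000.0 at 200 ms
print(updates_per_second(tags, 1000))  # -> 20000.0 at the 1000 ms default
```

Actual device traffic also depends on how well the driver can batch requests, but the ratio between the two rates holds either way.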

It also sounds like you are using Transaction Groups from the SQL Bridge module. Consider replacing all OPC tag references within your transaction groups with OPC Item references, then set the OPC mode to "read" instead of "subscribe". (And delete any of those tags from Ignition that you don't need in your UIs.) This will cause the transaction group to subscribe only to the trigger, and then one-shot the reads of the other items. No need for a driven tag group.

It is a mix of OPC tag references and OPC item references. I will check the mode setting, but I am not at the factory right now. I will get the other info you asked for next week. Thanks.

Is high CPU usage normal?

Varies. It is likely to cause trouble on an I/O server, though, if any native protocols are involved. Native request-response protocols work best when at least some CPU cores are always idle, so they can be dispatched immediately to handle an arriving packet. Depending on the number of cores available, as much as 90% idle might be needed for best performance.

I am new, so I am not exactly sure about some of the terms you use. This Ignition instance takes data from OPC tags on PLCs and saves it to a database, and vice versa. So is that considered an I/O server? If not, can you give an example?

As for the cores available, one thing to point out is that it runs as a VM guest on a VMware ESXi server. The database is on another server.

Yes, I consider any Ignition instance that is communicating with PLCs to be an I/O server. The issues with I/O server idle time/latency are most applicable to branded native protocols, like Modbus, EtherNet/IP, Siemens S7, MELSEC, et cetera. Such protocols are "request-response". It is less of an issue with OPC connections to PLCs (for subscribed items), or with indirect connections via MQTT, as such traffic is "report by exception" and not so latency-sensitive.

{ Using polling in direct OPC-PLC connections to evade monitored item count limits makes those items latency-sensitive, though. }

IA recommends a frontend/backend architecture for heavy workloads, where I/O traffic is located close to the PLCs in "backend" servers, and tag data is propagated to "frontend" servers for user interfaces and analytics.

My condolences. VMware is notorious for stealing the idle time from workloads like Ignition (latency sensitive but not CPU hogs) and handing it to workloads like databases (definitely CPU hogs) with negative consequences for Ignition.

Unless you take great care to configure VMware (or other hypervisor) to prevent this, you should not run Ignition I/O servers in a VM.