System Overhead Time Slice Missing? Debugging High Load Factor

If you’re looking at the newer diagnostics, I’d just be trying to reduce the Overload Factor and/or the Queue Delay. Nothing else really matters; you’re just getting to 4 via 3+1 instead of 2+2, if that makes sense :stuck_out_tongue:


Still trying to chase down this issue and debug our comms (though it got a little harder now that I’m on a test bench instead of the actual line).

I’ve got a few questions to clarify some of the things I’ve been reading and testing.

  • We aren’t referencing AOIs directly, but we do have external UDTs in the PLC that are referenced within AOIs. Is that going to cause the same issues as reading AOIs? Or, with the tags being separate from the AOI, are we safe there?

  • Are there some best practices or tricks to optimize the number of requests? I’m sure there’s a lot that goes into that, but are there at least a few common things we can look at?

It shouldn’t matter if the UDT is used as an InOut reference on the AOI, as long as it’s not a local tag inside the AOI.

Group your booleans together in the UDT. Internally, Rockwell packs each boolean into a bit of a SINT, and Ignition reads the SINT, not the individual booleans (if I understand it properly). So if you have 7 booleans in sequence, Ignition will read one tag, not 7. If those 7 booleans are scattered throughout the UDT, Ignition will read 7 different tags.
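To illustrate why grouping matters, here is a minimal Python sketch of the packing described above. This is not Rockwell or Ignition code, just a model of the idea: contiguous BOOL members share one SINT-sized backing value, so one read recovers all of them, while scattered BOOLs each cost their own request.

```python
def pack_bools(bits):
    """Pack up to 8 booleans (LSB first) into a single byte,
    like contiguous BOOL members sharing a SINT in a UDT."""
    value = 0
    for i, b in enumerate(bits):
        if b:
            value |= 1 << i
    return value

def unpack_bools(value, count):
    """Recover the individual booleans from the packed byte,
    i.e. one tag read serving many BOOL values."""
    return [bool(value >> i & 1) for i in range(count)]

# 7 booleans in sequence -> one packed value -> one read request
flags = [True, False, True, True, False, False, True]
packed = pack_bools(flags)
assert unpack_bools(packed, 7) == flags
```

If those same 7 flags lived in 7 separate spots in the UDT, each would land in a different backing word and the driver would issue 7 reads instead of 1.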


So I’m bringing up my older thread, since we have a very confusing breakthrough on this. We have been trying to reduce the Class 3 Comms load on the PLC and improve response time in Ignition. We tried adding connections, adjusting comms size, adding and removing tags, and slowing scan classes. Nothing made a significant change after we removed the one AOI we were looking at.

We ran out of ideas, so we started doing things that didn’t make sense… We changed our concurrent connections down to 1 connection to the PLC, and suddenly our Class 3 Comms on the PLC dropped to roughly 50-65 percent instead of being maxed out. We sped our scan classes back up, and Ignition still has a queue time, but the PLC seems to be much happier.

Does anyone have an idea on why this may have helped? Do these newer processors with dedicated communication processors not handle concurrent connections well? For reference we are running an L84 with version 33 firmware.

This isn’t surprising: you are demanding less of the PLC comms when you lower the concurrent connections. The trade-off is that you will be polling data at a slower rate.

It doesn’t really matter if the comms usage is maxed at 100% all the time on the controllers with a dedicated CPU for comms unless you have other applications that are being impacted by it.

Yeah, we were using these for some HMIs, and we were seeing significant delays. Plus the customer was going to be adding more tag polling for their MES system.

We had done a ton of cleanup actions and things to reduce the load on the PLC, but weren’t seeing that much improvement…until we noticed the customer had redundancy set to Warm as well. So the PLC was under extra load compared to any of our local testing in our office, and the small changes we made in Ignition were compounded by the extra polling from the redundant gateway, it seems.

Yes, you were doubling the load. Only cold redundancy mode keeps the backup gateway from communicating with the devices.


Yeah, it was something we overlooked since we didn’t configure the redundancy. We’ll definitely be checking that for issues going forward.

Your situation sounds tailor-made for the base version of my Ethernet/IP module. If you can add a little to the PLC code and I/O tree, you can move your most demanding comms to highly-efficient I/O packets.
