I was poking around the different status pages on the gateway and noticed the load factor for all my devices is anywhere from 105% to 275% for “Scheduled at 1000ms”. There are close to 40,000 tags in my project and a majority of them run on a one-second scan class, but I don’t see any performance problems or log entries.
The details for heaviest load (currently at 266%) are:
It means your tags aren’t really updating at 1s. As long as the pace is within the stale timeout of the scan class, it won’t generate an error.
You should be concerned. Corrective action to actually improve the performance of the links varies by PLC brand and model.
But the number one item to address is whether you actually need 40,000 tags. If you’ve been dragging and dropping everything from your OPC browser to your SQLtags tree, you’re doing it wrong. Only OPC items that are actually participating in your UI windows and control scheme should be subscribed as SQLtags. For items that are rarely needed, consider using the system.opc.* script functions to read/write on demand. Consider using the “read” OPC mode for slow transaction groups.
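For the on-demand reads, something like this minimal Jython sketch is all it takes (the server name and item paths here are placeholders for your own):

```python
# One-shot OPC read -- nothing stays subscribed, so these items add no
# steady-state load. Server name and item paths are placeholders.
server = "Ignition OPC-UA Server"
paths = [
    "[MyDevice]Program:Main.RareCounter",    # hypothetical items
    "[MyDevice]Program:Main.RareSetpoint",
]

qvs = system.opc.readValues(server, paths)
for path, qv in zip(paths, qvs):
    print path, qv.value, qv.quality
```

Between calls, those items cost the device nothing.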
To answer your first question, the load factor lets you know how overloaded your device is. Based on the mean response time, this single scan class is making 2.66 times as many requests per second as the device can handle, never mind what other scan classes are doing. There are pretty much only three ways to reduce that load – find a way to make the device handle requests faster (isn’t going to happen), make fewer requests (i.e. fewer tags), or make requests less often (i.e. slower scan classes).
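To make that concrete, here’s a back-of-envelope version of the math. This is my reading of what the load factor represents, not the gateway’s exact formula, and the request count and response time are made-up numbers:

```python
# Rough load factor estimate -- illustrative numbers only, not pulled
# from the gateway's actual diagnostics.
period_ms = 1000.0        # scan class period ("Scheduled at 1000ms")
requests_per_cycle = 50   # optimized requests the driver issues per cycle
mean_response_ms = 53.2   # device's mean response time per request

# Time needed to service one full cycle, divided by the time available:
load = requests_per_cycle * mean_response_ms / period_ms
print "%.0f%%" % (load * 100)   # -> 266%
```

Anything over 100% means the cycle can’t finish before the next one is due, which is exactly the “not really updating at 1s” behavior described above.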
That might be too pessimistic. Some drivers have advanced settings that help in certain cases, and sometimes the PLC itself can be adjusted to help. I've also had some success in a few cases splitting the load among more than one driver instance pointing at the same PLC.
If this is a CompactLogix or ControlLogix and you’re using a continuous task then check what the System Overhead Time Slice is currently set to. Moving from 20% to 40% or 50% can help a lot. If the CPU/comms usage for your Logix isn’t currently at or near max, then increasing the concurrent requests setting in the driver can help as well. The default is 2 and anything up to 8 is usually reasonable depending on how many tags you have.
For sure. But I find that if I tell people it’s not going to happen, they’ll do the right thing and first think about whether they really need all their tags scanning at the rate they have them. When they’ve straightened that out, and are desperate, they’ll get their device to handle more requests and then feel awesome because they did something “impossible”.
Thanks for the responses. I have never done the drag-and-drop thing and most come from instances of data types. I recently moved about a quarter of them to leased scan classes. Based on your recommendations I’ll be taking a much closer look at the others.
To make sure I understand correctly, this isn’t necessarily by “Scan Class” but rather by scheduled interval. We originally set up multiple scan classes by machine or function, but each would be a duplicate in all but name. The thought was to group them so we could tweak the scan class settings by category of tags (think IO, or a type of machine that runs slower than others).
On a semi-related note, it appears tags that use a leased scan class don’t show up in the gateway’s scan classes tab. My project has 38,165 tags but the sum here is 30,484.
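If you want to double-check where the missing tags went, a quick Jython audit like this could tally tags per scan class. This assumes an Ignition 7.x project where the .ScanClass property is readable via dot notation; treat it as a sketch, not gospel:

```python
# Tally tags by scan class (Ignition 7.x sketch; assumes the ".ScanClass"
# property reads back the scan class name for each tag).
tags = [t for t in system.tag.browseTags(parentPath="", recursive=True)
        if not t.isFolder()]
qvs = system.tag.readAll([str(t.fullPath) + ".ScanClass" for t in tags])

counts = {}
for qv in qvs:
    counts[qv.value] = counts.get(qv.value, 0) + 1

for sc, n in sorted(counts.items()):
    print sc, n
```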
I have also experienced some random delays in my tag updates, showing up as the Uncertain/Unknown quality overlay.
Adding to Kevin’s post, if you are using Periodic Tasks and have the time (period) set to a value close to the actual scan time of the task, then that task will consume much of the CPU, leaving little for system tasks like communications.
I typically only use Periodic Tasks and set the time to 2-3x the actual scan time of that task so it is only using roughly 30-50% CPU. Then I set the System Overhead Time Slice to 40-50% like Kevin mentioned and also select the Reserve for System Tasks option.
I’ve tried the optimizations listed on this page and my device load did drop from over 100% to somewhere between 25-45%. However, I notice the load factor sometimes goes up to about 50-70% as I am pressing buttons quickly. I have a total of around 7,000 tags, with my leased scan class being 1 second at the fast rate and 10 seconds at the slow rate, so at any one time about half of my tags would be read at 1 second.
Can anyone recommend anything else that can be done to decrease the device load to as low as possible? Would splitting scan classes help? Thanks.
Wish I’d seen this suggestion a long time ago. I have an L71 which is doing very little work and even polling 130 tags at 500ms was unusable.
I changed the System Overhead Time Slice from 20% to 50% and my device load went from 300% to 17%.
Not only that, but Studio 5000 online edits are much faster as well.
Thanks for the advice.
Bringing this thread back to life because we recently set up a new Ignition server (new server hardware) to replace the current one. We have been running the new server in parallel with the old one, and of the 23 PLCs connected, only one Logix driver has a high load (two other Logix PLCs occasionally show an overlay; none of the SLC or MicroLogix units have ever had an overlay issue). It’s over 300% and causes red overlays every 2-3 seconds, which is quite annoying. The strangest thing is that the old server isn’t seeing these overlays, even though it shows a similar load factor.
So, not having access to the PLC myself, is there anything I can do from an Ignition point of view? Or does this absolutely require calling in the guy who set up the PLC to check the system overhead time slice thing?
Your mean response times are astonishingly bad. Logix processors usually respond in single-digit milliseconds. For starters, make sure the PLC does not have a continuous task – convert it to a periodic task with the longest practical interval for your process.
Hi Mbhagotra… was browsing around and noticed your post. For me, using a prime-number-based timing scheme seemed to really get the most out of my throughput and efficiency. For instance, if you have scan classes set up at 500ms, 1000ms, 2000ms and 10000ms, then you are causing these to frequently fire simultaneously because they are multiples of each other. I have used primes like 401, 463, 773, 1123, etc. This ensures that polling overlaps will be very infrequent.
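The least common multiple of two rates tells you how often their polls land on the same tick, which makes the effect easy to check (a quick sketch using the rates above):

```python
# How often do two scan rates fire simultaneously? Every lcm(a, b) ms.
def gcd(a, b):
    while b:
        a, b = b, a % b
    return a

def lcm(a, b):
    return a * b // gcd(a, b)

print lcm(500, 1000)  # 1000 ms    -> they collide on every 1s cycle
print lcm(463, 773)   # 357899 ms  -> roughly one overlap every 6 minutes
```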
I just wanted to bring this back to life real quick because I have some questions regarding the newer L80 processors. I currently have 8,963 tags and I am using the Gb port on the front of the PLC solely for my Ignition server. My switch says I am hovering right around 2-2.1 Mbps on the port for the PLC. My load factor is at 95%. Why is this so? Why am I not able to use at least half the bandwidth available to me? If this sounds like I don’t know what I am talking about, it’s because I don’t. That is why I am here. My IO comms on the L83E are only around 4-4.6% actual vs. theoretical. The only thing I see is the MSG Class 3 at 97%. My CPU is idle 74% of the time. Is there any way to allow more time for comms, or to adjust this in any way to get more out of the PLC gigabit port?
The gigabit port is not the bottleneck. A 100 Mbit port would not be a bottleneck either.
The bottleneck is the PLC.
Bumping concurrency and the CIP connection size can help, but ultimately all of this is Class 3 comms and the PLC can only do so much of it, as you're seeing.