Device connection is overloading like crazy


Hi all,

I just wanted to see if it's normal for a device to overload this much with a tag count of roughly 2,500. I'm not sure how the overload is calculated, but there are devices with higher tag counts (>10k) that show lower overload values, which is a bit confusing to me.

All tags assigned to this device are in a Leased tag group with a leased rate of 1000 ms, a rate of 5000 ms, and everything else at default. The PLC model is an AB 5069-L340.

Many thanks in advance!
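As a rough illustration of why a smaller tag count can still show a higher overload: one plausible reading of the metric (the exact formula Ignition uses is not spelled out in this thread, so this is an assumption) is how far the actual time to service one poll cycle exceeds the requested sampling interval. Slow per-request responses then matter more than raw tag count:

```python
# Hedged sketch, NOT Ignition's documented formula: assume "overload"
# measures how far the actual poll-cycle time exceeds the requested
# sampling interval, expressed as a percentage.

def overload_percent(actual_cycle_ms: float, requested_interval_ms: float) -> float:
    """0 when the cycle fits inside the interval, >0 when it does not."""
    excess = actual_cycle_ms - requested_interval_ms
    return max(0.0, 100.0 * excess / requested_interval_ms)

# A device with fewer tags but slow responses can overload harder than a
# device with many tags and fast responses:
print(overload_percent(actual_cycle_ms=2500, requested_interval_ms=1000))  # 150.0
print(overload_percent(actual_cycle_ms=6000, requested_interval_ms=5000))  # 20.0
```

Under this reading, the monitored item count alone tells you little; what matters is how long each cycle takes relative to the rate you asked for.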

Is this device on another network in another country? Does it have other systems already polling it?

The response time is pretty bad.

3 Likes

Hi @Kevin.Herron ,

No, the PLC is sitting right in front of me, believe it or not. There's only a single gateway, if that's what you're asking, but the PLC is running one of our machines with other peripheral devices. I'm playing around with some of the settings on the device, like max concurrent requests, and they seem to help a bit, but the overload is still at a crazy level..

You might try changing the system overhead timeslice in the PLC to allocate more time for comms.

1 Like

I don't think the 5069 controllers have overhead timeslices. If I had to guess: they're using PlantPAx AOI blocks, they're using all atomic tags with no UDTs, or they made their own AOIs and didn't set all the tags to at least Read Only for external access.

1 Like

@Kevin.Herron @michael.flagler
Thanks for the input!
As michael mentioned, I don't think there is an option to adjust overhead timeslices on 5069.

None of the AOIs are used on the Ignition side (at least not in the client-facing application), and everything we pull from the PLC is defined in UDTs, although they do have a very complex nested structure, and all atomic tags are set to Read/Write for external access.

I'm also seeing a weird outcome: when I make changes to the parameters on the connected device, the response time starts out very fast and then declines over time to the point where it's almost unusable. I tried restarting the gateway but the symptom continues :frowning:

When you say they're complex nested UDTs, how big (in bytes) is your base UDT? Are you reading all/most tags in these UDTs, or just some? What do your tag groups look like? Are you doing direct on everything or using leased tags? I've found doing all direct is usually faster because it avoids the extra load of processing changing poll rates of tags, but that's just my opinion.

Have you tried testing Phil's driver to see if you get better results? It's a drop-in replacement for the Ignition driver.

1 Like

Try temporarily increasing the poll rate for your 5sec tag group (to 30sec? 300sec?) to determine how much that tag group is contributing.

1 Like

@michael.flagler
Hi Michael,
Thank you for the response, and apologies it took a while for me to get back; I've been away from my desk for the last few days.
One of the bigger UDTs is about 150 kB in size, but there are a few other biggish UDTs routed through the same device.
All tags are assigned to the same tag group in leased mode, with a rate of 5000 ms and a leased rate of 1000 ms.
Yes, I've noticed it takes a moment for tag values to change when they're in leased mode, but putting them on direct makes everything unusable as the overload goes through the roof.
I'm just not sure how the overload is calculated. I thought reducing the tag count would help, but it looks like the monitored item count is not the main factor:

Hi @Chris_Bingham ,
All tags are on the same tag group but divided across multiple devices. I'm not quite sure what you meant by 'how much that tag group is contributing'. Did you mean try increasing the rate to 30 sec or 300 sec on the tag group?

Look at your mean response times. Those indicate that the requests the driver is making to retrieve the desired items are absurdly complex compared to the working install.

A healthy L8x family PLC will answer trivial requests in under a millisecond, and complex requests in 30-ish milliseconds.

The L7x PLCs are a good bit slower, and can be impacted by ladder task execution (L8x has a dedicated comms processor).
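Those per-request figures translate into a hard budget for a leased rate. A back-of-the-envelope sketch (the 30 ms figure comes from the post above; the concurrency of 2 is an assumption matching the driver's default Max Concurrent Requests mentioned later in the thread):

```python
# Rough capacity estimate: how many complex requests fit inside one
# 1000 ms leased cycle, given a per-request round-trip time and a fixed
# number of concurrent in-flight requests. Queuing effects are ignored,
# so this is an optimistic upper bound.

def max_requests_per_cycle(cycle_ms: float, per_request_ms: float, concurrency: int) -> int:
    return int(cycle_ms // per_request_ms) * concurrency

print(max_requests_per_cycle(1000, 30, 2))   # ~66 complex requests per cycle
print(max_requests_per_cycle(1000, 300, 2))  # ~6 if responses degrade to 300 ms
```

If the driver needs more requests per cycle than this budget allows, the cycle overruns its interval and the overload statistic climbs.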

I thought the L3xx series compact logix also had a separate comms processor, but I'm not sure.

Please share more about your setup, including the network layout and distances involved.

1 Like

Based on Rockwell's manual 1756-RM100, page 105, yes? They also don't have an option for dedicating a time slice to comms.

Unlike 5370 controllers, which share the main core between application code and communications, the 5380 controllers run communications asynchronously from the user application.

5380 on the right: [image]

2 Likes

I just noticed this. Very large UDTs will create odd statistics, and may not be counted correctly by the Logix driver. You should avoid UDTs much larger than the connection buffer size.

It is vitally important that everything in a single tag in the PLC be scanned at the same pace by the tag groups, or you will get multiple requests for the same UDT to satisfy different parts of it.
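To put numbers on that: a UDT read has to be split into fragments no larger than the CIP connection size, so an oversized UDT costs many round trips per poll. A hedged estimate (the 150 kB figure comes from earlier in the thread; per-packet protocol overhead is ignored, so real fragment counts will be somewhat higher):

```python
import math

# Rough fragment count for reading one large UDT: each round trip can
# carry at most the CIP connection size, so the read is split into
# ceil(udt_bytes / connection_bytes) fragments. Header overhead per
# packet is ignored for simplicity.

def fragments_per_read(udt_bytes: int, cip_connection_bytes: int) -> int:
    return math.ceil(udt_bytes / cip_connection_bytes)

print(fragments_per_read(150_000, 500))   # 300 round trips per poll at a 500-byte connection
print(fragments_per_read(150_000, 4000))  # 38 round trips at a 4000-byte connection
```

And if parts of that UDT are scanned at different paces, the driver may repeat some of those round trips for each pace, multiplying the load further.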

You should read all of this topic and the cross linked topics:

3 Likes

If you're not reading all the data in your UDTs from Ignition, then I would recommend splitting out the data you do need for Ignition/HMI use into a separate UDT, independent from the larger UDT. Yes, this can be some work, but it sounds like you have extra-large UDTs, and if you don't need all that data, the complexity is killing your comms.

I like to have separate UDTs for the various data. For instance all my 1000ms data is in a "Data" UDT for each type of device/equipment. The data UDT does have a few nested UDTs, but I read all of this at 1000ms direct. I have a separate Meta UDT with strings/etc that I read at 60000ms since they're not critical to keep live and they rarely if ever change.
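The payoff of that split is easy to see in bytes moved per second. A quick illustration (the 150 kB total comes from earlier in the thread; the 10 kB fast-data portion is a made-up round number for the example):

```python
# Bandwidth comparison: polling one big UDT at 1000 ms versus splitting
# it into a small fast "Data" UDT at 1000 ms and a large slow "Meta" UDT
# at 60000 ms, as described in the post above.

def bytes_per_second(udt_bytes: int, poll_ms: int) -> float:
    return udt_bytes * 1000.0 / poll_ms

combined = bytes_per_second(150_000, 1000)  # whole UDT every second
split = bytes_per_second(10_000, 1000) + bytes_per_second(140_000, 60_000)
print(combined)  # 150000.0 bytes/s
print(round(split, 1))  # ~12333.3 bytes/s, roughly a 12x reduction
```

The request count falls by a similar factor, which is what actually relieves the PLC's comms load.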

3 Likes

Thank you everyone, I'm learning very useful information from this thread.

I'll try to upload more details about our setup here (although the network portion is dead simple: PLC - unmanaged switch - Gateway, and that's it), but something like the below bothers me a bit. I've assigned a single UDT to a device and the overload is still crazy for the number of tags it's monitoring. I've gone through the thread @pturmel mentioned and tried applying different settings, but they all perform at more or less the same level (edit: increasing the CIP connection size reduces the overload, but the tags refresh veeeeery slowly):

I'll try digging through the UDT and see if I can find AOIs or wrong external access.

An observation: your monitored item count varies drastically between your screenshots, from a low of 497 to a high of 61,876. It seems like additional changes were made between screenshots, or the driver is restarting the connection process, so we're not observing anything at steady state.
You may have a lot of cleanup to do, but I recommend you start by slowing everything down until you can reach a steady state. Change your leased tag group to direct, with a slow poll (10-30 seconds). Turn off Automatic Rebrowse. Increase your CIP connection size (to 1000, 2000, or 4000). Wait until everything has been at steady state for several poll cycles (a few minutes, if it ever gets there), then send another screenshot.

While I probably know the answer to this: make sure the web page is enabled on the processor, and check the tasks to see what your Class 3 comms loading is. I'm guessing it's essentially 95%+, but I'd be curious.

Definitely set the CIP connection size to 4000 and set max concurrent requests back to 2 (that seems to be the sweet spot in my experience).