Tag Scan Class

I am stuck between a rock and a hard place.

If I set my scan class any lower than 1.750 seconds, I get flashes of bad quality on all my tags (~12k). Such a slow reaction time basically renders the program inoperable. Is there any fix to this that a rookie is overlooking?

That usually means you are overloading the communication capacity of your device.
One rookie mistake is to drag the entire OPC browser tree for your device over to the SQLtags tree. Don’t do that. Only make SQLtags for the data you actually need in your application. While you are at it, organize your SQLtags in folders that make sense for display with indirect binding, not the folders that exist in your device.
If you have large numbers of tags that are only used for display on specific windows in your app, place them in their own scan class and set it to “leased” with a very long slow update pace. That way they’ll have minimal impact on your communications when no one is looking at them.
If you have OPC addresses that are only needed in response to specific events, use the system.opc.* scripting functions to access them without creating SQLtags.
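For that last approach, here is a minimal sketch of an on-demand OPC read and write using system.opc.readValue and system.opc.writeValue, e.g. from a button’s actionPerformed script. The server name shown is the typical default for the internal OPC-UA server; the device name and tag path are placeholders for illustration only:

# Minimal sketch: read/write an OPC item on demand instead of subscribing via a SQLtag.
# "Ignition OPC-UA Server" is the usual default server name; "MyCLX" and the tag path
# are hypothetical placeholders; substitute your own device and tag.
server = "Ignition OPC-UA Server"
itemPath = "[MyCLX]Program:MainProgram.FaultCode"

qv = system.opc.readValue(server, itemPath)   # returns a QualifiedValue
if qv.quality.isGood():
    print "Value:", qv.value
else:
    print "Read failed, quality:", qv.quality

# Writes work the same way; the return value is the write quality.
writeQuality = system.opc.writeValue(server, itemPath, 0)
print "Write quality:", writeQuality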
Also, some drivers have tunables for requests in flight and similar that can affect performance. You should look at those, too. After pruning your SQLtags, of course.

I did the pruning and some tweaking of the scan classes. I have the default at 1.5 sec, the alarms at 2 sec, and buttons at 1 sec. My bad quality issue seemed to be fixed by decreasing the stale timeout to 5 sec from the default 10 sec. My issue now is that I have crazy slow response times on the tags (~5 sec) and can’t figure out why.

You’ll have to share more about the hardware, the path from Ignition to that hardware, and the driver settings you are using. Also a snapshot from the driver diagnostics page showing # of requests and throughput, and max latency.

Using an AB CLXL72S. Straight connection between PLC and computer.

I didn’t have these problems when I used the default tag provider, but I was asked to move the tags to their own provider (see attached image).

aggregate-request-count
Value 225
aggregate-throughput
50thPercentile 0.203894
75thPercentile 2001.8381259999999
95thPercentile 2004.5927499999998
98thPercentile 2009.919657
999thPercentile 2023.514882
99thPercentile 2015.80043
Count 394204
DurationUnit milliseconds
FifteenMinuteRate 5.156855842051245
FiveMinuteRate 2.5041998588238807
Max 2026.6800309999999
Mean 701.8953554711902
MeanRate 62.8986778907878
Min 0.029762999999999998
OneMinuteRate 2.8894393587364706
RateUnit events/second
StdDev 955.6471113741225
request-count@1000
Value 225
request-throughput@1000ms
50thPercentile 0.185078
75thPercentile 2001.8251269999998
95thPercentile 2004.579751
98thPercentile 2009.898446
999thPercentile 2023.5042779999999
99thPercentile 2015.7764829999999
Count 391658
DurationUnit milliseconds
FifteenMinuteRate 5.236052556008645
FiveMinuteRate 2.4918277742355044
Max 2024.2808539999999
Mean 701.8012412680978
MeanRate 62.54848791910034
Min 0.035579
OneMinuteRate 2.879830286905486
RateUnit events/second
StdDev 955.61436010261
request-throughput@5000ms
50thPercentile 0.022577999999999997
75thPercentile 2003.6088579999998
95thPercentile 2003.6088579999998
98thPercentile 2003.6088579999998
999thPercentile 2003.6088579999998
99thPercentile 2003.6088579999998

request-throughput@5000ms
50thPercentile 2000.89905
75thPercentile 2000.89905
95thPercentile 2003.6088579999998
98thPercentile 2003.6088579999998
999thPercentile 2003.6088579999998
99thPercentile 2003.6088579999998
Count 517
DurationUnit milliseconds
FifteenMinuteRate 0.0128474821167348
FiveMinuteRate 0.010013944881484754
Max 2006.686771
Mean 1713.353800575647
MeanRate 0.0817610637134741
Min 0.018473
OneMinuteRate 0.010761924255221168
RateUnit events/second
StdDev 702.1469716657641
tag-reads
Count 10372949
FifteenMinuteRate 73.10271515555353
FiveMinuteRate 0.03333190262165781
MeanRate 1640.40274137879
OneMinuteRate 1.8409656636921597E-22
RateUnit events/second

unscheduled-request-throughput
50thPercentile 10.192998
75thPercentile 14.505564999999999
95thPercentile 32.442437
98thPercentile 47.125554
999thPercentile 114.679867
99thPercentile 59.448836
Count 1586
DurationUnit milliseconds
FifteenMinuteRate 0.04805479264335188
FiveMinuteRate 5.464939557181041E-6
Max 313.185898
Mean 13.346384972391089
MeanRate 0.2520403096115104
Min 2.816203
OneMinuteRate 1.2902806468475438E-27
RateUnit events/second
StdDev 11.770903302255155

[quote=“Jonathan Henebrey”]I am stuck between a rock and a hard place.

If I set my scan class any lower than 1.750 seconds, I get flashes of bad quality on all my tags (~12k). Such a slow reaction time basically renders the program inoperable. Is there any fix to this that a rookie is overlooking?[/quote]

Play around with the ControlLogix/CompactLogix CPU overhead time slice % in the controller properties. I have one project where increasing it from 20% to 50% cut the PLC tag update reaction time from 5 seconds to < 0.5 seconds.
Overall task scan time only increased by 5 ms doing this.


[quote=“Jonathan Henebrey”]Using an AB CLXL72S. Straight connection between PLC and computer.[/quote]Ok. An L7x processor has gobs of bandwidth.[quote=“Jonathan Henebrey”]I didn’t have these problems when I used default tag provider but I was requested to move the tags to their own provider[/quote]Your image is for the tag status page. I asked for the driver diagnostics page. (Link on the right hand side of the Configure => OPC-UA => Devices page.)
In the meantime, though, look at the last execution time for your alarms scan class. 19 seconds is pathological. Did you select the Database Driving Provider? Could you not just use another instance of the Standard Tag Provider? The database providers run tag operations through a database write queue table – it can be a significant bottleneck.

[quote=“Curlyandshemp”]Play around with the ControlLogix/CompactLogix CPU overhead time slice % in the controller properties. I have one project where increasing it from 20% to 50% cut the PLC tag update reaction time from 5 seconds to < 0.5 seconds.
Overall task scan time only increased by 5 ms doing this.[/quote]I would avoid messing with that time slice. You can get all of its benefits and more by simply converting your Continuous Task into a Periodic Task at a pace that makes sense for your process. All of the remaining time then becomes automatically available for comms without endangering your system latency. I’ve always found that predictable control system latency is preferable to the lowest possible scan-to-scan latency.

I bumped up the time slice from 20% to 40% a while ago and saw a small but noticeable performance improvement. I agree with pturmel that this isn’t the best approach. Increasing the client memory to the maximum of 4GB made a large improvement, but I still feel like I am bottlenecking somewhere.

I am using a standard tag provider.

The alarm tags were driven off a summation fault status tag to try to reduce the number of tags that needed to be scanned. I am not using that anymore.

Hmm. Huge numbers of requests are timing out at 2000ms instead of completing (in your throughput histograms the 75th through 99.9th percentiles all sit right at the ~2000 ms timeout, while the median request completes in well under a millisecond). How many simultaneous requests are you allowing? Please show your driver config page.
I wonder if the fact that it’s an L7xS GuardLogix instead of an L7x ControlLogix is playing a part. I don’t have an L7xS in my lab here to experiment with.

I’ve got most of the driver properties set to default, so 2 concurrent requests.

It’s not a fair comparison, but I created a small project with FactoryTalk View ME to check whether the PLC is the bottleneck. I can safely say that it is not.

Increase the max concurrent requests. I suggest 8 or 10. (I think the max is 12, but don’t quote me on that.) The problem is that with only 2 requests at a time, and hundreds of requests needed per scan, round-trip travel and processing time suddenly become significant.
If it’s still not enough, you can create multiple drivers to the same processor to get more concurrency.
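To put rough numbers on that, here is a back-of-envelope sketch. The ~15 ms per round trip is an assumption for illustration, roughly in line with the unscheduled-request mean (~13 ms) in the diagnostics posted above, and the 225 requests per scan comes from the aggregate-request-count:

# Back-of-envelope model: with a fixed concurrency limit, one scan's worth of polling
# takes roughly (requests per scan / concurrency) * round-trip time per request.
requests_per_scan = 225   # aggregate-request-count from the diagnostics above
round_trip_ms = 15.0      # assumed per-request round trip (illustrative only)

for concurrency in (2, 8, 10):
    scan_ms = requests_per_scan / float(concurrency) * round_trip_ms
    print "concurrency %2d -> ~%.1f s per scan" % (concurrency, scan_ms / 1000.0)

# concurrency  2 -> ~1.7 s per scan  (right around the 1.750 s threshold reported earlier)
# concurrency  8 -> ~0.4 s per scan
# concurrency 10 -> ~0.3 s per scan

Under that rough model, 8 to 10 concurrent requests should bring a full scan’s polling comfortably under the 1 second button scan class.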

I thought that would be the fix, but there is still a slow reaction time (~5 sec) between when a button is pressed and when the PLC sees the tag change. Here are the diagnostics after I changed the max concurrent requests to 10.

What firmware version is this device?

V21.11

Two remaining possibilities:

  1. You have other systems, PLCs, etc. communicating with this device in addition to Ignition, and there’s not enough time to service all the comms requested of it.
  2. Your program(s) are executing too slowly and there’s not enough time left to service all the comms.