OPC communication with Rockwell PLC

Hey guys, I am stuck in an issue please help me out.
I am using a Rockwell CLX PLC and Ignition Vision for its HMI, with data over OPC. I am reading and writing data through OPC and have added all the PLC tags to the tag provider; the total is around 25k tags. The HMI is very slow in updating values from the PLC, taking around 10 seconds or sometimes more to update. Ignition itself is also getting very slow. Does anyone know what the issue is? Is there a tag limit for the OPC tag provider, or some configuration setting I'm missing?

This is a common mistake that overloads drivers. Only create Ignition tags for the OPC items (PLC tags) that you actually need.

If, after you prune down to the tags you actually use, you still have issues, you may need to restructure some data in your PLC to make external access more efficient. See this topic and the topics it links to:

5 Likes

Also, depending on structure, you may not need to read all the tags at the same rates, so create various scan groups for the various update rates you need.
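The impact of tiered scan rates can be put in rough numbers. This is a back-of-envelope sketch in plain Python (the tag counts and rates below are made up for illustration, and it ignores how the driver packs items into requests):

```python
# Rough estimate of PLC read load. Splitting tags into tag groups by
# the update rate each one actually needs cuts the number of item
# reads the driver must service per second.

def reads_per_second(groups):
    """groups: list of (tag_count, scan_rate_ms) pairs."""
    return sum(count * 1000.0 / rate_ms for count, rate_ms in groups)

# All 25,000 tags polled at 1 s:
flat = reads_per_second([(25_000, 1000)])

# Same tags split by how fresh the data really needs to be:
tiered = reads_per_second([
    (1_000, 1000),    # operator-facing values, 1 s
    (9_000, 5000),    # general status, 5 s
    (15_000, 30000),  # slowly-changing setpoints/strings, 30 s
])

print(flat)    # 25000.0 reads/s
print(tiered)  # 3300.0 reads/s, same tags, ~87% less load
```

Same tag count either way; the driver just stops re-reading slow-moving data every second.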

Best way to check performance is by looking at your gateway web pages under Status -- Connections -- Devices and look at the details for your device. I always shoot for single-digit overloads. The smaller the better (strive for 0% overload ideally).
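For intuition about what that overload number means, here's a simplified sketch (this is the common reading of the figure, not necessarily IA's exact internal formula): it's roughly how far the actual poll cycle time runs past the configured scan rate.

```python
# Simplified model of the device overload percentage: how much longer
# one actual poll cycle takes than the configured scan rate allows.

def overload_pct(scan_rate_ms, actual_cycle_ms):
    return max(0.0, (actual_cycle_ms - scan_rate_ms) / scan_rate_ms * 100.0)

print(overload_pct(1000, 1000))   # 0.0  -> driver is keeping up
print(overload_pct(1000, 1020))   # 2.0  -> slightly behind
print(overload_pct(3500, 20000))  # ~471 -> badly swamped
```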

4 Likes

@pturmel @michael.flagler Thanks for the quick reply. I will implement these and check. Also, can you tell me whether there is any tag limit on Ignition's OPC driver for Rockwell? And after I sort out the tags and add only the ones Ignition will actually use, if I still have some latency, how do I track it down?

No limits.

Start by sharing a screenshot of your Ignition device's diagnostics page, and perhaps the PLC's own (web) diagnostics page that shows Class 3 Msg utilization.

1 Like

Not "ideally". It should always be zero. Any non-zero value means your PLC comms are swamped.

1 Like

What I mean by "ideally" is that the number will fluctuate, especially right after a download; it takes a while for that average to come down. On a project I'm on now, I'm hitting over 40k tags on a single processor and see it go up to 1-2% frequently, but that's not enough lag to cause any issues. On a 1000ms scan rate, 1020ms isn't a dealbreaker (at least in oil and gas, where things don't move that fast on these big plants).

1 Like

Fair enough. I suppose I should qualify the statement to "any persistent non-zero overload value".

2 Likes

One other recommendation I'll make is on the device configuration: leave the maximum concurrent requests at 2 (that seems to be the sweet spot for me), but under Advanced, change the connection size to 4000 bytes instead of the default 500 bytes, unless you're talking to an old processor through something like an old ENBT card.

@pturmel also has a great driver that is much faster than the built-in driver, but that all depends on if this is Ignition Edge (custom modules like his can't be used on Edge). While it is an additional cost, if you're not able to change the structure of the PLC to get faster comms, his driver may help.
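Why the larger connection size matters can be shown with simple arithmetic. This is a back-of-envelope sketch (the payload figure is hypothetical, and it ignores per-item overhead and the driver's actual request-packing logic): more payload per CIP request means fewer round trips per poll cycle.

```python
import math

# Fewer, larger requests per poll cycle with a bigger CIP connection
# size (illustrative numbers only).

def requests_per_cycle(total_payload_bytes, connection_size_bytes):
    return math.ceil(total_payload_bytes / connection_size_bytes)

payload = 200_000  # hypothetical: ~25k tags averaging 8 bytes each
print(requests_per_cycle(payload, 500))   # 400 requests/cycle
print(requests_per_cycle(payload, 4000))  # 50 requests/cycle
```

Each request costs a round trip to the PLC, so an 8x reduction in request count is a large win on every poll cycle.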

@pturmel I'd be curious about the 4000-byte connection size and how it relates to the MTU size of Ethernet, considering it is larger than the default MTU of 1500 bytes. Do packets get fragmented, or do both ends automatically know to break the request up into multiple packets?

1 Like

The OS on both ends fragments efficiently and streams based on the negotiated TCP window size (typically much larger). So, for devices that can do 4000 bytes, it is almost always a win.

Since the current EtherNet/IP spec requires the error response for an oversize request to report what size is supported, I chose to default to 4k and gracefully fall back, instead of requiring a deliberate change. Some older devices do not make that report, so I still have the adjustment available. (And some devices simply don't comply with the spec.)

We've seen performance gains up to ~6-8 concurrent requests. I think it depends on your hardware, tag makeup, existing comms load, etc. It's best to experiment.

This. And if a high-latency network path is involved, expect to benefit from more concurrency.
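The concurrency/latency relationship can be sketched with an idealized model (it assumes requests are fully independent and the PLC can service them in parallel, which real hardware only partially does, so treat the numbers as directional):

```python
# Idealized poll cycle time: total requests, round-trip time per
# request, and how many requests are in flight at once.

def cycle_time_ms(num_requests, rtt_ms, concurrent_requests):
    return num_requests * rtt_ms / concurrent_requests

# 400 requests at 1 ms RTT on a local network:
print(cycle_time_ms(400, 1, 2))   # 200 ms  -> fine for a 1 s scan
# Same load over a 20 ms WAN link:
print(cycle_time_ms(400, 20, 2))  # 4000 ms -> can't hold a 1 s scan
print(cycle_time_ms(400, 20, 8))  # 1000 ms -> back on the edge
```

On a low-latency LAN, 2 concurrent requests may already saturate the PLC; on a high-latency path, most of each cycle is spent waiting, so extra in-flight requests recover real throughput.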

1 Like


Whoa! So a 3.5-second scan target becomes a 20+ second actual polling pace. Prune your tag list first. Really. If that 100k+ item count is after pruning, you probably need to try my EtherNet/IP driver module in place of the IA driver.

2 Likes

Will explore the EtherNet/IP driver module.

Yeah, you can try bumping the 500-byte connection size up to 4000, but it's still not going to get to 0 with that many tags. Also, do you need them all at 3500ms? Are the tags structured in UDTs or arrays? Is this using PlantPAx?

As an example, here's my most recent project (missing from the screenshot is my 30000ms scan rate for string data), which isn't using Phil's driver, but due to the structure of its tags, etc., it performs well:

Although the Ethernet comms of my processor are pushed to their limit:
[screenshot: processor Ethernet/CIP diagnostics near capacity]

My tags are in UDTs, Add-On Instructions, and a modified PlantPAx library. I need updates faster than 500ms. I am using a 1756-L74 PLC with redundancy.

You need my driver.

Not going to happen for 100k monitored items. You need to prune. Split tags into separate tag groups for the ones you need fast versus slow. For really fast tags, you may need class 1 comms instead of class 3 (or both).

My module's user manual has a great deal of advice for this kind of problem.
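The fast-versus-slow triage above can be sketched in plain Python. The group names, thresholds, and tag paths below are illustrative only (this is not an Ignition API, just the sorting logic):

```python
# Partition monitored items by how fresh each one actually needs to
# be, then assign each bucket to its own tag group. Thresholds are
# made-up examples.

def assign_tag_group(required_update_ms):
    if required_update_ms < 500:
        return "Class1/IO"  # candidate for class 1 (implicit) comms
    elif required_update_ms <= 1000:
        return "Fast"
    elif required_update_ms <= 5000:
        return "Default"
    else:
        return "Slow"

tags = {
    "Line1/FlowPV": 250,     # hypothetical tag paths
    "Line1/ValveCmd": 1000,
    "Tank3/Level": 5000,
    "Recipe/Name": 30000,
}
groups = {name: assign_tag_group(ms) for name, ms in tags.items()}
print(groups)
```

The point of the exercise: after honest triage, the sub-500ms bucket is usually a small fraction of the total, and only that bucket needs the expensive comms treatment.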