Are readBlocking reads affected by Tag Group updates?

I'm having some issues with synchronising tag reads.

The PLC has a number of arrays that store parts information, most importantly the part ID. There are 4 arrays, one for each of the different queues the parts are in. These queues have lengths of 50, 20, 20, and 2 parts. Parts move in and out of the arrays all the time.

I need to read all part IDs from these array tags in one hit and combine them to produce a complete list of parts. I add a column to show each part's location, based on the array it came from.

Occasionally I'm seeing two parts with the same ID in two of the queue arrays at the same time. What I'm presuming is happening is that I'm reading one array's tags, then the next one, and in that time the PLC has moved the parts between the arrays. I am doing the reads of each array separately, which I'm hoping is the cause. However, my question is: if I read all 4 arrays in one readBlocking, am I guaranteed to get the values of the tags all at once? Or can the tag group update these tags in the middle of my read and produce the same result I'm seeing?

Otherwise, what's my best course of action to get around this?

You cannot naturally synchronize across multiple PLC tags. At the very least, you will need to make a single datatype containing the four tables, and make a single tag of that type.

You cannot synchronize across multiple tag read requests. The data types involved must be entirely readable and fit within the communication buffer for a single read.

You cannot synchronize subscribed tags¹. Any .readBlocking() operation can get started and the subscriptions can deliver fresh data in the middle of it. Use a single system.opc.readValues() (see the sketch below). Use Wireshark to confirm that a single Read Tag Service packet is produced each time.

¹ My Modbus and new EtherNet/IP modules support a special OPC Item Path of [device]@barrier which delivers a timestamp marking the completion of subscription deliveries for the subscription pace with which it is included. A tag change event monitoring this timestamp can safely gather the rest of the values with readBlocking. Which might suffice for you (instead of the OPC read) if you make the other two adjustments.
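
For what it's worth, a minimal sketch of what that single system.opc.readValues() call could look like from a gateway script. "Ignition OPC UA Server" is the default internal server name, and "[PLC]", the queue array names, and the .PartId member are all placeholders for your own device and UDT layout:

```python
# A minimal sketch, not production code. Server, device, and tag names are assumptions.
OPC_SERVER = "Ignition OPC UA Server"
QUEUES = [("Queue1", 50), ("Queue2", 20), ("Queue3", 20), ("Queue4", 2)]

def readAllParts():
    itemPaths = []
    locations = []
    for queueName, length in QUEUES:
        for i in range(length):
            # OPC item path format for the Ignition Logix driver; verify against your tags.
            itemPaths.append("ns=1;s=[PLC]%s[%d].PartId" % (queueName, i))
            locations.append(queueName)

    # One readValues() call covering every element, so the whole set is handed to the
    # driver as a single batch instead of four separate reads.
    qvs = system.opc.readValues(OPC_SERVER, itemPaths)

    parts = []
    for location, qv in zip(locations, qvs):
        if qv.quality.isGood() and qv.value:
            parts.append({"location": location, "partId": qv.value})
    return parts
```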

If you cannot make the PLC tag adjustments, your only other option is to add PLC code to provide a "quiet time" where the PLC doesn't change the data. The quiet time must extend until Ignition echoes the trigger or otherwise reliably signals that it has read everything.
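
As a rough illustration only (every tag path and name below is invented), the Ignition side of such a handshake could be a tag change event on the PLC's trigger whose script reads everything and then echoes an acknowledgment back for the PLC to see:

```python
# Hypothetical tag paths; substitute your own provider, folders, and names.
QUEUE_TAGS = [
    "[default]PLC/Queue1",
    "[default]PLC/Queue2",
    "[default]PLC/Queue3",
    "[default]PLC/Queue4",
]
ACK_TAG = "[default]PLC/HMI_Ack"

def onDataReady():
    # Called from the tag change event on the PLC's "data ready" trigger.
    # The PLC must hold the arrays still until it sees the acknowledge.
    results = system.tag.readBlocking(QUEUE_TAGS)
    arrays = [qv.value for qv in results]

    # ... combine the four arrays into the parts list here ...

    # Echo back so the PLC knows Ignition has a consistent snapshot and can
    # resume moving parts between queues.
    system.tag.writeBlocking([ACK_TAG], [True])
    return arrays
```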

What's to stop the next "wave" of OPC subscription updates from being applied in the middle of the readBlocking call? You could end up reading a mix of old/new values.

There's nothing to stop another batch from coming in, true, but that would only apply if something held up the tag change event script. Possible, but then you have bigger problems. For any subscription pace in the hundreds of milliseconds or slower, a tag change event whose first operation is a readBlocking across all of the tags will effectively capture the equivalent of system.opc.readValues() across that set.

I would say the closest thing you can get to an "atomic" update of more than one tag is to issue a system.opc.readValues to a server.

What a server does with that, whether the underlying protocol can fit all those reads into a single request, and whether all those tags were updated atomically in the PLC and it can guarantee comms are serviced atomically between scans or at the start or end of a scan... unknown.

If you want a guaranteed transaction you need logic on both sides to indicate ready/done.

Concur.

My choice of @barrier was to explicitly mimic the use of that term in modern CPU memory access ordering. It isn't any kind of lock or pause, just a marker at the end of each subscription cycle. If the cycle is overloaded, the window of opportunity is short. (The duration of my abbreviated optimization recheck in my modules. A tag change event with a dedicated thread should still easily beat it.)

If you miss the window, it can be detected by checking the QV timestamps against the barrier timestamp.
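
One way that check could look in a tag change event on the @barrier item, with invented tag paths and the barrier's qualified value passed in (the exact event variable names depend on where the script lives):

```python
QUEUE_TAGS = [
    "[default]PLC/Queue1",
    "[default]PLC/Queue2",
    "[default]PLC/Queue3",
    "[default]PLC/Queue4",
]

def snapshotAfterBarrier(barrierQV):
    # barrierQV: the qualified value delivered by the [device]@barrier item.
    results = system.tag.readBlocking(QUEUE_TAGS)

    # If any array value carries a timestamp newer than the barrier, a later
    # subscription cycle has already started delivering: treat the snapshot as missed.
    barrierMillis = barrierQV.timestamp.getTime()
    if any(qv.timestamp.getTime() > barrierMillis for qv in results):
        return None   # missed the window; wait for the next barrier event
    return [qv.value for qv in results]
```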

Thanks guys, I'll see if I can get some handshaking with the PLC sorted, but we're approaching the memory limit in the PLC already, so this may not be a possibility.

If this isn't possible, then I'll go down the route of a grouped OPC read, but I'll add some validation to check for duplicate part IDs across the parts arrays, along with checking them against the original parts list in case the duplicate parts are actually supposed to be there (a very fringe case, but not impossible). I think this will solve my issue :crossed_fingers:
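
Something along these lines is what I'm thinking of for the duplicate check, assuming my combined list ends up as a list of dicts with location and partId keys (placeholder field names):

```python
from collections import defaultdict

def findDuplicateParts(parts):
    # parts: list of dicts like {"location": "Queue1", "partId": 12345}
    seenIn = defaultdict(list)
    for part in parts:
        seenIn[part["partId"]].append(part["location"])
    # Return only the part IDs that appear in more than one queue.
    return dict((pid, locs) for pid, locs in seenIn.items() if len(locs) > 1)
```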

Curious about using Wireshark to make the single read check. I haven't much experience with Wireshark unfortunately :confused:
What am I looking for? Here's what I think captures the opc.readValues function call and return:

Where was this capture taken and what kind of PLC/driver is this?

Taken from my laptop, filtered to the Ignition GW's IP address. PLC is Rockwell Logix. I probably should have said that in my original reply!

You need to run the capture on the Gateway. You're only capturing traffic between your laptop and the Gateway right now, not between the Gateway and PLC.

Lol of course :man_facepalming: It's too late here...

We won't be able to tell from a screenshot whether it was one request or not. Even knowing the tags and IPs involved, it will be difficult to pick out the read request from the normal subscription polling traffic.

What if I disable all other tags? Or at least set the Tag Group rate to something stupidly large?

Disabling everything would be the best. You'll still see some traffic, but it won't be the tag read service, so it should be enough.

Ok, so I have a capture, but it's too long to post in a screenshot without knowing exactly where it is. Not exactly sure what I'm looking for though. Can I filter for a tag or something?

Do the Ignition Logix loggers help at all?

I think I found the TCP stream??

At least, I can find one of the tag values in there (it's a string :/)

Can you post the actual capture somewhere?

PM'd you
