Indirect Tag Delay

Hi

Ignition 8.0.4, Vision, on Ubuntu.

We have lots of similar equipment and designed a Vision window for it using indirect tags.

When opening this window, we see all tags in the OPC_Waiting status, see below.

After 30-45 seconds they become normal, with the right values.

This delay is too big for our project, and we can’t find the reason why it happens.

Please help us solve this.

Thanks

Are you using leased scan classes? Are the device connections involved overloaded?

No, we are using a “Direct” scan class with a long rate at the moment (60 sec).

We have one common window for ~50 units. When opened, this window receives the unit’s code as a custom parameter. This code is used in the “indirect tag” property of every binding on the window.
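For illustration, here is a rough Python sketch of the substitution an indirect tag binding performs: the window’s custom parameter is spliced into a tag path template via {1}-style references. The template and unit code below are made up; the helper is hypothetical, not Ignition API.

```python
# Hypothetical sketch of what an indirect tag binding does: substitute
# a window's custom parameter into a tag path template. Vision indirect
# bindings use {1}, {2}, ... as placeholders for bound properties.
def resolve_indirect_path(template, params):
    path = template
    for index, value in params.items():
        path = path.replace("{%d}" % index, str(value))
    return path

# e.g. one window shared by ~50 units, parameterized by unit code
template = "[default]Units/{1}/Temperature"
print(resolve_indirect_path(template, {1: "Unit07"}))
# -> [default]Units/Unit07/Temperature
```

Each time the parameter changes, the binding resolves a new path like this and subscribes to the new tag, which is where a subscription round trip to the gateway comes in.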

We suspect there is a long delay in “redirecting” the indirect tags. But how can we influence that?

Hmmm. Sounds pretty normal. Is the client local to the gateway? Just wondering if there is a latency impact. New tag subscriptions make a round trip to the gateway.

Also, is the stale timeout in the tag group shorter than the rate?

We performed the following experiment.
We created an empty window with one custom parameter.
Then we added a text field and a button that, when pressed, sets the parameter value from the text field.
Finally, we added a value field with an indirect tag binding that uses the custom parameter.

After changing the parameter, the value was sometimes updated quickly (1-2 seconds), but more often only after 30-40 seconds.

The client is not local to the gateway.
These are the tag group parameters:

This can be explained by your tag group settings: "Read after write" and "Optimistic writes" are both false. So you won't get feedback from your writes until the next scheduled read. Which can be up to 60 seconds. I would recommend using read after write for such a slow rate. (Naming it "quick" is rather amusing...)

If Ignition isn’t the only thing that will write to a given tag then I’d recommend not using the “Read after write” setting because there’s a somewhat dangerous race condition and it’s more likely to occur when using slow scan classes / subscriptions.

Imagine, if you will, a boolean tag in a 30 second scan class.

  1. driver polls value=false, next poll in 30s
  2. user writes value=true to tag, read-after-write confirms value=true, ignition tag now set to true
  3. some seconds later, before the next poll, an external source (script, PLC logic, whatever) sets the value to false again
  4. driver eventually polls again, reads value=false, no change since last poll, so no data change is sent to client

The tag is now false in the PLC but true in Ignition.
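The four steps above can be simulated in a few lines of Python. This is a toy model, not driver code: the point is that a read-after-write updates the Ignition tag without touching the driver’s poll-comparison cache.

```python
# Minimal simulation of the race: change-based reporting misses a value
# that flips and flips back between polls, because the out-of-band
# read-after-write never updates the driver's "last polled" cache.
class PollingDriver:
    def __init__(self):
        self.last_polled = None

    def poll(self, device_value):
        # report a data change only vs. the last *polled* value
        changed = device_value != self.last_polled
        self.last_polled = device_value
        return changed

driver = PollingDriver()
ignition_tag = None

# 1. driver polls value=False
if driver.poll(False):
    ignition_tag = False
# 2. user writes True; read-after-write updates the Ignition tag
#    directly, bypassing the driver's poll cache
ignition_tag = True
# 3. before the next poll, PLC logic sets the device back to False
# 4. driver polls False again: no change vs. last poll, no report
if driver.poll(False):
    ignition_tag = False

print(ignition_tag)  # -> True, while the PLC actually holds False
```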

You can work around this by using OPC Read Mode instead of Subscribe Mode or by ensuring that multiple systems will not write to that tag. Or just simply not using Read-after-write.
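To see why read mode sidesteps the race, compare the two scan behaviors side by side in the same toy model (again a sketch, not Ignition internals): read mode overwrites the tag with a fresh read every scan, so there is no “no change since last poll” suppression to get stuck behind.

```python
# Toy contrast of Subscribe Mode vs. OPC Read Mode scanning.
class SubscribedTag:
    def __init__(self):
        self.value = None
        self._last_polled = None

    def scan(self, device_value):
        if device_value != self._last_polled:  # change-based reporting
            self.value = device_value
        self._last_polled = device_value

class ReadModeTag:
    def __init__(self):
        self.value = None

    def scan(self, device_value):
        self.value = device_value              # unconditional refresh

sub, rd = SubscribedTag(), ReadModeTag()
for tag in (sub, rd):
    tag.scan(False)
    tag.value = True    # read-after-write sets the tag to True
    tag.scan(False)     # device flipped back to False before this scan

print(sub.value, rd.value)  # -> True False : only read mode recovered
```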

Some time in the future we will introduce additional OPC UA settings to Tag Groups to allow you to switch the subscription’s data change trigger from Status/Value to Status/Value/Timestamp, which would also alleviate the issue, at the cost of increased traffic and performance overhead between UA client and server.

I meant that the 30-40 second delay occurs not when the tag value updates, but when the tag path changes via the “indirect tag” property.

Whoa! Drivers don't cache the last read value for the poll comparison of the same subscribed node? I would call that a bug.


The last value of polls for subscribed items is kept (up in the OPC stack, not the driver), but any independent reads occurring have no state associated with them.

This scenario only started happening in 8.0 (at least with the Ignition OPC UA server).

In 7.9 the read would also update the subscribed item (as opposed to being cached and compared later), but this actually led to spurious sampling and subsequent reporting of value changes, and was non-compliant behavior. An OPC UA server can’t sample values faster than a client requests them, only slower.

Caching all read values would still require me to set the cached value before the polled value and result in an extra sample flowing through the OPC monitored item and data change trigger.

I’m curious now if this behavior occurs with other OPC servers. Might test it out today.

Ah, this explains why I haven't pulled my hair out over this yet. It seems a subscription should maintain two values: the last one reported to the subscriber, and the last one actually received from the device. The latter would be the one updated by a direct read. A too-soon report to the subscriber could then be suppressed per the spec without losing the fact that a report is needed at the next poll.
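The two-value bookkeeping suggested here can be sketched as follows (hypothetical pseudocode of the proposal, not actual monitored-item internals). Note what it produces: the next poll re-reports the same value the subscriber already saw, which is exactly the behavior discussed below.

```python
# Sketch of the proposed design: keep the last value reported to the
# subscriber separate from the last value actually seen from the
# device. An out-of-band read updates only the latter, deferring any
# due report to the next poll.
class MonitoredItem:
    def __init__(self):
        self.last_reported = None   # what the subscriber last saw
        self.last_received = None   # what the device last returned

    def on_read(self, value):
        # direct read: remember the value, but don't emit a
        # (possibly too-soon) data change notification
        self.last_received = value

    def on_poll(self, value):
        # a report is due if an intervening read revealed a change,
        # or if this poll itself differs from the last report
        pending = self.last_received != self.last_reported
        self.last_received = value
        if pending or value != self.last_reported:
            self.last_reported = value
            return value            # notification payload
        return None                 # suppressed, nothing to report

item = MonitoredItem()
item.on_poll(False)           # initial poll: reports False
item.on_read(True)            # read-after-write observes True
print(item.on_poll(False))    # -> False: re-reported, client in sync
```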

This would also lead to non-compliant behavior though.

From the perspective of a client, you’d receive a value of false at T=0, then for apparently no reason another value of false at T=30. If you saw this as a client, and your DataChangeTrigger was Status/Value (as it is in Ignition), not Status/Value/Timestamp, then it would appear the server was broken.
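The difference between the two triggers can be made concrete with a small filter function (a sketch of the semantics described in OPC UA’s DataChangeFilter, with made-up sample tuples, not server code):

```python
# Under Status/Value, a fresh sample with an identical value and status
# is filtered out. Under Status/Value/Timestamp, any fresh sample
# passes, because its source timestamp differs even when the value
# does not.
def should_notify(trigger, prev, curr):
    # prev/curr are (value, status, source_timestamp) samples
    if trigger == "Status/Value":
        return (curr[0], curr[1]) != (prev[0], prev[1])
    if trigger == "Status/Value/Timestamp":
        return curr != prev
    raise ValueError("unknown trigger: %s" % trigger)

prev = (False, "Good", 0)
curr = (False, "Good", 30)   # same value, new sample 30s later
print(should_notify("Status/Value", prev, curr))            # -> False
print(should_notify("Status/Value/Timestamp", prev, curr))  # -> True
```

This is why a duplicate `false` at T=30 looks broken under Status/Value but is perfectly legal under Status/Value/Timestamp.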

Sorry to hijack this @deforder. I don’t really have anything to add to your actual problem. Maybe call support and explain to or show them what’s going on.


Eww. I would argue IA's own server needs to be non-compliant then, at least for the subscriptions from the same client that issued the out-of-band read. Or is there some other aspect of the UA spec that is supposed to mitigate this phenomenon? If not, I would say the spec is pathological here.

{ Grrr. Now I need to read the UA spec. }

My apologies, @deforder. This side issue is rather important.


Well, as I mentioned earlier, the ability to set a DataChangeTrigger that includes timestamp changes would mitigate this.

This “bug” is really interesting to me, because in my view every one of the component systems (drivers, OPC, Ignition tags) is acting correctly, yet the undesirable behavior emerges only when they are viewed as a composite system.

In a way, this is just a natural consequence of the fact that in a polling-based system you don’t get to see the value changes that occur in between polls, no matter how important they are to you.