Delay in tag update for tags using a driven-type scan class

IPv6 can be disabled via the ignition.conf file:

wrapper.java.additional.8=-Djava.net.preferIPv4Stack=true

Then restart the gateway and verify with netstat -anp.

It's extremely unlikely that your issue has anything to do with listening on IPv6.

We just restarted the system and will be monitoring it over the next couple of hours. FWIW, I am attaching a thread dump that I just downloaded after the system restart with everything working fine. I'd appreciate a quick review to let me know if there's anything that stands out in this as it stands now.

ignition-otto-motors_thread_dump20210813-131745.txt (116.3 KB)

What stands out to me in that dump is that you are on version 8.0.9.

8.1.3 included a change for something else that could be at play here:

Use separate batches for writes to different OPC servers to prevent a slow-responding server from blocking an unrelated write to a fast-responding server.

Interesting. Is there a link you can share so I can better understand the update/fix?

What's the best way to update the VM now from 8.0.9 to 8.1.3, and are there any risks to be aware of?

It's in the release notes for 8.1.3; there isn't any more detail to it than what I shared though.

Upgrades are always supposed to be trouble-free… just run the upgrader. You'd want to upgrade to 8.1.9 at this point though. There's not usually a reason to upgrade to anything but the current version if you're doing an upgrade.

There's always risk though. Lots of people use staging/dev environments for this kind of thing. You could also call support and have them walk you through it. Generally if something goes wrong it's easy to roll back. You can take your own gateway backup before you start and the installer should also make one before it proceeds.

Thanks. Will go through the release notes and keep this in mind.

Posting another thread dump as we got busier in the process. The data and timing still look good though.

ignition-otto-motors_thread_dump20210813-155222.txt (114.9 KB)

@Kevin.Herron We just got the issue to occur on the live system. Here are 2 thread dumps from the issue. I see lots of threads that show (WAITING) or (TIMED_WAITING) next to them. Not sure what they mean or what exactly I'm looking for. Could you have a look and let me know if you're able to decipher what the issue could be?

ignition-otto-motors_thread_dump20210813-231000_issue.txt (125.2 KB)
ignition-otto-motors_thread_dump20210813-231507_issue.txt (118.9 KB)

Both dumps show project tag change scripts blocked on synchronous writes, and one of the dumps also shows a tag provider thread waiting for an OPC write to finish.

It looks like you're using project tag change scripts, so the original loggers we turned on won't reflect this, but it's a similar problem.

Project tag change scripts execute synchronously and serially. If you have 20 tags all triggering the same script, and all 20 change, then it will execute 20 times in a row, not 20 times all at once, and if each execution is slow the subsequent executions will be waiting.
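That serial behavior can be illustrated with a minimal Python sketch (this is not Ignition's actual implementation, just a model of the described queuing): one worker thread drains an event queue, so twenty simultaneous tag changes still produce twenty back-to-back executions.

```python
import queue
import threading
import time

# Sketch only: a project tag change script behaves like a single
# worker pulling change events off a queue, one at a time.
events = queue.Queue()

def tag_change_script(event):
    # Stand-in for a slow script body, e.g. a blocking OPC write.
    time.sleep(0.01)

def worker():
    # One thread, one event at a time: executions are serial.
    while True:
        event = events.get()
        if event is None:
            break
        tag_change_script(event)
        events.task_done()

t = threading.Thread(target=worker)
t.start()

start = time.time()
for i in range(20):              # 20 tags all change "at once"...
    events.put(("tag-%d" % i, "newValue"))
events.join()                    # ...but the script runs 20 times in a row
elapsed = time.time() - start

events.put(None)                 # shut the worker down
t.join()

# Total time is roughly 20 * (per-execution time), not 1 * it.
print("elapsed: %.2fs" % elapsed)
```

With a 10 ms body, the last event waits behind the other nineteen, so total time is around 200 ms rather than 10 ms.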

If you have multiple tag change scripts defined, and tags are being written to that belong to multiple OPC servers, then the enhancement in the release note I mentioned could also still be a factor.


Thanks Kevin. By project tag change scripts, I assume you mean gateway event scripts that are monitoring many different project tags. I see that in the thread dumps as well but wasn't sure if that was it. We do have 2 gateway event scripts, each monitoring over 100 tags at once, since each does the same thing at a different location. So to resolve this, if I split up the event scripts to monitor fewer tags and create 50 event scripts, where each only monitors say 5 tags, I assume this will help as each script gets a separate thread. However, would this change have any other consequences that we should be aware of? In addition to splitting the scripts up, do you still feel the update to 8.1.9 would help with this as well?

Coming to the tag provider thread, I'm not sure what you're referring to. When I search for tag-provider in the dump, I see some results but not much detail. Are you just looking at the WAITING status for the thread?

It's this thread:

Thread [provider-default-batch-1] id=67, (RUNNABLE)
    owns synchronizer: java.util.concurrent.ThreadPoolExecutor$Worker@510383bb
    kotlin.coroutines.jvm.internal.ContinuationImpl.<init>(ContinuationImpl.kt:102)
    com.inductiveautomation.ignition.gateway.opcua.util.ManagedObject$get$1.<init>(ManagedObject.kt)
    com.inductiveautomation.ignition.gateway.opcua.util.ManagedObject.get(ManagedObject.kt)
    com.inductiveautomation.ignition.gateway.opcua.client.connection.OpcUaConnectionWrite$writeBlocking$1.invokeSuspend(OpcUaConnectionWrite.kt:63)
    kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
    kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:56)
    kotlinx.coroutines.EventLoopImplBase.processNextEvent(EventLoop.common.kt:271)
    kotlinx.coroutines.BlockingCoroutine.joinBlocking(Builders.kt:79)
    kotlinx.coroutines.BuildersKt__BuildersKt.runBlocking(Builders.kt:54)
    kotlinx.coroutines.BuildersKt.runBlocking(Unknown Source)
    kotlinx.coroutines.BuildersKt__BuildersKt.runBlocking$default(Builders.kt:36)
    kotlinx.coroutines.BuildersKt.runBlocking$default(Unknown Source)
    com.inductiveautomation.ignition.gateway.opcua.client.connection.OpcUaConnectionWrite$DefaultImpls.writeBlocking(OpcUaConnectionWrite.kt:62)
    com.inductiveautomation.ignition.gateway.opcua.client.connection.OpcUaConnectionWrite$DefaultImpls.write(OpcUaConnectionWrite.kt:53)
    com.inductiveautomation.ignition.gateway.opcua.client.connection.OpcUaConnection.write(OpcUaConnection.kt:35)
    app//com.inductiveautomation.ignition.gateway.opc.OpcConnectionManagerImpl$ConnectionWrapper.write(OpcConnectionManagerImpl.java:620)
    app//com.inductiveautomation.ignition.gateway.opc.OpcConnectionManagerImpl.lambda$write$12(OpcConnectionManagerImpl.java:385)
    app//com.inductiveautomation.ignition.gateway.opc.OpcConnectionManagerImpl$$Lambda$1033/0x0000000800ece840.map(Unknown Source)
    app//com.inductiveautomation.ignition.gateway.util.GroupMapCollate.lambda$groupMapCollate$0(GroupMapCollate.java:25)
    app//com.inductiveautomation.ignition.gateway.util.GroupMapCollate$$Lambda$1034/0x0000000800ecec40.map(Unknown Source)
    app//com.inductiveautomation.ignition.gateway.util.GroupMapCollate.lambda$groupMapCollateIndexed$5(GroupMapCollate.java:53)
    app//com.inductiveautomation.ignition.gateway.util.GroupMapCollate$$Lambda$1019/0x0000000800ec5840.apply(Unknown Source)
    java.base@11.0.5/java.util.stream.ReferencePipeline$3$1.accept(Unknown Source)
    java.base@11.0.5/java.util.HashMap$EntrySpliterator.forEachRemaining(Unknown Source)
    java.base@11.0.5/java.util.stream.AbstractPipeline.copyInto(Unknown Source)
    java.base@11.0.5/java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown Source)
    java.base@11.0.5/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(Unknown Source)
    java.base@11.0.5/java.util.stream.AbstractPipeline.evaluate(Unknown Source)
    java.base@11.0.5/java.util.stream.ReferencePipeline.collect(Unknown Source)
    app//com.inductiveautomation.ignition.common.util.Futures.sequence(Futures.java:26)
    app//com.inductiveautomation.ignition.gateway.util.GroupMapCollate.groupMapCollateIndexed(GroupMapCollate.java:62)
    app//com.inductiveautomation.ignition.gateway.util.GroupMapCollate.groupMapCollate(GroupMapCollate.java:28)
    app//com.inductiveautomation.ignition.gateway.opc.OpcConnectionManagerImpl.write(OpcConnectionManagerImpl.java:378)
    app//com.inductiveautomation.ignition.gateway.tags.actors.factories.value.opc.OpcActorFactory$OpcWriteBatch.execute(OpcActorFactory.java:1022)
    app//com.inductiveautomation.ignition.gateway.tags.evaluation.BatchContextImpl$OpController.run(BatchContextImpl.java:175)
    java.base@11.0.5/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
    java.base@11.0.5/java.util.concurrent.FutureTask.run(Unknown Source)
    java.base@11.0.5/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)
    java.base@11.0.5/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    java.base@11.0.5/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    java.base@11.0.5/java.lang.Thread.run(Unknown Source)

Prior to 8.1.3 there was only a single global batch in which writes to OPC tags were executed, regardless of which server they belonged to. We changed it so the batching is "keyed" per OPC connection instead, so a slow-responding server can't hold up write batches destined for a different connection altogether.
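The idea can be sketched in a few lines of Python (illustrative only, not Ignition internals): instead of one global list of pending writes, group them by connection key so each server's batch can be dispatched independently.

```python
from collections import defaultdict

# Sketch of "keyed" batching: pending writes tagged with the OPC
# connection they belong to. Names like "ServerA" are hypothetical.
pending_writes = [
    ("ServerA", "tagA1", 10),
    ("ServerB", "tagB1", 20),
    ("ServerA", "tagA2", 30),
]

def batch_by_connection(writes):
    # Each OPC connection gets its own batch; a slow ServerA batch
    # no longer delays the ServerB batch, which was the pre-8.1.3
    # behavior with a single global batch.
    batches = defaultdict(list)
    for connection, tag, value in writes:
        batches[connection].append((tag, value))
    return dict(batches)

batches = batch_by_connection(pending_writes)
print(batches)
```

Each per-connection batch can then be handed to its own executor, so one slow server only stalls its own queue.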

You mentioned you have tags coming from multiple OPC connections, which is why I keep thinking this could be part of the issue.

Yes, this is correct.

What is actually queued here? Is it the entire script, or just the synchronous write?

Thank you!

A ā€œtaskā€ with all the necessary info required to eventually execute the script and tell it about what changed. So if you need to think of it as the entire script thatā€™s probably close enough.


The change was implemented on the system today to split up the gateway scripts. Things seem to be good so far. The thread dumps downloaded a couple of times during the day do not show any queuing of the tag change scripts.

We have a follow-up question: how do we actually know how many tags is too many for one gateway script to handle? Is there a limit to the number of tags per thread per second? Can this be calculated, or is it purely empirical?

It's not really about how many tags you have configured to trigger the script, it's more about how often the script will be triggered and how long the execution of that script takes.

So basically, yes, it needs to be determined empirically.
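One hedged way to gather that empirical data is to time the script body from inside the script itself; the sketch below uses plain Python with a hypothetical `handle_tag_change` wrapper (in an actual gateway script you might log the duration with `system.util.getLogger` instead of printing it).

```python
import time

# Sketch: record how long each execution of a tag change script takes,
# so "how many tags is too many" can be answered with real numbers.
durations = []

def do_work(event):
    # Stand-in for the real script body (e.g. writes, DB calls).
    time.sleep(0.005)

def handle_tag_change(event):
    start = time.time()
    do_work(event)
    durations.append(time.time() - start)

# Simulate 10 trigger events.
for i in range(10):
    handle_tag_change(("tag-%d" % i, i))

avg = sum(durations) / len(durations)
# Rule of thumb: if (trigger rate per second) * (avg duration in
# seconds) approaches 1, the script can't keep up and events queue.
print("avg execution: %.4fs" % avg)
```

Comparing the average duration against how often the tags actually change tells you whether one script can keep up or needs to be split further.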

Hi Aiyer,
Are these tags mapped through to the Fleet Manager operator and subscribers on your Otto Fleet Manager VM? You have to be very careful when writing to Fleet Manager. If you write a value to a tag that is invalid (from Fleet Manager's perspective), you get errors after a delay. I also found I had to use the UUID tags rather than the ID tags (e.g. WatchInterlockSubscriberUUID), as the ID tags were a bit buggy.
Or, maybe I'm talking about something completely different than the problem you are experiencing…?

Hey Alistair,

The issue here is very specific to the implementation of Appliance Proxy at this facility, where we have over 40 places and 14 robots in the system. I understand what you're referring to about the IAPI tags, although I can't say I'm 100% aware of the symptoms you're describing. Feel free to email me directly at my Clearpath email if you are still seeing the issues you described in your comment.

Hi @Kevin.Herron , when you say "project tag change scripts" do you mean gateway scripts?

Yes, these: Gateway Event Scripts - Ignition User Manual 8.1 - Ignition Documentation

Each project has its own set of them.
