Remote Third Party OPC Servers

Hello all,

I’ve just run into a curious issue. Has anybody else encountered this problem?

I have UaGateway installed to bridge Ignition with an OPC DA server containing about 23,000 nodes. The connection was successful and everything looks good: I’m able to connect in Ignition and browse with the Designer.

The problem comes when I try to add tags to Ignition from this server. Adding one or two tags works with no problem, but when I add hundreds of tags at once, they all show up with null values and bad or unknown quality, and any tag I add afterwards shows up the same way.

The only out-of-place message in the Ignition console is a timeout error:

WARN	6:07:07 PM	OpcUaConnection$UaSubscriptionChangeListener	Failed to create 441 items on subscription 'Scanclass 'Default[default]'': UaException: status=Bad_Timeout, message=request timed out after 120000ms
java.util.concurrent.ExecutionException: UaException: status=Bad_Timeout, message=request timed out after 120000ms
at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
at com.inductiveautomation.xopc2.client.OpcUaConnection$UaSubscriptionChangeListener.synchronizeSubscriptions(OpcUaConnection.java:1017)
at com.inductiveautomation.xopc2.client.OpcUaConnection$UaSubscriptionChangeListener.synchronizeSubscriptions(OpcUaConnection.java:858)
at org.eclipse.milo.opcua.stack.core.util.ExecutionQueue$PollAndExecute.run(ExecutionQueue.java:107)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: UaException: status=Bad_Timeout, message=request timed out after 120000ms
at org.eclipse.milo.opcua.stack.client.UaTcpStackClient.lambda$scheduleRequestTimeout$10(UaTcpStackClient.java:257)
at io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:581)
at io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:655)
at io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:367)
... 1 more

Does anyone have any ideas?

Thanks for your help.

If you’re on 7.9.1 you could increase the request timeout and/or lower the “Max Per Operation” setting. Both of these can be found in the Advanced category on the OPC UA connection settings page.

Lowering the max per operation without adjusting the request timeout is probably better.

It sounds like the DA server on the other side is extremely slow, so it may take a while until all your tags are fully subscribed.
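To see why smaller batches help with a slow server, here is a back-of-the-envelope sketch. The per-item rate below is a made-up assumption for illustration only, not a measured figure from this server; the point is simply that one huge request can exceed the fixed request timeout while smaller batches each finish well inside it.

```java
public class TimeoutBudget {
    // Illustrative arithmetic only; msPerItem is an assumed figure, not measured.
    public static void main(String[] args) {
        double msPerItem = 300;         // assume the slow DA side needs ~300 ms per item
        int requestTimeoutMs = 120_000; // the default request timeout seen in the log

        // One request carrying all 441 items would need ~132 s, past the timeout:
        System.out.println(441 * msPerItem > requestTimeoutMs);  // true

        // A 100-item batch would need ~30 s, comfortably inside the timeout:
        System.out.println(100 * msPerItem < requestTimeoutMs);  // true
    }
}
```

The total subscription time is roughly the same either way; batching just keeps each individual request under the timeout.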

edit: I see now this is posted in the 7.8 forum. I think those settings exist in 7.8 as well…

Thanks Kevin, those settings are indeed present.

I reduced the “Max Per Operation” setting to 100, and that seems to have solved the problem. Can you clarify exactly what the max per operation represents, and how it relates to the request timeout?

I’d also love to know why the default max per operation was 8192; I imagine that number has some significance.

Thanks again for the help!

It’s the maximum number of “items” that will be requested or sent in any single request from the UA client on that connection. In your case, it seems creating more than a certain number of monitored items at a time took longer than 2 minutes (the default request timeout). When the number of items exceeds the max per operation, the client batches them into multiple requests, using that value as the partition size.
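The batching described above amounts to splitting one large item list into fixed-size chunks. This is a minimal sketch of that partitioning arithmetic, not the actual Ignition or Milo client code; the class and method names are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchingSketch {
    // Hypothetical sketch: split a list of items into batches no larger than
    // maxPerOperation, the way a UA client partitions one large
    // CreateMonitoredItems call into several smaller service requests.
    static <T> List<List<T>> partition(List<T> items, int maxPerOperation) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < items.size(); i += maxPerOperation) {
            batches.add(items.subList(i, Math.min(i + maxPerOperation, items.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        // The 441 items from the log, with max per operation lowered to 100:
        List<Integer> items = new ArrayList<>();
        for (int i = 0; i < 441; i++) items.add(i);
        List<List<Integer>> batches = partition(items, 100);
        System.out.println(batches.size());        // 5 requests
        System.out.println(batches.get(4).size()); // last one carries 41 items
    }
}
```

Each of those smaller requests must individually complete within the request timeout, which is why lowering the partition size helped here.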

The 8192 default is completely arbitrary; it just seemed low enough that most low-end servers could handle it without returning Bad_TooManyOperations.

Got it, thanks for the info.