Ignition OPC UA Client connection to Ignition OPC UA Server failed

I just got called for an “Ignition server down.” It wasn’t actually down, but it may as well have been as far as production was concerned: Ignition’s OPC UA Client was refusing to connect to Ignition’s own OPC UA Server.

The error alternated between these two:

UaException: status=Bad_ConnectionClosed, message=connection closed
	at org.eclipse.milo.opcua.stack.client.transport.uasc.UascClientMessageHandler.channelInactive(UascClientMessageHandler.java:154)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:262)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:248)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:241)
	at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:389)
	at io.netty.handler.codec.ByteToMessageDecoder.channelInactive(ByteToMessageDecoder.java:354)
	at io.netty.handler.codec.ByteToMessageCodec.channelInactive(ByteToMessageCodec.java:118)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:262)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:248)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:241)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelInactive(DefaultChannelPipeline.java:1405)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:262)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:248)
	at io.netty.channel.DefaultChannelPipeline.fireChannelInactive(DefaultChannelPipeline.java:901)
	at io.netty.channel.AbstractChannel$AbstractUnsafe$8.run(AbstractChannel.java:819)
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:497)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at java.base/java.lang.Thread.run(Unknown Source)
8.1.9 (b2021080617)
Azul Systems, Inc. 11.0.11
UaException: status=Bad_InternalError, message=unsupported protocol: null
	at org.eclipse.milo.opcua.stack.client.DiscoveryClient.getEndpoints(DiscoveryClient.java:189)
	at com.inductiveautomation.ignition.gateway.opcua.client.ManagedClientKt.initialize(ManagedClient.kt:84)
	at com.inductiveautomation.ignition.gateway.opcua.client.ManagedClient$create$1.invokeSuspend(ManagedClient.kt:63)
	at com.inductiveautomation.ignition.gateway.opcua.client.ManagedClient$create$1.invoke(ManagedClient.kt)
	at com.inductiveautomation.ignition.gateway.opcua.util.Managed$get$deferred$1.invokeSuspend(Managed.kt:40)
	at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
	at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106)
	at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:571)
	at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:738)
	at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:678)
	at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:665)
8.1.9 (b2021080617)
Azul Systems, Inc. 11.0.11

The gateway logs were full of these errors, repeating over and over.

Two other forum posts pointed me in the right direction: append “/discovery” to the end of the discovery URL.
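The fix itself is just a URL edit. A minimal sketch of the suffix logic (the loopback hostname and default port 62541 here are illustrative, not taken from this gateway’s config):

```python
def ensure_discovery_suffix(url: str) -> str:
    """Append the /discovery suffix to an OPC UA discovery URL if it's missing."""
    trimmed = url.rstrip("/")
    return trimmed if trimmed.endswith("/discovery") else trimmed + "/discovery"

# Illustrative default loopback endpoint for Ignition's internal OPC UA server
print(ensure_discovery_suffix("opc.tcp://localhost:62541"))
# An already-suffixed URL is left unchanged
print(ensure_discovery_suffix("opc.tcp://localhost:62541/discovery"))
```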

Thanks @Kevin.Herron!

The failure was a little surprising: this Ignition server had been running 8.1.9 for months with no recent config changes, and the problem only popped up today. It also persisted through OPC UA module and gateway restarts. The server has been upgraded through many versions since 7.9.x, so the config was originally created in 7.9.x and migrated by later version installers.

If you’re applying this fix on a server that was upgraded from earlier versions and you use the endpoint configuration wizard to add the discovery suffix to the discovery URL, beware: the wizard will silently rename your server connection to “Ignition OPC UA Server” (without the dash between OPC and UA that older versions used). If you don’t catch this and your connection previously had the dash in its name, your tags will break due to the OPC server name mismatch.
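To make the rename pitfall concrete, here’s a hypothetical sketch (the helper and names are illustrative, not Ignition APIs) of why a one-character rename breaks tags:

```python
# Hypothetical illustration, not Ignition code: tags reference the OPC server
# connection by its configured name, so tags break when that name changes.
OLD_NAME = "Ignition OPC-UA Server"  # name as created by 7.9-era installs
NEW_NAME = "Ignition OPC UA Server"  # name the 8.1 endpoint wizard writes

def tag_is_broken(tag_server_name: str, configured_names: set) -> bool:
    """A tag fails to resolve when its stored server name has no matching connection."""
    return tag_server_name not in configured_names

# After the wizard silently renames the connection, old tags no longer resolve:
print(tag_is_broken(OLD_NAME, {NEW_NAME}))  # True
```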

I doubt this started happening out of the blue. Someone isn’t fessing up to changing a server setting or running through the endpoint discovery wizard, even if it wasn’t intentional.

Yes, the server was restarted before I got to it. I suspect something was changed weeks ago and only took effect with yesterday’s restart. I’m not sure why it was restarted, but the problem doesn’t appear to have been recurring before that restart.

One difference I noticed between this server and other servers that also lacked the “discovery” suffix on the internal OPC UA server-client connection is that the None security policy had been removed from this one. (I’ve since removed the None policy and added the discovery suffix on all the others.) Since that change wouldn’t take effect until a restart, I’m going to guess it was the root cause.

I think that’s what happened. If the None security policy is not in use, the discovery URL must have the /discovery suffix, because the discovery endpoint will be the only unsecured endpoint left that discovery can be performed against. Security policy changes don’t take effect until restart, so… that sounds like what happened.
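A rough mental model of that explanation (the URLs, port, and policy names are illustrative; this is not Ignition’s actual endpoint logic): GetEndpoints has to run over an unsecured channel, so the configured discovery URL must expose at least one None-policy endpoint.

```python
# Rough sketch, not Ignition's actual code: which URLs can serve an unsecured
# GetEndpoints call, depending on whether the None policy is enabled.
def endpoint_policies(none_policy_enabled: bool) -> dict:
    base = "opc.tcp://localhost:62541"          # illustrative default port
    policies = {base + "/discovery": {"None"}}  # discovery endpoint stays unsecured
    policies[base] = {"None", "Basic256Sha256"} if none_policy_enabled else {"Basic256Sha256"}
    return policies

def can_discover(url: str, policies: dict) -> bool:
    """Discovery only works at a URL that exposes an unsecured (None) endpoint."""
    return "None" in policies.get(url, set())

eps = endpoint_policies(none_policy_enabled=False)
print(can_discover("opc.tcp://localhost:62541", eps))            # False: only secured endpoints remain
print(can_discover("opc.tcp://localhost:62541/discovery", eps))  # True: unsecured discovery endpoint
```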
