Unable to connect to OPC server - Ignition in Docker container

I’m having issues trying to set up an OPC Connection.

I have Ignition 8.1.2 running on Linux Mint 20 in a docker container. I have port 55105 exposed. The OPC server is Siemens SimaticNET OPC Server V16 - S7OPT on port 55105 running on a VirtualBox VM on the same host as the docker container.

$ docker ps
CONTAINER ID   IMAGE                     COMMAND                  CREATED         STATUS                            PORTS                                                                            NAMES
e1d5cf8f36a4   kcollins/ignition:8.1.2   "tini -g -- /usr/loc…"   3 seconds ago   Up 2 seconds (health: starting)   8000/tcp, 8043/tcp, 0.0.0.0:8088->8088/tcp, 8060/tcp, 0.0.0.0:55105->55105/tcp   compose_gateway_1
b2ed66907495   postgres:13.1             "docker-entrypoint.s…"   2 hours ago     Up 2 seconds                      0.0.0.0:5432->5432/tcp                                                           compose_db_1

For some reason the OPC server won’t respond on the endpoint URL, though the discovery URL does seem to be responding.

I tried to use the substitute URL, but the resulting endpoints are not correct: they use port 4840 instead of 55105, and the security policies don’t match what’s configured on the server.

Any ideas?

What do the final settings look like if you edit and skip to advanced?

The server returns the endpoints you see in the final screenshot; that list is not influenced by any substitutions or changes you make to the discovery URL.

Try setting the “Host Override” to “10.0.1.156:55105”?

What doesn’t work about the current configuration?

UaException: status=Bad_SecurityChecksFailed, message=An error occurred verifying security.
at org.eclipse.milo.opcua.stack.client.transport.uasc.UascClientAcknowledgeHandler.onError(UascClientAcknowledgeHandler.java:258)
at org.eclipse.milo.opcua.stack.client.transport.uasc.UascClientAcknowledgeHandler.decode(UascClientAcknowledgeHandler.java:167)
at io.netty.handler.codec.ByteToMessageCodec$1.decode(ByteToMessageCodec.java:42)
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:498)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:437)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
at io.netty.handler.codec.ByteToMessageCodec.channelRead(ByteToMessageCodec.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:377)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:355)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:377)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.base/java.lang.Thread.run(Unknown Source)

8.1.2 (b2021020311)
Azul Systems, Inc. 11.0.9

Ok, that means you’re able to connect but the server is closing it down with Bad_SecurityChecksFailed.

Usually this means you need to figure out how to mark the Ignition client certificate as trusted in the server’s configuration. You will also have to do the same for the server’s certificate in the Ignition gateway, under OPC UA > Security > Client.

Usually, once a client tries connecting to the OPC server, I see that client in the server config application and can mark its certificate as trusted. But that’s not happening here. And all of this configuration is behaving strangely. I think it’s a networking issue with the container. I’ve set up connections to this OPC server several times before and have never seen anything like this. This is the first time attempting it from a container, though.

If I try using 10.0.1.156:55105 in the initial endpoint URL box it times out.

Not sure where port 55105 is coming from, then, because your server sure is responding on 4840 right now.

Getting this message means you are definitely connecting to something before it rejects you for security reasons.

One thing I'll mention here is that there should be no need to publish port 55105 on the Ignition container; publishing a port only matters for inbound connections, not for the outbound connection from the gateway to the Windows VM running SimaticNET. Are both of these VMs using a bridged-mode network configuration? The default is usually NAT, and for VirtualBox I believe that puts guests on a 10.0.2.0/24 network.
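To rule the container networking in or out, a plain TCP probe run from inside the container can confirm reachability before OPC UA security even enters the picture. A minimal sketch in Python (the host and port values are the ones from this thread; `tcp_reachable` is just an illustrative helper, not part of Ignition):

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a plain TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and "no route to host"
        return False

# From inside the container this would be, e.g.:
#   tcp_reachable("10.0.1.156", 55105)  # the SimaticNET endpoint port
#   tcp_reachable("10.0.1.156", 4840)   # the OPC UA default port
```

If 55105 comes back unreachable while 4840 succeeds, that would line up with the discovery responses apparently arriving on 4840.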

But alas, you're connecting to something...

Also, you could adjust your compose definition to use host networking to eliminate the extra hop through the container network that Compose creates, though it should be able to work as-is so long as your Linux Mint host can reach that other VM. (With host networking, you wouldn’t need to publish any ports.)
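For reference, the host-networking variant would look roughly like this in the compose file (a sketch based on the file posted below; with `network_mode: host` the `ports:` section is dropped, because the container shares the host's network stack and the gateway is reachable on the host's port 8088 directly):

```yaml
services:
  gateway:
    image: kcollins/ignition:8.1.2
    network_mode: host   # share the Linux Mint host's network stack
    # no "ports:" section; publishing is incompatible with host networking
```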

Port 55105 is the port it should be connecting on.

I’m using a host only network 10.0.1.128/25. All other network operations are fine. And yes, it is connecting to something.

Are you sure by checking the box you’re actually using 55105 and not the OPC UA default of 4840?

Yes.

From another client (OPC Scout)
[Screenshot from 2021-03-05 21-08-32: OPC Scout endpoint list]

These are the correct endpoints.

What happens if you start the discovery process with “opc.tcp://10.0.1.156:55105” instead of what you had, which defaulted to 4840?

I was trying to do that but I couldn’t get it working. I’m not super familiar with the compose instructions and syntax yet.

My compose file:

version: '3.1'
services:
  gateway:
    image: kcollins/ignition:8.1.2  # You can change `8.1.2` to another tag, e.g. `latest`
    ports:
      - "8088:8088"
  #    - "55105:55105"
    stop_grace_period: 30s
    secrets:
      - gateway-password
    volumes:
      # - ./gateway_backup.gwbk:/restore.gwbk
      - gateway_data:/var/lib/ignition/data
    logging:
      driver: "json-file"
      options:
        max-size: "200k"
        max-file: "10"
    environment:
      GATEWAY_SYSTEM_NAME: DockerGW
      GATEWAY_ADMIN_USERNAME: david
      GATEWAY_ADMIN_PASSWORD_FILE: /run/secrets/gateway-password

  db:
    image: postgres:13.1
    ports:
      # Note that the 5432 port doesn't need to be published here for the gateway container to connect,
      # only for external connectivity to the database.
      - "5432:5432"
    volumes:
     - db_data:/var/lib/postgresql/data
     - ./db-init:/docker-entrypoint-initdb.d
    logging:
      driver: "json-file"
      options:
        max-size: "200k"
        max-file: "10"
    secrets:
      - postgres-password
    environment:
      # See https://hub.docker.com/_/postgres/ for more information
      POSTGRES_PASSWORD_FILE: /run/secrets/postgres-password
      POSTGRES_DB: ignition
      POSTGRES_USER: ignition

secrets:
  postgres-password:
    file: ./secrets/POSTGRES_PASSWORD
  gateway-password:
    file: ./secrets/GATEWAY_PASSWORD

volumes:
  gateway_data:
  db_data:

UaException: status=Bad_Timeout, message=io.netty.channel.ConnectTimeoutException: connection timed out: /10.0.1.156:55105

Well… I give up. It’s either networking problems or security problems on the server side. Don’t know what else to try :confused: