Websocket connection closed unexpectedly. code=1006

Hey,

Our gateway log is being polluted with messages like this:

Websocket connection closed unexpectedly. code=1006, reason=Session Closed, codeMeaning=Normal Closure, codeDescription=Reserved. Indicates that a connection was closed abnormally (that is, with no close frame being sent) when a status code is expected. 

We've got about two dozen Datalogic Skorpio X5 Android barcode scanners running Perspective. I suspect these messages appear when the scanners are "awoken" (the trigger is pressed to wake the screen), although I haven't been able to reproduce this consistently. The logs don't include any kind of client ID, so I can't even tell the clients apart.

Wi-Fi coverage is excellent (full bars everywhere) so I don't think that's the issue.

I could hide these messages in the logs by setting the filter to "Error" for the "Perspective.WebSocketChannel" logger, but there are other messages produced by this logger that I do want to be able to see.
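The most surgical thing I can think of would be a message-based logback filter, since the gateway's logging is logback-based. This is only a rough sketch, assuming a custom filter class could be put on the gateway classpath and attached to the appender in logback.xml; the class name and the logger-name check are just illustrative, and I'd rather not maintain something like this:

import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.filter.Filter;
import ch.qos.logback.core.spi.FilterReply;

// Drops only the noisy code=1006 close message; everything else from the logger is untouched.
public class Code1006Filter extends Filter<ILoggingEvent> {
    @Override
    public FilterReply decide(ILoggingEvent event) {
        String logger = event.getLoggerName();
        String message = event.getFormattedMessage();
        if (logger != null && logger.endsWith("WebSocketChannel")   // logger-name match is an assumption
                && message != null && message.contains("code=1006")) {
            return FilterReply.DENY;     // hide just this message
        }
        return FilterReply.NEUTRAL;      // defer to the normal level settings for everything else
    }
}

It would then be attached to the relevant appender with a <filter class="..."/> element in logback.xml, though whether that survives gateway upgrades is another question.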

Is there anything we can do about this, or will we just have to live with it? There are no other adverse effects as far as I can tell.

Running 8.1.36, excerpt from wrapper log below.

wrapper.log
INFO   | jvm 1    | 2024/02/28 11:37:18 | W [P.WebSocketChannel            ] [10:37:18]: Websocket connection errored out. Keeping session open. 
INFO   | jvm 1    | 2024/02/28 11:37:18 | java.nio.channels.ClosedChannelException: null
INFO   | jvm 1    | 2024/02/28 11:37:18 | 	at org.eclipse.jetty.websocket.core.internal.WebSocketSessionState.onEof(WebSocketSessionState.java:169)
INFO   | jvm 1    | 2024/02/28 11:37:18 | 	at org.eclipse.jetty.websocket.core.internal.WebSocketCoreSession.onEof(WebSocketCoreSession.java:253)
INFO   | jvm 1    | 2024/02/28 11:37:18 | 	at org.eclipse.jetty.websocket.core.internal.WebSocketConnection.fillAndParse(WebSocketConnection.java:482)
INFO   | jvm 1    | 2024/02/28 11:37:18 | 	at org.eclipse.jetty.websocket.core.internal.WebSocketConnection.onFillable(WebSocketConnection.java:340)
INFO   | jvm 1    | 2024/02/28 11:37:18 | 	at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:314)
INFO   | jvm 1    | 2024/02/28 11:37:18 | 	at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:100)
INFO   | jvm 1    | 2024/02/28 11:37:18 | 	at org.eclipse.jetty.io.ssl.SslConnection$DecryptedEndPoint.onFillable(SslConnection.java:558)
INFO   | jvm 1    | 2024/02/28 11:37:18 | 	at org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:379)
INFO   | jvm 1    | 2024/02/28 11:37:18 | 	at org.eclipse.jetty.io.ssl.SslConnection$2.succeeded(SslConnection.java:146)
INFO   | jvm 1    | 2024/02/28 11:37:18 | 	at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:100)
INFO   | jvm 1    | 2024/02/28 11:37:18 | 	at org.eclipse.jetty.io.SelectableChannelEndPoint$1.run(SelectableChannelEndPoint.java:53)
INFO   | jvm 1    | 2024/02/28 11:37:18 | 	at org.eclipse.jetty.util.thread.strategy.AdaptiveExecutionStrategy.runTask(AdaptiveExecutionStrategy.java:416)
INFO   | jvm 1    | 2024/02/28 11:37:18 | 	at org.eclipse.jetty.util.thread.strategy.AdaptiveExecutionStrategy.consumeTask(AdaptiveExecutionStrategy.java:385)
INFO   | jvm 1    | 2024/02/28 11:37:18 | 	at org.eclipse.jetty.util.thread.strategy.AdaptiveExecutionStrategy.tryProduce(AdaptiveExecutionStrategy.java:272)
INFO   | jvm 1    | 2024/02/28 11:37:18 | 	at org.eclipse.jetty.util.thread.strategy.AdaptiveExecutionStrategy.lambda$new$0(AdaptiveExecutionStrategy.java:140)
INFO   | jvm 1    | 2024/02/28 11:37:18 | 	at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:411)
INFO   | jvm 1    | 2024/02/28 11:37:18 | 	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:969)
INFO   | jvm 1    | 2024/02/28 11:37:18 | 	at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.doRunJob(QueuedThreadPool.java:1194)
INFO   | jvm 1    | 2024/02/28 11:37:18 | 	at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1149)
INFO   | jvm 1    | 2024/02/28 11:37:18 | 	at java.base/java.lang.Thread.run(Unknown Source)
INFO   | jvm 1    | 2024/02/28 11:37:18 | W [P.WebSocketChannel            ] [10:37:18]: Websocket connection closed unexpectedly. code=1006, reason=Session Closed, codeMeaning=Normal Closure, codeDescription=Reserved. Indicates that a connection was closed abnormally (that is, with no close frame being sent) when a status code is expected.  
INFO   | jvm 1    | 2024/02/28 11:37:18 | I [p.ClientSession               ] [10:37:18]: WebSocket disconnected from session. session-project=Scanners

No, these would be from the scanner going to sleep. You probably cannot fix this, as the connection really is being closed.

Did you find any solution to your problem? I am facing the same issue.

I still have the same issue. Have you found a solution in the meantime?

Yes, I found the solution.

It was due to the firewall configuration. Specifically, the IPS (Intrusion Prevention System) was blocking the connection even though the port itself was allowed.

Is this a Fortinet firewall? If so, do you have any instructions that I could pass on to IT staff to manage this situation?

No, that's all the feedback I received from the IT staff.

In addition to disconnections between the server and the client itself, you may also need to check whether the WebSocket is being dropped by an intermediate proxy in the path.

For example, when running behind an nginx reverse proxy, the socket connection can error out a certain amount of time after connecting, which is caused by an inappropriate timeout setting.

e.g.

location / {
    proxy_pass http://localhost:8088;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    # Set long timeouts so idle WebSocket connections are not dropped
    proxy_read_timeout 86400s;
    proxy_send_timeout 86400s;
    # Keep nginx from buffering the WebSocket traffic (unsure whether this is needed)
    #proxy_buffering off;
}

Setting proxy_read_timeout to a value greater than the server's heartbeat interval can solve this disconnection problem.
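For what it's worth, the same principle can be shown from the client side: as long as something crosses the proxy more often than proxy_read_timeout, the connection never looks idle. Perspective manages its own WebSocket, so this is only a generic Java sketch of the idea (the URL and the 30-second interval are made up), not something to bolt onto Ignition:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.WebSocket;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class HeartbeatClient {
    public static void main(String[] args) {
        HttpClient http = HttpClient.newHttpClient();

        // Hypothetical endpoint sitting behind the nginx proxy shown above.
        WebSocket ws = http.newWebSocketBuilder()
                .buildAsync(URI.create("ws://localhost:8088/ws"), new WebSocket.Listener() {
                    @Override
                    public CompletionStage<?> onText(WebSocket webSocket, CharSequence data, boolean last) {
                        System.out.println("received: " + data);
                        return WebSocket.Listener.super.onText(webSocket, data, last);
                    }
                })
                .join();

        // A ping every 30 s keeps traffic flowing well inside the 86400 s proxy_read_timeout,
        // so the proxy never treats the connection as idle and never forces an abnormal close.
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(
                () -> ws.sendPing(ByteBuffer.wrap("keepalive".getBytes(StandardCharsets.UTF_8))),
                30, 30, TimeUnit.SECONDS);
    }
}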

Did you solve your issue with Fortinet? I am having similar issues and am using Fortinet. Thank you

We ran into this issue as well on a Meraki MX appliance. Temporarily disabling IDS/IPS stopped most of the WebSocket drops, but we couldn’t leave it like that due to security issues.

Our network doesn’t have Layer 3 switches yet, so all inter-VLAN traffic was being routed through the firewall. To work around that, we moved most of our HMI stations onto the same VLAN as the Ignition gateway (which they probably should have been on anyway), and added a secondary NIC to the gateway so it could sit on both our PLC VLAN and the HMI VLAN. That allowed the PLC/HMI traffic to bypass IDS/IPS inspection entirely.

These changes made the timeouts and disconnects on our stations disappear.

We’re planning to add a Layer 3 switch soon so internal routing happens on the switch instead of going through the firewall.

Hope this helps