Hey, I’m using Ignition 8.0.4 with KEPServerEX 6.6. I have two machines with the exact same setup in terms of version and configuration; only the IP addresses differ. One machine is working just fine, and the other keeps getting faulted and then reconnecting, rinse and repeat. Here is the error I’m getting for the fault:
java.lang.Exception: session inactive: id=NodeId{ns=1, id=d2a6326e-e44f-4d6e-8dbe-fa0acf6dce47} name=ignition[Ignition-Ignition-cta]_KepServerEX/UA Corrosion_1614287155977
at com.inductiveautomation.ignition.gateway.opcua.client.connection.OpcUaConnection$MiloSessionActivityListener.onSessionInactive(OpcUaConnection.kt:287)
at org.eclipse.milo.opcua.sdk.client.session.SessionFsmFactory.lambda$null$31(SessionFsmFactory.java:557)
at java.base/java.util.concurrent.CopyOnWriteArrayList.forEach(Unknown Source)
at org.eclipse.milo.opcua.sdk.client.session.SessionFsmFactory.lambda$configureActiveState$32(SessionFsmFactory.java:557)
at com.digitalpetri.strictmachine.dsl.ActionBuilder$PredicatedTransitionAction.execute(ActionBuilder.java:76)
at com.digitalpetri.strictmachine.StrictMachine$PollAndEvaluate.lambda$run$0(StrictMachine.java:207)
at java.base/java.util.ArrayList.forEach(Unknown Source)
at com.digitalpetri.strictmachine.StrictMachine$PollAndEvaluate.run(StrictMachine.java:198)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
8.0.4 (b2019091612)
Azul Systems, Inc. 11.0.4
Does anyone have an idea what my problem could be? I’m only trying to transfer 9 tags from the machine that is faulting.
Thanks for your help.
Can you upload your gateway logs as well?
Usually the connection faults because KSE stops responding to the “keep alive” requests and a reconnect is forced, but it could be something else.
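To make the mechanism concrete, here is a minimal sketch (my own illustration, not Ignition/Milo code) of the fault logic implied by the "failureCount=N exceeds failuresAllowed=M" log entries: the client periodically issues a keep-alive read, each missed response bumps a counter, and the session faults once the counter exceeds the allowed limit.

```python
def run_keep_alive(responses, failures_allowed):
    """Simulate keep-alive cycles.

    responses: iterable of bools, True = server answered the keep-alive read.
    Returns the 1-based cycle at which the session faults, or None if it never does.
    """
    failure_count = 0
    for cycle, answered in enumerate(responses, start=1):
        if answered:
            failure_count = 0  # a successful keep-alive resets the counter
        else:
            failure_count += 1
            if failure_count > failures_allowed:
                return cycle  # session declared inactive -> forced reconnect
    return None

# With failuresAllowed=0 (the "[52]" entries in the logs), a single missed
# keep-alive faults the session immediately:
print(run_keep_alive([True, True, False], failures_allowed=0))  # -> 3

# With failuresAllowed=1 (the "[15]" entries), it takes two consecutive misses:
print(run_keep_alive([True, False, False], failures_allowed=1))  # -> 3
```

This also shows why one of your connections faults faster than the other: the one with `failuresAllowed=0` has no tolerance for a single slow or dropped response.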
SessionFsm               | 25Feb2021 16:18:01 | [52] Keep Alive failureCount=1 exceeds failuresAllowed=0
UascClientMessageHandler | 25Feb2021 16:17:36 | No pending request with requestId=743 for ServiceFault
UascClientMessageHandler | 25Feb2021 16:17:36 | No pending request with requestId=738 for ServiceFault
UascClientMessageHandler | 25Feb2021 16:17:36 | No pending request with requestId=742 for ServiceFault
SessionFsm               | 25Feb2021 16:17:06 | [15] Keep Alive failureCount=2 exceeds failuresAllowed=1
SessionFsm               | 25Feb2021 16:16:51 | [52] Keep Alive failureCount=1 exceeds failuresAllowed=0
UascClientMessageHandler | 25Feb2021 16:16:26 | No pending request with requestId=595 for ServiceFault
UascClientMessageHandler | 25Feb2021 16:16:26 | No pending request with requestId=594 for ServiceFault
UascClientMessageHandler | 25Feb2021 16:16:26 | No pending request with requestId=591 for ServiceFault
SessionFsm               | 25Feb2021 16:15:57 | [15] Keep Alive failureCount=2 exceeds failuresAllowed=1
SessionFsm               | 25Feb2021 16:15:42 | [52] Keep Alive failureCount=1 exceeds failuresAllowed=0
UascClientMessageHandler | 25Feb2021 16:15:17 | No pending request with requestId=447 for CreateSessionResponse
UascClientMessageHandler | 25Feb2021 16:15:17 | No pending request with requestId=444 for CreateSessionResponse
UascClientMessageHandler | 25Feb2021 16:15:17 | No pending request with requestId=446 for ServiceFault
UascClientMessageHandler | 25Feb2021 16:15:17 | No pending request with requestId=443 for ServiceFault
UascClientMessageHandler | 25Feb2021 16:15:17 | No pending request with requestId=445 for ServiceFault
SessionFsm               | 25Feb2021 16:13:57 | [15] Keep Alive failureCount=2 exceeds failuresAllowed=1
SessionFsm               | 25Feb2021 16:13:42 | [52] Keep Alive failureCount=1 exceeds failuresAllowed=0
UascClientMessageHandler | 25Feb2021 16:13:01 | No pending request with requestId=297 for ServiceFault
UascClientMessageHandler | 25Feb2021 16:13:01 | No pending request with requestId=295 for ServiceFault
UascClientMessageHandler | 25Feb2021 16:13:01 | No pending request with requestId=296 for ServiceFault
SessionFsm               | 25Feb2021 16:12:32 | [15] Keep Alive failureCount=2 exceeds failuresAllowed=1
SessionFsm               | 25Feb2021 16:12:17 | [52] Keep Alive failureCount=1 exceeds failuresAllowed=0
It’s about the same errors for days of logs; let me know if you need more, hehe!
Seems like both your connections experience this lack of response from KSE at the same time but one is configured to allow more keep alive failures than the other.
So maybe you can tweak those settings. But, assuming those really are two different servers, there might also be something about your network or environment causing this failure, rather than KSE lagging or not responding for some reason.
Yeah, two different servers, but they are on the same subnet and they’re like twins. I’ll try to tweak it and see if the logs change.
The default keep-alive settings are: 1 failure allowed, 15,000ms interval, 10,000ms timeout.
Might be worth trying 2 allowed with the rest at default.
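As a rough back-of-the-envelope (my own timing model, an assumption, not something from Ignition docs): if a keep-alive read is sent every `interval_ms` and declared failed after `timeout_ms`, then faulting requires `failures_allowed + 1` consecutive failed reads, and the worst-case time from the first missed keep-alive to the fault is roughly:

```python
def fault_detection_ms(failures_allowed, interval_ms=15_000, timeout_ms=10_000):
    """Approximate time from the first missed keep-alive to the session fault.

    Assumes the first failing keep-alive is sent at t=0 and times out at
    t=timeout_ms; each subsequent keep-alive is sent one interval later, so
    the (failures_allowed + 1)-th consecutive failure is recognized at
    failures_allowed * interval_ms + timeout_ms.
    """
    return failures_allowed * interval_ms + timeout_ms

print(fault_detection_ms(1))  # defaults (1 allowed): 25000 ms until fault
print(fault_detection_ms(2))  # 2 allowed: 40000 ms until fault
```

So bumping to 2 allowed buys roughly one extra interval of tolerance for a slow KSE before the session is torn down, at the cost of slower detection of a genuinely dead connection.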
Might be interesting to capture with Wireshark on the Ignition server while this is all going on.
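For the capture, a command-line fragment along these lines could narrow things down (49320 is the usual KEPServerEX OPC UA port, but adjust the filter to match your actual endpoint and interface; the placeholders are yours to fill in):

```
tshark -i <interface> -f "host <KSE-server-IP> and tcp port 49320" -w opcua-fault.pcapng
```

If the keep-alive reads go out but no responses come back during a fault window, that points at KSE or the network rather than the Ignition client.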