Modbus TCP Request Failed

Hey all,
Just wanted to dig deeper into this problem because I want to understand what might be happening. I have a PLC in the field that I am polling for data every 20,000 ms using Modbus TCP. The Timeout is 2,000 ms and the Stale rate is 20,000 ms.

It works great. The problem is, the moment I increase the poll rate to 30,000 ms or greater, it starts disconnecting pretty consistently and throwing this error:

java.lang.Exception: RequestCycle stopped.
at com.inductiveautomation.xopc.driver.api.BasicRequestCycle$AddToQueue.run(BasicRequestCycle.java:115)
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(Unknown Source)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)

The log says: ReadHoldingRegistersRequest 29Nov2018 10:06:26 Request failed. FailureType==DISCONNECTED

Then if I switch the reads back to 20,000 ms it works perfectly again. Any ideas why this is happening? I would love to be able to read at 3-4 minute intervals to save on data.

Thanks!

I’m not sure, but shouldn’t you change the stale rate to something greater than the polling rate?

I had to go do some reading on this. It seems the Stale Rate is not even used by the Internal Tag provider (OPC-UA) that I am running the tags through. Instead, the Stale Multiplier is used, which defaults to 5 times the fastest subscription rate, so it should go up automatically when I increase the poll rate to 30,000 ms (5 × 30,000 ms = 150,000 ms). At least that is my understanding. I will try increasing the Stale rate as well when I increase the poll rate, just in case, and see.

Thanks for the response!

Edit: I tested this by changing the poll rate to 30,000 ms and the Stale rate to 150,000 ms, and the same disconnects happened, just as expected.

This is happening because the Modbus TCP spec suggests that slaves close the connection after a period of inactivity, and the device you're talking to is doing so somewhere between 20 and 30 seconds of idle time.

Unfortunately our driver really doesn't handle this well - it always treats a disconnect as a bad event and immediately starts reconnecting.
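
If you want to confirm the idle timeout yourself, outside of Ignition, a quick probe is to open a raw socket to the device's Modbus port and time how long it stays open with no traffic. A minimal Java sketch of that idea (the host and port are placeholders for your device):

import java.net.Socket;

public class IdleCloseProbe {
    public static void main(String[] args) throws Exception {
        // Placeholder address; point this at the actual device.
        try (Socket s = new Socket("192.168.1.10", 502)) {
            s.setSoTimeout(120_000); // give up after two minutes of waiting
            long start = System.currentTimeMillis();
            // With no request outstanding, read() blocks until the slave
            // closes its end of the connection (-1 on a clean FIN).
            int b = s.getInputStream().read();
            long idleMs = System.currentTimeMillis() - start;
            System.out.println("Slave closed the idle connection after ~"
                    + idleMs + " ms (read returned " + b + ")");
        }
    }
}

If the theory holds, that should print something between 20,000 and 30,000 ms; a SocketTimeoutException instead would mean the device kept the idle connection open longer.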

Thank you, Kevin. So this is basically part of using Modbus TCP? Should I use something like MQTT if I want reads that are more than X amount of time apart (where X is wherever that particular implementation of Modbus disconnects)?

Is there a good workaround you know of, like sending keepalive pings or something?

-Josh

Honestly, our driver should just handle this better: not treat a disconnect as catastrophic, set bad quality when it happens, and then not reconnect until the connection is needed again. Unfortunately, this kind of fundamental change to the driver isn't going to happen until something like the 8.1 timeframe.
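
To make the idea concrete, here's a rough standalone sketch of that lazy-reconnect behavior. This is a hypothetical illustration, not the actual driver code, and LazyModbusConnection is a made-up class:

import java.io.IOException;
import java.net.Socket;

public class LazyModbusConnection {
    private final String host;
    private final int port;
    private Socket socket;

    public LazyModbusConnection(String host, int port) {
        this.host = host;
        this.port = port;
    }

    // Reconnect on demand instead of the moment the slave hangs up.
    private Socket connection() throws IOException {
        if (socket == null || socket.isClosed()) {
            socket = new Socket(host, port);
            socket.setSoTimeout(2_000);
        }
        return socket;
    }

    // Returns the raw response bytes, or null when the request fails;
    // the caller would mark the affected tags bad quality.
    public byte[] poll(byte[] request, int expectedLen) {
        try {
            Socket s = connection();
            s.getOutputStream().write(request);
            s.getOutputStream().flush();
            return s.getInputStream().readNBytes(expectedLen);
        } catch (IOException e) {
            // Treat the disconnect as routine: drop the socket quietly
            // and let the next scheduled poll trigger the reconnect.
            closeQuietly();
            return null;
        }
    }

    private void closeQuietly() {
        try {
            if (socket != null) socket.close();
        } catch (IOException ignored) {
        }
        socket = null;
    }
}

The point is just the ordering: the failure path reports bad quality and tears the socket down, and nothing touches the network again until the next poll actually needs it.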

As a workaround, you can keep a single tag polling at ~20 s while the rest poll at the slower rate; that one fast request keeps enough traffic on the connection that the device never hits its idle timeout.
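
At the protocol level, the keepalive Josh asked about amounts to the same thing as that fast tag: send some harmless request often enough that the slave's idle timer never expires. A standalone Java sketch of the mechanism, assuming a placeholder host, port, and unit ID, and a normal (non-exception) response:

import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ModbusKeepAlive {
    public static void main(String[] args) throws Exception {
        // Placeholder host/port; adjust for the real device.
        Socket socket = new Socket("192.168.1.10", 502);
        socket.setSoTimeout(2_000); // mirror the 2,000 ms driver timeout
        OutputStream out = socket.getOutputStream();
        InputStream in = socket.getInputStream();

        ScheduledExecutorService exec = Executors.newSingleThreadScheduledExecutor();
        exec.scheduleAtFixedRate(() -> {
            try {
                // MBAP header + PDU: Read Holding Registers, 1 register at
                // address 0. (Transaction ID is fixed here for simplicity.)
                byte[] req = {
                    0x00, 0x01, // transaction ID
                    0x00, 0x00, // protocol ID (always 0 for Modbus TCP)
                    0x00, 0x06, // count of remaining bytes
                    0x01,       // unit ID (placeholder)
                    0x03,       // function code: Read Holding Registers
                    0x00, 0x00, // starting address
                    0x00, 0x01  // quantity of registers
                };
                out.write(req);
                out.flush();
                // Normal response: 7-byte MBAP + function + byte count + 2 data bytes.
                byte[] resp = in.readNBytes(11);
                System.out.println("Keepalive OK, " + resp.length + " bytes");
            } catch (Exception e) {
                System.err.println("Keepalive failed: " + e.getMessage());
            }
        }, 0, 15, TimeUnit.SECONDS); // 15 s stays safely under the idle window
    }
}

A 15 s period leaves comfortable margin under the ~20-30 s idle window observed above. Note that a standalone socket like this only keeps itself alive; inside Ignition, the fast-polling tag is what keeps the driver's own connection busy.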

As another workaround, you can use Kepware instead, which handles this situation more gracefully.

Wonderful. Well, please put the suggestion in for me, since we will be using Ignition when 8.1 does eventually come out. Thank you for the suggestions; I will make use of them.

-Josh