Socket comms: persistent left-over thread (Jython-Netty-Client)

I have implemented a UDP listener in Ignition 8.1 scripting that spawns a listener thread using system.util.invokeAsynchronous.

As a test, I create a new socket, bind it to an IP/port, then close it after a time.
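The test amounts to something like the following (a minimal sketch; the function name, port, and lifetime are placeholders, and in Ignition this body would be launched via `system.util.invokeAsynchronous` rather than called directly):

```python
import socket

def udp_listener_test(port=14001, lifetime_s=2.0):
    # Naive version of the test: create a UDP socket, bind it, wait for a
    # datagram for a while, then close it. Under Jython, the bind is what
    # triggers creation of the "Jython-Netty-Client" worker thread.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    sock.settimeout(lifetime_s)
    received = []
    try:
        received.append(sock.recvfrom(1024)[0])
    except socket.timeout:
        pass
    sock.close()  # socket closes (confirmed via netstat), thread lingers
    return received
```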

When I look in the Gateway/Status/Threads page, the Async thread that is created is disposed of properly after the time elapses.

There is also a “Jython-Netty-Client” thread that is created when the socket bind is called, but this thread persists even though I have disposed of the socket by calling close().

The socket itself seems to close properly after the time as it disappears when I check using netstat from the command line.

The “Jython-Netty-Client” threads seem to persist unless I restart the gateway. I’m concerned that they will cause a memory leak over time if a new thread accumulates every time I create a socket.

Is there a way to close / dispose of a socket that will also clean up the associated Jython-Netty-Client thread?

Below are the details of a persistent Netty thread:

Thread [Jython-Netty-Client-0] id=383, (RUNNABLE) (native)

java.base@11.0.11/$SubSelector.poll0(Native Method)
java.base@11.0.11/$SubSelector.poll(Unknown Source)
(remaining stack frames truncated in the original paste)

Do they accumulate every time? This is part of the Jython codebase, not Ignition, but a quick look at it shows this thread belongs to a global/singleton NioEventLoopGroup, so they shouldn’t accumulate.

It does appear to happen every time. If I restart the gateway, all the “Jython-Netty-Client” threads disappear from the thread list. Each time I open a socket a new Netty thread is created, and it does not go away when I subsequently close the socket.

Can you open/close and see if the number of threads ever exceeds 10?
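A quick way to check is to open and close the socket in a loop while counting matching threads (a sketch with hypothetical names; the thread-name prefix is the one visible on the Gateway status page, and under plain CPython the count is always zero since the Netty workers only exist under Jython):

```python
import socket
import threading

def netty_thread_count():
    # Count live threads whose name marks them as Jython's Netty client
    # workers; these only appear when Jython's socket module is in use.
    return sum(1 for t in threading.enumerate()
               if t.name.startswith("Jython-Netty-Client"))

def open_close_cycles(n=20, port=14002):
    # Repeatedly bind and close a UDP socket, sampling the thread count
    # after each cycle; returns the maximum observed. UDP has no TIME_WAIT,
    # so immediate re-binding of the same port is fine.
    peak = 0
    for _ in range(n):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", port))
        sock.close()
        peak = max(peak, netty_thread_count())
    return peak
```

Under Jython, the expectation being tested here is that the peak levels off rather than growing with `n`.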

I was getting more than 10 of these threads during testing, but my script was also generating unhandled exceptions, which may have been preventing the threads from cleaning up properly.

I have since added additional exception handling; now when I close and open sockets the “Jython-Netty-Client” thread count increments but never appears to exceed 10.
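The exception handling essentially guarantees that `close()` always runs, along the lines of this sketch (function name and defaults are placeholders):

```python
import socket

def udp_listen_safe(port=14001, timeout_s=2.0):
    # Hardened version of the listener: close() sits in a finally block, so
    # even if recvfrom() raises an unexpected exception the socket is still
    # released and the async thread exits cleanly.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.bind(("0.0.0.0", port))
        sock.settimeout(timeout_s)
        try:
            return sock.recvfrom(1024)[0]
        except socket.timeout:
            return None
    finally:
        sock.close()
```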

I’m happy it seems to be stable and won’t cause leaks now.

Out of interest is there some documented behaviour that caps these threads at 10?

No, not documented. It’s part of Jython’s internal implementation: jython/ at 0a58cc26566d2b2334e80b2b3f2f42f6c738db2d · jython/jython · GitHub

FWIW I’d generally recommend not using the Jython standard library networking APIs and instead coding against Java networking APIs.
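For a simple UDP read, the Java-API equivalent from Jython might look like this. This is a sketch only: it runs solely under Jython inside Ignition (the `java.*` imports resolve against the JVM, not CPython), and the port, timeout, and function name are placeholders.

```python
# Jython only: java.* imports resolve against the JVM.
from java.net import DatagramPacket, DatagramSocket
from java.lang import String
import jarray

def read_udp_java(port=14001, timeout_ms=2000):
    # Plain java.net.DatagramSocket: no Netty involved, so no lingering
    # "Jython-Netty-Client" threads after close().
    sock = DatagramSocket(port)
    try:
        sock.setSoTimeout(timeout_ms)
        buf = jarray.zeros(1024, 'b')
        packet = DatagramPacket(buf, len(buf))
        sock.receive(packet)  # raises SocketTimeoutException on timeout
        return String(packet.getData(), 0, packet.getLength(), "US-ASCII")
    finally:
        sock.close()
```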


I confess the intricacies of socket comms are outside of my comfort zone and I’m more comfortable in Jython/Python than Java.

The Jython code appears functional & stable. The application is not very demanding just listening for ASCII data over UDP from a single device at slow intervals.

If I run into issues or need higher performance I’ll have to delve deeper into Java and the networking libraries.

I was having this same problem. I had a UDT that got the status of a printer using a command sent via a Jython socket. Over time it built up more than 900 “Jython-Netty-Client” threads. I have since refactored the code to use Java sockets instead, which has solved the problem. Thanks @Kevin.Herron