Closing a TCP Socket from Jython

I am unable to shut down (close) the socket after a communication abort on the client side! The socket remains open for 10 or 15 minutes and I am unable to restart the TCP communication on port 5000 of the local machine with an async call. I checked the socket status at the DOS prompt with `netstat -ano | findstr :5000`.


I am unable to kill this PID from the command line, and killing it from Task Manager (which I don’t want to do) and closing the Designer stops the gateway and restarts it, which takes about 5 minutes!

I am storing the server and socket objects in global variables, `system.util.getGlobals()['server']` and `system.util.getGlobals()['socket']`, in my gateway scripts, and I try to close them on exception, but that doesn’t close this socket (or is it the TCP server?). I have to wait until the socket is automatically closed by the OS after some 10 or 15 minutes; sometimes it doesn’t close at all! I am trying to build a robust mechanism so that either the socket restarts after a communication failure (interrupted manually or due to some communication error), or at least it frees the port so that I can restart my script manually.

Don’t use python’s socket module. Use Java’s sockets. Use the SO_REUSEADDR socket option.
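For illustration, a minimal plain-Java sketch of that advice (the same `java.net` classes Jython wraps; the class and method names here are illustrative, not from the original code). The key point is that `SO_REUSEADDR` must be set while the socket is still unbound:

```java
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class ReuseAddrDemo {
    // Create a listening socket with SO_REUSEADDR enabled.
    // The option must be set while the socket is still unbound,
    // i.e. before bind() is called.
    static ServerSocket listen(int port) throws Exception {
        ServerSocket server = new ServerSocket();   // unbound
        server.setReuseAddress(true);               // set BEFORE bind()
        server.bind(new InetSocketAddress(port));
        return server;
    }

    public static void main(String[] args) throws Exception {
        ServerSocket first = listen(0);             // 0 = any free port
        int port = first.getLocalPort();
        first.close();
        // With SO_REUSEADDR set, the same port can be rebound immediately
        // after close(), even while old connections linger in TIME_WAIT.
        ServerSocket second = listen(port);
        System.out.println("rebound port " + second.getLocalPort());
        second.close();
    }
}
```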

Thanks for your reply. Yes, I am indeed using Java’s libraries from a Python wrapper. Let me try to enable the reuse option.

I just checked my code; I have already enabled the SO_REUSEADDR option through a function call.
Also, the socket state is shown as LISTENING, not TIME_WAIT, in the DOS `netstat` output.


That means the socket is still not closed in the server side. I am actually closing the socket as follows:

try:
	if system.util.getGlobals()['socket'] != None:
		print "closing socket"
		system.util.getGlobals()['socket'].close()
except Exception as e:
	print "Exception closing socket", str(e)

Don’t know what’s going on!

You have to set that socket option before you bind the listening address.

Yes, it’s before the bind call. The code goes something like this:

try:
	system.util.getGlobals()['server'] = project.socketscript.ServerSocket()
	system.util.getGlobals()['server'].setReuseAddress(True)
	system.util.getGlobals()['server'].bind(project.socketscript.InetSocketAddress("", port))
	print "Server Socket created", buffsize, reuseaddr, timeout, isbound, tostr, host
except Exception as e:
	print "Exception ServerSocket creation", str(e)
	if system.util.getGlobals()['server'] != None:
		system.util.getGlobals()['server'].close()

The socket option doesn’t affect any sockets bound to that address earlier without it; those still have to go through their long timeouts before the address is freed up. The socket option only prevents the long timeout for the instance it is applied to.

The socket connection and communication go as expected between this server and a JS client in request/response mode. The server waits on a socket read, responds to the request received from the client, and the cycle continues fine. However, when I interrupt the client with Ctrl+C and close the socket on the read exception, the socket remains open until the OS closes it! It’s very frustrating not knowing what’s happening!

Something doesn’t make sense. The actual traffic between your server and the client should be happening on the temporary port of the socket returned from the .accept() method of the ServerSocket(). Which makes each client connection independent of the listening port. (The thread handling the ServerSocket should spawn a new thread for this per-client Socket.)
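A sketch of that pattern in plain Java (names are illustrative, not from the original code): the `ServerSocket` only listens, each `accept()` yields a separate per-client `Socket` on an ephemeral port, and a new thread services that client so the listener keeps accepting:

```java
import java.io.*;
import java.net.*;

public class AcceptLoop {
    // Listen on the given port; for every client that connects, accept()
    // hands back a NEW Socket, and a fresh thread services that client
    // so the listener thread can keep accepting other clients.
    static ServerSocket start(int port) throws IOException {
        ServerSocket server = new ServerSocket();
        server.setReuseAddress(true);
        server.bind(new InetSocketAddress(port));
        Thread acceptor = new Thread(() -> {
            try {
                while (true) {
                    Socket client = server.accept();        // per-client socket
                    new Thread(() -> echo(client)).start(); // per-client thread
                }
            } catch (IOException e) {
                // server.close() lands here; the listener thread exits
            }
        });
        acceptor.setDaemon(true);
        acceptor.start();
        return server;
    }

    // Trivial request/response handler: echo one line back, then close.
    static void echo(Socket client) {
        try (Socket c = client;
             BufferedReader in = new BufferedReader(new InputStreamReader(c.getInputStream()));
             PrintWriter out = new PrintWriter(c.getOutputStream(), true)) {
            out.println(in.readLine());
        } catch (IOException ignored) {}
    }

    public static void main(String[] args) throws Exception {
        ServerSocket server = start(0);
        try (Socket s = new Socket("127.0.0.1", server.getLocalPort());
             PrintWriter out = new PrintWriter(s.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()))) {
            out.println("ping");
            System.out.println("server replied: " + in.readLine());
        }
        server.close();
    }
}
```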

Theoretically, you can make a singleton TCP connection with a non-ServerSocket, but I don’t know of any Java examples.

Oh, forgot to mention that it’s not a generic socket server; it’s just to cater to a single client on a single port in an async invoke function. I may generalize it later, but a singleton will suffice for my present needs. I could run multiple instances of this function in separate async calls on different sockets if I have to handle multiple clients. Going step by step there!

I don’t ever use TCP singleton connections, even for a singleton application, so I have no experience to guide you. Good luck.

No problem, you have already guided me! I think I also found a bug, let me see if it solves it! I will post the results after I try it. I am waiting for the socket to close right now!!

I found a bug which partially solves the problem. When I close the connection from the server side on a tag change event, the socket closes but remains in TIME_WAIT for the mandatory 4 minutes, after which I can restart the server. The client closes too, since the socket is closed from the server side. The bug was in the tag change event that resets the socket: I was directly setting the server and socket variables to None rather than calling close() on them. This solves the problem partially.
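In plain Java terms (a sketch, not the original script), the difference matters because assigning None/null merely drops the reference; the underlying OS socket stays open until the garbage collector finalizes it, which can take many minutes. Only an explicit close() releases the port right away:

```java
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class CloseDemo {
    // Explicitly close the listener and show the port is free again.
    // Dropping the reference (None in Jython, null in Java) does NOT
    // release the port; close() does, ideally from a finally block.
    static int closeAndRebind() throws Exception {
        ServerSocket server = new ServerSocket();
        server.setReuseAddress(true);
        server.bind(new InetSocketAddress(0));  // 0 = any free port
        int port = server.getLocalPort();
        try {
            // ... serve requests here ...
        } finally {
            server.close();                     // the actual release of the port
        }
        // After an explicit close(), the same port binds again at once.
        ServerSocket restarted = new ServerSocket();
        restarted.setReuseAddress(true);
        restarted.bind(new InetSocketAddress(port));
        restarted.close();
        return port;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("port released and rebound: " + closeAndRebind());
    }
}
```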

However, when I kill the client while communication is ON, the server gets a com.inductiveautomation.ignition.common.script.JythonExecException with a “Connection reset” traceback, which I am unable to capture in script, so I cannot close the socket. The socket remains in LISTENING mode and takes 15 minutes to be closed by the OS. Hope I will be able to resolve it soon!
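That failure mode can be reproduced and handled in plain Java (a sketch; enabling SO_LINGER with timeout 0 makes close() send an RST, imitating a killed client). The essential habit is to catch the I/O exception around the read and close the socket in a finally block so the port is freed immediately:

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class ResetDemo {
    // Read from the per-client socket, treating an abrupt client death
    // (connection reset) as a normal shutdown path: catch IOException
    // and close the socket in finally so the port is freed right away.
    static String readDefensively(Socket client) {
        try {
            InputStream in = client.getInputStream();
            int b = in.read();                  // throws on connection reset
            return b < 0 ? "eof" : "data";
        } catch (IOException e) {
            return "reset: " + e.getMessage();  // e.g. "Connection reset"
        } finally {
            try { client.close(); } catch (IOException ignored) {}
        }
    }

    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(0);
        Socket client = new Socket("127.0.0.1", server.getLocalPort());
        Socket serverSide = server.accept();

        client.setSoLinger(true, 0); // close() now sends RST, like a killed client
        client.close();
        Thread.sleep(200);           // let the RST arrive

        System.out.println(readDefensively(serverSide));
        server.close();
    }
}
```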

Separate problem with your syntax: in Jython, `except Exception` only catches Python exceptions. Java exceptions thrown by the socket calls (like `java.net.SocketException: Connection reset`) must be caught with `except java.lang.Exception` or the specific Java exception class. See this:


Yesss… It caught the exception! Thanks a lot Phil.


hi @Pramanj

It looks like I need to solve a similar problem in making Ignition act as a TCP message server that a software client can connect to and send messages to.

Would you be able to post or share your full solution with me?