Gateway timers stopped without any reason

I have a problem where the Gateway scripts stopped without any reason or logs. They just stalled!
Restarting the Gateway or the service solves the problem.

I need to know why the software is doing that, and what I can do to either monitor it or make sure it does not occur again. So far this has happened at two sites.

Thanks for any help.

Take a thread dump next time it happens. What’s likely is that some script you’re executing off the timer has blocked, and we may be able to see what it’s blocked on in the thread dumps.

Is there a way to do this from Ignition, or do you need to do it from Windows?
For example, the timers stopped at 2 am and my investigation started at 9 am. I do sleep!
So how do I get the thread dump even if it's 7 hours later?

There’s a link to download a thread dump from the thread viewer page in the gateway. It doesn’t matter if you do it 7 hours later, as long as you haven’t restarted the gateway yet. If the threads were stuck/blocked for some reason 7 hours ago they will still be blocked when you go to retrieve the thread dump.

A lo-fi way of troubleshooting and monitoring this would be to put print statements at the beginning and end of the scripts running on the timer.

It will be useful to see if they always happen in pairs, or if you get one “start” printout and never see a “finished”.


I have a third site that also stopped, at midnight sharp, after months or a year of data collection.
This time I took a thread dump.
ThreadDump.txt (164.7 KB)

Is the “stuck” one called “Axium”?

This thread seems to be waiting inside the Redshift JDBC driver for a query to execute.

Daemon Thread [gateway-script-shared-timer-[Axium]] id=64, (TIMED_WAITING)
	owns monitor:
	waiting for: java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@5ca53be1
	sun.misc.Unsafe.park(Native Method)
	java.util.concurrent.locks.LockSupport.parkNanos(Unknown Source)
	java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(Unknown Source)
	java.util.concurrent.ArrayBlockingQueue.poll(Unknown Source)
	sun.reflect.GeneratedMethodAccessor51.invoke(Unknown Source)
	sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
	java.lang.reflect.Method.invoke(Unknown Source)
	org.python.pycode._pyx2.f$0(<TimerScript:Axium/MissingMinutes @120,000ms >:1)
	org.python.pycode._pyx2.call_function(<TimerScript:Axium/MissingMinutes @120,000ms >)
	java.util.TimerThread.mainLoop(Unknown Source)

Axium is the name of the project.
There are two gateway timer scripts.
The first one generates a file with all the tags in it and copies it to the hard disk.
The second one verifies in Redshift that the data is there (from the first timer).

Are you saying that the second timer has stopped the first timer, which has nothing to do with it?
The reason for two timers is to keep everything separate.
Why would it have stopped at midnight? The same thing happened at two other sites. We are now seeing a failure rate of about 10% across all installed sites.
Also, if the second script that runs the query against the database hits a lock or stalls, shouldn't there be a timeout that releases everything and returns a failure for the request?
I still don't get how a second timer could stop every timer on the Gateway; it just doesn't make any sense.

There are no timeouts as far as script execution is concerned, but you can check whatever documentation is available for the JDBC driver to see if there's a connection property that lets you set one. With some JDBC drivers the default timeout is 0, which means no timeout (or an infinite timeout).
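For the Redshift JDBC driver specifically, the connection URL accepts timeout-related properties such as `socketTimeout` and `loginTimeout` (in Ignition these can typically go in the database connection's extra connection properties). The hostname and values below are illustrative only; check the driver documentation for your driver version:

```
jdbc:redshift://examplecluster.example.us-west-2.redshift.amazonaws.com:5439/dev?socketTimeout=60&loginTimeout=30
```

With a socket timeout set, a query that hangs at the network level should eventually fail with an exception instead of parking the timer thread forever.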

You’re using a shared timer (you can see in the thread name), which means yes, this timer script blocks the other ones from executing. Mark each timer script as dedicated to prevent this.
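The shared-vs-dedicated distinction can be illustrated outside Ignition with a one-worker thread pool standing in for the shared timer thread. This is an illustrative sketch only, not Ignition's actual implementation:

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

# In "shared" mode, all shared timer scripts run on one thread, so a
# stuck script starves every other one queued behind it. "Dedicated"
# mode gives each script its own thread, isolating the stall.

results = []

def stuck_script(gate):
    gate.wait(timeout=0.5)   # stands in for a query that never returns
    results.append("stuck finished")

def healthy_script():
    results.append("healthy ran")

# Shared timer: a single worker thread, tasks queue behind each other.
shared = ThreadPoolExecutor(max_workers=1)
gate = threading.Event()
shared.submit(stuck_script, gate)
shared.submit(healthy_script)        # queued behind the stuck script
time.sleep(0.1)
blocked = "healthy ran" not in results   # healthy script hasn't run yet
gate.set()                               # unstick the first script
shared.shutdown(wait=True)

print("healthy was blocked behind stuck script:", blocked)
```

This mirrors the thread dump: the `gateway-script-shared-timer-[Axium]` thread was parked in the JDBC driver, so every other script sharing that timer thread waited behind it.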