Our Ignition service restarted automatically. On further investigation there were no visible spikes in server memory or CPU usage, and no network interruption. The wrapper logs at that instant suggest the restart was automatic, with no additional information. How should I proceed?
INFO | jvm 2 | 2024/11/25 22:52:12 | I [17:22:11]: Number of records synched: 4163
STATUS | wrapper | 2024/11/25 22:55:14 | JVM appears hung: Timed out waiting for signal from JVM. Restarting JVM.
STATUS | wrapper | 2024/11/25 22:55:14 | JVM received a signal SIGKILL (9).
STATUS | wrapper | 2024/11/25 22:55:14 | JVM process is gone.
STATUS | wrapper | 2024/11/25 22:55:14 | JVM exited after being requested to terminate.
STATUS | wrapper | 2024/11/25 22:55:20 | Reloading Wrapper configuration...
STATUS | wrapper | 2024/11/25 22:55:20 | JVM process is gone.
STATUS | wrapper | 2024/11/25 22:55:20 | Launching a JVM...
INFO | jvm 3 | 2024/11/25 22:55:21 | WrapperManager: Initializing...
Unfortunately, I've had this happen to me before, and the line
STATUS | wrapper | 2024/11/25 22:55:14 | JVM received a signal SIGKILL (9).
is usually a giveaway that something else (another process or application) forced Ignition to terminate.
Check your syslog files in /var/log (or whichever folder holds your OS log files on your distribution) around the time the Ignition Gateway shut down.
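If it helps, here is a minimal sketch of a scan for kill/OOM activity around that timestamp. It assumes a Debian-style /var/log/syslog and the classic "Nov 25 22:54:58 host ..." line format; adjust the path, timestamp parsing, and time window for your distribution (and run it with enough privileges to read the log):

```python
#!/usr/bin/env python3
"""Scan the OS log for kill/OOM activity around the Ignition restart (sketch)."""
import re
from datetime import datetime, timedelta
from pathlib import Path

SYSLOG = Path("/var/log/syslog")          # assumption: Debian/Ubuntu default location
RESTART = datetime(2024, 11, 25, 22, 55)  # wrapper restart time from the log excerpt
WINDOW = timedelta(minutes=10)            # look this far on either side of the restart
KEYWORDS = ("oom", "out of memory", "killed process", "sigkill", "ignition", "java")

for line in SYSLOG.read_text(errors="replace").splitlines():
    # Classic syslog lines start like "Nov 25 22:54:58 hostname ..."
    match = re.match(r"^([A-Z][a-z]{2})\s+(\d{1,2})\s+(\d{2}:\d{2}:\d{2})", line)
    if not match:
        continue
    month, day, clock = match.groups()
    try:
        stamp = datetime.strptime(f"{RESTART.year} {month} {day} {clock}",
                                  "%Y %b %d %H:%M:%S")
    except ValueError:
        continue
    if abs(stamp - RESTART) <= WINDOW and any(k in line.lower() for k in KEYWORDS):
        print(line)
```

If nothing turns up there (no oom-killer entries, no other process killing Java), that points back at the wrapper itself.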
The SIGKILL came from the wrapper itself, because the JVM was hung: something inside Ignition caused it to stop responding. That can be many things, but it is often a sudden large heap allocation that the JVM believes it can satisfy by garbage collecting, only the collection takes too long. An event like that leaves Ignition no chance to log its internal performance stats, and the OS wouldn't show anything either, because Java doesn't return RAM to the OS once it has grown to its maximum heap allowance.
Anyway, you may need a more thorough investigation of what was going on just before that event.
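For next time, it can help to make the JVM leave its own evidence. Below is a minimal sketch of additions to ignition.conf, assuming Ignition 8.x on Java 11+; the property numbers are placeholders, so continue whatever wrapper.java.additional.N numbering your file already uses, and back up the file before editing:

```
# Unified GC logging: records every pause with timestamps, so a long
# stop-the-world collection around a restart leaves a trace in logs/gc.log.
wrapper.java.additional.10=-Xlog:gc*:file=logs/gc.log:time,uptime:filecount=5,filesize=10m

# Write a heap dump if the JVM genuinely runs out of memory.
wrapper.java.additional.11=-XX:+HeapDumpOnOutOfMemoryError
```

With GC logging in place, you can line up the gc.log timestamps against the wrapper's "JVM appears hung" message and see whether a long collection was in progress when the ping timed out.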