One of my previous customers runs Ignition (I don't remember which version), and they have to restart the server Ignition runs on because the server itself becomes slow (it is a VMware VM). They have allotted 8 GB of RAM, and over weeks of logging Ignition never goes above 900 MB of RAM usage.
When the server slows down, they start to get production stops.
I know that Ignition handles the “recipe” part of the PLC work data; the flow is:
The PLC sends a part type to Ignition; this tag has a change trigger with a script that looks through the recipes entered in Ignition.
Ignition takes the data for this part type from a local MSSQL database, puts it into the designated data block in the PLC, then sets a flag that the recipe is loaded and done.
The PLC does the work.
Rinse and repeat.
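A minimal sketch of that hand-off, with all names made up. In the actual gateway tag-change script this would use system.db.runPrepQuery against the MSSQL database and tag writes to the PLC data block; here plain dicts stand in for both so the logic runs anywhere:

```python
# Sketch of the recipe hand-off flow (all tag/column names hypothetical).
RECIPE_TABLE = {          # stand-in for the MSSQL recipe table
    101: {"speed": 1200, "pressure": 4.5},
    102: {"speed": 900,  "pressure": 3.0},
}

plc_tags = {"PartType": 0, "Speed": 0, "Pressure": 0.0, "RecipeLoaded": False}

def on_part_type_change(part_type):
    """Runs when the PLC writes a new part type to the trigger tag."""
    recipe = RECIPE_TABLE.get(part_type)
    if recipe is None:
        return False                        # unknown part type; leave flag low
    plc_tags["Speed"] = recipe["speed"]     # fill the designated data block...
    plc_tags["Pressure"] = recipe["pressure"]
    plc_tags["RecipeLoaded"] = True         # ...then raise the loaded/done flag
    return True

on_part_type_change(101)
print(plc_tags["RecipeLoaded"], plc_tags["Speed"])   # True 1200
```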
ATM I don't have access to the project, but I wonder where I should start looking. It does not sound like a memory leak, since memory usage is stable over weeks, yet the server slows down.
I am not sure if a scheduled gateway reboot would solve this (a band-aid on a broken leg), or what the best approach would be.
Any ideas?
Hi @pturmel.
When I get access to the server at the customer's site, I will start by changing that flag for the gateway as a first fix.
When using gateway scripts, am I correct to assume that I don't really have to dispose of objects and variables on my own, since Java should GC them when needed?
Also, I was thinking of changing the initial memory to 512 MB and the max to 2048 MB, but I don't think this will change anything, since they allow the server 8 GB of RAM and Ignition uses just below 900 MB.
No, you don't have to dispose of objects. However, you must take care that stale versions of your jython objects don't get stuck in memory by references elsewhere. See this comment for some general advice:
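A minimal illustration of that failure mode (names are made up): the GC can never reclaim an object while some long-lived structure still references it, no matter how many collection cycles run. This is how stale jython objects get stuck:

```python
import gc
import weakref

class RecipeHandler(object):
    """Stand-in for a jython object defined in a gateway script."""
    pass

listeners = []                 # long-lived, module-level registry

h = RecipeHandler()
listeners.append(h)            # an external reference to the handler
ref = weakref.ref(h)           # weak ref lets us observe collection
del h                          # drop our own name for it
gc.collect()
print(ref() is None)           # False: the registry still keeps it alive
listeners[:] = []              # deregister the stale handler...
gc.collect()
print(ref() is None)           # True: now it can be collected
```

The fix is always the same: when a script is replaced, make sure any registries, caches, or listener lists holding the old objects are cleared.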
In my experience, Ignition settles to lower RAM usage with G1GC than with the CMS default. But wherever it settles (the peak of the sawtooth in a usage trend) should be encompassed by your initmemory. I'm not sure there's ever value in having init memory != max memory -- just let Ignition grab the memory you want it to have up front and keep it out of the grubby hands of competing processes.
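Concretely, that advice means setting both wrapper heap properties to the same value in the conf file; the 2048 here is only a placeholder, not a recommendation for this system:

```
# Initial Java Heap Size (in MB)
wrapper.java.initmemory=2048
# Maximum Java Heap Size (in MB)
wrapper.java.maxmemory=2048
```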
Hi, I will soon get access to the server. I was wondering if there are some debug settings I should perhaps enable to trace possible slowdowns as they happen; AFAIK the logging is at the default settings for Ignition.
But I am not sure whether more logging would catch whatever is causing the slowdown, because I do not know if it is Ignition, Java, or the server itself that causes it.
Some of the settings I recommend for G1GC include turning on logging of the garbage collector itself. This is very valuable for identifying stop-the-world pause events.
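As a sketch, GC logging on a Java 8 HotSpot JVM (the era Ignition 7.8 runs on) can be enabled with standard flags like these; the `.N` indexes are examples and must not collide with the other wrapper.java.additional entries already in the conf file:

```
# Example GC-logging flags (adjust the .N indexes to fit your conf)
wrapper.java.additional.7=-XX:+PrintGCDetails
wrapper.java.additional.8=-XX:+PrintGCDateStamps
wrapper.java.additional.9=-Xloggc:logs/gc.log
```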
I got access to the server today, and added
wrapper.java.additional.5=-XX:+UseG1GC
The service then failed to start.
STATUS | wrapper | 2018/09/19 10:07:13 | Launching a JVM...
INFO | jvm 1 | 2018/09/19 10:07:13 | Error: Could not create the Java Virtual Machine.
INFO | jvm 1 | 2018/09/19 10:07:13 | Error: A fatal exception has occurred. Program will exit.
INFO | jvm 1 | 2018/09/19 10:07:13 | Conflicting collector combinations in option list; please refer to the release notes for the combinations allowed
ERROR | wrapper | 2018/09/19 10:07:13 | JVM exited while loading the application.
STATUS | wrapper | 2018/09/19 10:07:18 | Reloading Wrapper configuration...
STATUS | wrapper | 2018/09/19 10:07:18 | Launching a JVM...
INFO | jvm 2 | 2018/09/19 10:07:18 | Error: Could not create the Java Virtual Machine.
INFO | jvm 2 | 2018/09/19 10:07:18 | Error: A fatal exception has occurred. Program will exit.
INFO | jvm 2 | 2018/09/19 10:07:18 | Conflicting collector combinations in option list; please refer to the release notes for the combinations allowed
ERROR | wrapper | 2018/09/19 10:07:18 | JVM exited while loading the application.
STATUS | wrapper | 2018/09/19 10:07:23 | Reloading Wrapper configuration...
STATUS | wrapper | 2018/09/19 10:07:23 | Launching a JVM...
INFO | jvm 3 | 2018/09/19 10:07:24 | Error: Could not create the Java Virtual Machine.
INFO | jvm 3 | 2018/09/19 10:07:24 | Error: A fatal exception has occurred. Program will exit.
INFO | jvm 3 | 2018/09/19 10:07:24 | Conflicting collector combinations in option list; please refer to the release notes for the combinations allowed
ERROR | wrapper | 2018/09/19 10:07:24 | JVM exited while loading the application.
STATUS | wrapper | 2018/09/19 10:07:28 | Reloading Wrapper configuration...
STATUS | wrapper | 2018/09/19 10:07:29 | Launching a JVM...
INFO | jvm 4 | 2018/09/19 10:07:29 | Error: Could not create the Java Virtual Machine.
INFO | jvm 4 | 2018/09/19 10:07:29 | Error: A fatal exception has occurred. Program will exit.
INFO | jvm 4 | 2018/09/19 10:07:29 | Conflicting collector combinations in option list; please refer to the release notes for the combinations allowed
ERROR | wrapper | 2018/09/19 10:07:29 | JVM exited while loading the application.
STATUS | wrapper | 2018/09/19 10:07:34 | Reloading Wrapper configuration...
STATUS | wrapper | 2018/09/19 10:07:34 | Launching a JVM...
INFO | jvm 5 | 2018/09/19 10:07:34 | Error: Could not create the Java Virtual Machine.
INFO | jvm 5 | 2018/09/19 10:07:34 | Error: A fatal exception has occurred. Program will exit.
INFO | jvm 5 | 2018/09/19 10:07:34 | Conflicting collector combinations in option list; please refer to the release notes for the combinations allowed
ERROR | wrapper | 2018/09/19 10:07:34 | JVM exited while loading the application.
FATAL | wrapper | 2018/09/19 10:07:34 | There were 5 failed launches in a row, each lasting less than 300 seconds. Giving up.
FATAL | wrapper | 2018/09/19 10:07:34 | There may be a configuration problem: please check the logs.
STATUS | wrapper | 2018/09/19 10:07:34 | <-- Wrapper Stopped
When I removed the G1GC line (commented it out), the service started once again.
They are using Ignition 7.8.2.
Have you commented/removed the line wrapper.java.additional.1=-XX:+UseConcMarkSweepGC
after adding ...+UseG1GC?
Moreover, I see that there are two lines with .5.
This is my part of the .conf. The commented lines are what I had before.
# Java Additional Parameters
# wrapper.java.additional.1=-XX:+UseConcMarkSweepGC
# wrapper.java.additional.2=-XX:+CMSClassUnloadingEnabled
wrapper.java.additional.1=-XX:+UseG1GC
wrapper.java.additional.2=-XX:MaxGCPauseMillis=100
wrapper.java.additional.3=-Ddata.dir=data
wrapper.java.additional.4=-Dorg.apache.catalina.loader.WebappClassLoader.ENABLE_CLEAR_REFERENCES=false
#wrapper.java.additional.5=-Xdebug
#wrapper.java.additional.6=-Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8000
# Initial Java Heap Size (in MB)
wrapper.java.initmemory=1024
# Maximum Java Heap Size (in MB)
wrapper.java.maxmemory=4096
Hi.
No, the .1 was left uncommented, and the second .5 is commented out.
I will test removing/replacing the .1 with G1GC locally and see if it works for me.
Not sure yet. I asked them to check the server for signs of slowdown when they do their weekly reboot, but previously the slowdown took up to a week to show.
The server has now been running since 2018-09-29, and I just checked in on it; it seems to run fine. Although, since I changed the GC setting, RAM usage has slowly crept up to around 2 GB over the week. Not sure if it will stop around there or keep going up.
Sounds like a memory leak. Switching to G1GC has probably just lengthened the time from boot to breakage, unless it is some caching activity that will stabilize. If it keeps going up, you’ll want to get support involved.
The server has been running nonstop since 2018-09-29, and memory has not risen any further since the last check. So the new G1GC seems to fix what the old collector did not, or handles it much better.