OPC-UA Server Connections Question?

I am noticing that lately the memory usage is higher than usual. It used to stay in the 1GB to 2GB range, but about a week ago it suddenly started oscillating like the picture below. I don't recall any major changes other than a couple of tags added for a new machine.

We are planning to restart the gateway this weekend to see if it goes back to its previous state.

Version: 7.9.6 (b2018012914)
Operating System: Windows Server 2012 R2 | amd64
Java Version: 1.8.0_152-b16

Clicking around, I noticed something in the OPC Server Sessions page that I don't recall seeing before.
Does this look normal? The number of Keep-Alive Publishes keeps growing, and I don't see any tags in the subscription list.

I don’t think there’s anything wrong here. The keep-alive notifications are normal, you probably have an empty scan class or something resulting in an empty subscription.

The memory usage is okay too. The JVM GC is adaptive and will change its behavior when it sees fit. You gave the JVM 4GB to work with, sometimes it will use more, sometimes it will use less. Unless you actually run out of memory and the gateway becomes unresponsive there’s no need for concern.


The screenshot shows that you are configured to use the Concurrent Mark & Sweep garbage collector. Consider reconfiguring to use G1GC.
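For reference, the collector is selected in Ignition's `ignition.conf` wrapper file. A hedged sketch of the change (the `.1` index is illustrative; use whatever numbering your file already has, and replace the existing CMS entry rather than adding a duplicate):

```properties
# ignition.conf -- swap the CMS collector flag for G1GC
# (old entry, commented out)
#wrapper.java.additional.1=-XX:+UseConcMarkSweepGC
wrapper.java.additional.1=-XX:+UseG1GC
```

A gateway restart is required for the change to take effect.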


We are planning to do that, we are currently testing G1GC in a non-production server.


Just as a follow-up, yesterday morning the server had to be rebooted; here are the performance stats after that.

This is how this Ignition instance's memory usage regularly looks. I don't know what happened about a week ago that caused the memory to behave differently.

The CMS garbage collector is really bad about holding on to long-standing garbage if there isn't sufficient memory pressure to force it out. Unfortunately, when it finally gets that pressure and cleans up, its algorithm produces unbearably long pauses. I don't know how long you plan to test G1GC, but it has been the default for new installs of Ignition for quite some time now. It is one of the few things I don't hesitate to change on an install, as the odds of long-term problems with CMS are near 100%. And so far, for me and my clients, G1GC has been flawless.


The graph you see is normal for the CMS garbage collector.

When you start the gateway, you give it a minimum and a maximum memory.

When started, Java will reserve the minimum memory, and start running the app. When it nearly hits the limit, the garbage collector will kick in, and try to clean up as much as possible.
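In Ignition, those two values live in `ignition.conf` as Java Service Wrapper properties. A sketch, assuming the 4GB maximum mentioned above (sizes are in MB):

```properties
# ignition.conf -- heap sizes in MB
# minimum heap, reserved at startup
wrapper.java.initmemory=1024
# maximum heap; the JVM will never grow past this
wrapper.java.maxmemory=4096
```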

In most cases, it can free enough memory that Java doesn't need to ask the OS for more. However, when not enough can be cleaned up (e.g. because some process still needs all the objects), Java will request more memory from the OS. As long as the OS can provide it, that's not a big problem.
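You can watch this growth from inside the JVM. A minimal sketch (my own illustration, not anything Ignition-specific): the "reserved" figure is what the JVM has currently claimed from the OS, and it grows toward the configured maximum as the app allocates, which is exactly the staircase shape in the memory graph.

```java
// Query the JVM's currently reserved heap vs. the configured maximum.
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        long reserved = rt.totalMemory() / mb; // heap currently claimed from the OS
        long max = rt.maxMemory() / mb;        // the configured ceiling; never exceeded
        long used = (rt.totalMemory() - rt.freeMemory()) / mb; // live objects + garbage
        System.out.println("reserved=" + reserved + "MB max=" + max + "MB used=" + used + "MB");
    }
}
```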

But after requesting more memory, the garbage collector is under less pressure and lets the garbage build up more. If the amount of memory it then needs to clean up is huge, this can cause noticeable pauses in all threads.

In theory, Java could release memory back to the OS, but it doesn't do so by default. So once you hit your maximum memory, you won't return to normal unless you restart the service.

G1GC is better because it cleans up much more frequently and doesn't need memory pressure to trigger a cleanup. This usually results in shorter pauses and thus a more responsive system, and it also produces more consistent graphs.

We have also been using G1GC on all our installations for almost 3 years, and we haven't seen performance problems related to memory since we switched (and no other problems were introduced either).