I am working on a module revolving around a realtime tag provider, and I'm using the 'default' tag provider for the sake of this post.
To test performance, I'm repeating cycles of 1) creating 4.5 million memory tags (with randomly generated paths and string values) in the tag provider and writing to them, then 2) removing all the tags recursively, so that when a cycle is complete, the tag provider is empty. After each cycle, I wait for gateway memory use to drop below a threshold (e.g., 1024 MB, but it never gets there).
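For reference, each cycle looks roughly like the sketch below. The createRandomMemoryTags and removeAllTagsRecursively calls are stand-ins for my actual provider calls (not SDK methods), and the memory check just uses the JVM runtime:

// One test cycle: flood the provider with memory tags, tear them all down,
// then wait for heap usage to settle below a threshold.
private static final long MEMORY_THRESHOLD_MB = 1024L;

void runCycle() throws InterruptedException {
    createRandomMemoryTags(4_500_000);    // random paths + random string values (stand-in for my provider calls)
    removeAllTagsRecursively();           // provider should be empty after this (stand-in for my provider calls)

    // Wait for gateway heap use to settle below the threshold (it never does).
    while (usedHeapMb() > MEMORY_THRESHOLD_MB) {
        Thread.sleep(10_000L);
    }
}

long usedHeapMb() {
    Runtime rt = Runtime.getRuntime();
    return (rt.totalMemory() - rt.freeMemory()) / 1024L / 1024L;
}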
Every time I run a cycle, memory peaks while tags are being created/written and removed, then settles after the cycle completes. However, the steady-state memory use after each cycle progressively increases (see image below). After every cycle, I verify that all the tags were in fact removed.
When the module starts, I am 1) subscribing to tag structure change events via the tag manager, 2) subscribing to all tags currently in the 'default' provider, and 3) unsubscribing from all of them when the module shuts down. In response to the tag structure change events, I am 4) subscribing to new tags when they are added, 5) unsubscribing from moved tags and re-subscribing to them at their new location, and 6) unsubscribing from removed tags. The idea is to react to any and all tag change events in the tag provider; the bookkeeping is sketched below.
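Roughly, the listener bookkeeping looks like this (a minimal sketch; MyTagChangeListener is a stand-in for my own TagChangeListener implementation, the exact tag-manager subscribe/unsubscribe calls are elided as comments, and the map uses java.util.concurrent.ConcurrentHashMap). The path-to-listener map is what keeps every subscribe paired with an unsubscribe:

// Track every active subscription so that every subscribe has a matching unsubscribe.
private final ConcurrentMap<String, TagChangeListener> subscriptions = new ConcurrentHashMap<>();

void subscribeTo(String tagPath) {
    TagChangeListener listener = new MyTagChangeListener(tagPath); // stand-in listener class
    if (subscriptions.putIfAbsent(tagPath, listener) != null) {
        return; // already subscribed; avoids the duplicate-subscription log spam
    }
    // ... tag manager subscribe call for (tagPath, listener) goes here ...
}

void unsubscribeFrom(String tagPath) {
    TagChangeListener listener = subscriptions.remove(tagPath);
    if (listener == null) {
        return; // not subscribed; avoids the duplicate-unsubscribe log spam
    }
    // ... tag manager unsubscribe call for (tagPath, listener) goes here ...
}

void onTagMoved(String oldPath, String newPath) {
    unsubscribeFrom(oldPath);
    subscribeTo(newPath);
}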
To help identify where the memory issue is coming from, I'm dumping memory allocation by thread via the code below:
// Get and enable tracking of per-thread memory allocation.
// Requires: import java.lang.management.ManagementFactory;
//           import java.lang.management.ThreadInfo;
//           import com.sun.management.ThreadMXBean; (the cast below targets this type)
ThreadMXBean threadMXBean = (ThreadMXBean) ManagementFactory.getThreadMXBean();
if (!threadMXBean.isThreadAllocatedMemorySupported()) {
    return;
}
threadMXBean.setThreadAllocatedMemoryEnabled(true);
for (long threadId : threadMXBean.getAllThreadIds()) {
    ThreadInfo threadInfo = threadMXBean.getThreadInfo(threadId);
    if (threadInfo == null) {
        continue; // thread terminated between getAllThreadIds() and here
    }
    String threadName = threadInfo.getThreadName();
    long allocatedBytes = threadMXBean.getThreadAllocatedBytes(threadId);
    if (allocatedBytes <= 0) {
        continue;
    }
    long allocatedMb = allocatedBytes / 1024L / 1024L;
    if (allocatedMb > 100) {
        _logger.info("High Memory-Allocation Thread | Allocated [MB]: " +
            String.format("%,d", allocatedMb) + " | Name: " + threadName +
            " | ID: " + threadId);
    }
}
When compared against the gateway memory trend (image below), the absolute numbers are (obviously) far too high, since getThreadAllocatedBytes reports cumulative allocation over a thread's lifetime rather than retained memory. However, in terms of relative distribution, they indicate that the issue may be coming from the subscription models and especially the default tag provider.
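Because the counters are cumulative, I'm also considering snapshotting per-thread allocation at the start of each cycle and logging the delta at the end, which should make the per-cycle distribution clearer. A rough sketch (same com.sun.management.ThreadMXBean as above; needs java.util.HashMap/Map):

// Snapshot per-thread allocation so each cycle's delta can be reported
// instead of lifetime totals.
private final Map<Long, Long> allocationAtCycleStart = new HashMap<>();

void snapshotAllocations(ThreadMXBean threadMXBean) {
    allocationAtCycleStart.clear();
    for (long threadId : threadMXBean.getAllThreadIds()) {
        allocationAtCycleStart.put(threadId, threadMXBean.getThreadAllocatedBytes(threadId));
    }
}

long allocationDeltaBytes(ThreadMXBean threadMXBean, long threadId) {
    long now = threadMXBean.getThreadAllocatedBytes(threadId);
    long before = allocationAtCycleStart.getOrDefault(threadId, 0L);
    return (now < 0 || before < 0) ? 0L : now - before;
}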
To be clear, I am being very diligent about subscribing and unsubscribing to/from tags.
As illustrated in the image below, either references aren't being released for garbage collection to pick up, or something else wonky is going on here.
If I don't restart anything, or if I restart just my module without restarting the gateway, memory use sits at the same level and never decreases; the more cycles I run, the more memory climbs, until the heap maxes out and the gateway crashes. If I restart the gateway itself, memory resets to ~100 MB, but as soon as I start running cycles again it progressively pumps memory back up every cycle without ever dropping back to a reasonable level.
Any pointers on how to better diagnose what is going on here?
I don't see anything in the SDK that would let me inspect or purge subscriptions, only subscribe and unsubscribe. The only indicator is that when you try to subscribe to something you're already subscribed to, or unsubscribe from something you're not subscribed to, the gateway spams the log about it.
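For now, the best I can do is audit my own bookkeeping, e.g. confirming after each cycle that the listener map from the sketch above drains back to zero:

// After each cycle, confirm my own listener bookkeeping has drained; if this reads
// zero but gateway memory keeps climbing, the retained references are elsewhere.
void logSubscriptionCount() {
    _logger.info("Active subscriptions tracked by module: " + subscriptions.size());
}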