Memory Use for Gateway

Both my frontend and backend Gateways show the typical sawtooth pattern in their memory usage.

But, just curious: does it mean something if that pattern is a "reverse" sawtooth, where the leading edge is far steeper than the trailing edge? I'd expect the trend to have a gradually increasing slope on the leading edge and then a sharp drop on the trailing edge, no?

Screenshot from 2024-10-07 09-50-54

That is very strange. Do you have some particularly high load running every 15 minutes?

Or, perhaps, some task that floods your store-and-forward system every 15 minutes? (One that barely drains in time for the next flood.)

This particular trend I shared is for the frontend GW. It has about 35 Vision clients connected at any time. There are no Store & Forward events and no scripts running on a 15-minute cycle. I have only one GW script, which runs each night at midnight and performs a SQL query.

I thought maybe I had too much memory allocated, so I dropped the heap allocation by a couple of GB, but that didn't seem to make any difference. If I look at this GW's redundant partner, it does demonstrate the "normal" sawtooth pattern, but it's also sitting idle without any client connections.

CPU usage is minimal, averaging about 9%.

I see this same pattern on my backend GW, but again, I feel its CPU usage is minimal (about 45%), considering I'm connected to about 15 PLC devices (mostly AB ControlLogix).

I want to be clear that the system doesn't appear to have any issues caused by this. I have zero clock drift events and no significant error or warning messages in my logs. This post was mostly out of curiosity.

Interestingly, I lowered my heap allocation another 2 GB and now I have a "normal" sawtooth pattern.

Screenshot from 2024-10-07 11-19-30

In the attached image, you can see where I made the change right before 11:00. I'll monitor and see if this change affects performance, but CPU is still holding around 9%.

For this test, I've gone from a heap allocation of 10GB to 6GB.

It would be great if any IA folks could weigh in on why it seems that allocating too much memory causes this usage pattern. Again, I'm not sure if it's affecting performance, but it's been nagging at me for quite some time.
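Not an IA answer, but a plausible JVM-level explanation: with a very large heap, the garbage collector has no pressure to collect promptly, so it can let usage drift and reclaim memory on its own schedule, which distorts the classic sawtooth; with a smaller heap, collections happen more regularly and the familiar shape returns. The numbers on the gateway's memory chart ultimately come from the JVM itself, and you can sample them directly with the standard `java.lang.management` API. A minimal sketch (the class name `HeapCheck` is made up for illustration):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapCheck {
    // Returns {used, committed, max} heap sizes in bytes, straight from
    // the JVM's memory MXBean. Note: max may be -1 if it is undefined.
    public static long[] heapUsedCommittedMax() {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        return new long[] { heap.getUsed(), heap.getCommitted(), heap.getMax() };
    }

    public static void main(String[] args) {
        long[] h = heapUsedCommittedMax();
        long mb = 1024 * 1024;
        System.out.println("used=" + h[0] / mb
                + "MB committed=" + h[1] / mb
                + "MB max=" + h[2] / mb + "MB");
    }
}
```

Watching `used` over time from this API, alongside the status page, would show whether the odd shape is real GC behavior or just how the chart renders it.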


I have found the memory chart on the performance page to be kind of iffy. I added the gateway memory tag to a historian, and when I plot it, it shows a different story.

This is what I see on the status page:
[image]

But this is what the plot looks like:
[image]

This is not the exact same time slice, but it's close enough to see that the data shows much more of the sawtooth you would expect. I don't know if this is what's happening on your gateway; this is just my observation of the performance page chart.
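If you want a cross-check that doesn't depend on either chart, the JVM can be polled directly on a timer and the samples compared against both the status page and the historian plot. A minimal standalone sketch (class and method names are hypothetical, not any gateway API):

```java
import java.util.ArrayList;
import java.util.List;

public class HeapTrend {
    // Used heap in bytes at this instant.
    public static long usedHeap() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    // Collect n samples of used heap, sleeping intervalMs between them.
    public static List<Long> sample(int n, long intervalMs) {
        List<Long> out = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            out.add(usedHeap());
            try {
                Thread.sleep(intervalMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Five quick samples; in practice you'd log at the same rate as
        // the historian tag and compare the two trends side by side.
        for (long v : sample(5, 100)) {
            System.out.println(v / (1024 * 1024) + " MB");
        }
    }
}
```

If the independent samples match the historian plot rather than the status page chart, that points at the chart's rendering, not the gateway's actual memory behavior.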

Good point and something I will certainly check.

Thanks!