Recurring Intermittent Perspective Freeze

Hello everyone,

We’re currently investigating a recurring Ignition Perspective freeze affecting our environment, and we’re hoping to get some technical guidance or shared experiences from the community.

Environment

  • Ignition Version: 8.1.45 (b2025010709)

  • Operating System: Windows Server 2022 (amd64), 6 vCPU, 16 GB RAM

  • Java Version: 17.0.13+11-LTS (Zulu)

  • Memory Allocated: ~4 GB (heap)

  • CPU Usage: ~5%

  • Database Connections: 5/5 (SQL Server)

  • Tags: ~35,821

  • Redundancy: Active / Connected (Cold Standby)

  • Clients: Perspective sessions via Citrix (Microsoft Edge)

Since mid-2025, our operators have experienced intermittent Perspective freezes — the screen remains visible but becomes unresponsive (no clicks or navigation possible). These freezes typically clear up after a few minutes or once the browser or Citrix session is refreshed.

The problem consistently occurs around 10:00 AM, though it doesn’t crash the Gateway. During each freeze, direct Gateway access remains fully functional, and if a new Citrix session or browser instance is opened immediately, Ignition loads and operates normally. This confirms that the Gateway, Citrix infrastructure, and network connectivity remain stable during the event. The problem appears to be confined to active Perspective sessions within existing Citrix/Edge instances, reinforcing that the issue likely resides at the application or browser runtime level, rather than at the system or infrastructure level.

Over the past several days, the Java process (“Zulu Platform x64 Architecture”) has shown a steady increase in memory usage, rising from around 3.3 GB after a reboot to more than 6 GB within four days, while CPU usage remains low.

We also captured the screenshot below from the Gateway’s Status → Overview page alongside Windows Task Manager. At that time, Ignition reported about 3.3 GB of used memory inside the gateway, while Task Manager showed the Zulu Platform x64 Architecture process using over 6.6 GB. CPU usage was around 9–15%, and no other processes appeared to be using excessive resources. The gateway uptime was four days, and memory usage had roughly doubled since the last reboot.

No critical errors appear in the gateway logs at the time of the freeze. Thread diagnostics show many TIMED_WAITING and RUNNABLE threads, mainly under perspective-worker and platform-executor. When the freeze occurs, the Ignition Gateway remains accessible, and Perspective loads normally if opened in a new Citrix session or another browser. This suggests that the issue only affects active Perspective sessions already running when the freeze happens.

We’ve rebooted the server to reset resource usage and monitored the trend over several days, confirming that memory consumption continues to grow between restarts. System performance (CPU, disk, and network) appears stable during the freeze, and no faults are seen in PLC or SQL connectivity. We’re setting up a simple system usage display in Perspective to log and trend CPU, memory, and thread counts over time. We’re also reviewing gateway timer and scheduled scripts to see if any long-running or asynchronous operations might be contributing to the issue, but we haven’t found any yet.

At this point, we’re trying to determine whether the observed memory growth is simply normal Java garbage collection behavior or a sign of a potential leak within the Ignition runtime. We’re also interested to know if others have seen similar cases where Perspective sessions freeze, yet the gateway and new sessions remain responsive. Lastly, we’re considering enabling Java Flight Recorder or detailed GC logging to capture more information, but we’d like to confirm whether that’s the best next step for diagnosing what’s retaining memory between restarts.
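For reference, both GC logging and Flight Recorder can be enabled with extra JVM arguments in ignition.conf. The index numbers below must continue your existing `wrapper.java.additional` numbering, and the file paths and sizes are only illustrative:

```
# Unified GC logging (JDK 11+ -Xlog syntax; rotates 5 x 10 MB files)
wrapper.java.additional.10=-Xlog:gc*:file=logs/gc.log:time,uptime,level,tags:filecount=5,filesize=10M

# Continuous Flight Recorder, dumped on JVM exit (low overhead)
wrapper.java.additional.11=-XX:StartFlightRecording=maxsize=250M,disk=true,dumponexit=true,filename=logs/ignition.jfr
```

A JFR recording opened in JDK Mission Control will show which object types and allocation sites dominate the retained heap, which is usually the fastest way to answer the "leak or normal GC" question.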

1 Like

Regarding your memory usage concern:
Java generally does not return memory to the OS once it has allocated it (outside of shutting down the Java process). Java’s internal garbage collection recycles the allocated memory automatically.

For this reason, it is recommended to set your initial heap and max heap sizes in the config file to be the same value, so that Ignition allocates the entire expected amount of memory on startup.
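In ignition.conf, that looks like the following (4 GB shown as an example; the wrapper keys are the standard ones Ignition uses, with values in MB):

```
# ignition.conf — set initial and max heap to the same value
# so the full heap is allocated at startup (values in MB)
wrapper.java.initmemory=4096
wrapper.java.maxmemory=4096
```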

The Zulu process in Task Manager reflects the total allocated heap size (plus some overhead); Ignition’s status page reflects the amount of the allocated heap actually in use.

Unless you are seeing a trend on the Ignition memory graph where you are slowly consuming all allocated memory and resulting in a gateway hang/shutdown, you are likely okay. Still, it's good to be vigilant and try to avoid common causes of memory leaks.
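One way to tell a normal sawtooth pattern from a leak is to look at the heap *floor*: the used-memory low points right after each garbage collection. If those troughs trend upward across days, something is being retained; if they stay flat, the growth is just the normal allocate-and-collect cycle. A minimal sketch of that heuristic (the sample numbers are made up):

```python
# Leak heuristic: fit a least-squares line to post-GC heap "troughs".
# A rising floor suggests retained objects; a flat floor with a
# sawtooth between collections is normal GC behavior.

def trough_slope(samples_mb):
    """Least-squares slope (MB per sample) of post-GC heap lows."""
    n = len(samples_mb)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples_mb) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples_mb))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

# Hypothetical daily post-GC lows, in MB.
healthy = [1210, 1190, 1225, 1205, 1198]   # flat floor: normal
leaking = [1200, 1450, 1700, 1980, 2260]   # rising floor: investigate

print(trough_slope(healthy))  # near zero
print(trough_slope(leaking))  # clearly positive (265.0 MB/day here)
```

The trough values themselves can be read off the memory trend on the gateway's Status page or out of GC logs.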

A good thread to read:

2 Likes

If it’s consistently happening at 10am then first place to check would be if there are any gateway events, scripts, IT automated backups, or other items that execute at that time.

6 Likes

Your configured max heap appears to be 8 GB. You cropped off the memory trend from the gateway page, which is more important information than an instantaneous value. What’s that trend showing? (First extend it to show as far back as it goes.)

Note also that the memory shown in Task Manager will be more than what’s configured as the max in ignition.conf (assuming initial is set to max), because the JVM uses memory for things other than the heap.
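If you want to see where that non-heap memory is going, the JVM’s Native Memory Tracking can break it down by category (metaspace, thread stacks, code cache, etc.). It adds a small overhead, and the wrapper index below is illustrative:

```
# In ignition.conf (then restart the gateway):
wrapper.java.additional.12=-XX:NativeMemoryTracking=summary

# Then, against the running gateway JVM's process id:
jcmd <pid> VM.native_memory summary
```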

Hi, did you ever find a solution to this? I seem to be having the same issue:

Environment

  • Ignition Version: 8.3.0-rc1 (b2025082609)

  • Operating System: Linux Debian 12.0.0

  • Memory Allocated: 4 GB (Max and initial set the same)

  • CPU Usage: ~5%

  • Database Connections: 2/2 (SQL Server)

  • Tags: Less than 10 000 because it is a development machine

  • Redundancy: NA

  • Clients: Perspective sessions via Chrome

The Perspective client stops opening popups, buttons that previously wrote to tags successfully stop working, and navigation buttons do nothing; there are no logs on the gateway.

It happens randomly, but tag writes seem to trigger the freeze most often; simply leaving the session open can also trigger it.

Thank you in advance for any help.

Any number of things could cause something like that, but since it sounds similar to the issue we recently had, I’ll share what our solution was.

We started seeing the system freeze up around roughly the same time every morning, and I had been noticing an unusually large number of Timed Waiting threads throughout the day (~5,000). Our logs showed that tag reads/writes were timing out during the freeze up, which suggested a network issue. We thought maybe a faulty switch was to blame, given that it is 20 years old, but all indications were that the switch was working properly.

It seems the problem was that our server had two IP addresses assigned to the same NIC, which can be problematic. That was always our setup, so I’m not sure why it would suddenly become an issue, but as soon as we configured the second IP to use a separate NIC, the problem went away. That was a few weeks ago, and since then we haven’t had a single freeze up, and our Timed Waiting threads tend to hover around 500 instead of 5,000.

1 Like

Thanks Ryan, will look into that!

Okay, so after some digging and thinking, I noticed that some pages are immune to this error. So I started moving objects from the troublesome pages to the working pages to try to identify the troublesome component or view. Moving components one by one showed that no single component caused the error; it was the number of objects I moved. It is almost as if overloading the screen with objects triggers the issue.

I then considered the size of the objects I was loading on the page. The troublesome pages contained views using a Drop Config bound to an entire UDT instance, and those UDT instances are particularly large compared to the objects used on the screens that had no issues. So I changed the Drop Config to a path, then changed the bindings of my custom properties to use only the tag values that are critical to that view rather than loading the entire UDT instance. It seems this has worked! I know this is mentioned in the Ignition getting started guide, but I was not aware it would cause this error with no particular warning.
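For anyone hitting the same thing, the pattern is roughly: pass a tag path string into the view instead of the whole UDT instance, then use indirect tag bindings for only the members the view needs. The names below are illustrative, not from the original post:

```
View parameter:
  tagPath (string), e.g. "[default]Area1/Pump01"

Component bindings (Perspective tag binding, Indirect mode):
  {view.params.tagPath}/Status
  {view.params.tagPath}/Setpoint
```

This keeps the session from subscribing to every member of a large UDT for every embedded view on the page.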

So my new question is: is there something I can look at to see what “load” my views put on a session at runtime?

1 Like

For future reference, best practice is to never bind an entire UDT; bind only tag paths on each component. You have now seen why.

2 Likes