Large tag changes cause gateway problems

I have one large project with about 17,000 tags (too many, but I haven’t had a chance to go through and figure out which ones are actually needed). I was importing a CSV export of these tags today and kept running into problems: whenever an action takes a long time, such as importing a big CSV or renaming a folder with a lot of tags under it, the gateway restarts.
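One workaround while the import is this slow might be to break the export into smaller files so that no single import runs long enough to trip a timeout. A minimal sketch in plain Python (run outside Ignition; the helper name `split_csv`, the filenames, and the chunk size are my assumptions, not anything built into the product):

```python
# Hypothetical helper: split a large tag-export CSV into smaller files,
# repeating the header row on each chunk so every piece imports on its own.
import csv

def _write_chunk(path, idx, header, rows):
    out = "%s.part%d.csv" % (path, idx)
    with open(out, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(header)   # each chunk keeps the original header
        w.writerows(rows)
    return out

def split_csv(path, rows_per_chunk=2000):
    written = []
    with open(path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        chunk, idx = [], 0
        for row in reader:
            chunk.append(row)
            if len(chunk) >= rows_per_chunk:
                written.append(_write_chunk(path, idx, header, chunk))
                chunk, idx = [], idx + 1
        if chunk:  # leftover rows smaller than one full chunk
            written.append(_write_chunk(path, idx, header, chunk))
    return written
```

The chunks can then be imported one at a time, so each operation stays short even on a slow machine.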

I can’t remember whether it happened while importing or while renaming the big folder, but I did get an error: “GatewayException: Results for asynchronous rpc call were not received within the specified amount of time [60000 ms] for task ‘81d72eb9-1f50-4450-9249-5df8ecfef10e’”.

Also, in the logs, I find these lines:
ERROR | wrapper | 2013/08/16 16:56:52 | JVM appears hung: Timed out waiting for signal from JVM.
ERROR | wrapper | 2013/08/16 16:56:52 | JVM did not exit on request, terminated

I assume this is because the action is so big that it takes too long. Is there any way to increase this 60s timeout? I can’t find anything in the gateway’s configuration settings.


It’s definitely a bit stranger than just a “big operation”, since that restart is coming from the service wrapper, which manages the service and keeps an eye on the process. The wrapper is essentially reporting that the process appears completely blocked. Given how tag operations normally run, I wouldn’t expect that from simply inserting a lot of tags.

The one exception might be if the machine only has a single core, such as a VM with only one processor. What are the specs of the machine Ignition is running on?

The other potentially useful thing would be to launch the Gateway Control Utility (GCU) and click the “thread dump” button while the large operation is in progress, right before the restart. That may show us exactly where the hold-up is.


This is on a virtualized machine with 1 CPU and 1 GB of memory. Does that make a difference?

This is probably premature. I’m planning to upgrade to 7.5.10 or 7.6.x soon and probably should have waited until after that to post this.

That does matter. It’s under our minimum recommended specs for real hardware (dual-core, 2 GB RAM), and on top of that it’s virtualized, which adds its own overhead.


Yes, the single core ends up getting blocked by the changes and can’t respond to the monitoring process, which causes it to think the gateway is “dead”. You should see if you can allocate at least one more core.
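If adding a core isn’t possible right away, one stopgap might be to give the wrapper more patience before it declares the JVM hung. This is an assumption on my part based on the standard Java Service Wrapper settings Ignition ships with; check the property names against the wrapper configuration file (ignition.conf on a default install), and note this only affects the wrapper’s hung-JVM detection, not the 60000 ms RPC timeout in the earlier error:

```properties
# In ignition.conf (Java Service Wrapper configuration).
# Values are in seconds; these numbers are illustrative, not the defaults.
wrapper.ping.interval=15
# How long the wrapper waits for a ping response before it decides
# the JVM is hung and restarts it:
wrapper.ping.timeout=300
```

Raising the timeout just masks the symptom, though; the real fix is giving the gateway enough CPU that it can keep answering the wrapper while the tag operation runs.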