Client Latency with AWS-Hosted Ignition and RDS

A few months ago we migrated a local Ignition instance to AWS EC2 and RDS (both xlarge), running Windows Server 2016 and MySQL. What kind of latency should we be expecting? The system is sometimes quite responsive, but there are often 3- to 4-second pauses in the client user interface. This could be a pause after pressing any component that runs a script, sometimes after navigation components. We did not experience this when running locally. I’ve watched the slow_query_log in MySQL and all queries are under 200ms (the majority under 10ms). No obvious errors in the Ignition logs. These are just pauses we encounter during normal GUI navigation and interaction… button presses. Our local network is gigabit fiber and speed tests are very fast (60+ Mbps down and up).
Just wondering if the pauses are the result of normal handshakes between the client and server. What other things could I check? Customer is frustrated.

Probably garbage collection on the client. Endeavor to use G1GC everywhere. See these articles for server configurations.
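
On the gateway side, G1GC goes into ignition.conf as an additional JVM argument. A sketch of what that looks like (the index 3 is just an example; use the next unused wrapper.java.additional index in your own file, and comment out any existing ConcMarkSweep entries if present):

```properties
# data/ignition.conf — switch the gateway JVM to the G1 collector
wrapper.java.additional.3=-XX:+UseG1GC
```

Restart the gateway service after editing the file for the change to take effect.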

You can make the client (or a designer) use G1GC by adding the following to a native client launcher command line:
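Something along these lines (the launcher path and project name here are placeholders; adjust for your install):

```shell
# Native Client Launcher shortcut target — jvm-args goes at the end
"C:\Program Files\Inductive Automation\Client Launcher\clientlauncher.exe" application=MyProject jvm-args=-XX:+UseG1GC
```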


Also make sure you are allowing your clients to use enough memory.
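Client max heap is set per project in the Gateway (Project Properties > Client > Launching, if memory serves), or you can override it the same way via jvm-args, e.g. (2048m is just an example value):

```shell
jvm-args="-XX:+UseG1GC -Xmx2048m"
```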

What are the scripts / button presses doing? Do they require any communication with the gateway?


No specific action in particular. We just notice the pause when we’re trying to press the next button and nothing happens. Some scripts change component properties, some do navigation, some run SQL updates, and some change query parameters to reload tables/graphs. Once these pauses start happening during a session, they keep occurring. And by the way, it usually takes 4 or 5 minutes for them to begin.
I’m curious about this garbage collection. Sounds like it could be in play here.

It’s possible, but you said this didn’t happen locally. Now you’ve gone and put the internet in the path between client and gateway and you’re describing what sounds like intermittent high latency between the two, so that wouldn’t be my first suspicion.

I agree, the internet is a source of variable latency. But we also upgraded from 7.8 to 7.9 at the same time, and upgraded Java from 7 to 8 as well. So those are additional variables that we had hoped (fingers crossed) would not introduce glitches. I’m just wondering if we need to expect this latency as a permanent condition, or if we can mitigate it with some best practices I’ve overlooked. It’s pretty bad sometimes. A traceroute to the server seems quite slow, with many hops. Also, when watching a client connection in the gateway status monitor, “Last Comm” is hanging out at 600-900ms. I guess these can add up.
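If you want to put a number on that network path outside of Ignition, a quick sketch in plain Python (not Ignition scripting; the hostname and port 8088 in the usage line are assumptions for a default gateway):

```python
import socket
import time

def tcp_connect_latency(host, port, attempts=5):
    """Average TCP connect time to host:port, in milliseconds.

    A rough proxy for round-trip latency between a client machine
    and the gateway, measured at the socket level.
    """
    samples = []
    for _ in range(attempts):
        start = time.perf_counter()
        # Open and immediately close a connection; the connect
        # handshake is what we are timing.
        with socket.create_connection((host, port), timeout=5):
            pass
        samples.append((time.perf_counter() - start) * 1000.0)
    return sum(samples) / len(samples)

# Example (hypothetical gateway address):
# print(tcp_connect_latency("my-gateway.example.com", 8088))
```

If this consistently reports hundreds of milliseconds, the pauses are likely dominated by the network path rather than garbage collection.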

You can’t expect a deterministic response from an Internet connection! Also, I don’t know how Ignition clients get data from the server: whether it’s complete tag values, exception-based updates, or only the data for the tags the client actually displays. Is it AJAX-style client requests? All of these will also affect client response. Maybe Kevin can shed some light on it.

I’d like to try this, but where/how do you add this to the command line?
It’s currently:
"C:\Program Files\Java\jre1.8.0_181\bin\javaws.exe" -localfile -J-Djnlp.application.href= "C:\Users\VM7P\AppData\LocalLow\Sun\Java\Deployment\cache\6.0\8\13d13d88-401d3ffe"

That’s the Java Web Start shortcut. You need to download and use the Native Client Launcher instead. The jvm-args option can go at the end of the NCL command line.