Processor usage

We have a very simple test app with 6 tags, 3 of which are UDP messages on a 100 ms scan class. A tag change script for these tags performs a simple database retrieval and an insert into a log table. A client table requeries the log table every second to get the last hour's worth of log entries and displays them. After about an hour the processor climbs to ~99% and stays there until the gateway service is stopped and restarted. It doesn't matter if we stop all the clients and designers; the processor stays at 99%. It's still usable but slooowww. HELP. BTW, the SQL database is set to autoshrink and is not very large…
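For reference, here is a minimal sketch of the pattern described above, using an in-memory sqlite3 database as a stand-in for the gateway's SQL database. The table and column names (`log`, `t_stamp`, `tag`, `value`) are hypothetical; the real tag change script runs inside the gateway.

```python
import sqlite3
import time

# stand-in for the gateway's SQL database connection
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (t_stamp REAL, tag TEXT, value REAL)")

def on_tag_change(tag, value):
    # the tag change script: a simple insert into the log table
    conn.execute("INSERT INTO log (t_stamp, tag, value) VALUES (?, ?, ?)",
                 (time.time(), tag, value))
    conn.commit()

def last_hour_of_entries():
    # the client table requeries this every second
    cutoff = time.time() - 3600
    return conn.execute(
        "SELECT t_stamp, tag, value FROM log WHERE t_stamp >= ? ORDER BY t_stamp",
        (cutoff,)).fetchall()

on_tag_change("udp1", 42.0)
rows = last_hour_of_entries()
print(len(rows))  # 1
```

Note that the requery is bounded by a time window, so each poll should return at most an hour of rows and finish quickly.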

If you look at the Threads tab in the Console area of the gateway, there is a column that tells you what CPU percentage each thread is using… anything in there look guilty?

It appears to be the HTTP threads. The problem starts when we attach HTTP clients, but the processor stays pegged even after we kill all the clients.

Can you copy the stack trace for the high-CPU HTTP thread? Hit the expand/detail link.

Locking
Owns monitor java.util.zip.ZStreamRef@43b9e6fc
Owns monitor java.util.zip.GZIPOutputStream@3fe82219
Owns monitor java.io.OutputStreamWriter@30f8211a
Owns monitor java.io.OutputStreamWriter@30f8211a
Owns monitor java.io.OutputStreamWriter@30f8211a
Owns monitor java.io.BufferedWriter@23440701
Stack
java.util.zip.Deflater.deflateBytes(Native Method)
java.util.zip.Deflater.deflate(Unknown Source)
java.util.zip.DeflaterOutputStream.deflate(Unknown Source)
java.util.zip.DeflaterOutputStream.write(Unknown Source)
java.util.zip.GZIPOutputStream.write(Unknown Source)
sun.nio.cs.StreamEncoder.writeBytes(Unknown Source)
sun.nio.cs.StreamEncoder.implWrite(Unknown Source)
sun.nio.cs.StreamEncoder.write(Unknown Source)
java.io.OutputStreamWriter.write(Unknown Source)
java.io.BufferedWriter.flushBuffer(Unknown Source)
java.io.BufferedWriter.write(Unknown Source)
java.io.PrintWriter.write(Unknown Source)
com.inductiveautomation.ignition.gateway.servlets.gateway.GatewayResponseStreamingDatasetWriter.writePlain(GatewayResponseStreamingDatasetWriter.java:198)
com.inductiveautomation.ignition.gateway.servlets.gateway.GatewayResponseStreamingDatasetWriter.write(GatewayResponseStreamingDatasetWriter.java:135)
com.inductiveautomation.ignition.gateway.servlets.gateway.functions.RunQuery.run(RunQuery.java:146)
com.inductiveautomation.ignition.gateway.servlets.gateway.functions.AbstractDBAction.invoke(AbstractDBAction.java:76)
com.inductiveautomation.ignition.gateway.servlets.Gateway.doPost(Gateway.java:398)
javax.servlet.http.HttpServlet.service(HttpServlet.java:637)
javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
com.inductiveautomation.ignition.gateway.bootstrap.MapServlet.service(MapServlet.java:85)
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286)
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:845)
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
java.lang.Thread.run(Unknown Source)

Mmm, that one's not likely the culprit. It's only 15%, and it's actually doing something that will finish.

Maybe you could call into support and let them poke around.

… unless it doesn’t.

That function is executing a history query. As far as I know, you guys aren’t doing any type of history query that should take longer than 100ms to finish. So, maybe there’s some combination of conditions that is leading to a query that never finishes.

As the query runs, results are streamed back to the client. That is what’s taking the CPU, in this case. If this happens fairly reliably, you can check the following:

  1. When it’s currently happening, you can verify that the problem is what I think it is by going to Console > Levels and turning “History.SQLTags.QueryResultWriter” to TRACE. You should see rows written non-stop to the console. If so, copy and paste a few of them here; it might be useful to see what is being written. Turn the logger back to INFO when done (or DEBUG, for the next step).
  2. If you can trigger it to happen fairly easily, set the “History.SQLTags” logger to DEBUG. Play around until you see it happen, and export the logs. We’ll be able to look at the logs and see how the query parameters for that query differed from the other ones.
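To illustrate why the thread in the stack trace shows up deep inside `Deflater.deflateBytes`: the gateway streams query results back through a gzip-compressed writer, so the CPU stays busy for as long as rows keep arriving. A rough sketch of that pattern in Python (the row contents are made up; a query that never finishes would correspond to the loop never terminating):

```python
import gzip
import io

# compress a stream of result rows, as the streaming dataset writer does
buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode="wb") as gz:
    for i in range(10000):  # an unbounded query would keep this loop spinning
        gz.write(("row %d,%f\n" % (i, i * 0.1)).encode("utf-8"))

compressed = buf.getvalue()
```

Each row written costs a little compression work, which is harmless for a normal query but pegs a core if the result set never ends.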

Regards,