Gateway log: Error executing query against logs database

Hi,
Today I ran into this error in the gateway log list. It seems the internal SQLite database has crashed.
The list is full with 260k log entries, and the log view flashes between an empty list and the following image every 10 seconds.
Is there any way to limit the number of log entries, or to delete them?
I'm using v8.0.6.

The details of that error would be helpful; if you can’t get them from the gateway UI you can always go to the wrapper.log on the gateway’s filesystem.

If the logging database is somehow corrupted, you can just delete the system_logs.idb file (it lives right next to the wrapper.log) and restart your gateway.

Hi Paul

Here is one of the log files:
https://drive.google.com/file/d/1MAFREf34kK2WDMH1DaTJq9cnKocvEI8r/view?usp=drivesdk


When I delete all of the files in the Logs folder, they are automatically recreated by the system with the same size after each reset.
Is there any way to delete all of the old log files?

INFO   | jvm 1    | 2020/04/08 03:23:36 | Caused by: org.sqlite.SQLiteException: [SQLITE_BUSY]  The database file is locked (database is locked)
INFO   | jvm 1    | 2020/04/08 03:23:36 | 	at org.sqlite.core.DB.newSQLException(DB.java:941)
INFO   | jvm 1    | 2020/04/08 03:23:36 | 	at org.sqlite.core.DB.newSQLException(DB.java:953)
INFO   | jvm 1    | 2020/04/08 03:23:36 | 	at org.sqlite.core.DB.throwex(DB.java:918)
INFO   | jvm 1    | 2020/04/08 03:23:36 | 	at org.sqlite.core.NativeDB.prepare_utf8(Native Method)
INFO   | jvm 1    | 2020/04/08 03:23:36 | 	at org.sqlite.core.NativeDB.prepare(NativeDB.java:134)
INFO   | jvm 1    | 2020/04/08 03:23:36 | 	at org.sqlite.core.DB.prepare(DB.java:257)
INFO   | jvm 1    | 2020/04/08 03:23:36 | 	at org.sqlite.core.CorePreparedStatement.<init>(CorePreparedStatement.java:47)
INFO   | jvm 1    | 2020/04/08 03:23:36 | 	at org.sqlite.jdbc3.JDBC3PreparedStatement.<init>(JDBC3PreparedStatement.java:30)
INFO   | jvm 1    | 2020/04/08 03:23:36 | 	at org.sqlite.jdbc4.JDBC4PreparedStatement.<init>(JDBC4PreparedStatement.java:19)
INFO   | jvm 1    | 2020/04/08 03:23:36 | 	at org.sqlite.jdbc4.JDBC4Connection.prepareStatement(JDBC4Connection.java:35)
INFO   | jvm 1    | 2020/04/08 03:23:36 | 	at org.sqlite.jdbc3.JDBC3Connection.prepareStatement(JDBC3Connection.java:241)
INFO   | jvm 1    | 2020/04/08 03:23:36 | 	at org.sqlite.jdbc3.JDBC3Connection.prepareStatement(JDBC3Connection.java:205)
INFO   | jvm 1    | 2020/04/08 03:23:36 | 	at com.inductiveautomation.ignition.gateway.logging.SQLiteAppenderReader.buildPrepStatement(SQLiteAppenderReader.java:109)
INFO   | jvm 1    | 2020/04/08 03:23:36 | 	... 42 common frames omitted

SQLite (most of the time) only allows one connection, because it’s just a DB file and has to maintain locking guarantees. Are you doing anything in scripting that might be interacting with the logging database? Running any other program on your computer that might be connecting to these files?
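The SQLITE_BUSY failure mode is easy to reproduce outside of Ignition. Here is a minimal Python sketch (using a throwaway database file, not Ignition's actual `system_logs.idb`) showing a second connection failing immediately while the first holds an exclusive lock, which is the same error shown in the stack trace above:

```python
import os
import sqlite3
import tempfile

# Throwaway database file; this stands in for system_logs.idb.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

# isolation_level=None puts the connection in autocommit mode, so the
# explicit BEGIN EXCLUSIVE below is passed straight through to SQLite.
writer = sqlite3.connect(path, isolation_level=None)
writer.execute("CREATE TABLE logs (id INTEGER, msg TEXT)")
writer.execute("BEGIN EXCLUSIVE")  # hold the lock, like a long maintenance cycle

# timeout=0 means "fail immediately instead of waiting for the lock",
# which mirrors the SQLITE_BUSY error in the stack trace above.
reader = sqlite3.connect(path, timeout=0)
try:
    reader.execute("SELECT COUNT(*) FROM logs")
except sqlite3.OperationalError as e:
    print(e)  # database is locked

writer.execute("COMMIT")
```

The default `timeout` in most SQLite drivers makes a second connection wait a few seconds for the lock before giving up, which is why a maintenance cycle that runs longer than that window surfaces as SQLITE_BUSY in the UI.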

I’m not sure what ‘system’ means in this case; Ignition, or your actual OS? The OS definitely shouldn’t be recreating these files, although I notice your screenshot is Raspbian or similar; did the log files somehow make it into a disk image or backup that’s being restored? If you shut down the gateway, delete /usr/local/ignition/logs/, and then restart the gateway, you should only get the system_logs.idb and a single wrapper.log file until it rolls over due to the settings in ignition.conf. If you’re getting other behavior (such as the files being recreated without the gateway running), it’s something else on your system.

No, not at all; I don't even write any debug information to the log file in gateway scripting.
It is an RPi 4.
By "system" I mean Ignition. It recreates the files after each reset, with the old data still in them.
Does the logging system have any purge mechanism to limit the size?
If I shut down the gateway and delete the files, they are recreated by Ignition.
The only way I can solve it is to uninstall and reinstall the gateway.

Any chance you turned on the overlay file system in the raspberry pi config? I’m not super familiar with the exact effects of that setting, but I believe it changes at least some of the SD card to read-only, with changes being held only in RAM. It might explain the log files coming back after a reboot.
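One way to check for the overlay file system is to ask what filesystem type the root mount is. A quick sketch (assuming a standard Raspberry Pi OS / Raspbian image with `findmnt` available):

```shell
# Print the filesystem type mounted at /.
# "overlay" here means the read-only overlay is active; changes
# (including deleted log files) live only in RAM until reboot.
root_fs="$(findmnt -n -o FSTYPE /)"
echo "root filesystem: ${root_fs}"

if [ "${root_fs}" = "overlay" ]; then
    echo "overlay FS is active - disk writes are not persisted"
fi
```

On a normal SD-card install this prints something like `ext4`; if it prints `overlay`, that would explain deleted files reappearing after every reboot.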


There’s zero code in Ignition that would restore old log files. Either they’re not actually getting deleted, or something else about the OS is restoring them. The logging infrastructure around the wrapper.log files isn’t even part of Ignition; it’s part of the Tanuki service wrapper.

You can try changing the logging settings used by adjusting ignition.conf; see this page for reference, but again, I’m very skeptical this has anything to do with Ignition.
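For the wrapper.log side specifically, the relevant entries in ignition.conf are the Tanuki wrapper's logfile properties. A sketch of what that section can look like (the values here are illustrative, not recommendations; check the comments in your own ignition.conf):

```
# Maximum size a wrapper.log file may reach before rolling over
# (0 disables rollover; suffixes k/m are kilobytes/megabytes).
wrapper.logfile.maxsize=10m

# Number of rolled-over wrapper.log files to keep (0 = unlimited).
wrapper.logfile.maxfiles=5
```

Lowering these caps the total disk space the wrapper logs can consume, but note they govern the text wrapper.log files only, not the SQLite logging database.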


Not at all.
I installed Ignition without making any changes on my RPi 4.
Everything is default.

After updating from version 8.1.19 to version 8.1.22, this issue appeared:

INFO   | jvm 1    | 2022/12/05 12:52:59 | True
INFO   | jvm 1    | 2022/12/05 12:52:59 | Good
INFO   | jvm 1    | 2022/12/05 12:52:59 | Mon Dec 05 12:52:29 EET 2022
INFO   | jvm 1    | 2022/12/05 12:52:59 | True arvo
INFO   | jvm 1    | 2022/12/05 12:53:00 | Scheduling logfile maintenance. Last maintenance: Mon Dec 05 12:51:58 EET 2022, events since then: 1
INFO   | jvm 1    | 2022/12/05 12:53:00 | Starting logfile maintenance. Max entries: 50000, Max filesize: 104857600
INFO   | jvm 1    | 2022/12/05 12:53:03 | E [c.i.i.g.l.SQLiteAppenderReader] [10:53:03]: Error executing query against logs database. route-group=status, route-path=/logs
INFO   | jvm 1    | 2022/12/05 12:53:03 | java.sql.SQLException: Unable to build a valid query with the parameters provided.
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at com.inductiveautomation.ignition.gateway.logging.SQLiteAppenderReader.buildPrepStatement(SQLiteAppenderReader.java:129)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at com.inductiveautomation.ignition.gateway.logging.SQLiteAppenderReader.query(SQLiteAppenderReader.java:154)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at com.inductiveautomation.ignition.gateway.logging.GatewayLoggingManagerImpl.queryLogEvents(GatewayLoggingManagerImpl.java:144)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at com.inductiveautomation.ignition.gateway.web.pages.status.routes.LoggingRoutes.getLogs(LoggingRoutes.java:561)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at com.inductiveautomation.ignition.gateway.dataroutes.Route.service(Route.java:254)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at com.inductiveautomation.ignition.gateway.dataroutes.RouteGroupImpl.service(RouteGroupImpl.java:61)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at com.inductiveautomation.ignition.gateway.dataroutes.RouteGroupCollectionServlet.serviceInternal(RouteGroupCollectionServlet.java:59)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at com.inductiveautomation.ignition.gateway.dataroutes.AbstractRouteGroupServlet.service(AbstractRouteGroupServlet.java:38)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.eclipse.jetty.servlet.ServletHolder$NotAsync.service(ServletHolder.java:1450)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:799)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.eclipse.jetty.servlet.ServletHandler$ChainEnd.doFilter(ServletHandler.java:1631)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:548)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:600)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1624)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1440)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:501)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1594)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1355)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at com.inductiveautomation.catapult.handlers.RemoteHostNameLookupHandler.handle(RemoteHostNameLookupHandler.java:121)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:322)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:59)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.eclipse.jetty.server.Server.handle(Server.java:516)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:487)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:732)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:479)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.eclipse.jetty.io.ssl.SslConnection$DecryptedEndPoint.onFillable(SslConnection.java:555)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:410)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.eclipse.jetty.io.ssl.SslConnection$2.succeeded(SslConnection.java:164)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:409)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at java.base/java.lang.Thread.run(Unknown Source)
INFO   | jvm 1    | 2022/12/05 12:53:03 | Caused by: org.sqlite.SQLiteException: [SQLITE_BUSY] The database file is locked (database is locked)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.sqlite.core.DB.newSQLException(DB.java:1141)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.sqlite.core.DB.newSQLException(DB.java:1152)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.sqlite.core.DB.throwex(DB.java:1112)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.sqlite.core.NativeDB.prepare_utf8(Native Method)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.sqlite.core.NativeDB.prepare(NativeDB.java:117)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.sqlite.core.DB.prepare(DB.java:267)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.sqlite.core.CorePreparedStatement.<init>(CorePreparedStatement.java:44)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.sqlite.jdbc3.JDBC3PreparedStatement.<init>(JDBC3PreparedStatement.java:30)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.sqlite.jdbc4.JDBC4PreparedStatement.<init>(JDBC4PreparedStatement.java:25)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.sqlite.jdbc4.JDBC4Connection.prepareStatement(JDBC4Connection.java:34)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.sqlite.jdbc3.JDBC3Connection.prepareStatement(JDBC3Connection.java:155)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at org.sqlite.jdbc3.JDBC3Connection.prepareStatement(JDBC3Connection.java:135)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	at com.inductiveautomation.ignition.gateway.logging.SQLiteAppenderReader.buildPrepStatement(SQLiteAppenderReader.java:110)
INFO   | jvm 1    | 2022/12/05 12:53:03 | 	... 52 common frames omitted
INFO   | jvm 1    | 2022/12/05 12:53:05 | Cleaned up 500 rows in 5504 ms. Vacuum performed: true

Is there a way to handle this, and what might be the root cause?

SQLite is a single-writer DB. Something is holding on to the old connection. Restart your device?

Based on the log output, it looks like you're hitting log file cleanup triggered by exceeding the configured size (which defaults to 100MB). The cleanup looks to be taking over 5 seconds, which seems to be long enough to block a request from the web UI (for viewing the logs), resulting in that error.

If it doesn't clear up on its own (which it very well might, as it continues to prune, with a default of 500 events per cycle when triggered by exceeded disk space), you can look to the tuning parameters in data/logback.xml, specifically diskspaceCleanupEventCount (defaults to 500) and maxDatabaseSize (defaults to 104857600 bytes). Take note that the maintenance elements in that XML file are commented out by default.

Here is a snippet from that configuration file from a fresh install for your reference:

  <appender name="DB" class="com.inductiveautomation.logging.SQLiteAppender">
    <dir>logs</dir>
    <!--
      Maintenance Settings
      entryLimit: The maximum number of entries in the database. However, at any given time, there may be more than this number, due to how cleanup works.
      maxEventsPerMaintenance: The number of event that can happen before a maintenance cycle occurs.
      minTimeBetweenMaintenance: The minimum time (max frequency) between maintenance events. Takes precedent over max events.
      vacuumFrequency: The number of maintenance cycles before a "vacuum" is performed, to recover disk space.
      diskspaceCleanupEventCount: The number of oldest events that will be removed when max database size is exceeded.
      maxDatabaseSize: Database size in bytes, above which will trigger a maintenance cycle to remove diskspaceCleanupEventCount of oldest events.

      On disk, most log events are between 600-800 bytes.
    <entryLimit>50000</entryLimit>
    <maxEventsPerMaintenance>5000</maxEventsPerMaintenance>
    <minTimeBetweenMaintenance>60000</minTimeBetweenMaintenance>
    <vacuumFrequency>3</vacuumFrequency>
    <diskspaceCleanupEventCount>500</diskspaceCleanupEventCount>
    <maxDatabaseSize>104857600</maxDatabaseSize>
    -->
  </appender>
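As a back-of-the-envelope check of why a bloated logging database takes a long time to drain: with the default diskspaceCleanupEventCount of 500 and minTimeBetweenMaintenance of 60000 ms, pruning a backlog like the 260k-entry list from the original post down to the 50000-entry limit takes hundreds of maintenance cycles. A quick sketch (the entry counts come from this thread; the formula assumes one size-triggered cleanup per maintenance window, which is the worst case):

```python
# Defaults from the logback.xml snippet above.
ENTRY_LIMIT = 50_000                 # entryLimit
CLEANUP_PER_CYCLE = 500              # diskspaceCleanupEventCount
MIN_MS_BETWEEN_MAINTENANCE = 60_000  # minTimeBetweenMaintenance

def drain_time_hours(current_entries: int) -> float:
    """Worst-case hours to prune back down to the entry limit,
    assuming exactly one cleanup per maintenance window."""
    excess = max(0, current_entries - ENTRY_LIMIT)
    cycles = -(-excess // CLEANUP_PER_CYCLE)  # ceiling division
    return cycles * MIN_MS_BETWEEN_MAINTENANCE / 1000 / 3600

# The ~260k entries from the original post:
print(f"{drain_time_hours(260_000):.1f} hours")  # 7.0 hours
```

That is why raising diskspaceCleanupEventCount (at the cost of longer, lock-holding cleanup passes) or simply deleting the database file with the gateway stopped can be the more practical fix for a badly oversized logs database.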

It seems that the web UI for viewing logs is the culprit, based on @kcollins1's response.

It turns out that the database had indeed grown large under those settings, and cleaning it up took long enough that viewing the log page at the same time caused this issue. Now that the log database has been cleaned up, it seems to be fine for the moment.

Consider using tail --follow wrapper.log in a console to monitor log activity. It retains line breaks from the log messages, and catches some traffic that bypasses the logging system.