Batchoperations flood of errors

In 8.0.16, I'm getting a flood of errors logged under tags.storage.batchoperations. Any ideas what's going on here? It seems like something in my tag provider is corrupted, but I'm not sure how to fix it…

simpleorm.utils.SException$Error: Cannot access destroyed record [InternalJsonStorageInfoRecord [Destroyed SRecordInstance] Dirty-2]
at simpleorm.dataset.SRecordInstance.checkFieldIsAccessible(SRecordInstance.java:233)
at simpleorm.dataset.SRecordInstance.setObject(SRecordInstance.java:255)
at simpleorm.dataset.SRecordInstance.setObject(SRecordInstance.java:248)
at simpleorm.dataset.SRecordInstance.setString(SRecordInstance.java:424)
at com.inductiveautomation.ignition.gateway.tags.config.storage.internaljson.InternalJsonStorageInfoRecord.setVersion(InternalJsonStorageInfoRecord.java:35)
at com.inductiveautomation.ignition.gateway.tags.config.storage.internaljson.InternalJsonStorageManager.saveVersion(InternalJsonStorageManager.java:90)
at com.inductiveautomation.ignition.gateway.tags.config.storage.internaljson.InternalJsonStorageManager$RedundantTagSynchronizationProvider.setVersion(InternalJsonStorageManager.java:592)
at com.inductiveautomation.ignition.gateway.redundancy.types.AbstractSynchronizedStateProvider.incrementVersion(AbstractSynchronizedStateProvider.java:93)
at com.inductiveautomation.ignition.gateway.tags.config.storage.internaljson.InternalJsonStorageManager$RedundantTagSynchronizationProvider.processChanges(InternalJsonStorageManager.java:640)
at com.inductiveautomation.ignition.gateway.tags.config.storage.internaljson.InternalJsonStorageManager.save(InternalJsonStorageManager.java:181)
at com.inductiveautomation.ignition.gateway.tags.config.BatchConfigOperation.execute(BatchConfigOperation.java:100)
at com.inductiveautomation.ignition.gateway.tags.evaluation.BatchContextImpl$OpController.run(BatchContextImpl.java:187)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)

I've also seen this alongside some pretty scary stability issues in the platform itself: installed licenses disappearing, HTTP 500 errors when trying to reach the gateway web UI, and connected designer sessions locking up. This was on a dev system that gets flogged with major changes all the time, so I assumed we had created some bad sequence of peering it with other gateways and then having those gateways go away, or something along those lines. The only clue I have is that the internal DB appears to be inaccessible to parts of the system:

INFO | jvm 1 | 2020/11/10 17:51:09 | E [t.s.internaljson ] [17:51:09]: Errror storing tag changes. provider=default
INFO | jvm 1 | 2020/11/10 17:51:09 | simpleorm.utils.SException$Jdbc: Executing UPDATE tagconfig SET cfg = ? WHERE id = ? AND cfg = ? for [TagConfigurationRecord 46ae0157-c826-4e2b-ac9c-7628c7e8d257.5462bb3f-1b41-47ce-9d30-77bfd1bcda1c Dirty0]
INFO | jvm 1 | 2020/11/10 17:51:09 | at simpleorm.sessionjdbc.SSessionJdbcHelper.flushExecuteUpdate(SSessionJdbcHelper.java:434)
INFO | jvm 1 | 2020/11/10 17:51:09 | at simpleorm.sessionjdbc.SSessionJdbcHelper.flush(SSessionJdbcHelper.java:400)
INFO | jvm 1 | 2020/11/10 17:51:09 | at simpleorm.sessionjdbc.SSessionJdbc.flush(SSessionJdbc.java:459)
INFO | jvm 1 | 2020/11/10 17:51:09 | at simpleorm.sessionjdbc.SSessionJdbc.flush(SSessionJdbc.java:443)
INFO | jvm 1 | 2020/11/10 17:51:09 | at simpleorm.sessionjdbc.SSessionJdbc.commit(SSessionJdbc.java:375)
INFO | jvm 1 | 2020/11/10 17:51:09 | at com.inductiveautomation.ignition.gateway.tags.config.storage.internaljson.InternalJsonStorageManager.saveInternal(InternalJsonStorageManager.java:286)
INFO | jvm 1 | 2020/11/10 17:51:09 | at com.inductiveautomation.ignition.gateway.tags.config.storage.internaljson.InternalJsonStorageManager$IndependantTagSynchronizationProvider.processChanges(InternalJsonStorageManager.java:532)
INFO | jvm 1 | 2020/11/10 17:51:09 | at com.inductiveautomation.ignition.gateway.tags.config.storage.internaljson.InternalJsonStorageManager$RedundantTagSynchronizationProvider.processChanges(InternalJsonStorageManager.java:622)
INFO | jvm 1 | 2020/11/10 17:51:09 | at com.inductiveautomation.ignition.gateway.tags.config.storage.internaljson.InternalJsonStorageManager.save(InternalJsonStorageManager.java:181)
INFO | jvm 1 | 2020/11/10 17:51:09 | at com.inductiveautomation.ignition.gateway.tags.config.BatchConfigOperation.execute(BatchConfigOperation.java:100)
INFO | jvm 1 | 2020/11/10 17:51:09 | at com.inductiveautomation.ignition.gateway.tags.evaluation.BatchContextImpl$OpController.run(BatchContextImpl.java:187)
INFO | jvm 1 | 2020/11/10 17:51:09 | at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
INFO | jvm 1 | 2020/11/10 17:51:09 | at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
INFO | jvm 1 | 2020/11/10 17:51:09 | at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)
INFO | jvm 1 | 2020/11/10 17:51:09 | at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
INFO | jvm 1 | 2020/11/10 17:51:09 | at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
INFO | jvm 1 | 2020/11/10 17:51:09 | at java.base/java.lang.Thread.run(Unknown Source)
INFO | jvm 1 | 2020/11/10 17:51:09 | Suppressed: simpleorm.utils.SException$Jdbc: Executing UPDATE tagconfig SET cfg = ? WHERE id = ? AND cfg = ? for [TagConfigurationRecord 4e6678a1-e4ce-4015-ae7b-06cba77ffb05.58a6a5de-1130-4258-b1a9-bb3415003a7b Dirty1]
INFO | jvm 1 | 2020/11/10 17:51:09 | at simpleorm.sessionjdbc.SSessionJdbcHelper.flushExecuteUpdate(SSessionJdbcHelper.java:434)
INFO | jvm 1 | 2020/11/10 17:51:09 | at simpleorm.sessionjdbc.SSessionJdbcHelper.flush(SSessionJdbcHelper.java:400)
INFO | jvm 1 | 2020/11/10 17:51:09 | at simpleorm.sessionjdbc.SSessionJdbc.flush(SSessionJdbc.java:459)
INFO | jvm 1 | 2020/11/10 17:51:09 | at simpleorm.sessionjdbc.SSessionJdbc.flush(SSessionJdbc.java:443)
INFO | jvm 1 | 2020/11/10 17:51:09 | at simpleorm.sessionjdbc.SSessionJdbc.commitAndDetachDataSet(SSessionJdbc.java:382)
INFO | jvm 1 | 2020/11/10 17:51:09 | at com.inductiveautomation.ignition.gateway.localdb.persistence.PersistenceSession.close(PersistenceSession.java:61)
INFO | jvm 1 | 2020/11/10 17:51:09 | at com.inductiveautomation.ignition.gateway.tags.config.storage.internaljson.InternalJsonStorageManager.saveInternal(InternalJsonStorageManager.java:284)
INFO | jvm 1 | 2020/11/10 17:51:09 | … 11 common frames omitted
INFO | jvm 1 | 2020/11/10 17:51:09 | Caused by: org.sqlite.SQLiteException: [SQLITE_CANTOPEN] Unable to open the database file (unable to open database file)
INFO | jvm 1 | 2020/11/10 17:51:09 | at org.sqlite.core.DB.newSQLException(DB.java:909)
INFO | jvm 1 | 2020/11/10 17:51:09 | at org.sqlite.core.DB.newSQLException(DB.java:921)
INFO | jvm 1 | 2020/11/10 17:51:09 | at org.sqlite.core.DB.execute(DB.java:825)
INFO | jvm 1 | 2020/11/10 17:51:09 | at org.sqlite.core.DB.executeUpdate(DB.java:863)
INFO | jvm 1 | 2020/11/10 17:51:09 | at org.sqlite.jdbc3.JDBC3PreparedStatement.executeUpdate(JDBC3PreparedStatement.java:102)
INFO | jvm 1 | 2020/11/10 17:51:09 | at simpleorm.sessionjdbc.SSessionJdbcHelper.flushExecuteUpdate(SSessionJdbcHelper.java:432)
INFO | jvm 1 | 2020/11/10 17:51:09 | … 17 common frames omitted
INFO | jvm 1 | 2020/11/10 17:51:09 | Caused by: org.sqlite.SQLiteException: [SQLITE_CANTOPEN] Unable to open the database file (unable to open database file)
INFO | jvm 1 | 2020/11/10 17:51:09 | at org.sqlite.core.DB.newSQLException(DB.java:909)
INFO | jvm 1 | 2020/11/10 17:51:09 | at org.sqlite.core.DB.newSQLException(DB.java:921)
INFO | jvm 1 | 2020/11/10 17:51:09 | at org.sqlite.core.DB.execute(DB.java:825)
INFO | jvm 1 | 2020/11/10 17:51:09 | at org.sqlite.core.DB.executeUpdate(DB.java:863)
INFO | jvm 1 | 2020/11/10 17:51:09 | at org.sqlite.jdbc3.JDBC3PreparedStatement.executeUpdate(JDBC3PreparedStatement.java:102)
INFO | jvm 1 | 2020/11/10 17:51:09 | at simpleorm.sessionjdbc.SSessionJdbcHelper.flushExecuteUpdate(SSessionJdbcHelper.java:432)
INFO | jvm 1 | 2020/11/10 17:51:09 | … 16 common frames omitted
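For anyone else who hits the SQLITE_CANTOPEN above: a quick way to sanity-check the internal DB from outside the gateway is to try opening it directly. A minimal sketch, assuming the default Linux install path and the tagconfig table named in the failing UPDATE (run it against a copy of config.idb with the gateway stopped, since SQLite locks the live file):

# Minimal sketch: can the Ignition internal DB be opened and queried at all?
# The path is the default Linux install location -- an assumption, adjust for
# your system. sqlite3.connect() silently creates a missing file, so check
# existence first.
import os
import sqlite3

db_path = "/usr/local/bin/ignition/data/db/config.idb"
if not os.path.isfile(db_path):
    raise SystemExit("no such file: %s" % db_path)

conn = None
try:
    conn = sqlite3.connect(db_path)
    # tagconfig is the table named in the failing UPDATE above
    (count,) = conn.execute("SELECT count(*) FROM tagconfig").fetchone()
    print("opened OK; tagconfig has %d rows" % count)
except sqlite3.Error as e:
    print("internal DB problem: %s" % e)
finally:
    if conn is not None:
        conn.close()

If the copy opens fine from the OS, the CANTOPEN is more likely a process-level problem (permissions, file-handle exhaustion) than actual file corruption.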

Support reviewed this for me, and it sounds like it's a known issue. A gateway restart fixed it for me, though I'm still not sure of the root cause.

Same here, a gateway restart fixed it. Did they give you any indication of which version might have the bug fix?

Hello, is there any update on this?

I am using version 8.1.1 and still facing this issue.

This was fixed in the current release of Ignition. I suggest you update from 8.1.1 to 8.1.11.

Okay, thanks for the update.

I’m seeing the same issue in 8.1.14.
Tech support said to reboot the gateway.
I stopped and restarted the service, and it helped for a day.
I’ll reboot the server this weekend and see if that helps.

I’m seeing this for the first time today. We plan to reboot the server this weekend to see if that will help. It would be nice to know if there is a known cause so we can address the root issue.

I have an open ticket for this.
I was told that there was a fix for this in 8.1.15.
After updating from 8.1.14 to 8.1.15 and rebooting the server we now get 3 errors per second.
This server is very simple, using all defaults (except user source), only 24 devices, one UDT and one daily scheduled process.

Hi, did you find any solution for this from the IA team?

No.
I’ve upgraded to 8.1.17 and it’s still there.
It doesn't seem to impact performance on this server, and so far it hasn't shown up on any of our other 130+ gateways. I ran out of energy chasing it, so I had the ticket closed.
Every time it flares up, usually ~two days after a gateway startup, I go to the Log Configuration and turn off the “tags.*.batchoperations” loggers.
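If you'd rather script that than click through the Log Configuration page each time, here's a sketch for the Gateway scripting console. It assumes Ignition's slf4j loggers are backed by logback (so the returned Logger exposes setLevel()); slf4j itself has no wildcard API, so each noisy logger has to be listed by its exact name. The change is runtime-only and resets on the next gateway restart, which matters here since the flood tends to come back after a restart anyway:

# Sketch for the Gateway scripting console (Jython): silence the noisy
# batchoperations loggers at runtime. Assumes slf4j is backed by logback;
# the commented entry is a hypothetical example -- use the exact logger
# names you see in your own Log Configuration page.
from org.slf4j import LoggerFactory
from ch.qos.logback.classic import Level

noisy = [
    "tags.storage.batchoperations",
    # "tags.<provider>.batchoperations",  # add each matching logger you see
]
for name in noisy:
    LoggerFactory.getLogger(name).setLevel(Level.OFF)
    print("silenced %s" % name)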

I'm not sure how long this error has persisted on our system, but we are on version 8.1.18 and this is the first time it has caught my attention. I'm hoping to restart the gateway soon, as I don't currently have administrative access on our new server.

I'm having this on 8.1.24.

I'm having this on 8.1.22.

Me too on 8.1.26. I'm having to restart the gateway any time I need to use the logs for debugging.

Upgraded to 8.1.28 and still an issue :frowning:

I think disabling those loggers will have to be the permanent solution :face_exhaling:

Agreed. The Log Configuration settings apply to all users, so it's not a bad workaround either... for now, at least. You just need to be careful and stay aware of it.

There should be a fix for this in 8.1.32 RC1, which is currently available to download for testing.


[Screenshot from the 8.1.32 RC1 Release Notes]