I have seen this problem on two Gateways and have not found the root cause. Restarting the tag provider makes the log entry disappear for a few days, but then it comes back. It had apparently caused me no problems, but the Gateway has since rebooted and this is the only recurring log entry. Does anyone know what causes it and how to fix it?
batchoperations
26Nov2020 09:15:57
Error saving node configuration
simpleorm.utils.SException$Error: Cannot access destroyed record [InternalJsonStorageInfoRecord [Destroyed SRecordInstance] Dirty-2]
at simpleorm.dataset.SRecordInstance.checkFieldIsAccessible(SRecordInstance.java:233)
at simpleorm.dataset.SRecordInstance.setObject(SRecordInstance.java:255)
at simpleorm.dataset.SRecordInstance.setObject(SRecordInstance.java:248)
at simpleorm.dataset.SRecordInstance.setString(SRecordInstance.java:424)
at com.inductiveautomation.ignition.gateway.tags.config.storage.internaljson.InternalJsonStorageInfoRecord.setVersion(InternalJsonStorageInfoRecord.java:35)
at com.inductiveautomation.ignition.gateway.tags.config.storage.internaljson.InternalJsonStorageManager.saveVersion(InternalJsonStorageManager.java:90)
at com.inductiveautomation.ignition.gateway.tags.config.storage.internaljson.InternalJsonStorageManager$RedundantTagSynchronizationProvider.setVersion(InternalJsonStorageManager.java:592)
at com.inductiveautomation.ignition.gateway.redundancy.types.AbstractSynchronizedStateProvider.incrementVersion(AbstractSynchronizedStateProvider.java:93)
at com.inductiveautomation.ignition.gateway.tags.config.storage.internaljson.InternalJsonStorageManager$RedundantTagSynchronizationProvider.processChanges(InternalJsonStorageManager.java:640)
at com.inductiveautomation.ignition.gateway.tags.config.storage.internaljson.InternalJsonStorageManager.save(InternalJsonStorageManager.java:181)
at com.inductiveautomation.ignition.gateway.tags.config.BatchConfigOperation.execute(BatchConfigOperation.java:100)
at com.inductiveautomation.ignition.gateway.tags.evaluation.BatchContextImpl$OpController.run(BatchContextImpl.java:187)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
I’m seeing a similar error on an Ignition Gateway. The error occurs every second and is filling up my log files. Any help here would be appreciated.
We just encountered this on a server running 8.1.1, recurring 5x/s. Working through the tag providers one at a time, the rate dropped to 4x/s after restarting a remote tag provider with a faulted GAN connection, and restarting the primary local tag provider cleared up the remaining 4x/s.
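For anyone doing the same restart-by-restart narrowing, a short script can quantify the spam rate from an exported wrapper.log so you can confirm which provider restart actually helped. This is just a sketch I put together (not an official tool); it assumes the single-line timestamp format shown earlier in this thread, and the message string is the one from the original post:

```python
import re
from collections import Counter

# Matches timestamps in the format seen in this thread, e.g. "26Nov2020 09:15:57"
TS_RE = re.compile(r"^(\d{2}\w{3}\d{4} \d{2}:\d{2}:\d{2})")

def spam_rate(lines, message="Error saving node configuration"):
    """Tally how many times `message` appears under each timestamped second."""
    hits = Counter()
    current_ts = None
    for line in lines:
        m = TS_RE.match(line)
        if m:
            current_ts = m.group(1)  # carry the timestamp into following lines
        if message in line and current_ts:
            hits[current_ts] += 1
    return hits

sample = [
    "26Nov2020 09:15:57",
    "Error saving node configuration",
    "26Nov2020 09:15:58",
    "Error saving node configuration",
    "Error saving node configuration",
]
print(spam_rate(sample))  # compare the per-second rate before/after each restart
```

Run it against a log export taken before and after restarting each provider; the per-second counts make it obvious whether a given restart reduced the rate.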
I’m seeing this behavior in 8.0.16. I upgraded from 7.9.5 to 8.0.16 in October 2020. I’m unsure when this started, as I had not been watching the logs. Today I needed to do some troubleshooting and found that the logs from the past several hours were filled with this message. I restarted the default tag provider as mentioned above and the messages stopped. I will report back if I see them start again.
Hello all, I am seeing the same thing. I recently upgraded from 7.9.10 to 8.1.5 on my Dev system with no issues. Now I have done the same 7.9.10 to 8.1.5 upgrade on my Prod system and I am seeing the batchoperations error. Mine is specifically tags.execution.batchoperations, with an error stating “Uncaught exception executing batch context.” These come in consistently every 30 seconds.
There is no indication in the error details as to what it is doing or why the error is thrown. I turned on the Debug and Trace loggers, which produced many more log entries, but none of those pointed to anything either. Based on some of the examples here, I also restarted my Gateway, but the errors come back immediately.
I wanted to post to the thread to get updates, and to share that I am having a similar issue and none of the basic troubleshooting steps have turned anything up for me.
I am seeing the same log error with redundant gateways running 8.1.1. This is causing the backup to not synchronize and now the master is reporting the peer is not connected. I tried to force sync on the backup, but that didn’t work. The updates are stacking up. My next try would be to update and restart the gateways, but I want tech support to look at this first.
The Ignition Team is aware of the issue and the next release (version 8.1.15) contains a fix meant to address this log spam. At the moment restarting the Provider/Gateway is the only way to temporarily stop the error spam.
If anyone here is able to reproduce this issue on demand - please send me a DM.
This Gateway has no transaction groups, views, or queries; it only has connections to devices and about 15 tag providers. It does have a number of outbound remote connections to load-balanced Gateways that have remote tag providers referencing these 15 providers.
Hi Kyle,
Is there any update as to why this error occurs and how to fix it?
We are running 8.1.21 and have just seen this error today.
We can see that the hard disk space on the drive where Ignition is installed is being used up in no time.
You can disable the logger if you are concerned about it consuming hard drive space. Beyond that, contacting Inductive Automation support is the best advice I can offer; I haven't heard of consistent replication steps from anyone.
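For example, assuming a stock 8.1 install where Gateway logging is configured through Logback, you can silence the logger in the Gateway's data/logback.xml (or set its level to OFF from the Status > Logs page). The logger name below is the one reported earlier in this thread; verify the exact name on your own Status > Logs page first. Note this only suppresses the messages; the underlying batch operation still fails.

```xml
<!-- In data/logback.xml, inside <configuration>: silence the noisy logger
     until a fixed version is installed. Logger name taken from this thread;
     confirm it against the Gateway's Status > Logs page. -->
<logger name="tags.execution.batchoperations" level="OFF"/>
```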