Ignition 8.3.3-RC1 - Historian binding "crashing" ignition gateway

Hi, we recently updated to 8.3.3-RC1 to fix some issues with edge sync, but now we have a new one. We have a view that displays 10 time series chart components. The main gateway is configured with “Database” as the history setting for the remote tag provider. When the view loads, the gateway fires up 10 threads (one per chart component, see below). These threads never seem to finish; each is a basic Tag History binding (historian, start/end time, 600 points). They also stack up, so after a while the gateway sits at 100% CPU load.

I'll also mention that we're using the Core Historian for logging, in case that has something to do with it.

```
Thread [perspective-worker-7803] id=197397, (RUNNABLE), perspective, 2.83% CPU
app//com.inductiveautomation.ignition.gateway.sqltags.history.query.processing.Interpolator.getValueAt(Interpolator.java:127)
app//com.inductiveautomation.ignition.gateway.sqltags.history.query.columns.CalculatingResultNode.finishAggregationWindow(CalculatingResultNode.java:102)
app//com.inductiveautomation.ignition.gateway.sqltags.history.query.columns.CalculatingResultNode.processValue(CalculatingResultNode.java:214)
com.inductiveautomation.sqlhistorian.gateway.query.DatasourceQueryExecutor.loadValue(DatasourceQueryExecutor.java:594)
com.inductiveautomation.sqlhistorian.gateway.query.DatasourceQueryExecutor.loadValue(DatasourceQueryExecutor.java:92)
com.inductiveautomation.historian.gateway.query.execution.AbstractHistoryLoader.processData(AbstractHistoryLoader.java:175)
com.inductiveautomation.historian.gateway.query.writing.HistoryWriter.readData(HistoryWriter.java:343)
com.inductiveautomation.historian.gateway.query.writing.HistoryWriter.execute(HistoryWriter.java:243)
com.inductiveautomation.historian.gateway.HistorianManagerImpl.queryHistory(HistorianManagerImpl.java:1099)
app//com.inductiveautomation.ignition.gateway.sqltags.history.TagHistoryManagerBridge.queryHistory(TagHistoryManagerBridge.java:99)
app//com.inductiveautomation.ignition.gateway.tags.model.ProjectDefaultTagManagerFacade.queryHistory(ProjectDefaultTagManagerFacade.java:493)
com.inductiveautomation.perspective.gateway.binding.tag.history.AbstractTagHistoryBinding.execute(AbstractTagHistoryBinding.java:261)
com.inductiveautomation.perspective.gateway.binding.tag.history.AbstractTagHistoryBinding$$Lambda$7266/0x0000000801e3f318.apply(Unknown Source)
com.inductiveautomation.perspective.gateway.binding.ValueCache$CachedValue.lambda$fetchValue$0(ValueCache.java:117)
com.inductiveautomation.perspective.gateway.binding.ValueCache$CachedValue$$Lambda$7312/0x0000000801e5a028.run(Unknown Source)
java.base@17.0.17/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
java.base@17.0.17/java.util.concurrent.FutureTask.run(Unknown Source)
java.base@17.0.17/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
java.base@17.0.17/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
com.inductiveautomation.perspective.gateway.threading.BlockingWork$BlockingWorkRunnable.run(BlockingWork.java:58)
java.base@17.0.17/java.lang.Thread.run(Unknown Source)
```

A little update:

Using the historical provider tag path instead of the realtime provider tag path in the Tag History binding seems to solve the “threads not completing” part of the problem, but the CPU load still maxes out. It happens with every aggregation mode except “SimpleAverage”.
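For anyone trying the same workaround, the two path styles look roughly like this. This is a sketch: the provider, gateway, and tag names are placeholders, and the fully qualified historical syntax is from memory, so verify it against what your own binding's tag browser produces:

```
Realtime provider tag path (what the binding gives you by default):
[default]Folder/MyTag

Fully qualified historical tag path (history provider, gateway, realtime provider, then tag):
histprov:MyHistoryProvider:/drv:my-gateway:default:/tag:Folder/MyTag
```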

Did you ever find a full solution for this? I'm seeing exactly the same issue on 8.3.2. I can't reliably replicate it, but occasionally a tag history binding will spawn a thread that completely consumes a CPU core and causes rapid memory churn until I reboot the gateway.
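One way to take Perspective out of the equation while narrowing this down is to run the same query from the Designer's Script Console with `system.tag.queryTagHistory` and watch the gateway threads page. A minimal sketch, assuming a placeholder tag path and aggregation mode; this is Ignition Jython, so it only runs inside a Designer or gateway scripting context, not standalone Python:

```python
# Script Console sketch: issue the same 600-point history query the
# binding makes, without any Perspective session or chart involved.
# "[default]Folder/MyTag" and "MinMax" are placeholders -- substitute
# one of your real tag paths and whichever aggregation mode pins the CPU.
from java.util import Calendar

cal = Calendar.getInstance()
end = cal.getTime()
cal.add(Calendar.HOUR, -8)   # query the last 8 hours
start = cal.getTime()

ds = system.tag.queryTagHistory(
	paths=["[default]Folder/MyTag"],
	startDate=start,
	endDate=end,
	returnSize=600,            # same point count as the chart bindings
	aggregationMode="MinMax",
)
print ds.getRowCount()
```

If this call also pins a core for certain aggregation modes, that points at the historian query engine itself rather than the binding or chart layer, which is useful detail for a support ticket.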

There's been quite a bit of attention on the historian in recent releases, so try 8.3.6. Or open a support ticket and ask them to help confirm whether your issue is one of the ones that has already been addressed.

I upgraded to 8.3.6 and the problem seems to have been fixed. Thanks!