High Number of Quarantined Items

Hi Guys,

I noticed that the number of quarantined items on one of our gateways is very high. Can someone please help me get to the bottom of what is causing these?

Found this in the logs:

java.lang.RuntimeException: com.microsoft.sqlserver.jdbc.SQLServerException: The conversion from UNKNOWN to VARCHAR is unsupported.
    at com.inductiveautomation.ignition.common.functional.FragileConsumer.lambda$wrap$1(FragileConsumer.java:47)
    at java.base/java.util.stream.ReferencePipeline$2$1.accept(Unknown Source)
    at java.base/java.util.stream.ReferencePipeline$2$1.accept(Unknown Source)
    at java.base/java.util.stream.Streams$StreamBuilderImpl.forEachRemaining(Unknown Source)
    at java.base/java.util.stream.ReferencePipeline$Head.forEach(Unknown Source)
    at java.base/java.util.stream.ReferencePipeline$7$1.accept(Unknown Source)
    at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(Unknown Source)
    at java.base/java.util.stream.AbstractPipeline.copyInto(Unknown Source)
    at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown Source)
    at java.base/java.util.stream.StreamSpliterators$WrappingSpliterator.forEachRemaining(Unknown Source)
    at java.base/java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Unknown Source)
    at java.base/java.util.stream.ReferencePipeline$Head.forEach(Unknown Source)
    at com.inductiveautomation.ignition.gateway.history.sf.sinks.AbstractDatasourceSink.storeToDatasource(AbstractDatasourceSink.java:147)
    at com.inductiveautomation.ignition.gateway.history.sf.sinks.AbstractDatasourceSink.storeData(AbstractDatasourceSink.java:117)
    at com.inductiveautomation.ignition.gateway.history.sf.sinks.AggregateSink.storeData(AggregateSink.java:180)
    at com.inductiveautomation.ignition.gateway.history.forwarders.ForwarderThread.run(ForwarderThread.java:149)
Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: The conversion from UNKNOWN to VARCHAR is unsupported.
    at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDriverError(SQLServerException.java:228)
    at com.microsoft.sqlserver.jdbc.DataTypes.throwConversionError(DataTypes.java:1094)
    at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.setObject(SQLServerPreparedStatement.java:1595)
    at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.setObject(SQLServerPreparedStatement.java:1514)
    at org.apache.commons.dbcp2.DelegatingPreparedStatement.setObject(DelegatingPreparedStatement.java:529)
    at org.apache.commons.dbcp2.DelegatingPreparedStatement.setObject(DelegatingPreparedStatement.java:529)
    at com.inductiveautomation.ignition.gateway.datasource.SRConnectionWrapper$SRPreparedStatement.setObject(SRConnectionWrapper.java:1131)
    at com.inductiveautomation.gateway.tags.history.storage.TagHistoryDatasourceSink$TagInsert.populatePreparedStatement(TagHistoryDatasourceSink.java:979)
    at com.inductiveautomation.gateway.tags.history.storage.TagHistoryDatasourceSink$BatchedTagInsert.insert(TagHistoryDatasourceSink.java:1029)
    at com.inductiveautomation.gateway.tags.history.storage.TagHistoryDatasourceSink.insertTagValues(TagHistoryDatasourceSink.java:926)
    at com.inductiveautomation.gateway.tags.history.storage.TagHistoryDatasourceSink.storeScanClassSet(TagHistoryDatasourceSink.java:482)
    at com.inductiveautomation.gateway.tags.history.storage.TagHistoryDatasourceSink.storeDataToDatasource(TagHistoryDatasourceSink.java:459)
    at com.inductiveautomation.ignition.gateway.history.sf.sinks.AbstractDatasourceSink.lambda$storeToDatasource$3(AbstractDatasourceSink.java:148)
    at com.inductiveautomation.ignition.common.functional.FragileConsumer.lambda$wrap$1(FragileConsumer.java:45)
    ... 15 common frames omitted
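The trace shows the Microsoft JDBC driver failing inside SQLServerPreparedStatement.setObject: it could not map the Java object it was handed to any SQL Server type, so it treats the source type as UNKNOWN and refuses the conversion to VARCHAR. As a rough, self-contained illustration of that behavior (this is not the real driver code; the class and method names below are made up for the sketch):

```java
import java.util.Map;

// Simplified sketch of how a JDBC driver maps bound Java objects to SQL
// types. All names here are hypothetical, not the real driver internals.
public class TypeMappingDemo {
    // Java classes the hypothetical driver recognizes, and their SQL types
    private static final Map<Class<?>, String> JDBC_TYPES = Map.of(
            String.class, "VARCHAR",
            Integer.class, "INTEGER",
            Double.class, "FLOAT",
            java.sql.Timestamp.class, "DATETIME");

    // Mirrors the observed behavior: an unrecognized class maps to UNKNOWN,
    // and binding UNKNOWN into a VARCHAR column is a conversion error.
    static String bindAsVarchar(Object value) {
        Class<?> cls = (value == null) ? Void.class : value.getClass();
        String sourceType = JDBC_TYPES.getOrDefault(cls, "UNKNOWN");
        if (sourceType.equals("UNKNOWN")) {
            throw new IllegalStateException(
                    "The conversion from UNKNOWN to VARCHAR is unsupported.");
        }
        return sourceType;
    }

    public static void main(String[] args) {
        System.out.println(bindAsVarchar("hello"));   // a String binds fine
        try {
            bindAsVarchar(new Object());              // unrecognized class
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

In other words, some value being forwarded into tag history is of a type the driver can't convert for the target column, which is why the whole insert ends up quarantined.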

Bump

You are going to have to figure out which queries in your app are throwing these errors and fix those bugs. If you find the query and calling code but can’t figure out what’s wrong, post those details.

I am not the sole Ignition developer and we have more than a handful of projects, so I don’t really know how to track down the location of whatever’s failing…

I’d start by disabling every project and enabling one at a time to find the project(s) causing this.
If you can’t disable every project, then you’ll most likely have to comb through every project.


Export the quarantined items so you can examine the failing SQL in detail.

The file is too large (80 MB) to upload to the forum, but it mostly looks something like this:

That would be tag history choking. Meaning, your DB isn’t keeping up, or isn’t storing at all. If it's not storing at all, look in your DB to see if the sqlt* tables are present; there might be a permissions issue preventing the tag historian from creating them automatically.

If storing but not keeping up, get more horsepower for your DB. (It is on a separate machine already, I hope.)

The DB is storing; I have no issue bringing up recent data on an Easy Chart. Yes, the DB is on a separate server.

Inspecting the quarantine export, it looks like only a few tags (<30) are there.

1 of them is from an AB controller.

~20 are from a particular modbus TCP device

~10 of them are duplicates within a set of OPC devices.

This makes me think it’s not the DB getting choked, as I would otherwise see all the other tags I have set up logging on (several thousand).

All of the duplicates (99% of quarantined items) are referring to tags that I originally created (within a UDT) but later deleted…

Sounds like you can discard those from the quarantine.


I never thought to press the delete button; I always just tried the retry button, which brought back all the quarantined items after about 5 minutes. I just pressed the delete button and nothing is returning. Thanks Phil!

Is there a guide or hints on how to interpret the causes when looking at a quarantine file? I haven’t been able to find one.

I wonder this as well… There’s only one error message on the store and forward screen for the quarantine data. Does that mean only one of the values in the quarantine has an error, or does it mean all of them are in error?

If you export it, IIRC there are more details on why that point is in quarantine.

I don’t see that, but the files are also very large, so I may be missing something… If I understand correctly, if one value in the transaction is wrong, it will reject them all and quarantine the batch. We had our write size up past 100K to try and catch up on a large backlog.

Can someone from IA confirm if all points in the quarantine were rejected, or just some?

You’re correct… I thought the export had more details.

The details page will 'collapse' the errors if the underlying cause is the same, IIRC; it doesn't necessarily mean that they have exactly the same problem, but they might all share the most common cause, e.g. a duplicate primary key error.

That's not exactly how it works. Basically, S+F is a cache of 'records', where records can originate from any Ignition system. For efficiency, most Ignition systems (such as tag history) do not send single insert statements into S+F one-at-a-time; instead, records to insert are batched together. From S+F's perspective, there's only one record - so if it triggers an exception, S+F will then have to hold onto all of the 'inner' records. But that still only counts as one row for S+F's purposes. There's no way to undo that batching, short of manually editing the quarantine export file and re-importing it.
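The batching behavior described above can be sketched as a toy model (all class and method names below are hypothetical, not Ignition's actual internals):

```java
import java.util.ArrayList;
import java.util.List;

// Rough sketch of the store-and-forward batching described above:
// many inner tag-history records are wrapped into one S+F "row", and a
// failure quarantines the whole wrapper. Names here are hypothetical.
public class StoreAndForwardDemo {
    // One S+F row wrapping many inner tag-history records
    record Batch(List<String> innerRecords) {}

    final List<Batch> quarantine = new ArrayList<>();

    // Try to store a batch; if any inner record fails, the whole batch
    // is quarantined and still only counts as a single S+F row.
    void store(Batch batch) {
        boolean anyBad = batch.innerRecords().stream()
                .anyMatch(r -> r.contains("BAD"));
        if (anyBad) {
            quarantine.add(batch); // one quarantined row, many inner records
        }
    }

    public static void main(String[] args) {
        StoreAndForwardDemo sf = new StoreAndForwardDemo();
        sf.store(new Batch(List.of("tag1=1.0", "tag2=BAD", "tag3=3.0")));
        // One quarantined row on the S+F screen...
        System.out.println("quarantined rows: " + sf.quarantine.size());
        // ...holding all three inner records, good values included.
        System.out.println("inner records: "
                + sf.quarantine.get(0).innerRecords().size());
    }
}
```

This is why a single bad value can appear to drag thousands of good records into the quarantine when the write size is large: they all live inside the same wrapper row.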