We have a slow, ongoing issue with quarantined history data. We have a lot of Sparkplug edge devices that err on the side of making sure data reaches the listening server: if a networking issue follows a transmission, they may send the same path/timestamp measurement again. That rarely happens on a good Internet connection, but many of our devices are cellular and some have very bad connections, so for us this happens hourly.
The first issue, which is annoying but livable, is that every occurrence is logged at ERROR level (a `MemoryForwardTransaction` or `DatasourceForwardTransaction` with a cause like `java.sql.BatchUpdateException: (conn=1426630) Duplicate entry '908299-1755541696924' for key 'PRIMARY'`).
Less livable is that it slowly and steadily fills up the quarantine. I have to remember to go to the gateway and retry/export/delete the quarantined data every so often or things go bad. I usually retry the quarantine a couple of times to shake out the records that weren't actually duplicates but were just stuck in the same batches, then export-then-delete, keeping the exported quarantine files around for a while.
What’s my easiest path to automating the quarantine retry/export/delete to run daily or weekly?
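For concreteness, here's the manual routine I'd like to schedule, sketched as plain Python. The `retry`/`export`/`delete` hooks are placeholders, not Ignition API calls; they'd need to be wired to whatever the gateway actually exposes:

```python
import json
import time
from pathlib import Path

def drain_quarantine(retry, export, delete, retries=2, pause_s=60):
    """Retry the quarantine a couple of times to shake out records that
    were merely stuck, then export whatever is still quarantined to a
    file we keep around, then delete.

    retry/export/delete are placeholder callables, NOT real Ignition
    calls -- substitute whatever hook your gateway provides.
    """
    for _ in range(retries):
        retry()
        time.sleep(pause_s)   # give retried records time to store
    remaining = export()      # records still quarantined after retries
    if remaining:
        path = Path("quarantine-%d.json" % int(time.time()))
        path.write_text(json.dumps(remaining))  # keep the export
        delete()
        return path
    return None

# Stubbed demo: retries clear nothing, so the leftovers get exported.
calls = []
saved = drain_quarantine(
    retry=lambda: calls.append("retry"),
    export=lambda: [{"tagid": 908299, "t_stamp": 1755541696924}],
    delete=lambda: calls.append("delete"),
    pause_s=0,
)
print(calls)  # ['retry', 'retry', 'delete']
```

If there's a gateway timer script or scheduled-script surface that can reach the quarantine, something shaped like this run daily would cover it.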
(EDIT: I get why most users wouldn’t want it, but in our case I wish I could configure the store-and-forward engine to use something like MySQL’s “INSERT ON DUPLICATE KEY UPDATE” for history tags and just fix the whole issue at the source…)
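To illustrate what I mean, here's the duplicate-key situation modeled with SQLite's equivalent UPSERT (the table and column names are illustrative stand-ins keyed the same way as the error above, not the actual historian schema):

```python
import sqlite3

# Toy model of a history table keyed on (tagid, t_stamp), matching the
# 'Duplicate entry <tagid>-<t_stamp>' pattern in the BatchUpdateException.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE history_data (
           tagid      INTEGER NOT NULL,
           floatvalue REAL,
           t_stamp    INTEGER NOT NULL,
           PRIMARY KEY (tagid, t_stamp)
       )"""
)

def store(tagid, value, t_stamp):
    # SQLite UPSERT; the MySQL spelling would be
    #   INSERT ... ON DUPLICATE KEY UPDATE floatvalue = VALUES(floatvalue)
    conn.execute(
        """INSERT INTO history_data (tagid, floatvalue, t_stamp)
           VALUES (?, ?, ?)
           ON CONFLICT (tagid, t_stamp)
           DO UPDATE SET floatvalue = excluded.floatvalue""",
        (tagid, value, t_stamp),
    )

# The same record transmitted twice, as a flaky cellular device would.
store(908299, 21.5, 1755541696924)
store(908299, 21.5, 1755541696924)  # second attempt updates, no error

count = conn.execute("SELECT COUNT(*) FROM history_data").fetchone()[0]
print(count)  # 1 -- one row, nothing to quarantine
```

The second transmission just lands on the first instead of raising a duplicate-key error, which is the behavior I wish I could opt into.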