Due to recent database issues, our primary gateway has accumulated a significant number of archived store-and-forward caches. To address this, we configured a secondary database connection within this gateway, intended to process these archived data caches. This allows the primary store-and-forward function to focus on handling new and live data.
However, a persistent problem is preventing us from using this secondary connection to process the old store-and-forward archives, and we have been working with support to resolve it.
Meanwhile, I've configured an additional Ignition gateway that connects to the same database and has identical store-and-forward settings, including the partitioning rules based on the historical tag settings. I've also transferred one of the archived cache files to this new gateway for processing.
I'm hesitant to load this cache file, though: the new gateway doesn't have all of the actual remote tag providers configured, and I'm not sure whether the partitioning rules will still be honored when the cache is processed.
Has anyone done this before, or can anyone offer any insight?
This is generally dicey with the way S+F currently works.
My personal recommendation would be to load the cache up in Kindling and manually prepare one or more queries to insert the data.
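To make that concrete, here's roughly the kind of prepared insert I mean, run from the gateway script console. The connection name, partition table, and column names below are guesses based on the typical historian partition layout, not anything from your setup, so check them against sqlth_partitions and the actual partition table before running anything:

```python
# Sketch only -- run from the gateway script console, where system.db is in scope.
# Connection name, table, and columns are assumptions; verify before running.
db_name = "History_DB"       # hypothetical database connection name
table = "sqlth_1_data"       # hypothetical target partition table

query = (
    "INSERT INTO " + table + " "
    "(tagid, intvalue, floatvalue, stringvalue, datevalue, dataintegrity, t_stamp) "
    "VALUES (?, ?, ?, ?, ?, ?, ?)"
)

# One record pulled out of the cache dump (placeholder values).
row = [42, None, 123.45, None, None, 192, 1714000000000]

system.db.runPrepUpdate(query, row, db_name)
```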
Thanks, Paul. I was on a much older version of Kindling. This is great! Can you briefly explain (or point me to a resource on) how the .data & .script files work here and how I can use them to build insert queries manually? Maybe this has been done before and documented? I couldn't find a similar post. We have 20+ of these, ranging from 20k to 90k records each.
The whole folder is an HSQLDB "database" - HSQL is kind of like SQLite, in that it's a file-based RDBMS designed to run "attached" to other programs, but HSQLDB uses more than one file at a time. So the entire folder, zipped or unzipped, is what you should open with Kindling.
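Because it really is just an HSQLDB database, you can also poke at a dump with a script instead of (or alongside) Kindling's UI. This is only a sketch, assuming jaydebeapi plus a local hsqldb.jar (neither is needed for the Kindling route), and the paths and base name are placeholders:

```python
import jaydebeapi

# Assumptions: hsqldb.jar is available locally, and the cache folder contains
# something like <name>.data / <name>.script -- the JDBC URL points at that
# base name ("cachedb" below is a placeholder), not at the folder itself.
conn = jaydebeapi.connect(
    "org.hsqldb.jdbc.JDBCDriver",
    "jdbc:hsqldb:file:/path/to/unzipped/cache/cachedb;readonly=true",
    ["SA", ""],                     # HSQLDB's default user / empty password
    "/path/to/hsqldb.jar",
)

cur = conn.cursor()
# List the user tables so you can see where the cached records actually live.
cur.execute(
    "SELECT TABLE_NAME FROM INFORMATION_SCHEMA.SYSTEM_TABLES "
    "WHERE TABLE_TYPE = 'TABLE'"
)
for (name,) in cur.fetchall():
    print(name)

cur.close()
conn.close()
```

That said, Kindling's viewer is usually the easier path.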
Once a particular dump is open, you should see a table on the left containing the main "data" that's in that dump. Ideally, the presentation there is useful, and you can just select the rows you care about and copy them to the clipboard or dump them to a file.
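Once you've dumped the rows you care about to a file, a small script can turn that export into the insert statements mentioned above. Again, a sketch only: the CSV header names and target table here are assumptions mirroring the usual historian partition columns, so reconcile them with what the dump actually contains:

```python
import csv

# Assumed CSV header names and an assumed target partition table --
# adjust both to match what Kindling actually shows you.
TARGET_TABLE = "sqlth_1_data"
COLUMNS = ["tagid", "intvalue", "floatvalue", "stringvalue",
           "datevalue", "dataintegrity", "t_stamp"]

def sql_literal(value):
    """Very naive quoting for a sketch -- NULL for blanks, quote non-numerics."""
    if value is None or value == "":
        return "NULL"
    try:
        float(value)            # numeric values pass through as-is
        return value
    except ValueError:
        return "'" + value.replace("'", "''") + "'"

with open("cache_dump.csv") as src, open("inserts.sql", "w") as dst:
    for row in csv.DictReader(src):
        values = ", ".join(sql_literal(row.get(col)) for col in COLUMNS)
        dst.write("INSERT INTO %s (%s) VALUES (%s);\n"
                  % (TARGET_TABLE, ", ".join(COLUMNS), values))
```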
Back in Kindling, the view on the right-hand side shows the "errors" logged into S+F, which is another bit of info that's sometimes useful and sometimes not.