Store and Forward out of order

I am having a similar problem. The sqlth_te table keeps adding tags that do not exist. I have deleted all tags, but it is still adding tags. Running the console query on the sqltag db shows 0 rows, yet the logging continues. Please advise.

Did you upgrade to the 7.5.4 beta?

phoulihan -

Do you have other SQLTag providers defined? Take a look at SQLTags>Realtime in the gateway, even external providers can log history.

The problem you’re describing doesn’t sound like it’s related to this thread at all; you simply have tags set for history somewhere. I suppose they could be “hidden in memory”, but this is unlikely. Is it possible that you have another Ignition install somewhere pointed to the same db?

Regards,

Colby
I’m back on site today and we are still having the issue with 7.5.4 beta2 running. I have run the queries again and we don’t have duplicates, but we are quarantining items like before. I’m up for any kind of testing you would like to do here. Also, I am assuming there is a file somewhere that you push the quarantined items into that gets parsed when you run a retry. Can this be looked at by an end user? I wanted to look at it and see if there is any kind of pattern here: either certain times it happens, or certain tags it occurs with.

Hi,

Sorry I didn’t see this last week. So, you have data being quarantined; is it the same message about duplicate keys? I thought you had switched to monthly partitions and deleted the primary index for now, in which case you shouldn’t see that kind of error again until next month. Was a new partition created, or is the error different?

The data cache data isn’t visible to the user. Even if you look at the raw data, it consists of serialized objects, so it wouldn’t be helpful. However, if you send it to me, I can take a look and export it to CSV so you & I can look and see if any patterns emerge.

The data cache is located in “{InstallDir}\data\datacache\DBNAME”. You should be able to simply copy the folder of the database connection with the problem, paste it somewhere else, and zip it up. If you run into problems with file locks, you can get Ignition to disconnect from the file by going to Databases>Store and Forward in the gateway and turning off the local cache for the db. After copying the folder, you can re-enable it (note: this is also handy for cleaning out the cache. After you disable it, you can just delete the folder, and when you re-enable it, it will be created again).
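If it helps, the copy-and-zip step can be scripted. A minimal sketch in Python, using a temporary folder to stand in for the real install tree (the paths and file names below are made up for illustration; on a real gateway you would point at the {InstallDir}\data\datacache folder and your connection’s subfolder):

```python
import shutil
import tempfile
from pathlib import Path

# Stand-in for the real install tree; on an actual gateway this would be
# {InstallDir}\data\datacache\<connection name>. "DBNAME" and "store.data"
# are placeholders, not real Ignition file names.
root = Path(tempfile.mkdtemp())
cache_dir = root / "data" / "datacache" / "DBNAME"
cache_dir.mkdir(parents=True)
(cache_dir / "store.data").write_bytes(b"\x00\x01")  # fake cache file

# Copy the connection's folder out of the install tree, then zip it
# for upload (disable the local cache first if files are locked).
staging = root / "staging"
shutil.copytree(cache_dir, staging / cache_dir.name)
archive = shutil.make_archive(str(root / "datacache_DBNAME"), "zip",
                              staging, cache_dir.name)
print(archive)  # path of the zip file to upload
```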

Upload the datacache against ticket #7788 (I couldn’t find the number you referenced in your email, but this one refers to basically the same problem).

Regards,

Yes, I had set the system to partition at monthly intervals, and when it created a new partition it started quarantining items. Last month, with the index removed, it stopped quarantining items.

Oh, ok, so this started at the beginning of the month. And last month, after removing the index, you didn’t see any duplicates actually logged to the database, right? That would indicate that instead of having actual duplicate data, the system was simply trying to insert the same data multiple times.

I would recommend uploading the datacache so I can take a look and see if any duplicates happen to appear, and then removing the primary key for this month as well. If duplicates aren’t actually being stored to the database, or at least not as often as the key errors occur, there must be some issue with the store and forward system where it is erroneously trying multiple times.

Regards,

Correct. No duplicates.

I have uploaded the datacache from one of the projects. It currently has around 200+ items quarantined due to the duplicate primary key issue.

In the other project, I have deleted all the quarantined items. I will watch it to see if we get more. This project is still set to monthly partitioning, and I have not deleted the index in this table either. I want to see if we get more quarantined items through the rest of the month.

As for the project whose cache I sent you, I have reset it to partition daily so we can get more info for troubleshooting. This table also still has its index in place.

One more note: both projects’ tables for this month have no duplicates in them, checked via your query in this thread.
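For anyone following along, a duplicate check of the kind referenced here is essentially a GROUP BY over the partition table’s key columns. A sketch using sqlite3 as a stand-in (the tagid/t_stamp column names follow the usual history partition layout, but verify them against your own schema; this is not the exact query from the thread):

```python
import sqlite3

# In-memory stand-in for one monthly partition table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE sqlth_1_data (tagid INTEGER, floatvalue REAL, t_stamp INTEGER)")
conn.executemany("INSERT INTO sqlth_1_data VALUES (?, ?, ?)",
                 [(1, 10.0, 1000), (1, 11.0, 2000), (2, 5.0, 1000)])

# Any (tagid, t_stamp) pair seen more than once is a true duplicate row.
dupes = conn.execute(
    "SELECT tagid, t_stamp, COUNT(*) FROM sqlth_1_data "
    "GROUP BY tagid, t_stamp HAVING COUNT(*) > 1").fetchall()
print(dupes)  # -> [] : no duplicates, matching what was observed in this thread
```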

Hi,

Ok, I’ve spent some time looking things over, and there don’t seem to be any actual duplicates in the data. My current thought is this: an error is occurring after the data has been inserted that causes the overall forward operation to fail. It tries again, ends up with the duplicate key error, and that is what is quarantined.
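The suspected sequence can be sketched with a toy example (sqlite3 standing in for the real database; the schema and flow are purely illustrative, not Ignition’s actual code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE history (
    tagid INTEGER, t_stamp INTEGER, value REAL,
    PRIMARY KEY (tagid, t_stamp))""")

batch = [(1, 1000, 42.0), (2, 1000, 7.5)]

# First forward attempt: the INSERTs succeed and are committed...
conn.executemany("INSERT INTO history VALUES (?, ?, ?)", batch)
conn.commit()
# ...but suppose a later step in the same forward transaction throws, so
# the store-and-forward engine believes the whole batch failed.

# Retrying the "failed" batch now trips the primary key, even though
# the stored data itself is fine -- and that row gets quarantined.
try:
    conn.executemany("INSERT INTO history VALUES (?, ?, ?)", batch)
    duplicate_error = None
except sqlite3.IntegrityError as e:
    duplicate_error = e
print("quarantined:", duplicate_error)
```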

If this is happening fairly regularly for you, could you turn the following two loggers to “Debug” (adjusted for your database name) from Console>Levels for a few minutes:
StoreAndForward.MSSQL.HSQLDataStore.DatasourceForwardTransaction
StoreAndForward.MSSQL.MemoryStore.MemoryForwardTransaction

They should result in a fair number of entries, but keep an eye out: as soon as you see a few duplicate errors, you can turn them back to INFO, then export the log and send it in (or grab wrapper.log from {InstallDir}\logs). I’m hoping to see a different error message get logged under the debug level.

It appears that the system might be performing an extra commit, which shouldn’t normally hurt, but if it threw an error, it could cause what I have in mind. I’ll get this cleaned up, and in the meantime, hopefully we can track down a more specific error to look at.

Regards,

This logger should also provide the same error information, perhaps with a little less extra detail:
StoreAndForward.MSSQL.TagHistoryDatasourceSink

At any rate, what we’re looking for is a different error, before the duplicate key error. We’re going to adjust the system to make sure to only commit once per forward transaction, which I think could be the root of all the problems here, and hopefully we can get a beta up tomorrow.

Regards,

7.5.4-beta4 was just released and should have the change that Colby was talking about.

Colby
Since I put beta4 on, we have not seen the PK error in the quarantine, so it looks like you may have gotten this one resolved. I have had beta4 running for a couple of days now, and I have seen two partitions of data tables in two different projects; looking good so far.

Thanks again for all the help. Just don’t get this kind of help from most software companies.

Hi,

I’m glad to hear that. It took a little longer to track down than I would have liked, but without being able to replicate the issue, it’s always a slow deductive process. In your particular case, I think it was the database deadlocks occurring in other places that were causing the problem. Now it’s not really a problem, because if that error occurs, the entire set will be rolled back and then retried.

Thanks for the followup.

Regards,