Store and Forward out of order

Hello,

I have a problem with “store and forward”. My data aren’t saved in the database but are redirected to Quarantine instead. I’ve found the following logs in the console:

16:37:31 DatasourceForwardTransaction Error forwarding data: Duplicate entry '291-1347615285843' for key 'PRIMARY'
java.sql.BatchUpdateException: Duplicate entry '291-1347615285843' for key 'PRIMARY'

16:37:25 DatasourceForwardTransaction Error forwarding data: Duplicate entry '1080-1347615285796' for key 'PRIMARY'
java.sql.BatchUpdateException: Duplicate entry '1080-1347615285796' for key 'PRIMARY'

I’ve tried to:

  • Empty the quarantine pool
  • Disable the disk cache and re-enable it
  • Execute the SQL queries (the ones I’ve found on the forum that delete the entries ‘HAVING COUNT > 1’)

But none of these did the trick. I keep getting these messages cycling in the console.

How can I get rid of this error?

My version is 7.4.4.

Thank you.

Hi,
I don’t know if this is of any help, but I had the same problem, caused by the transaction group executing twice for the same data. Did you check that the records are really missing in your database?
In my case it was caused by a triggered group with ‘Reset trigger after execution’ set to true. This caused intermittent duplicate entries until I checked the ‘Only execute once…’ option.
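A quick way to check is to look at the most recent sample stored per tag. This is just a rough sketch: the table name follows the usual sqlt_data_<driver>_<year>_<month> partition naming, and it assumes t_stamp is a millisecond Unix epoch, which the 13-digit values in your errors suggest:

select tagid, from_unixtime(max(t_stamp) / 1000) as last_sample
from sqlt_data_1_2012_09
group by tagid

If last_sample keeps advancing, the rows are actually being stored despite the errors.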

Hi,

Is this going against MS SQL Server, or MySQL? This error has come up a number of times lately, and we’re still trying to track down the exact cause (there may be multiple). Some cases seem to be caused by tag definitions being duplicated in the internal database. I have a tool that I can send you that will clean them up, if they exist.

Another thing to try is to alter the table to remove the primary key index, which will allow the “duplicates” to insert. Then we can look at which rows are being duplicated, and see if there is anything in common between the tag configs. If you’re using SQL Server this is a bit easier, but it can be done with MySQL as well. Let me know which you’re using, and I can give you additional instructions.

Regards,

My database server is MySQL. I would really appreciate the tool so I can remove the duplicated tag definitions.

Regards.

Hi,

Download this and run it according to the instructions. Let me know if it reports any duplicates, or not.

Regards,

Here is the report.

There seems to be at least one duplicate. After several warnings, the duplicate is deleted, but when I restart Ignition, I still have the problem. The console reads:

MemoryForwardTransaction Error forwarding data: Duplicate entry '2712-1348215729020' for key 'PRIMARY' java.sql.BatchUpdateException: Duplicate entry '2712-1348215729020' for key 'PRIMARY'

Is there a way to find out what the number xxxx-1348215729020 represents?

I would really like to unblock my history storage, even if I have to lose data. The whole history storage has been out of order for a week now, which means I cannot read history or draw charts, just because there are some duplicates somewhere in the internal database.

Maybe it is possible to remove the PRIMARY constraint on the database table, but I have no idea where to find it.

Best regards.
rapport_ignition_duplicate.txt (7.05 KB)

Hi,

It appears that the tool did not work correctly, due to the state of the db. That’s OK though, because I think the better test is to actually drop the primary key, as you suggested, and then see whether duplicates really do get inserted. This will help us narrow down where the problem is.

To drop the primary key, run this on the current partition table (the example here assumes normal monthly partitioning):

drop index `PRIMARY` on sqlt_data_1_2012_09
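If you want to confirm the index is there before dropping it, this is a quick check (plain MySQL, nothing Ignition-specific; it lists one row per column of the key):

show index from sqlt_data_1_2012_09 where Key_name = 'PRIMARY'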

After that, you can “retry” any quarantined data. You should not get any more errors.
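As for your earlier question about what the number in the error represents: as far as I can tell, it is the two parts of the primary key, the tag id and the row’s millisecond timestamp. For your entry '2712-1348215729020', a quick way to decode it (from_unixtime is standard MySQL; divide by 1000 because t_stamp is in milliseconds):

select from_unixtime(1348215729020 / 1000)
select tagpath from sqlth_te where id = 2712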

After a few hours or a day, try running the following, and report back the results:

select tagid, t_stamp, count(*) from sqlt_data_1_2012_09 group by tagid, t_stamp having count(*) > 1

Regards,

Colby
As you know, we are having the same issue. At your request we reset partitioning to once a month several days back, and once the table was created I deleted the index, like we had been doing on the daily tables. I ran your query again this morning and it returned no rows. Since the deletion of the index we have seen no quarantined items. Have you made any headway on this issue?

Once the primary key was dropped, the quarantine could be retried without any error. That’s good news.

Here are the results of the second query (attached).

Regards,
Duplicates.csv (3.24 KB)

Hi,

The results of that query are a little interesting - they show that each tag, id 2700-2769, had one event on Friday with 27 duplicates, and then one event on Monday with 163 duplicates. Most of the events on Monday are from about the same time (it says 9:03:32 am, but I believe it is adjusting for my timezone, so maybe 18:03:32 for you?). On Friday, however, nearly all of the tag times are off by a few milliseconds. Those times are at 1:50:16 am, so 10:50:16 am your time, I believe.

Did you change anything at those times? Did you change the tags from OPC time to System time? I assume the system wasn’t running during the weekend.

To see the tag paths, you can run:

select tagpath from sqlth_te where id between 2700 and 2769
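If it helps to see when the duplicated events happened in readable form, here is a variant of the duplicates query (same assumption as before, that t_stamp is a millisecond Unix epoch; note that from_unixtime returns times in the database server’s time zone, which may explain the offsets I mentioned above):

select tagid, from_unixtime(t_stamp / 1000) as event_time, count(*)
from sqlt_data_1_2012_09
group by tagid, t_stamp
having count(*) > 1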

Are these your only historical tags?

Continue to run the query for duplicate values periodically, and if you see new entries, look at the console to see if there is any other error or message reported. I’m really trying to track down any kind of indication as to what sort of events cause this, as it doesn’t seem to be something that just happens constantly.

Regards,

Also, it would be interesting to try the following, and see if it returns the same results:

select tagid, t_stamp, count(*) from sqlt_data_1_2012_09 group by tagid, t_stamp, intvalue, floatvalue having count(*) > 1

This is just to confirm that different values aren’t being stored for the same time.
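If the two result sets ever differ, something like this should pinpoint the rows directly - timestamps where the same tag has more than one distinct stored value (a sketch that only covers the int and float columns from the query above):

select tagid, t_stamp,
       count(distinct intvalue) as distinct_ints,
       count(distinct floatvalue) as distinct_floats
from sqlt_data_1_2012_09
group by tagid, t_stamp
having count(distinct intvalue) > 1 or count(distinct floatvalue) > 1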

Regards,

Colby
I went back to one of our daily tables that we saw the issue in. Here are the results from your first and second queries. Let me know if you want to look at anything else, or we can set up a GoToMeeting and you can have access to the DB if you want. They look to be returning the exact same result set.
FirstQueryResults.csv (50.2 KB)
SecondQueryResults.csv (49.8 KB)

Hi,

It appears that you copied and pasted those results from Excel, or something else that truncated the timestamps to scientific notation. Could you try running the first query again and copy/paste the results into Notepad?

And just to be clear, this morning this query was returning nothing, right? So this is a new event that happened today?

Regards,

Colby,
These results are from a previous table, before we set partitioning to monthly. This table is from the 17th of this month.
I ran the query against the monthly table this morning; that one returns zero rows. I deleted the index on the monthly table as soon as it was created, and we haven’t seen any issues with it thus far.
result sets.csv (123 KB)

Ok, it's interesting that no new duplicates have been inserted into the new partition table.

2Gi- Could you try running the query from above and post the results?

Thanks,

Colby
Actually, the last CSV I posted has results from both queries you requested in this thread.
They both return the exact same result set.

[quote="Colby.Clegg"]
2Gi- Could you try running the query from above and post the results?

Thanks,[/quote]

Hello,

Here are the results of the query. It seems that we have many duplicates.

I'm not sure, but I think the presence of duplicates may be explained by this problem:
http://www.inductiveautomation.com/forum/viewtopic.php?f=72&t=8846

The timezone library was buggy after the update (it's fixed now), which caused conflicts in the history timestamps.

Regards.
Resultats_requete_doublons.csv (9.74 KB)

Sorry there hasn’t been a reply; Colby is out of the office until Monday, October 8th.

Hi,

Yes, that is certainly showing a LOT of duplicates for some tags. However, you’re using OPC timestamps, and with the version you’re using, if your deadband is set to 0 (or max age is 1), the same value would be logged each cycle. Perhaps that is what was happening during these times, and so while the value didn’t change, many duplicates were stored. Does this sound like what could be going on?
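One way to check whether that was happening is to count samples per tag per hour around those times (a rough sketch; 3600000 is the number of milliseconds in an hour, and the 1000-row threshold is just an arbitrary cutoff to surface tags that were logging every cycle):

select tagid, floor(t_stamp / 3600000) as hour_bucket, count(*) as samples
from sqlt_data_1_2012_09
group by tagid, hour_bucket
having samples > 1000
order by samples desc

A tag logged once per second would show about 3600 samples per hour; from_unixtime(hour_bucket * 3600) converts a bucket back to a readable time.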

In the 7.5.4 beta, we’ve modified what deadband=0 means, to fit what we’ve talked about in other threads. Previously, that setting would have recorded a value each time. Now, it will only record when some aspect of the value changes, even if it’s just the timestamp. So, you may want to upgrade to that version, and see if the duplicates keep coming in.

Regards,

Thank you,
I just updated to 7.5.4 beta 2.
For the moment everything is going well.
I’ll come back to you if there are any problems…
Regards,