While experimenting with the “Delete records older than:” option in a transaction group, I noticed that all rows in the database table were deleted (even rows newer than the configured limit) when I inserted a new row with an old timestamp via the script console.
For testing, I used the following configuration:
These values were inserted by the transaction group (on a value change trigger):
30 seconds later I inserted a new row via the script console (system.db.runPrepUpdate), with a t_stamp from yesterday:
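The insert looked roughly like this (a minimal sketch; the table and column names here are assumptions for illustration, not the actual group configuration). The Python below only builds the query and the yesterday timestamp, with the Ignition call itself shown as a comment since `system.db` is only available inside Ignition:

```python
from datetime import datetime, timedelta

# A t_stamp from yesterday, i.e. well outside the
# "Delete records older than:" window of the group.
yesterday = datetime.now() - timedelta(days=1)

# Assumed table/column names, for illustration only.
query = "INSERT INTO group_table (my_value, t_stamp) VALUES (?, ?)"
params = [123, yesterday]

# In the Ignition script console the actual insert would be:
# system.db.runPrepUpdate(query, params, "MyDatasource")
```

Shortly after this row landed in the table, the pruning pass ran.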
A couple of seconds later, all the rows were deleted from the table, even though three of the t_stamps were “inside” the delete limit:
It seems to happen whenever I insert a new row with a t_stamp outside the delete limit and a higher row index than the existing rows.
It seems strange that it deletes all rows, even though many of them are inside the “time limit” of the transaction group.
Is this normal behaviour? Has anyone else experienced this, and does anyone know how to avoid it (besides disabling the “Delete records older than:” option)?
Tested on Ubuntu (v8.1.16) and Windows (v8.1.11).
I agree that this may not be the normal usage of a transaction group, but I am glad I did not hit it on a large table with thousands of rows, where it would have resulted in the loss of a large amount of data.
Maybe there should be some kind of logic preventing this from happening, if such a safeguard does not already exist and I am simply not aware of it?
I have tried creating new “custom” tables, adjusting the “Delete records older than:” setting to larger values (hours, days, etc.), and adjusting the transaction group trigger settings; the issue still happens.
I have created a support ticket, and the first response from support is:
What you're seeing here is a known bug related to internal timestamp comparisons. Notice that if you start with a brand new table, the issue will still happen, and it happens every 60 seconds. The issue relates to timezones, so I think the logic behind the scenes will cause some issues.
The support team further says that an internal bug ticket has been created so the developers can prioritize it. Development has also been asked to verify the conditions required to trigger the issue and to potentially escalate it, since it poses a risk to data in large tables.
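If the cause really is a timezone-skewed timestamp comparison, the effect could be sketched like this (purely hypothetical numbers and a hypothetical UTC+2 offset; this is my illustration, not the actual pruning code): a cutoff computed in the wrong zone can land in the future, so every row in the table compares as “older than” it.

```python
from datetime import datetime, timedelta

retention = timedelta(seconds=60)       # "Delete records older than: 60 s"
tz_offset = timedelta(hours=2)          # assumed local offset, for illustration

now_local = datetime(2022, 6, 1, 12, 0, 0)
# Skewed cutoff: offset applied in the wrong direction pushes the
# cutoff ~2 hours into the future instead of 60 s into the past.
cutoff_wrong = now_local + tz_offset - retention

# Four rows: three within the last 60 s, one 90 s old.
rows = [now_local - timedelta(seconds=s) for s in (5, 15, 25, 90)]

# Every row is "older" than the future cutoff, so all are pruned.
deleted = [r for r in rows if r < cutoff_wrong]
```

Under that assumption, all four rows end up in `deleted`, which would match the behaviour I observed where rows inside the retention window were removed along with the old one.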