Is there any way to reduce the density of historical data? I mean, I save a sample every second, and that resolution is unnecessary for data more than two years old, but I don't want to lose all of the old data. Is there any way to reduce the rate to one sample per minute for the old data?
I’m thinking something like
delete from sqlt_data_X_YYYY_MM where extract(second from from_unixtime(t_stamp/1000)) div 10 != 0
There might be a better/cleaner syntax, but hey… This should delete all entries where the seconds are not between 00 and 09.
You can use the sqlth_partitions table to find all the historian data tables.
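For example, something along these lines (MySQL syntax, and assuming the standard pname/end_time columns in sqlth_partitions, with end_time stored as a millisecond epoch) should list the partition tables that only hold data older than two years:
-- assumes end_time is a millisecond epoch; adjust for your database flavor
select pname from sqlth_partitions
where end_time < unix_timestamp(date_sub(now(), interval 2 year)) * 1000;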
edit: Oh wait, you’re historizing every second… then
delete from sqlt_data_X_YYYY_MM where extract(second from from_unixtime(t_stamp/1000)) != 0
Would delete every entry where seconds are not 0.
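If you want to be extra careful, you could also bound the delete by t_stamp so only rows older than two years get thinned, even if a partition table straddles the cutoff. A rough sketch, using a made-up table name (substitute the real partitions you found in sqlth_partitions):
-- sqlt_data_1_2021_05 is a hypothetical example table name
delete from sqlt_data_1_2021_05
where second(from_unixtime(t_stamp / 1000)) != 0
  and t_stamp < unix_timestamp(date_sub(now(), interval 2 year)) * 1000;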
Use at your own risk!
Haha, it’s worth trying. I’m just worried about whether Ignition will recognize the rest of the data.
What do you mean, the rest of the data?
You’re afraid the remaining data will be corrupted?
It won’t. Each row in those tables represents an individual data point; removing one shouldn’t break the others.
I’ve deleted individual entries many times with no issue.