Logging data over a long time

Hi, I am still evaluating FSQL; it seems very promising so far.

One question:
Let’s say I want to log 100 values.
These 100 values (integers) will be logged every minute.
So in one year that gives me 100 values × 2 bytes × 60 minutes × 8,760 hours = a file of roughly 105 MB of data.

Anyone can see that files can grow rapidly when logging more values and/or logging faster.

Ideal (for me) would be if I could log certain values, let's say every 10 seconds, for 6 months.
With that data I could look into trends.
But after 6 months I still want to be able to look at trends, just not as precisely.
So for data older than 6 months I would like to keep only 1 value every 15 minutes. That value would be the average of the values that were logged every 10 seconds.

Is that possible?
I hope my posting makes sense :wink:

Hello and welcome to the forums!

One of the biggest advantages of our software is that everything is centered around standard SQL databases, whose very purpose is to store data and let you do all sorts of things with it. So in this case you're really asking whether a database can accomplish what you want, not whether our software can.

The short answer is "yes". The long answer is that long-term data management is part of any database, and there are many different ways to accomplish different goals.

Starting with the closest solution to the question you asked: you could write a simple query that pares down data over 6 months old, and then have FactorySQL run it periodically. Going a step further, you could implement it as a slightly more complex stored procedure that is a bit more intelligent about the paring, for example examining the data to keep only the most important points (obviously this depends on what the data is and what "important" means).
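As a minimal sketch of the simple-query approach, assuming a hypothetical raw table `value_log(tag_id, t_stamp, value)` and a summary table `value_log_15min` with the same columns (your table and column names will differ):

```sql
-- Capture the cutoff once so both statements agree on it.
SET @cutoff = DATE_SUB(NOW(), INTERVAL 6 MONTH);

-- Roll rows older than the cutoff up into 15-minute (900 s) averages...
INSERT INTO value_log_15min (tag_id, t_stamp, value)
SELECT tag_id,
       FROM_UNIXTIME(FLOOR(UNIX_TIMESTAMP(t_stamp) / 900) * 900),
       AVG(value)
  FROM value_log
 WHERE t_stamp < @cutoff
 GROUP BY tag_id, FLOOR(UNIX_TIMESTAMP(t_stamp) / 900);

-- ...then remove the raw rows that were just summarized.
DELETE FROM value_log WHERE t_stamp < @cutoff;
```

Run on a schedule (or wrapped in a stored procedure), that keeps full-resolution data for 6 months and 15-minute averages beyond that, which is exactly the scheme you described.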

However, depending on the particular database you're using, there could be other good options. Several databases offer "archiving" features that can set aside and compress old data. You don't actually lose anything, and it all stays available to you, usually at the cost of query speed. MySQL's ARCHIVE engine, for example, can compress data by up to 60%, which means that by your general calculation, data logged every 10 seconds for 10 years would take up around 3.5 gigabytes. Given the price of disk space these days, that's practically nothing.
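A rough sketch of what that could look like in MySQL (again, `value_log` and its columns are hypothetical; note the ARCHIVE engine only supports INSERT and SELECT, which is fine for history you only read back):

```sql
-- Create a compressed table for historical rows.
CREATE TABLE value_log_archive (
  tag_id  INT,
  t_stamp DATETIME,
  value   INT
) ENGINE=ARCHIVE;

SET @cutoff = DATE_SUB(NOW(), INTERVAL 6 MONTH);

-- Move old rows into the archive table...
INSERT INTO value_log_archive
SELECT tag_id, t_stamp, value
  FROM value_log
 WHERE t_stamp < @cutoff;

-- ...and drop them from the live table.
DELETE FROM value_log WHERE t_stamp < @cutoff;
```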

Another database feature that might be useful is "table partitioning". This is a database-supported way to break up the data, primarily used to increase query performance (good if you're doing a lot of graphing). You could have the database create a new partition periodically (every 6 months or so). Your queries don't know about the partitions; they still see one table. But the queries are faster, and you can do what you want with older partitions over time (delete the data, archive it, etc.).
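To give you an idea, here is a sketch using MySQL's range partitioning (table and partition names are made up; boundaries would normally be added on a rolling basis):

```sql
-- Partition the log table by timestamp range. Queries still just
-- say "FROM value_log"; the database picks the partition(s) to read.
CREATE TABLE value_log (
  tag_id  INT,
  t_stamp DATETIME,
  value   INT
)
PARTITION BY RANGE (TO_DAYS(t_stamp)) (
  PARTITION p2008h1 VALUES LESS THAN (TO_DAYS('2008-07-01')),
  PARTITION p2008h2 VALUES LESS THAN (TO_DAYS('2009-01-01')),
  PARTITION pmax    VALUES LESS THAN MAXVALUE
);

-- Later, removing a whole partition's worth of old data is one
-- cheap operation instead of a huge DELETE:
ALTER TABLE value_log DROP PARTITION p2008h1;
```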

Anyhow, as you can see, there's a wide variety of options available to you. I didn't want to overload you with info, just give you a few different paths to look at. I'm sure others on here will have more suggestions…

Regards,