Local data caching for historical data logging

This is a feature request announcement. We’re working on a feature that will allow FactorySQL to locally cache logged data when the connection to the SQL database is lost. Upon reconnection, FactorySQL will write all of the missed records into the database.

This feature is now implemented.

Can this feature be modified so that when the database connection is re-established, the cached values are sent to the database before any new values? In other words, once FactorySQL starts caching, new values should continue to be cached until the cache is empty.

I have written a procedure that analyzes the database to determine the sequence of events that occurred in the process. When FactorySQL is “backfilling” after losing the database connection, it intermingles current values with historical values, and the analysis procedure doesn’t work correctly.

The timestamp should still be accurate; can your analysis use that to impose order on the records?

Carl’s right, but I would just add that you should almost always use an ORDER BY clause when selecting from the database. Without it, you’re simply assuming the database is going to return rows in some meaningful order, which isn’t always true. MySQL seems to be nice and return them ordered on the primary key, but other databases don’t (and this could change in future versions of MySQL, since nothing says it has to do this). So, making sure the t_stamp column is indexed and then ordering on it is the best way to go in this case.
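
For example, something like the following (the table and column names here are hypothetical, just to illustrate the pattern):

    -- Index the timestamp column so the ordered query stays fast.
    CREATE INDEX idx_t_stamp ON history_data (t_stamp);

    -- Always ask for the order explicitly instead of relying on
    -- whatever order the database happens to return rows in.
    SELECT t_stamp, tag_value
    FROM history_data
    ORDER BY t_stamp;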

I just wanted to point that out because it’s not always apparent, and it’s a big backwards compatibility/portability issue.

Regards,

The timestamp is accurate and I am using an ORDER BY clause. But if my procedure analyzes data while FactorySQL is in the process of “backfilling”, there will be gaps in the historical record. The gaps won’t be filled in until the “backfilling” is complete. My procedure has no way of knowing when the “backfilling” process is going on or when it is complete. If FactorySQL inserted the cached values in chronological order before inserting the new values, there would be no gaps in the historical record at any time.
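
To make the problem concrete, here is roughly how such a gap shows up. This is just a sketch, with hypothetical table and column names, and it assumes a database with window functions (e.g. MySQL 8+) and a logger that normally inserts at least once a minute:

    -- Find places where consecutive logged rows are more than 60
    -- seconds apart. During a backfill these gaps appear and then
    -- later disappear, which confuses the analysis procedure.
    SELECT prev_stamp AS gap_start,
           t_stamp    AS gap_end
    FROM (
        SELECT t_stamp,
               LAG(t_stamp) OVER (ORDER BY t_stamp) AS prev_stamp
        FROM history_data
    ) AS g
    WHERE TIMESTAMPDIFF(SECOND, prev_stamp, t_stamp) > 60;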

Instead of changing the way that data is “backfilled”, if you could provide a “backfill” indicator of some sort in the database, then my procedure could be written so that it doesn’t try to analyze the data while the “backfill” process is active.
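
Something as simple as a one-row status table would work. This is purely hypothetical (FactorySQL doesn’t expose anything like it today), but it shows the idea:

    -- Hypothetical status table that FactorySQL would maintain:
    -- backfilling = 1 while cached records are still being written.
    CREATE TABLE fsql_status (
        backfilling TINYINT NOT NULL
    );

    -- The analysis procedure would check the flag first and skip
    -- its run whenever a backfill is in progress.
    SELECT backfilling FROM fsql_status;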

Ah, yes, that makes sense. We’ll have to look into what we can do, though a setting like the one you described, which would cause it to load the data cache first, doesn’t seem like it would be too difficult.

Regards,

What is the status of this feature request?

Are you planning to implement some functionality to address this issue? If so, what is the schedule for that implementation?

Hi,

Indeed, the next (major) version of the software will include a revamped caching system, which is now actually more of a store-and-forward system. All data will pass through the system, and new data will only be written once cached data has been exhausted.

We’re currently targeting a release in early 4th quarter (though beta/test access will be available before that).

Regards,

Is the new version still on schedule to be released “early 4th quarter” and is this feature still included? When exactly is “early 4th quarter”?

Thank you

Hi,

The schedule is currently being described as “4th quarter”. We haven’t committed to a firm release date, but we are still confident that it will be in that time frame. We are taking a fairly non-committal approach because, given the scope of the update, we really want to be sure that it’s ready to release.

In regard to this feature: YES, it is there. The data caching system has been rewritten and made into a store-and-forward system.

Regards,