Transaction group setup question for fault tables

I have fault tables with a single t_stamp for the event.
In SQL, I calculate the durations of these faults by looking at the next row's t_stamp.

I have been wondering if two t_stamps, one for the start and one for the end of the fault state, would have been better.

With a single t_stamp, I collect less data per insert, and I calculate the difference between the event t_stamp and the lead t_stamp in the query.
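Roughly what my duration query does, sketched here against an in-memory SQLite database (which supports LEAD() as of 3.25); the table and column names mirror my setup but are illustrative:

```python
import sqlite3

# In-memory SQLite stands in for the real DB; fault_events mirrors a table
# with a single t_stamp per fault event.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fault_events (t_stamp INTEGER, fault_code INTEGER)")
conn.executemany(
    "INSERT INTO fault_events VALUES (?, ?)",
    [(1000, 12), (1060, 7), (1075, 0)],  # epoch seconds; 0 = fault cleared
)

# Duration of each fault = next row's t_stamp minus this row's t_stamp.
rows = conn.execute("""
    SELECT t_stamp,
           fault_code,
           LEAD(t_stamp) OVER (ORDER BY t_stamp) - t_stamp AS duration_s
    FROM fault_events
""").fetchall()
for r in rows:
    print(r)
```

The last row's duration comes back NULL, since there is no next row yet.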

Seems like a bit of work to wait for the end of the fault, then activate the insert and shift the tags.

Sometimes I get a fault code, but the next fault is a secondary one triggered while someone fixes the previous fault. For example, they have to open a door to realign a sensor.
For those cases, I think I would have to do math on the time stamps in SQL.

Is using one t_stamp the best way to collect faults in a transaction group?
Or is it a trade-off with no single right answer?

Depends on the DB, really, and how comfortable you are with LEAD() or LAG() functions. If you do a great deal of it, and the raw data is never changed after insert, you might consider including a nullable next_t_stamp column in your tables. Your inserts would leave it null. Then run a maintenance script once a minute or so that uses LEAD() on recent data and updates those rows. Your reporting queries then don't have to do that part and will run faster, as long as they exclude rows where that column is still null.

I wouldn't try to update prior rows one at a time as new rows arrive. Not efficient at all. A maintenance script can make the DB do all the hard work in decent-sized gulps.
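A sketch of that maintenance pass, again against SQLite for illustration (the next_t_stamp column name and table layout are assumptions, not your actual schema): inserts leave next_t_stamp null, and a periodic script backfills it with one set-based statement instead of row-at-a-time updates.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE fault_events (
        id INTEGER PRIMARY KEY,
        t_stamp INTEGER,
        fault_code INTEGER,
        next_t_stamp INTEGER  -- left null at insert time
    )
""")
conn.executemany(
    "INSERT INTO fault_events (t_stamp, fault_code) VALUES (?, ?)",
    [(1000, 12), (1060, 7), (1075, 0)],
)

# One set-based pass: compute LEAD() over the not-yet-filled rows, then
# update them in bulk. The newest row stays null until a later fault closes it.
conn.execute("""
    WITH nexts AS (
        SELECT id, LEAD(t_stamp) OVER (ORDER BY t_stamp) AS nxt
        FROM fault_events
        WHERE next_t_stamp IS NULL
    )
    UPDATE fault_events
    SET next_t_stamp = (SELECT nxt FROM nexts WHERE nexts.id = fault_events.id)
    WHERE next_t_stamp IS NULL
      AND (SELECT nxt FROM nexts WHERE nexts.id = fault_events.id) IS NOT NULL
""")

rows = conn.execute(
    "SELECT t_stamp, next_t_stamp FROM fault_events ORDER BY t_stamp"
).fetchall()
print(rows)
```

In a production DB you'd also bound the scan to recent rows (e.g. a t_stamp window) so the pass stays cheap as the table grows.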
