Not sure if I'm using an hour meter transaction group properly

So here’s my situation:

I have a recipe and a job_type. The recipe can be anything, while job_type is one of a handful of fixed values; both are string tags. The machine is either running or not running, which is represented as a boolean tag.

I want to set up a transaction group that has a running_hours meter split up by recipe and job_type, tracking how long the machine has run for each recipe/job_type combination.

I currently have it set up like this:

I have the job_type and recipe tags coming into the group as strings and being written to their respective database columns.

I have the running tag set up as an hour meter; it’s set to retentive, and up until now it never reset.

Other group settings:

  • Update mode: Bi-directional, DB wins.
  • Table action: update/select. I set it to key/value pairs with the job_type and recipe tags matching their respective columns.
  • The group runs every 15 seconds, with no other triggering configured.

So after letting this run for a while, I noticed that the total of all the running_hours for this machine was much higher than is even possible. For instance, I was watching yesterday and it jumped from 25 hours to 36 hours quite quickly, and now it’s at 45 hours, which shouldn’t be possible given how this machine is actually run…

My running theory is that since the running_hours meter never reset, it carried the previous recipe/job_type’s hours over when it started logging a new one, causing the totals to “stack up” and become inaccurate.

So at this point I’ve created a new expression tag called job_changed that is set to the expression:

hasChanged({[~]job_type}) || hasChanged({[~]recipe})

And I’m using this tag to reset the running hour meter tag.

UPDATE: I can confirm that doing this caused the running hours to drop, as it started completely resetting rows in the database.

I’m going to keep an eye on it and see if this works, but the running hours did seem to drop, at least initially. I’m not sure if that’s expected given what I’ve set up, but maybe it is? I have a hunch it’s going to end up resetting a bunch of the meters to zero…

Some guidance would be appreciated; I’m really not sure if I have this set up properly at all, and I’m just poking at things to see if I can get it to work.

If it’s entirely unrealistic to try to split up my hour meter this way, or if a different approach (like inserting a row for each time period the machine is running) would be better, let me know.

I’ve had luck using the “write handshake on success” option under the Trigger tab. I set my counting tag to 0 on success. It seems to work pretty well.

Calculating runtimes this way can be problematic because the transaction group isn’t measuring continuously; it just samples the state each time it executes. So if the motor is off for 5 seconds but on for 10 during a 15-second period, that period gets counted as either 15 or 0 seconds “on”, depending on which state it’s in when the transaction group runs.
I worked around this by running a gateway “tag change” script that calculates the runtimes of the motors I’m tracking using system.tag.queryTagCalculations with the ‘DurationOn’ calculation. The expression tag that triggers the script goes true at the end of every shift, so I get a runtime calculation per shift. I write the results to a dataset tag, which then gets written to a transaction group table. You could try the same thing, but run the gateway tag change script when the batch process finishes.
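
For what it’s worth, here’s a rough sketch of that kind of gateway tag change script, assuming Ignition 8 scripting functions (system.tag.queryTagCalculations and system.tag.writeBlocking), history enabled on the running tag, an 8-hour shift, and made-up tag paths:

# Gateway tag change script, fired by a "shift end" expression tag going true.
# Sketch only -- the tag paths, shift length, and dataset tag are assumptions.
if not initialChange and newValue.getValue():
    end = system.date.now()
    start = system.date.addHours(end, -8)  # assumed 8-hour shift

    # "DurationOn" totals the time the boolean tag spent True, based on the
    # recorded transitions rather than a polled snapshot every 15 seconds.
    results = system.tag.queryTagCalculations(
        paths=["[default]Machine/running"],
        calculations=["DurationOn"],
        startDate=start,
        endDate=end,
    )

    # Park the result on a dataset memory tag so a transaction group (or a
    # scripted insert) can log it to the database.
    system.tag.writeBlocking(["[default]Machine/shift_runtimes"], [results])

The same script body should work if you trigger it off a “batch finished” bit instead of a shift-end tag.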


This machine is only on for multiple minutes at a time, so missing a 15-second period here or there doesn’t matter.

I do have a tag that signals the end of a job, but sometimes the machine is stopped partway through and that bit never goes true. I want to track the total amount of time the machine spends in a running state.

If you want to be precise, record every transition of your “Running” status in a table with a timestamp. Perform delta calculations from event to event, summing the deltas that are “true” at the start. (The lead() and lag() window aggregate functions in SQL are the most efficient means to do this, imnsho.) Clip the deltas to specific time periods to summarize by hour/shift/day/week/whatever.
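
As a sketch of that idea (assuming a MySQL-style database with window function support, a hypothetical running_events table, and placeholder tag paths), the transition logging and the summary query could look roughly like this from Ignition scripting:

# Hypothetical table:
#   running_events(t_stamp DATETIME, running BOOLEAN,
#                  recipe VARCHAR(64), job_type VARCHAR(64))

def log_running_transition(running):
    # Call this from a tag change script on the Running bit: one row per transition.
    vals = system.tag.readBlocking(["[default]Machine/recipe",
                                    "[default]Machine/job_type"])
    system.db.runPrepUpdate(
        "INSERT INTO running_events (t_stamp, running, recipe, job_type) "
        "VALUES (?, ?, ?, ?)",
        [system.date.now(), running, vals[0].value, vals[1].value],
    )

def running_hours_summary():
    # Pair each transition with the next one via LEAD() and sum the intervals
    # where the machine was running at the start of the interval.
    return system.db.runQuery("""
        SELECT recipe, job_type,
               SUM(CASE WHEN running = 1 THEN delta_s ELSE 0 END) / 3600.0
                   AS running_hours
        FROM (
            SELECT recipe, job_type, running,
                   TIMESTAMPDIFF(SECOND, t_stamp,
                                 LEAD(t_stamp) OVER (ORDER BY t_stamp)) AS delta_s
            FROM running_events
        ) deltas
        GROUP BY recipe, job_type
    """)

Clipping to shifts/days then just means adding a WHERE clause on t_stamp (and deciding how to handle the interval that straddles the boundary, if you care about that level of precision).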


I’ll give this a try - thank you!