Get Average Time Between Tag Value Change

We have a tag that goes high when our machine is running and low when it is not. We would like to get an idea, over X time, of the average time between High -> Low, i.e., how long the machine stays running. I have been playing around with tag calculations and tag history functions, but I can't seem to get the values I'm looking for. I do have comparison data, and I'm not even close. The closest post I could find that helped me sum running time was Calculate runtime percentage

Our start-up takes quite a while, so we want to see how long a machine stays up before it goes down. Right now my confirmed data says about 45 minutes (terrible), and the best I can get out of Ignition is 7 hours, which I know isn't close.

Hoping someone else has done this before or might have creative suggestions of how we can tackle it.

Personally I'd say the easiest way would be with a transaction group instead of just logging to the historian. It takes a little planning and setup, but it's pretty easy.

You would bring in your running bit, then use a run-always expression with the hasChanged function so it fires on every change of state of your running bit. That expression would be used to trigger your transaction group and to reset your timer.
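As a sketch, the trigger item could be as simple as the expression below. The tag path is a made-up placeholder; hasChanged in Ignition's expression language returns true whenever its argument's value differs from the previous execution, which is exactly the edge you want for the group trigger and the timer reset.

```
hasChanged({[default]Machine/Running})
```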

Then have another run-always expression set up in hour meter mode; it's up to you whether you want it in seconds or minutes. Set the trigger you created as its reset condition. This timer is optional, since you can also just do the math between entries, but I like logging it if I display downtime events, since that is one less thing I need to compute on the other end.
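If you skip the timer and "do the math between entries," the idea looks roughly like this. This is only a sketch: an in-memory SQLite table stands in for the transaction group's table, and all table, column, and tag names are illustrative, not from the original post.

```python
import sqlite3
from datetime import datetime

# Hypothetical schema: the transaction group writes one row per state change.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE state_log (t_stamp TEXT, running INTEGER);
INSERT INTO state_log VALUES
    ('2024-01-01 08:00:00', 1),
    ('2024-01-01 08:40:00', 0),
    ('2024-01-01 09:00:00', 1),
    ('2024-01-01 09:50:00', 0);
""")

rows = conn.execute(
    "SELECT t_stamp, running FROM state_log ORDER BY t_stamp").fetchall()

# Walk the log in order: each High -> Low pair is one run; average them.
fmt = "%Y-%m-%d %H:%M:%S"
runs = []
start = None
for ts, running in rows:
    if running and start is None:
        start = datetime.strptime(ts, fmt)
    elif not running and start is not None:
        runs.append((datetime.strptime(ts, fmt) - start).total_seconds() / 60)
        start = None

avg_run_minutes = sum(runs) / len(runs)
print(avg_run_minutes)  # average minutes per run: 40- and 50-minute runs here
```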

You will either want to invert your run signal when it is logged, or remember that a row logged as low is your run-time entry and a row logged as high is your stopped-time entry.

From here you can add anything else you want to track. That gives you a table with just your uptime and downtime, and you can use standard SQL queries to pull it back out. If an event overlaps the end of the time range you care about, you can do extra calculations to fill in the gaps between entries if needed.
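If you do log the optional timer from the step above, pulling the averages back out is one aggregate query. A sketch, again with SQLite in memory and invented names: each row records the new state plus the timer value for the state that just ended, so rows where the new state is low carry the run durations.

```python
import sqlite3

# Hypothetical table: the transaction group logs the hour-meter value
# (minutes spent in the state that just ended) alongside the new state.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE state_log (
    t_stamp TEXT,
    running INTEGER,      -- new state: 1 = running, 0 = stopped
    prev_minutes REAL     -- timer value for the state that just ended
);
INSERT INTO state_log VALUES
    ('2024-01-01 08:40:00', 0, 40.0),  -- ran 40 min, then stopped
    ('2024-01-01 09:00:00', 1, 20.0),  -- stopped 20 min, then started
    ('2024-01-01 09:50:00', 0, 50.0);  -- ran 50 min, then stopped
""")

# When the new state is low, prev_minutes is run time (per the inversion
# note above); when high, it is stopped time.
avg_run = conn.execute(
    "SELECT AVG(prev_minutes) FROM state_log WHERE running = 0").fetchone()[0]
print(avg_run)  # average minutes per run for this sample
```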

I prefer this method just because it's easier for me to pull the data back out and use it than when it is in the historian. I believe it can still be done from the historian, but how much work that is depends on how you have it set up to log. I also do it this way when I'm trying to track downtime events, since I can add more columns to the table to assign reasons, notes, or other information about the events.


I use transaction groups, logging to a SQL DB on change, for a lot of things. For example, I have some alarms where I need to know the duration the alarm was on. I can just write a query using a lead/lag function to grab the t_stamp and figure the duration out… lots of things you can do with the info at that point.
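The lead/lag idea can be sketched like this, using SQLite (3.25+ for window functions) in place of whatever SQL DB the group logs to; the table and column names are made up for illustration.

```python
import sqlite3

# Hypothetical change-log: one row per alarm state transition.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE alarm_log (t_stamp TEXT, active INTEGER);
INSERT INTO alarm_log VALUES
    ('2024-01-01 08:00:00', 1),
    ('2024-01-01 08:12:00', 0),
    ('2024-01-01 09:30:00', 1),
    ('2024-01-01 09:33:00', 0);
""")

# LEAD() pulls the next row's t_stamp, so each state's duration is just the
# gap between consecutive timestamps; the last row has no successor (NULL).
rows = conn.execute("""
    SELECT t_stamp, active,
           (strftime('%s', LEAD(t_stamp) OVER (ORDER BY t_stamp))
            - strftime('%s', t_stamp)) / 60.0 AS minutes
    FROM alarm_log
""").fetchall()

# Rows where active = 1 give the alarm-on durations.
durations = [m for (_, a, m) in rows if a == 1 and m is not None]
print(durations)
```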


Appreciate the bumps in a new direction. I will read up on the transaction groups and go from there!