Calculating Total Time in Seconds

Hi, I’m working on a project and trying to figure out how to calculate the total time that passes while a boolean tag is active, and display that information in a report.

For example, someone activates a toggle button (boolean value = 1) and 30 seconds later, they deactivate the toggle button (boolean value = 0). I want to record & store the time between activation (boolean value = 1) and deactivation (boolean value = 0) and then display that information on a report.

For a better explanation, the picture below shows three sets of time stamps. The toggle button was hit (boolean = 1) and then deactivated (boolean = 0) a few seconds later. I need to figure out how to total the time between the first “1” and the “0”, reset the timer, & then record the next set, and so on.

(image: Total Time Example)

If anyone has any ideas, I’m all ears!


I had to do something similar for a report where I was logging the time between safety rounds. I logged the tag change events to a MySQL database, created a query data source within the report, and then created a script data source to perform the calculation and write it back to the original data source. This is the part of my code dealing with the timestamp difference.

I don’t know if this can be adapted for your purpose, but it may help.

secondsBetweenRows = []
queryData = data['query'].getCoreResults()
previousTimestamps = {}
for row in range(queryData.getRowCount()):
	group = queryData.getValueAt(row, 'unit') + queryData.getValueAt(row, 'station')
	currentTimestamp = queryData.getValueAt(row, 'timeStamp')
	if group in previousTimestamps:
		# Seconds since the previous row of this unit/station group
		secondsBetween = system.date.secondsBetween(previousTimestamps[group], currentTimestamp)
		secondsBetweenRows.append(secondsBetween)
	else:
		# First row of the group; no elapsed time yet
		secondsBetweenRows.append(0)
	previousTimestamps[group] = currentTimestamp
data['query'] = system.dataset.addColumn(queryData, secondsBetweenRows, 'seconds', int)
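If you want to test the same per-group logic outside of Ignition, here is a standalone sketch of the loop above using plain `datetime` in place of `system.date.secondsBetween` (the sample rows are made up for illustration):

```python
from datetime import datetime

# (group, timestamp) pairs, analogous to unit+station and timeStamp above
rows = [
    ('A', datetime(2021, 7, 1, 13, 0, 0)),
    ('B', datetime(2021, 7, 1, 13, 0, 10)),
    ('A', datetime(2021, 7, 1, 13, 0, 30)),
    ('B', datetime(2021, 7, 1, 13, 1, 10)),
]

secondsBetweenRows = []
previousTimestamps = {}
for group, ts in rows:
    if group in previousTimestamps:
        # Seconds since the previous row of the same group
        secondsBetweenRows.append(int((ts - previousTimestamps[group]).total_seconds()))
    else:
        # First row of the group has no predecessor
        secondsBetweenRows.append(0)
    previousTimestamps[group] = ts

print(secondsBetweenRows)  # [0, 0, 30, 60]
```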

This may help you out:

# Create sample dataset

sampleHeaders = ['t_stamp', 'value']
sampleData = [['2021-07-01 13:42:28', 1],
              ['2021-07-01 13:43:23', 1],
              ['2021-07-01 13:43:28', 0],
              ['2021-07-01 13:43:29', 0],
              ['2021-07-01 13:43:30', 1],
              ['2021-07-01 13:43:32', 1],
              ['2021-07-01 13:43:36', 0],
              ['2021-07-01 13:43:39', 1],
              ['2021-07-01 13:43:41', 1],
              ['2021-07-01 13:43:45', 0]
             ]
sampleData = [[system.date.parse(row[0]), row[1]] for row in sampleData]

sampleDataSet = system.dataset.toDataSet(sampleHeaders, sampleData)

# -----------------------------------------------------------------------

# Using a PyDataSet will let us use slicing
pyData = system.dataset.toPyDataSet(sampleDataSet)

# Process Data. Since the first row will not have an elapsed time, we can
# use None for the run time
headers = ['t_stamp', 'value', 'run time']
data = [list(pyData[0]) + [None]]
#initialize elapsed time
elapsedSeconds = 0
# Using zip will let us iterate over two things at once.
# Start currentRow at row 1 instead of row 0
for currentRow, prevRow in zip(pyData[1:], pyData):
	# Check if the value in the previous row is a 1. 
	# If true, add to the elapsed time.	
	if prevRow['value'] == 1:
		elapsedSeconds += system.date.secondsBetween(prevRow['t_stamp'], currentRow['t_stamp'])
	if currentRow['value'] == 0:
		# Check whether we need to add the elapsed time to the 'run time' column or None
		if prevRow['value'] == 1:
			data.append(list(currentRow) + [elapsedSeconds])
			elapsedSeconds = 0
		else:
			data.append(list(currentRow) + [None])
	else:
		data.append(list(currentRow) + [None])
			
datasetOut = system.dataset.toDataSet(headers, data)

Resultant dataset:

row | t_stamp                      | value | run time
-----------------------------------------------------
0   | Thu Jul 01 13:42:28 EDT 2021 | 1     | None    
1   | Thu Jul 01 13:43:23 EDT 2021 | 1     | None    
2   | Thu Jul 01 13:43:28 EDT 2021 | 0     | 60      
3   | Thu Jul 01 13:43:29 EDT 2021 | 0     | None    
4   | Thu Jul 01 13:43:30 EDT 2021 | 1     | None    
5   | Thu Jul 01 13:43:32 EDT 2021 | 1     | None    
6   | Thu Jul 01 13:43:36 EDT 2021 | 0     | 6       
7   | Thu Jul 01 13:43:39 EDT 2021 | 1     | None    
8   | Thu Jul 01 13:43:41 EDT 2021 | 1     | None    
9   | Thu Jul 01 13:43:45 EDT 2021 | 0     | 6 
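If all you need for the report is the grand total for the window, summing the non-None entries of the 'run time' column gives it. A quick plain-Python sketch using the values from the dataset above:

```python
# 'run time' column values from the resultant dataset above
runTimes = [None, None, 60, None, None, None, 6, None, None, 6]

# Sum only the completed cycles (rows where a run time was recorded)
totalSeconds = sum(v for v in runTimes if v is not None)
print(totalSeconds)  # 72
```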

I usually do all of this in the database with lead() and/or lag() window functions. Say you have multiple of these recording to a single table, using an id column to segregate them. You record every change, and due to tag restarts or script restarts you get some duplicate rows as you’ve shown. The query to pick out all of the completed cycles that overlap a time period would be something like this:

Select
	id,
	t_stamp,
	finish_ts,
	finish_ts - t_stamp As duration
From (
	Select
		id,
		t_stamp,
		value,
		lead(t_stamp) Over (Partition By id Order By t_stamp) As finish_ts
	From (
		Select
			id,
			t_stamp,
			value
		From (
			Select
				t1.id,
				t_stamp,
				value,
				lag(value) Over (Partition By id Order By t_stamp) As priorvalue
			From myChangesTable t1
			Left Join (
				Select id, max(t_stamp) As lastpriorzerots
				From myChangesTable
				Where t_stamp < :start_ts And value = 0
				Group By id
			) sub1 On t1.id = sub1.id
			Left Join (
				Select id, min(t_stamp) As firstpostzerots
				From myChangesTable
				Where t_stamp >= :end_ts And value = 0
				Group By id
			) sub2 On t1.id = sub2.id
			Where t_stamp >= coalesce(sub1.lastpriorzerots, :start_ts)
				And t_stamp < coalesce(sub2.firstpostzerots, :end_ts)
		) inner1
		Where value = 0 Or coalesce(priorvalue, 0) = 0
	) outer1
) outer2
Where value = 1
Order By id, t_stamp

Note that the above doesn’t clip the runtime to the bounds. When compiling performance stats, you may need to allocate partial values to individual shifts or days or weeks or months.
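Clipping a cycle to the reporting window is just a max/min on the two endpoints. A standalone sketch (a hypothetical helper, using plain `datetime`; not from the query above):

```python
from datetime import datetime

def clipped_seconds(start, finish, window_start, window_end):
    """Seconds of the [start, finish] cycle that fall inside the window."""
    s = max(start, window_start)
    f = min(finish, window_end)
    # A cycle entirely outside the window yields a negative span; clamp to 0
    return max(0, int((f - s).total_seconds()))

window_start = datetime(2021, 7, 1, 13, 0, 0)
window_end   = datetime(2021, 7, 1, 14, 0, 0)

# A cycle that started before the window and ended inside it:
# only the minute inside the window counts
print(clipped_seconds(datetime(2021, 7, 1, 12, 59, 0),
                      datetime(2021, 7, 1, 13, 1, 0),
                      window_start, window_end))  # 60
```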



Can this work with a transaction group? And if so, would I write this as a client script? Sorry, fairly new to the Ignition world.

Not sure I understand the question, but my assumption is that the transaction group writes to the database, while the script would read from it. The database would be the common point.

I guess my question is, I’m not quite sure where to write the script… I know the transaction group is writing to the database, but I’m not sure whether this should be written as a client script, a gateway script, etc. If that makes sense…

Did you figure this out?

I have a similar situation, but many more values rather than 0 and 1.

I think my transaction group writes the code value and t_stamp to my table.

Then I plan to use a named query to show the data in a table on a perspective view.

What did you end up doing?

I could probably help if you show a sample of the data.

Does this relate to the other thread?

This may help you, done by @JordanCClark as well.

(image: sample table with columns linestops_ndx, stopCode, t_stamp)

Maybe I should make a new thread instead?
This is Brit’s thread with the 1 and 0 values


Agree. But, let's ask a moderator to split the topic. Which I guess this counts as asking...


More questions. I find that I usually have more questions the older I get... :wink:

Are you just wanting a count of the individual error codes? You could do that within the query.

To clarify: the OP’s data was coming from the historian. Yours, I believe, is coming from a transaction group, correct?

I think we are both using transaction groups

You didn't answer, so I'm going to assume 'yes' :crazy_face:

If you're querying the table generated from the transaction group the query would be something like:

SELECT stopCode, 
       COUNT(stopCode) as qty
FROM myTable
WHERE t_stamp >= '2021-11-19 16:00:00' 
      and t_stamp < '2021-11-19 18:00:00'
GROUP BY stopCode
ORDER BY COUNT(stopCode) DESC

If you are looking to script it, this can be done easily enough with the collections.Counter() class:

sampleHeaders = ['linestops_ndx', 'stopCode', 't_stamp']
sampleData = [[16,  534, '2021-11-19 16:58:22'],
              [17, 1002, '2021-11-19 16:58:22'],
              [18, 1001, '2021-11-19 16:58:22'],
              [19, 1002, '2021-11-19 16:58:22'],
              [20, 1000, '2021-11-19 16:58:22'],
              [21, 1002, '2021-11-19 16:58:22'],
              [22,  534, '2021-11-19 16:58:22'],
              [23, 1001, '2021-11-19 16:58:22'],
              [24, 1001, '2021-11-19 16:58:22'],
              [25, 1002, '2021-11-19 16:58:22'],
              [26,  634, '2021-11-19 16:58:22']
             ]

sampleDataSet = system.dataset.toDataSet(sampleHeaders, sampleData)

#-----------------------------------------------------------------------

from collections import Counter

# A query result (using system.db.runPrepQuery(), et al.) comes in as a PyDataSet 
pyData = system.dataset.toPyDataSet(sampleDataSet)

# Get column names for the incoming data
pyColNames = list(pyData.getColumnNames())
# Get the values of a column
codeList = pyData.getColumnAsList(pyColNames.index('stopCode'))

headers = ['stopCode', 'qty']

dataOut = Counter(codeList).most_common()

dataSetOut = system.dataset.toDataSet(headers, dataOut)

Output dataset:

row | stopCode | qty
--------------------
0   | 1002     | 4  
1   | 1001     | 3  
2   | 534      | 2  
3   | 1000     | 1  
4   | 634      | 1  

Is there a keyword like Duration(stopCode)? Haha, because then it would be solved.

So, what are you wanting? The time difference between stopCodes?

Nvm. Total time for each stop code?

Yes.
Right now I want to know how much time the machine spends in each code.
Then I know how to get the top x number of those.
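One way to script that (a sketch only, assuming each stop code stays active from its own row until the next logged row; the sample rows and timestamps are hypothetical) is to pair consecutive rows and accumulate seconds per code:

```python
from datetime import datetime
from collections import defaultdict

# (stopCode, t_stamp) rows, ordered by time
rows = [
    (534,  datetime(2021, 11, 19, 16, 0, 0)),
    (1002, datetime(2021, 11, 19, 16, 0, 30)),
    (534,  datetime(2021, 11, 19, 16, 1, 0)),
    (1001, datetime(2021, 11, 19, 16, 2, 0)),
]

totals = defaultdict(int)
# zip pairs each row with its successor; the final row has no
# successor yet, so its still-open interval is not counted
for (code, start), (_, finish) in zip(rows, rows[1:]):
    totals[code] += int((finish - start).total_seconds())

print(dict(totals))  # {534: 90, 1002: 30}
```

Sorting `totals.items()` by value descending then gives you the top x codes.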


Semi-unrelated: a count since the 1st of this month:

SELECT stopCode,
       COUNT(stopCode) AS qty
FROM myTable
WHERE t_stamp > DATEADD(mm, DATEDIFF(mm, 0, GETDATE()), 0)
      AND t_stamp <= GETDATE()
GROUP BY stopCode
ORDER BY COUNT(stopCode) DESC