Tag history difference over interval

I am trying to find a better way to do production counts over specific intervals using tag history.

For example, I have a tag, TestCount. Right now it's a memory tag, but typically this would be an OPC tag from a PLC that increments. I can't use the Count aggregationMode because it may not "catch" each transition; for example, the value may jump from 10 to 13. My goal is to chart this data over a period of time (say, 12 hours) at a certain interval (say, 1 hour). So if the value was 100 at the start of the hour and 113 at the end, I would chart 13. Also, this value can reset. I have done this before with SQL queries, but I was hoping to find an easier way with tag history.
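The behavior I'm after (difference per interval, with a drop treated as a counter reset) can be sketched in plain Python, independent of tag history. Everything here is illustrative: interval_counts, the sample list, and the drop-means-reset choice are my own, not part of the Ignition API:

```python
from datetime import datetime

def interval_counts(samples):
    """Bucket positive deltas between consecutive samples by hour.

    samples: list of (datetime, value) pairs in time order. A drop in
    value is treated as a counter reset, and the new value is added as
    the count accumulated since the reset.
    """
    buckets = {}
    for (t_prev, v_prev), (t, v) in zip(samples, samples[1:]):
        hour = t.replace(minute=0, second=0, microsecond=0)
        if v > v_prev:
            buckets[hour] = buckets.get(hour, 0) + (v - v_prev)
        elif v < v_prev:  # counter reset
            buckets[hour] = buckets.get(hour, 0) + v
    return buckets

# Hypothetical samples: 100 -> 113 within the 15:00 hour, then a reset.
samples = [
    (datetime(2021, 6, 29, 15, 0), 100),
    (datetime(2021, 6, 29, 15, 30), 113),
    (datetime(2021, 6, 29, 16, 5), 0),
    (datetime(2021, 6, 29, 16, 40), 7),
]
print(interval_counts(samples))  # 13 counts in the 15:00 hour, 7 in the 16:00 hour
```

Unlike the desired chart output, this sketch omits hours with zero counts; filling those in is a separate step.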

Is there an aggregationMode that will accomplish this that I’m missing? Or, does anyone have any recommendations?

Here is an example: I ran system.tag.queryTagHistory with startDate = '2021-06-29 11:00:00' and endDate = '2021-06-30 11:00:00'. Results (t_stamp, value):

Tue Jun 29 15:36:58 EDT 2021 0
Tue Jun 29 15:37:14 EDT 2021 1
Tue Jun 29 15:37:16 EDT 2021 2
Tue Jun 29 15:37:19 EDT 2021 3
Tue Jun 29 15:37:23 EDT 2021 4
Tue Jun 29 15:42:59 EDT 2021 5
Tue Jun 29 15:43:13 EDT 2021 6
Tue Jun 29 15:43:54 EDT 2021 7
Tue Jun 29 15:46:01 EDT 2021 8
Tue Jun 29 15:46:04 EDT 2021 9
Tue Jun 29 15:46:41 EDT 2021 10
Tue Jun 29 15:46:44 EDT 2021 11
Tue Jun 29 15:46:48 EDT 2021 12
Tue Jun 29 15:46:58 EDT 2021 0
Tue Jun 29 15:47:08 EDT 2021 1
Tue Jun 29 15:47:49 EDT 2021 2
Tue Jun 29 15:48:28 EDT 2021 1
Tue Jun 29 15:51:06 EDT 2021 2
Tue Jun 29 15:51:10 EDT 2021 3
Tue Jun 29 15:51:13 EDT 2021 4
Tue Jun 29 15:52:50 EDT 2021 0
Tue Jun 29 15:53:46 EDT 2021 1
Wed Jun 30 08:56:45 EDT 2021 5
Wed Jun 30 10:48:46 EDT 2021 6
Wed Jun 30 10:49:32 EDT 2021 8

So, if I want to see counts per hour it would be something like:

Tue Jun 29 11:00:00 EDT 2021 0
Tue Jun 29 12:00:00 EDT 2021 0
Tue Jun 29 13:00:00 EDT 2021 0
Tue Jun 29 14:00:00 EDT 2021 0
Tue Jun 29 15:00:00 EDT 2021 19
Tue Jun 29 16:00:00 EDT 2021 0
Tue Jun 29 17:00:00 EDT 2021 0
Tue Jun 29 18:00:00 EDT 2021 0
Tue Jun 29 19:00:00 EDT 2021 0
Tue Jun 29 20:00:00 EDT 2021 0
Tue Jun 29 21:00:00 EDT 2021 0
Tue Jun 29 22:00:00 EDT 2021 0
Tue Jun 29 23:00:00 EDT 2021 0
Wed Jun 30 00:00:00 EDT 2021 0
Wed Jun 30 01:00:00 EDT 2021 0
Wed Jun 30 02:00:00 EDT 2021 0
Wed Jun 30 03:00:00 EDT 2021 0
Wed Jun 30 04:00:00 EDT 2021 0
Wed Jun 30 05:00:00 EDT 2021 0
Wed Jun 30 06:00:00 EDT 2021 0
Wed Jun 30 07:00:00 EDT 2021 0
Wed Jun 30 08:00:00 EDT 2021 4
Wed Jun 30 09:00:00 EDT 2021 0
Wed Jun 30 10:00:00 EDT 2021 3
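As a sanity check on those hourly totals, the same drop-means-reset arithmetic can be done in plain Python. bucket_total is a hypothetical helper; each list is the value sequence transcribed from the raw results above, with the last value of the previous hour carried in as the first element where needed:

```python
def bucket_total(values):
    """Sum positive deltas; on a drop (counter reset), add the new value itself."""
    total = 0
    for prev, cur in zip(values, values[1:]):
        if cur > prev:
            total += cur - prev
        elif cur < prev:  # reset: new value is the count since the reset
            total += cur
    return total

# Values seen during the 15:00 hour on Jun 29 (from the dump above)
hour_15 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 0, 1, 2, 1, 2, 3, 4, 0, 1]

print(bucket_total(hour_15))    # 19, matching the 15:00 row above
print(bucket_total([1, 5]))     # Wed 08:00 hour, carry-in 1 -> 4
print(bucket_total([5, 6, 8]))  # Wed 10:00 hour, carry-in 5 -> 3
```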

Is this possible with the current aggregation options? If not, does anyone have any help on this?

With the help of @JordanCClark (thanks, didn't know about zip), I ended up with the script below. It takes a history tag (integer, float, or double) and finds the difference over hour intervals between a start and end date. Any value over 0 is counted. The counter can roll over at any time, although your process may deem it necessary to only roll over when you aren't running. You can then use this in a bar chart, for example, to show counts per hour over time. If the tag doesn't exist, it should just return zeros rather than throw an error. I tested with some fairly large sets and it seems to work very well.

#Takes an integer or float tag and returns an accumulated value for each hour within the start/end dates
def hourInterval(tagPath, startDate, endDate):
	
	# Parse Start & End Dates - Attempts to parse a string and create a Date
	current = start = system.date.parse(startDate,'yyyy-MM-dd HH')
	end = system.date.addHours(system.date.parse(endDate,'yyyy-MM-dd HH'), 1)
			
	#Get data from tag history
	result = system.tag.queryTagHistory(
			paths = [tagPath],
			startDate = start,
			endDate = end,
			aggregationMode = 'MinMax',
			ignoreBadQuality = True,
			timeout = 10000,
			returnSize = 0 #Natural
			)
					
	dataHeaders = ['t_stamp', 'value']
	data = [[result.getValueAt(row, 0), result.getValueAt(row, 1)] for row in range(result.rowCount)] # t_stamp is already a Date; no parse needed
	if result.rowCount <= 0: # if no data, create at least one placeholder row
		data.append([start, 0]) # use the parsed Date, not the raw string
	dataSet = system.dataset.toDataSet(dataHeaders, data)
				
	# Convert to get a PyDataSet.	
	pyData = system.dataset.toPyDataSet(dataSet)
			
	# Create a dictionary with all the hourly times as keys
	timeDict = {}
	while system.date.isBefore(current, end):
		timeDict[current] = 0
		current = system.date.addHours(current, 1)
				
	# Only proceed if the dataset has at least one row.
	if len(pyData) > 0:
		
		# Iterate through the rest of the rows.
		# With a PyDataSet we can use a slice to start at the second row
		# With zip, we can now iterate through the sliced dataset and the original at the same time.
		for row, prevRow in zip(pyData[1:], pyData):
			# Get dates and values for the current and previous rows,
			# then set minutes and seconds to zero.
			date, value = list(row)
			prevDate, prevValue = list(prevRow)
			date.setSeconds(0)
			date.setMinutes(0)
			prevDate.setSeconds(0)
			prevDate.setMinutes(0)
			
			# update the current hour with difference.
			if value > prevValue:
				timeDict[date] += value - prevValue
			# If value dropped, the counter reset: add the new value (counts since reset).
			elif value < prevValue:
				timeDict[date] += value
			
		# Create and return a dataset created from the time dictionary.
		headers = list(pyData.getColumnNames())
		dataOut = [[system.date.format(time, 'MM-dd HH:00'), float(timeDict[time])] for time in sorted(timeDict.keys())]
		return system.dataset.toDataSet(headers, dataOut)
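For reference, the zip trick used in the loop above is just pairwise iteration: zipping a slice of the rows against the rows themselves yields (current, previous) pairs with no index bookkeeping. Toy list here, not real tag data:

```python
rows = [10, 12, 15, 0, 4]
pairs = list(zip(rows[1:], rows))  # (current, previous), same pattern as the script
print(pairs)  # [(12, 10), (15, 12), (0, 15), (4, 0)]
```

zip stops at the shorter input, so the slice starting at the second element naturally skips the first row, which has no previous value.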