Hourly aggregates Historian/Transaction group/script

Hello everyone! I am trying to aggregate the number of errors over 1-hour intervals in the following way:

Under "detailing" (in the above image) I would like to show, in the respective Perspective label (the gray boxes next to scrap and good parts), the number of errors/triggers for that particular hour. For example, if the "error 1" Tag turned true 9 times and the "error 2" Tag 6 times from 6-7 am, I would like to show (and keep) that aggregated error count, and so on for every hour. I tried scripts using if getHour24 ... concatGroup and sums, but could not achieve what I wanted: the values reset to their defaults as soon as the respective hour rolled over.

Should I use transaction groups or some sort of query calculation? I'd like a bit of guidance on what is best, or which direction to head in, before jumping in.

See if Tag history difference over interval - #2 by jlandwerlen would work for you.

For this application, I would say neither. It’s one that would be easier with multiple transaction groups, but, if you have good organization of your tags, you can use a gateway tag change event script to write the events to the database.

As an example, I have a set of 4 assembly lines that I grab takt times from, each with 18 stations.

The structure of the tags is similar:

[tag tree image: Trigger/Time tags organized by (line) and (station)]

The script:

import re

regex = r"(34[0-9]*)(.*)(?<=ST)([0-9]*)"

# Get the path of the tag that triggered
pathIn = str(event.getTagPath())
# Replace 'Trigger' with 'Time' to make tag path to read
taktTimeTag = pathIn.replace('Trigger', 'Time')
# Read the takt time
tag = system.tag.readBlocking([taktTimeTag])[0]
# Get the value, quality, and timestamp of the takt time
value, quality, t_stamp = tag.value, tag.quality, tag.timestamp

if value is not None:
	# Create values to put in the database
	takt_time = round(value, 1)            # rounded to 0.1 sec for average and std dev calculations
	rounded_takt = int(round(takt_time,0)) # rounded to 1 sec for histogram functions. Storage is cheap.
	# regex pattern search to extract the line number and station.
	match = re.search(regex, pathIn)       
	line, fluff, station = match.groups()  # 'fluff' is a throwaway value

	# Insert values into the db
	query = "INSERT INTO takt (line, station, takt_time, rounded_takt, t_stamp) VALUES (?,?,?,?,?)"
	system.db.runPrepUpdate(query, [int(line), int(station), takt_time, rounded_takt, t_stamp], 'Takt')
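To illustrate what the regex pulls apart, here is a quick standalone check against a hypothetical tag path (the path below is an invented example, not one of my actual paths):

```python
import re

regex = r"(34[0-9]*)(.*)(?<=ST)([0-9]*)"

# Hypothetical trigger-tag path for line 341, station 12
path = "[default]Takt/Line341/ST12/Trigger"
match = re.search(regex, path)
line, fluff, station = match.groups()
print(line, station)  # prints: 341 12
```

The lookbehind `(?<=ST)` lets the middle group swallow everything between the line number and the station digits, so only the two numbers need to be kept.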

My other takt time scripts are similar; just the regexes change to cover the tag structures there. Some stations are named instead of numbered, etc. (Some of this stuff is several years plus old, and it’s taking time to normalize the tag structures. :wink: )
All of them, however, insert into a single table called takt.

So, when they want to show takt time in a table, it’s a single table that I’m drawing from.


In your use case, you could use a custom property to query the data with something like:

SELECT dateformat(t_stamp, 'HH:00') AS t_stamp,
       SUM(CASE WHEN errorNum = '1' THEN 1 ELSE 0 END) AS error1,
       SUM(CASE WHEN errorNum = '2' THEN 1 ELSE 0 END) AS error2
FROM error_table
WHERE t_stamp >= '2021-11-24 06:00:00' AND
      t_stamp <  '2021-11-24 14:00:00'
GROUP BY dateformat(t_stamp, 'HH:00')
ORDER BY dateformat(t_stamp, 'HH:00')

In your labels you can use a lookup() expression to grab the specific value from the custom property.
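For example, assuming the custom property is named `hourlyErrors` (a name I'm inventing here) and holds the query result as a dataset, the binding on the 6 am error-1 label could look something like:

```
lookup({view.custom.hourlyErrors}, '06:00', 0, 't_stamp', 'error1')
```

This returns the error1 count from the row whose t_stamp is '06:00', or 0 if no such row exists yet.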


If the individual booleans are historized in discrete mode, then a raw history query for the time period can be post-processed in a script binding. That would avoid event scripting and extra tables.
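A minimal sketch of that post-processing step, assuming the raw samples have already been pulled from the historian into a chronological list of (timestamp, value) pairs (the function name and sample data below are illustrative, not Ignition API):

```python
from collections import Counter
from datetime import datetime

def rising_edges_per_hour(samples):
    """Count False->True transitions, bucketed by the hour they occurred in.

    samples: chronological list of (datetime, bool) pairs, e.g. raw
    history rows for one error tag.
    """
    counts = Counter()
    prev = False
    for ts, value in samples:
        if value and not prev:
            # Truncate the timestamp to the top of its hour
            counts[ts.replace(minute=0, second=0, microsecond=0)] += 1
        prev = bool(value)
    return dict(counts)

# Illustrative data: two triggers in the 06:00 hour, one in the 07:00 hour
samples = [
    (datetime(2021, 11, 24, 6, 5),  True),
    (datetime(2021, 11, 24, 6, 10), False),
    (datetime(2021, 11, 24, 6, 20), True),
    (datetime(2021, 11, 24, 7, 1),  False),
    (datetime(2021, 11, 24, 7, 2),  True),
]
hourly = rising_edges_per_hour(samples)
```

Counting only rising edges (rather than True samples) avoids double-counting an error that stays latched across several stored rows.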


@JordanCClark I have to admit I was a bit baffled when I first saw your suggestion :sweat_smile: thank you for the input. I think for the mid and long term, implementations like this will be the way to go for me.
Your line data page looks really nice. How did you add the efficiency graph? Or did you embed it from an external source?

Usually, most of my tags are OPC tags: for each line, one with an integer array [48] data type and another with a boolean array [48] data type, and I have a symbol list in Excel to tell which signal is which. So whenever I have to standardize something, I just change the array element index.
The problem is that the lines have different numbers of signals, so the arrays vary in length.
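One way to cope with the varying lengths is to pair each line's array with its own symbol list and let the shorter of the two decide how many signals get decoded; the signal names and data below are hypothetical stand-ins for the Excel symbol list:

```python
def decode_signals(bool_array, symbol_names):
    """Map one line's boolean array onto named signals.

    zip() stops at the shorter sequence, so a line whose symbol list is
    shorter than the [48] array (or vice versa) decodes without index errors.
    """
    return {name: bool(v) for name, v in zip(symbol_names, bool_array)}

# Hypothetical symbol list for one line, taken from the Excel sheet
names = ["error 1", "error 2", "running"]
signals = decode_signals([1, 0, 1, 0], names)  # extra array elements ignored
```

With a per-line name list like this, scripts can refer to signals by name instead of hard-coded array indices, which makes the differing array lengths mostly invisible.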