Report - Tag Historian Query Aggregation by Day

When I'm doing a Tag History Query on a Report, I'm having issues dealing with daylight saving time because I'm trying to aggregate over the course of a day (but aggregation intervals can only be configured by hour). Is there any way to get an aggregation mode by day (instead of hour)? I would hope aggregation by day would then handle the 23-hour day in March and the 25-hour day in November.

Or are there other workarounds people have? I would think a lot of people have run across this issue and had to deal with it. It's surprising it isn't easier to handle.

This is a follow up to the discussion listed here:
https://forum.inductiveautomation.com/t/scheduled-reports-offset-due-to-daylight-savings/57227

My gut says a day is hardcoded as 24 hours, like month and year are; see this post,

Yes, I see what you're saying, except with daylight saving time, twice a year a day isn't 24 hours: one transition day is 23 hours and the other is 25 hours. The hope is that a "day" interval would be smart enough to account for those days automatically.

Have you tried making a report parameter that computes hoursBetween() your start and end timestamp and using that in your Interval setting?

@pturmel, I appreciate the suggestion, but I'm struggling to see how that might apply to my particular implementation.

I am trying to run a monthly report that determines aggregated data for each day. My start and end timestamps are determined by the duration of the month, and I am aggregating every 24 hours for the length of the month: for example, the min, max, and average pH for each day (24-hour period) of the month. It works great... except for the two months with a daylight saving transition (because the 24-hour aggregation gets offset as soon as it hits the transition day).

Perhaps I have to aggregate my data for each hour (instead of every 24 hours) and then have a secondary script loop through the results and aggregate them a second time by date, to get day-by-day aggregation for the entire month? Once again, it just seems silly that it would require that sort of complexity.
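For what it's worth, that second-pass idea is simpler than it sounds. Here is a sketch in plain Python (the hourly (timestamp, value) pairs are made-up stand-ins for the hourly query results; in a report script the rows would come from the tag history query instead):

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical hourly results: (timestamp, value) pairs.
hourly = [
    (datetime(2023, 3, 11, 23), 7.1),
    (datetime(2023, 3, 12, 0), 7.3),
    (datetime(2023, 3, 12, 1), 7.0),
    (datetime(2023, 3, 12, 3), 7.4),  # 2 AM is skipped on the spring-forward day
]

# Second pass: group by calendar date, so DST days get however many
# hourly samples they actually have (23 or 25 instead of 24).
by_day = defaultdict(list)
for ts, value in hourly:
    by_day[ts.date()].append(value)

# Aggregate each day's group.
daily = {
    day: {'min': min(vals), 'max': max(vals), 'avg': sum(vals) / len(vals)}
    for day, vals in sorted(by_day.items())
}
```

Because the grouping key is the calendar date rather than a fixed 24-hour stride, the transition days come out right without any special casing.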

As a personal opinion, I wish we no longer had daylight saving time; it makes my life harder than it should be. :laughing:

In your binding window, your aggregation is set to "Interval: 24 hours". Try using a parameter in place of the 24, and calculate that parameter. Make sure your start and end dates are true timestamps, so the calculation will yield 23 or 25 on the DST transitions.
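To illustrate why that calculation yields 23 or 25: Ignition's system.date.hoursBetween() only runs in a gateway context, so this sketch uses plain Python's zoneinfo to make the same point (the America/Chicago zone and the 2023 transition dates are just example assumptions):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

def hours_between(start, end):
    """Whole hours of real elapsed time between two aware datetimes.

    Uses POSIX timestamps so DST offset changes are respected (naive
    subtraction of two datetimes sharing a tzinfo would ignore them).
    """
    return int((end.timestamp() - start.timestamp()) // 3600)

tz = ZoneInfo("America/Chicago")

# Spring forward (2023-03-12): the local day is only 23 hours long.
print(hours_between(datetime(2023, 3, 12, tzinfo=tz),
                    datetime(2023, 3, 13, tzinfo=tz)))  # 23

# Fall back (2023-11-05): the local day is 25 hours long.
print(hours_between(datetime(2023, 11, 5, tzinfo=tz),
                    datetime(2023, 11, 6, tzinfo=tz)))  # 25
```

Feeding a value computed this way into the Interval setting keeps each aggregation bucket aligned to local midnight across the transitions.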

Concur.


@pturmel, you got me thinking about other ways to go about doing this.

I ended up using a report script that loops through the month day by day and aggregates the data using the "system.tag.queryTagHistory" function. I was able to put the results into a dataset that I used in the report. I wish the built-in Tag History Query could aggregate day by day (to account for daylight saving time), but I think this workaround resolves the issue.

Here is some sample code from my report script:

#Determine number of days to loop through the month.
startDate = data['StartDate']
endDate = data['EndDate']
timeArrayLength = system.date.daysBetween(startDate, endDate)
	
#Setup to query the tag history to get results
queryPaths = ['[HIST]memoryuse'] #Modify paths as necessary
queryAggregation = 'Average' #Modify aggregation as necessary
queryColumnNames = ['memoryuse'] #Modify names for columns as necessary
queryReturnSize = 1 #Return 1 value per day
	
# Initialize an empty dataset with just the header row
header = ['t_stamp']
for name in queryColumnNames:
	header.append(name)
returnData = []
returnData_dataSet = system.dataset.toDataSet(header, returnData)
	
# Loop through each day one at a time (we do this to account for daylight savings days)
for i in range(timeArrayLength):
	day = system.date.addDays(startDate, i)
	dayNext = system.date.addDays(startDate, i + 1)

	queryResult = system.dataset.toPyDataSet(system.tag.queryTagHistory(paths=queryPaths, startDate=day, endDate=dayNext, returnSize=queryReturnSize, aggregationMode=queryAggregation, columnNames=queryColumnNames))
	for row in queryResult:
		returnData_dataSet = system.dataset.addRow(returnData_dataSet, row)
		 
# create a new data source with the filtered results
data['ScriptedAggregate'] = returnData_dataSet

Here is updated/fixed code that builds the dataset with explicit column types:

#Determine number of days to loop through the month. We do this day by day to catch daylight savings days.
startDate = data['StartDate']
endDate = data['EndDate']
timeArrayLength = system.date.daysBetween(startDate, endDate)
	
#Query the tag history to get results
queryPaths = ['[HIST]memoryuse'] #Modify as necessary
queryAggregation = 'Average' #Modify as necessary
queryColumnNames = ['memoryuse'] #Modify as necessary
queryReturnSize = 1 #Return 1 value per day
	
# Initialize the dataset
from java.lang import Integer, Object, String, Double
from java.util import Date
from com.inductiveautomation.ignition.common.util import DatasetBuilder
	
header = ['t_stamp']
columnTypes = [Date]
for i in range(len(queryColumnNames)):
	header.append(queryColumnNames[i])
	columnTypes.append(Double) #Here we assume double works for our data. May need to modify.

#Initialize our dataset with column types defined
results = DatasetBuilder.newBuilder().colNames(header).colTypes(columnTypes)
	
# Loop through each day one at a time (we do this to account for daylight savings days)
for i in range(timeArrayLength):
	day = system.date.addDays(startDate, i)
	dayNext = system.date.addDays(startDate, i + 1)

	queryResult = system.dataset.toPyDataSet(system.tag.queryTagHistory(paths=queryPaths, startDate=day, endDate=dayNext, returnSize=queryReturnSize, aggregationMode=queryAggregation, columnNames=queryColumnNames))
	for row in queryResult:
		row_out = [row[j] for j in range(len(header))] #Convert the row to a plain list
		results.addRow(row_out) #Add the row to our dataset builder
	
#Build our dataset
returnDataSet = results.build()	 

data['ScriptedAggregate'] = returnDataSet