I am currently trying to use system.tag.queryTagHistory to perform a recursive calculation on historical data from an accumulator scale. The scale logs a data point every 10 seconds, but it resets to 0 at random intervals. As such, to find the total tracked weight over a given period, I need to sum up the weights recorded just before each reset. To do this, I need to find not only the minimum value of the data (which, of course, will be 0), but also the times at which the data hits 0. This is my first time using system.tag.queryTagHistory, and I believe I may be misunderstanding what the output values actually represent.
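To make the goal concrete, here's a plain-Python sketch (run outside of Ignition, with made-up sample data) of the reset-summing calculation I'm trying to implement; the `(timestamp, value)` pairs stand in for the rows I'd get back from the history query:

```python
def totalized_weight(samples):
    """samples: list of (timestamp, value) pairs in time order.

    Whenever the accumulator drops (a reset), bank the value it held
    just before the reset, then add whatever it reads at the end.
    """
    total = 0.0
    prev = None
    for ts, value in samples:
        if prev is not None and value < prev:  # accumulator reset detected
            total += prev                      # bank the pre-reset weight
        prev = value
    if prev is not None:
        total += prev                          # weight accumulated since the last reset
    return total

# hypothetical readings: counts up, resets to 0, counts up again
readings = [(0, 2.0), (10, 5.5), (20, 0.0), (30, 3.0), (40, 4.5)]
print(totalized_weight(readings))  # 10.0 (5.5 before the reset + 4.5 at the end)
```

This is why I need the actual reset timestamps: without knowing when the value dropped to 0, I can't pick out the pre-reset readings to sum.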
I’ve attempted to run the following code in the scripting console:
startTime = '2018-07-28 14:00:00'
endTime = '2018-07-31 12:00:00'
results = system.tag.queryTagHistory(paths = ['[default]B1Line/AutoStacker/Weight/TotalizedWeight'], startDate = startTime, endDate = endTime, aggregationMode = 'Minimum', returnSize = 3)
for i in range(results.getRowCount()):
    print str(results.getValueAt(i, 0)) + ',' + str(results.getValueAt(i, 1))
And the output is the following:
Sat Jul 28 14:00:00 CDT 2018,0.0
Sun Jul 29 13:20:00 CDT 2018,16.8457126617
Mon Jul 30 12:40:00 CDT 2018,0.0
Looking at the actual data, the minimum values do appear to be correct, but the timestamps seem to be off. I expect to see a 0 (or close-to-0) value at around 11pm on July 28, 29, and 30. Is the timestamp returned with each value something other than the timestamp of the actual data point? If so, is there a way to retrieve the time at which that data point was logged via queryTagHistory?