[Optimized] I need help optimizing system.tag.queryTagHistory for diagnostic tool

Hello Everyone,

I created a diagnostic tool that I’d love to share on the Ignition Exchange, but I could use help optimizing it. I originally posted about it a few months ago; here is the link, with a detailed description of the concept.

[[Solved] Need help with system.tag.browse and diagnostic tool - Bit State Change Locator]

This tool is a “bit state change locator”. I use it to diagnose causes or effects within the system by working from a selected state change’s timestamp, chosen from an initial query result. A tag is selected and queried, and a selectable list of “timestamp” and “value changed to” results is then used to recursively browse folders and query all tags at the selected timestamp (± a buffer of seconds).

The tool works for me but I know it can be improved so any help or suggestions are appreciated. Here are my questions:

  1. Using system.tag.queryTagHistory(), I iterate through a list of browse results, and whenever a change in state occurred, the tag is read with system.tag.readBlocking() to gather the metadata displayed in the final results table. I query the tags one at a time because I don’t understand how the code would work if I loaded the entire list of browse results into one query; I can’t seem to parse the results unless I do it one tag at a time. As a result, the query can be slow.

  2. But is that a bad thing? Can a slow query ever be easier on the system? Watching the database from the gateway while this runs, I see only one query happening at a time. Does that ensure this diagnostic tool won’t overload the system with too many queries?

  3. And is there a way to show on screen that the script is still querying? It is hard to tell when the query has finished.
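(On question 3, one option, not from the original post: run the query on a background thread and flip a “querying” flag that the UI can display. In Ignition the thread would come from system.util.invokeAsynchronous and component updates would go through system.util.invokeLater; this plain-Python sketch just shows the busy-flag pattern itself.)

```python
import threading

def run_with_busy_flag(work, status):
	"""Run `work` on a background thread, tracking state in `status`.

	In Ignition you'd use system.util.invokeAsynchronous for the thread
	and system.util.invokeLater to update a 'Querying...' label from the
	GUI thread; here a plain dict stands in for the UI state.
	"""
	def worker():
		try:
			status['result'] = work()
		finally:
			# in Ignition: system.util.invokeLater(lambda: setDone())
			status['busy'] = False
	status['busy'] = True
	t = threading.Thread(target=worker)
	t.start()
	return t

status = {}
t = run_with_busy_flag(lambda: sum(range(1000)), status)
t.join()
```

While `status['busy']` is True, a label bound to it could read “Querying…”; when the worker finishes, it flips back and the result is available.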

I have some more work to do on this, and I welcome ideas and suggestions for the tool.
Here is the code for the second button and thanks for reading.

CODE BEGINS:

#grab the "Power Table" dataset
tableData = event.source.parent.parent.getComponent('Power Table').data
selectedRow = event.source.parent.parent.getComponent('Power Table').selectedRow


#grab the time stamp from first column of the selected row from the dataset.
negBuffer = event.source.parent.getComponent('Spinner').intValue
posBuffer = event.source.parent.getComponent('Spinner 1').intValue
date = system.date.parse(tableData.getValueAt(selectedRow, 0))

#OPTION for manual timestamp input
#date = system.date.parse("2019-10-23 04:24:10")

#start and end date time buffer to create a time window to search.
sd = system.date.addSeconds(date, negBuffer)
ed = system.date.addSeconds(date, posBuffer)


#selects the folder to search through.
folder = event.source.parent.getComponent('Label').text


#browse through the chosen folder and query every tag at the specified timestamp (+/- {x} seconds); create oneRow, a row with 5 columns, for every flagged tag and append each row to the list "rows".
tagPaths = []
def browse(path, filter):
	results = system.tag.browse(path, filter)
	for result in results.getResults():
		if result['hasChildren']:
			browse(result['fullPath'], filter)
		else:
			tagPaths.append(result['fullPath'])
browse(folder, {'dataType':'boolean', 'valueSource':'opc'})


tags = tagPaths
rows = []	

for tag in tags:
	check = system.tag.queryTagHistory(paths=[tag], calculations=['Count'], startDate=sd, endDate=ed)
	#if a change in state is detected, append a new row with the tag path, the tag name, timestamp, value changed to, and documentation.
	if check.rowCount >= 1:
		for x in range(check.rowCount):
			doc = system.tag.readBlocking([(tag.toString() + ".Documentation")])
			name = system.tag.readBlocking([(tag.toString() + ".Name")])
			oneRow = [tag.toString(), name[0].value, check.getValueAt(x,0), check.getValueAt(x,1), doc[0].value]
			rows.append(oneRow)


			
#Create the headers and dataset. Populate 'Power Table 1' with the dataset.
headers = ['Path', 'Name', 'Time Stamp', 'Value Changed To', 'Documentation']
data = system.dataset.toDataSet(headers, rows)
table = event.source.parent.parent.getComponent('Power Table 1')
table.data = data

Compared with what you have, this shaves 10-20s or so off the time when I tested with 2 tags and 1 hour of data. You could shave off a bit more if you can avoid reading the duplicate doco and name tags…

tagPaths = []
docTags = []
nameTags = []
checkTSs = []
checkVals = []

for tag in tags:
	check = system.tag.queryTagHistory(paths=[tag], calculations=['Count'], startDate=sd, endDate=ed)
	check = system.dataset.toPyDataSet(check)

	#if a change in state is detected, append new rows with the tag path, the tag name, timestamp, value changed to, and documentation, for each historic value.
	if len(check) >= 1:
		tagPaths.extend([tag]*len(check))
		docTags.extend([tag.toString() + ".Documentation"]*len(check))
		nameTags.extend([tag.toString() + ".Name"]*len(check))
		TS,Val = zip(*check)
		checkTSs.extend(TS)
		checkVals.extend(Val)

docVals = system.tag.readBlocking(docTags)
nameVals = system.tag.readBlocking(nameTags)

rows = [[path, name.value, ts, val, doc.value] for path, name, ts, val, doc in zip(tagPaths, nameVals, checkTSs, checkVals, docVals)]
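(A side note on the `TS,Val = zip(*check)` line above: iterating a PyDataSet yields rows, so `zip(*rows)` transposes them into per-column tuples. A minimal plain-Python illustration, with a list of tuples standing in for the two-column dataset:)

```python
# Mock of a two-column PyDataSet: each tuple is one (timestamp, value) row.
rows = [("2019-10-23 04:24:10", 1),
        ("2019-10-23 04:24:12", 0),
        ("2019-10-23 04:24:15", 1)]

# zip(*rows) transposes: one tuple of all timestamps, one of all values.
ts, vals = zip(*rows)
```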

Hi Nick,
Thanks for the assistance; I will play around with this when I am back at work. Also, FYI, my typical query is kind of the opposite of the test you ran: usually it is many tags queried within a short time window of a few seconds, as opposed to a few tags over a larger time span. The real time saver would be passing the entire list of browsed tag paths into system.tag.queryTagHistory() (instead of iterating through the browse results and querying the tags one at a time) and then parsing the results of that multi-tag query to build the results table. I’m just not sure how to iterate through that kind of query result. Thanks again!
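(For what it’s worth, here is a rough plain-Python sketch of how the wide multi-tag result could be unpivoted into table rows. The mock `headers`/`rows` stand in for the dataset; in Ignition the real rows would come via system.dataset.toPyDataSet. It does not yet filter carried-forward values, which the thread covers further down.)

```python
# Mock of a multi-tag queryTagHistory result: first column is t_stamp,
# one extra column per tag; None means no value recorded for that window.
headers = ["t_stamp", "tag1", "tag2"]
rows = [("t1", None, 1),
        ("t2", 0,    1),
        ("t3", 0,    0)]

# Unpivot: emit one output row per (tag, timestamp) cell that holds a value.
long_rows = []
for r in rows:
	for col in range(1, len(headers)):
		if r[col] is not None:
			long_rows.append([headers[col], r[0], r[col]])
```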

Kevin

Hello Everyone,

Previously this code worked by iterating through a list of tags and querying each tag one at a time. That worked well and made it fairly easy to build the final results table, but the query could take anywhere from 30 seconds to close to a minute. The goal was to optimize this by loading the entire list of tags into a single query instead. The catch is the resulting dataset, which has some quirks that need to be worked around before the results can be used in the final results table.

The query is for boolean OPC tags. My tags’ scan rate is 1/sec and the historical scan class is set to evaluate on change. Once again, this is a diagnostic tool; it is not for an operator’s console and should be run from the gateway only.

The resulting dataset from a multi-tag query using system.tag.queryTagHistory() with the ‘Count’ calculation has the following characteristics:

  • The first column holds the timestamps, and there is an additional column for each tag queried. So 100 tags result in a dataset with 101 columns.

  • There is a row for every timestamp (window) in which any tag changed state. If two or more tags changed state in the same time window, they share the same row.

  • If a tag has not changed state, it has a value of ‘None’ for that timestamp, unless the tag previously changed state. After a tag first shows a change in state, its value continues to appear on every following row.


                tag1          tag2           tag3         tag4       tag5
t_stamp 1       None          None           None         None         1
t_stamp 2       None          None           None          0           0
t_stamp 3       None           1             None          0           0
t_stamp 4       None           1             None          0           1
t_stamp 5       None           0             None          0           1
t_stamp 6       None           0             None          1           1        
         
  • Instead of reverting to ‘None’, the tag’s state is shown even though it didn’t change. I am not sure if this is intentional or a bug, but it seems to me that it should show ‘None’.

The workaround took me a while to figure out, but I am happy with the results. It requires a couple of conditional checks: one for the first row, and one for the remaining rows, which compares against the previous row’s value to determine whether a change in state actually occurred.
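In plain Python terms, using the example table above as mock data, the two checks boil down to this (a sketch of the filtering logic only, not the Ignition code itself):

```python
# Mock of the example dataset above: (t_stamp, tag1..tag5), where a tag's
# value is carried forward on every row after its first recorded change.
tags = ["tag1", "tag2", "tag3", "tag4", "tag5"]
rows = [("t1", None, None, None, None, 1),
        ("t2", None, None, None, 0,    0),
        ("t3", None, 1,    None, 0,    0),
        ("t4", None, 1,    None, 0,    1),
        ("t5", None, 0,    None, 0,    1),
        ("t6", None, 0,    None, 1,    1)]

changes = []
for y in range(1, len(tags) + 1):
	for x in range(len(rows)):
		val = rows[x][y]
		if val is None:
			continue
		# first row, or value differs from the previous row: a real change
		if x == 0 or val != rows[x - 1][y]:
			changes.append([tags[y - 1], rows[x][0], val])
```

Run against the table above, this keeps only the seven genuine state changes (e.g. tag5 at t1, t2, and t4) and drops every carried-forward repeat.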

Also, I took the query result’s tag paths and turned them into a sorted set, so that I have a list of tag paths without duplicates for reading the .Name and .Documentation tag values (as suggested by Nick). This was tricky, as I found out the hard way that sets are unordered, which was messing up my final results table.
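The ordered de-duplication trick in isolation: `set()` discards order, so keying the sort on each path’s first index in the original list restores first-occurrence order.

```python
paths = ["A", "B", "A", "C", "B", "A"]

# set() removes duplicates but loses order; sorting by each element's
# first-occurrence index in the original list restores the order.
unique_ordered = sorted(set(paths), key=paths.index)
```

Note that `paths.index` is O(n) per lookup, which is fine at ~1100 tags but worth knowing about for much larger lists.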

Here is the code. I have tested it with a list of around 1100 tags and a 10-second time range. Performance is much better than before, taking only a few seconds. I am not sure at what point too many tags could be loaded into the query, so caution is advised. Please feel free to offer more suggestions; I plan on making more improvements, including a more flexible time range selector.

tableData = event.source.parent.parent.getComponent('Power Table').data
selectedRow = event.source.parent.parent.getComponent('Power Table').selectedRow


#grab the time stamp from 1st column of the selected row from the dataset.
negBuffer = event.source.parent.getComponent('Spinner').intValue
posBuffer = event.source.parent.getComponent('Spinner 1').intValue
date = system.date.parse(tableData.getValueAt(selectedRow, 0))
#date = system.date.parse("2019-12-31 22:16:11")

#start and end date time buffer to create a time window to search.
sd = system.date.addSeconds(date, negBuffer)
ed = system.date.addSeconds(date, posBuffer)


#selects the folder to search through.
folder = event.source.parent.getComponent('Label').text

#create tagPaths list, browse through the selected folder searching for boolean OPC tags
tagPaths = []
def browse(path, filter):
	results = system.tag.browse(path, filter)
	for result in results.getResults():
		if result['hasChildren']:
			browse(result['fullPath'], filter)
		else:
			tagPaths.append(result['fullPath'])
browse(folder, {'dataType':'boolean', 'valueSource':'opc'})

tags = tagPaths

#query browsed tag results for state change 'Count'
check = system.tag.queryTagHistory(paths=tags, calculations=['Count'], startDate=sd, endDate=ed)

filteredResults = []
#iterate through each column and row, filtering out 'None' values; append to filteredResults a [tag path, time stamp, value changed to] list for each real change
for y in range(1, check.columnCount):
	for x in range(check.rowCount):
	
		#first row: append any non-'None' value to filteredResults
		if x == 0 and check.getValueAt(x,y) is not None:
			filteredResults.append([tags[y-1], check.getValueAt(x,0), check.getValueAt(x,y)])
			
		#remaining rows: skip 'None' values and values where the tag did not change state, by comparing against the previous row's value
		elif check.getValueAt(x,y) is not None and check.getValueAt(x,y) != check.getValueAt(x-1,y):
			filteredResults.append([tags[y-1], check.getValueAt(x,0), check.getValueAt(x,y)])

#create [resultPaths] list and append with the 1st items of each list in [[filteredResults]] list
resultPaths=[]
for results in filteredResults:
	resultPaths.append(results[0])
	
#create a tag path set to be used to read .Name and .Documentation values
#create a set from [resultPaths] and make a list again in order to have only one copy of each tag path. the set is sorted by [resultPaths] index.
tagSet = list(sorted(set(resultPaths), key=resultPaths.index))

#create [namePaths] and [docPaths] lists and add .Name and .Documentation to the end of each tag path from [tagSet]
namePaths = []
docPaths = []

for tag in tagSet:
	namePaths.append(tag.toString() + ".Name")
	docPaths.append(tag.toString() + ".Documentation")

#read name and documentation tags
tagSetNames = system.tag.readBlocking(namePaths)
tagSetDocs = system.tag.readBlocking(docPaths) 

#create [tagNameList] and [tagDocList] lists
tagNameList = []
tagDocList = []

#append the lists so they have the same length as the number of tag paths in [resultPaths].
#The idea is to read each tag's name and documentation only once and create as many copies as needed for the final results table.
for i in range(len(tagSet)):
	for z in range(resultPaths.count(tagSet[i])):
		tagNameList.append(tagSetNames[i])
		tagDocList.append(tagSetDocs[i])

#insert the tag's name and documentation value into the appropriate spot in each list in the [[filteredResults]] list.
for i in range(len(resultPaths)):
	filteredResults[i].insert(1, tagNameList[i].value)
	filteredResults[i].insert(4, tagDocList[i].value)
	

#Create the headers and dataset. Populate 'Power Table 1' with the dataset.
headers = ['Path', 'Name', 'Time Stamp', 'Value Changed To', 'Documentation']
data = system.dataset.toDataSet(headers, filteredResults)
table = event.source.parent.parent.getComponent('Power Table 1')
table.data = data