Best Practices for Querying Tag History Data [SOLVED]

I have a query that I use to populate a table. It does several things, but its sources of data are “sqlth_te” (to get tag paths) and “sqlt_data_1_2021_xx”. If it's necessary to know, it's MariaDB.

Everything works as expected until another partition table is auto-generated, like so:

sqlt_data_1_2021_01 → sqlt_data_1_2021_02

When this happens, the source table in the query needs to be changed. Also, the “id” values in “sqlth_te” change, so where there were 600 items of interest before, there are now 1200 (double).
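For context (my own illustration, not from the original post): the partition tables quoted above follow a sqlt_data_&lt;driver&gt;_&lt;year&gt;_&lt;month&gt; naming pattern, so the "current" table name is just a function of the date. A minimal sketch, assuming driver id 1:

```python
from datetime import date

def current_partition_table(driver_id=1, today=None):
    """Build the name of the current monthly history partition.

    The sqlt_data_<driver>_<year>_<month> pattern is inferred from the
    table names quoted above; driver_id=1 is an assumption.
    """
    today = today or date.today()
    return "sqlt_data_%d_%04d_%02d" % (driver_id, today.year, today.month)

print(current_partition_table(1, date(2021, 2, 15)))  # sqlt_data_1_2021_02
```

This is exactly why hard-coding a partition table name in a query breaks every month.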

I want to ask the forum: what are best practices for writing queries so that when new tables are auto-generated, the change is handled automatically and it's not necessary to go back and modify the query?

Thanks,
Nick

Alphabot Nested Queries.txt (1.4 KB)

Is there a reason you’re not using the built-in scripting function queryTagHistory?

There very well may be, I’m just curious why.

Best practice would be to use the built-in tag history bindings/functions. They already handle the table partitioning for you, including cases where the requested historical data spans multiple partitions.

As posted above, that applies if you are consuming this data within Ignition.

However, if you have a third-party system that needs access to this data, then you will want to move to a historical transaction group.

See here: https://docs.inductiveautomation.com/display/DOC81/Types+of+Groups

Gents, I am not using queryTagHistory yet, but I think that is the tip I was looking for, so thanks for your quick responses.

I will work with it and then if it does give me what I need, I’ll post back here with more details.

Thanks,
Nick

Here is how I got the equivalent functionality using the system.tag.queryTagHistory function. I think there is room to optimize, but functionality-wise this does what I want (the equivalent of the query attached to my original post):

In the table data binding, it is called using an expression like this: runScript("alphabot.get_status", 300000)

@justin.brzozoski thanks for your advice.

def get_status():
	# Base path to tags, tag names, and tags to exclude
	base_path = "[default]Alphabot/05 Bot Status"
	channels = ["/ConnectionSTS", "/InductionSTS", "/SafetySTS"]
	exclude = ["num_bots_connected", "num_bots_inducted", "num_bots_safety_ok"]
	
	# Get a list of all full tag paths
	paths = []	
	results = system.tag.browse(path = base_path)
	
	for result in results.getResults():
		split = str(result["fullPath"]).split("/")
		if split[-1] not in exclude:
			for channel in channels:
				paths.append(str(result["fullPath"]) + channel)
	
	# Query tag history
	endTime = system.date.now()
	startTime = system.date.addHours(endTime, -24)
	dataset = system.tag.queryTagHistory(paths=paths,
									   startDate=startTime,
									   endDate=endTime)
	
	# Create a list of  bot numbers
	col_names = list(dataset.getColumnNames())
	del col_names[0]
	index = [int(str(i.split("/")[2]).replace("Bot", "")) for i in col_names]
	
	# Convert the dataset to a PyDataSet and compute its size
	py = system.dataset.toPyDataSet(dataset)
	num_rows = len(py)
	num_cols = len(py[0])
	
	# Compute which bots are connected, inducted, and safe
	connected = [index[i] for i in range(1, num_cols, 3) if py[num_rows-1][i]==1]
	inducted = [index[i] for i in range(2, num_cols, 3) if py[num_rows-1][i]==1]
	safe = [index[i]-1 for i in range(3, num_cols, 3) if py[num_rows-1][i]==1]
	
	# Create the aggregated table
	headers = ["Alphabot", "Connected", "Inducted", "Safety_OK"]
	data = []
	for i in inducted:
		row = [i]
		if i in connected:
			row.append(1)
		else:
			row.append(0)
			
		row.append(1)
		
		if i in safe:
			row.append(1)
		else:
			row.append(0)	
		data.append(row)
		
	return system.dataset.toDataSet(headers, data)

I wouldn’t call that a solution. You are running a scripted query inside of runScript(). That is extraordinarily bad practice, especially in Vision. It would also be true for any tag reads, alarms status calls, or anything else that would be a round-trip to the gateway from a Vision client. It is less bad in Perspective or in expression tags, but makes your gateway hard to debug. Don’t do it.


@pturmel thanks for the heads up. Then I would like to hear your opinion on the best way to accomplish this:

The data is 3 channels per bot. I simply want to return an aggregated table, like the one below, that shows only the bots that are inducted, along with the status of connection and safety.

This is in Perspective.

Bot Num   Connected   Inducted   Safety OK
1         1           1          1
2         0           1          1
3         1           1          1
...       ...         ...        ...
N         1           1          1

Thanks,
Nick

Why is this a history query when you are displaying what appears to be live data? Since there is tag history, presumably there are tags. I would just get the data from the live tags and construct a dataset with system.dataset.toDataSet(), running in a timer event and writing to a memory tag, so any client can display it.

I agree with @pturmel, it doesn’t seem from your code that you are really interested in historical data.

If you really do want historical data, then why are you removing the date/time information that would make the data meaningful?

In addition to that, I am curious: what makes the last row special?

connected = [index[i] for i in range(1, num_cols, 3) if py[num_rows-1][i]==1]
inducted = [index[i] for i in range(2, num_cols, 3) if py[num_rows-1][i]==1]
safe = [index[i]-1 for i in range(3, num_cols, 3) if py[num_rows-1][i]==1]

Assuming that each row is an individual bot: if that last bot is not inducted, then your code will fall apart (IMHO), because the inducted list, which you use later to build your data, will be empty. At best your function will return an empty dataset, but in all likelihood it will error, stating that the number of columns in data doesn’t match the number of column headers.

All of that being said, if you really do, for whatever reason, want historical data with the date column removed, then your code can be simplified:

# Query tag history (endTime wasn't defined in the snippet; assuming "now")
endTime = system.date.now()
dsHistory = system.tag.queryTagHistory(paths=paths, endDate=endTime, intervalHours=-24)
	
# Create the aggregated table
headers = ["Alphabot", "Connected", "Inducted", "Safety_OK"]
data = [[dsHistory.getValueAt(row,col) for col in range(1,dsHistory.getColumnCount())] for row in range(dsHistory.getRowCount())]

Or, if you really insist on using a PyDataSet, then:

# Query tag history (endTime wasn't defined in the snippet; assuming "now")
endTime = system.date.now()
dsHistory = system.tag.queryTagHistory(paths=paths, endDate=endTime, intervalHours=-24)

dsHistory = system.dataset.toPyDataSet(dsHistory)

# Create the aggregated table
headers = ["Alphabot", "Connected", "Inducted", "Safety_OK"]
data = [[value for i, value in enumerate(row) if i > 0] for row in dsHistory]
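For anyone skimming: the nested comprehension above simply drops the first (timestamp) column from each row. A plain-Python equivalent using hypothetical sample rows (no Ignition required), where each row is [t_stamp, conn, induct, safe]:

```python
# Hypothetical history rows: [t_stamp, conn, induct, safe]
rows = [
    [1609459200000, 1, 1, 1],
    [1609459260000, 0, 1, 1],
]

# Keep every value except index 0 (the t_stamp column)
data = [[value for i, value in enumerate(row) if i > 0] for row in rows]
print(data)  # [[1, 1, 1], [0, 1, 1]]
```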

@pturmel I think this is what you suggested; if so, it really helped me simplify. Here is the code, which just reads from the tags. I read the tags 3 at a time to simplify the indexing. A good side effect of doing it this way is that the table automatically shows up with little check marks, which is nice.

Appreciate the help and if you happen to have any further suggestions for improvement, I’ll gladly accept.

One little trick: when writing to the tag, the dataset has to be wrapped in square brackets, since system.tag.writeBlocking expects a list of values to match the list of paths.

Thanks,
Nick

# Base path and channel names
base = "[default]Alphabot/05 Bot Status/Bot"
tags = ["/ConnectionSTS", "/InductionSTS", "/SafetySTS"]

data = []

# Read tags, if inducted is True, add a line to data
for i in range(1, 201):
	paths = []
	for tag in tags:
		paths.append(base+str(i)+tag)
	vals = system.tag.readBlocking(paths)
	if vals[1].value:
		data.append([i, vals[0].value, vals[1].value, vals[2].value])

# Build the dataset
headers = ["Alphabot", "Connected", "Inducted", "Safety_OK"]
dataset = system.dataset.toDataSet(headers, data)

# Write the dataset to memory tag
write_path = ["[default]Alphabot/05 Bot Status/bot_status_summary_table"]
system.tag.writeBlocking(write_path, [dataset])

The code will run much faster if you don’t split these up. Read all of them at once. Consider using a project script module top level variable to hold the constructed list of tag paths, so that part won’t have to execute every time. (I strongly recommend placing all gateway event code in a project script module, leaving just a one-line function call in the event itself.)

Something like this:

# In a script, "bots", perhaps

# Base path and channel names
basepath = "[default]Alphabot/05 Bot Status/Bot"
basetags = ["/ConnectionSTS", "/InductionSTS", "/SafetySTS"]

tagpaths = [basepath+str(i)+t for i in range(1, 201) for t in basetags]
headers = ["Alphabot", "Connected", "Inducted", "Safety_OK"]
write_path = "[default]Alphabot/05 Bot Status/bot_status_summary_table"

def updateBots():
	values = [x.value for x in system.tag.readBlocking(tagpaths)]
	enumerated_triplets = zip(range(1, 201), *([iter(values)]*3))
	data = []
	for i, conx, induct, safe in enumerated_triplets:
		if induct:
			data.append([i, conx, induct, safe])
	system.tag.writeBlocking([write_path], [system.dataset.toDataSet(headers, data)])

Then the event just has:

bots.updateBots()

@pturmel thanks again for all your advice today.

I now have everything written in the project library and running from a gateway timer script. It runs in 30ms. Being able to populate the table with a single tag that has the data aggregated already is super convenient.

Cheers,

Nick


@pturmel I have used this little bit of code for many things over the last year. Thanks again for this tip.

Nick

You’re welcome, but I can’t take credit. That idiom is somewhere in Python’s iterator docs.
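For the record, the grouping idiom in updateBots is plain Python and can be tried outside Ignition. zip(*[iter(values)] * 3) shares one iterator across three zip slots, so each step consumes three consecutive values. A small demo with made-up values (not real tag reads), three bots, ordered [conn, induct, safe] per bot:

```python
# Flat list of values as readBlocking would return them,
# ordered [conn, induct, safe] per bot (sample data, not real tag reads)
values = [1, 1, 1,   0, 1, 1,   1, 0, 1]

# One shared iterator, referenced three times: each zip step pulls
# three consecutive values, yielding (bot, conn, induct, safe) tuples
triplets = list(zip(range(1, 4), *([iter(values)] * 3)))
print(triplets)  # [(1, 1, 1, 1), (2, 0, 1, 1), (3, 1, 0, 1)]
```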