Gateway Tag Change Script Performance

Hello all,

Recently I’ve encountered a performance issue with a script I’m running that checks for tag changes on 4800 different tags. On occasion, this script has to run 4800 times twice in quick succession: essentially triggering a tag change script 9600 times, with ~3.5 seconds between the first set of 4800 firing and the second set of 4800…

I had to increase the heap size min and max to 2048/4096 in the config file in order for the script to not completely overload the Gateway. For some clarity, here’s the script:

index = int(str(event.tagPath.itemName).split("_")[-1])
column = str(event.tagPath).split("/")[1]
try:
	currentValues = system.tag.read("[default]ExtInterfaceTags/loadedData").value
	newValues = system.dataset.setValue(currentValues, index - 1, column, newValue.value)
	system.tag.writeSynchronous("[default]ExtInterfaceTags/loadedData", newValues)
except IndexError:
	system.util.getLogger("Value Change Debug").info("%s changed, trying to write to dataset index %s" % (event.tagPath, index))

This script’s purpose is to update a dataset that is bound to a Table being used to populate results from a sequenced process. The tag change script is no issue during the normal process of updating values from 0 to the result value, since only a few tags change at a time every 5+ seconds. But in order to clear the table, we have to populate all 600 rows with numbers, then set them to zero.

So, I guess my questions are:

1.) How can I make the script process faster? I know this may not be possible as there aren’t really any lines of code to remove to speed it up.
2.) How do I stop the connection from being “overloaded”? I get around 3 error messages saying that a client tag set up as a heartbeat has timed out. I have noticed the heap never goes much above 2.2 GB, so I wouldn’t imagine this is the problem.
3.) What alternatives do I have?
-Maybe an alternative method to clearing the table?
-I don’t have the reporting module or anything to allow SQL.
-Is there a way to control visibility of rows based on a row index? That would essentially let me skip populating the fields with two sets of values and hard-code the table, if there is a way to do that.

Whatever suggestions any of you have, would be greatly appreciated!

Datasets can be very inefficient, as they’re not indexed and not mutable either. As a result, changes aren’t atomic, and you need to use writeSynchronous to ensure that every update comes through.
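To illustrate why this gets expensive: a rough pure-Python sketch (not the Ignition API; `set_value` here just mimics the copy-on-write behavior of `system.dataset.setValue`). Every single-cell update copies the entire dataset, so 4800 tag events each pay the full rows × columns cost:

```python
def set_value(table, row, col, value):
    # Copy every row, then replace one cell: O(rows * cols) per update,
    # like an immutable dataset that returns a new copy on each setValue.
    new_table = [list(r) for r in table]
    new_table[row][col] = value
    return new_table

rows, cols = 600, 8
table = [[0] * cols for _ in range(rows)]

cells_copied = 0
for i in range(4800):
    table = set_value(table, i % rows, i % cols, i)
    cells_copied += rows * cols

# 4800 single-cell updates copy a 600x8 table 4800 times:
print(cells_copied)  # 23040000 cell copies for 4800 logical writes
```

That copying, plus the serialized writeSynchronous calls, is where the time and heap go.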

I think working via an SQL table would be the solution. An SQL update is atomic (for most database systems at least), so the scripts can even be executed in parallel. And with the correct index on your table, the updates will be very fast.

If you want to read the SQL table back into a tag, you can always create an SQL tag.

AFAIK, you don’t need extra modules to execute raw SQL queries. The only reason you need extra modules is when you use extra features (like reporting or historian) that auto-create the queries and tables for you.
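For example, a gateway tag change script could run a parameterized UPDATE directly. This is only a sketch: the table name `results` and the `row_id` column are placeholders I made up, and the datasource name would be whatever connection is configured on the gateway:

```python
# 'results' and 'row_id' are placeholder names, not from the original post.
def build_update(table, column, row_id_column="row_id"):
    # Only identifiers are interpolated; values go in as ? placeholders,
    # so the statement stays parameterized.
    return "UPDATE %s SET %s = ? WHERE %s = ?" % (table, column, row_id_column)

query = build_update("results", "FirstColumn")
print(query)  # UPDATE results SET FirstColumn = ? WHERE row_id = ?

# In the actual gateway tag change script (Ignition API, not runnable here):
# system.db.runPrepUpdate(query, [newValue.value, rowIndex], "MyDatasource")
```

Since the column name comes from the tag path, it's worth validating it against a known list of columns before interpolating it into the statement.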

I would use a project script module to hold your data and supply some helper functions. Something like this:

# Project script 'myTable'
# Initialize to 600 rows of zeros
columnNames = ['FirstColumn', 'SecondColumn']
columnNameIndex = dict([(n, i) for i, n in enumerate(columnNames)])
# Build each row separately; [[0] * n] * 600 would create 600 references
# to the SAME list, so writing one row would appear to write them all.
rows = [[0] * len(columnNames) for _ in range(600)]
# timeStamps[0] = last dataset write, timeStamps[1] = last data update
timeStamps = [0, 0]

def updateRows(row, column, value):
    rows[row][columnNameIndex[column]] = value
    timeStamps[1] = system.date.toMillis(system.date.now())

def updateDS():
    if timeStamps[0] < timeStamps[1]:
        timeStamps[0] = timeStamps[1]
        system.tag.write("[default]ExtInterfaceTags/loadedData", system.dataset.toDataSet(columnNames, rows))

Then your event script would be something like this:

rowIndex = int(str(event.tagPath.itemName).split("_")[-1])
columnName = str(event.tagPath).split("/")[1]
project.myTable.updateRows(rowIndex, columnName, newValue.value)

Finally, use a gateway timer event to check the timestamps and write the dataset tag when new data is present (at a suitable pace):
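The timer event body would presumably just call project.myTable.updateDS(). Here's a pure-Python sketch of the timestamp gate it relies on (stub names are mine; the list append stands in for the system.tag.write call): the timer fires at a fixed pace, but the tag is only rewritten when data actually changed since the last write, so a burst of 4800 updates collapses into a single dataset write.

```python
timeStamps = [0, 0]  # [last write, last update]
writes = []          # stands in for system.tag.write calls

def updateRows_stub(now):
    timeStamps[1] = now

def updateDS_stub():
    # Only write when new data arrived since the last write.
    if timeStamps[0] < timeStamps[1]:
        timeStamps[0] = timeStamps[1]
        writes.append(timeStamps[1])

# A burst of updates, then the timer fires twice, then one more update:
for t in (1, 2, 3):
    updateRows_stub(t)
updateDS_stub()   # one write covers the whole burst
updateDS_stub()   # no new data -> no write
updateRows_stub(4)
updateDS_stub()   # second write

print(writes)  # [3, 4]
```

With this pattern the tag change scripts stay cheap (an in-memory list write), and the expensive dataset construction happens at most once per timer tick.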