Best Way to Script from Tag History NOT using Vision

Ok, I'll back up a bit here and go back to the array approach, since there's more nuance to querying history than I expected. If you don't need it in a tag, a tag history binding would be perfect for this.

I'd create a UDT with 3 members.

  1. Your value tag.
  2. An array of the last 12 values. Add a custom number property "index" to this tag so you don't have to shift the array; just keep track of the oldest entry.
  3. Expression tag for the average.

Use a tag change script on your value tag (1) to update the array tag (2) and increment the index property.

# read the ring buffer index and the stored array in one call
index_num, val_arr = [readQV.value for readQV in system.tag.readBlocking(['[.]prevVals.index','[.]prevVals'])]

index = int(index_num)
vals = list(val_arr)

# overwrite the oldest entry with the new value
vals[index] = currentValue.value

# advance the index, wrapping back to 0 at the end of the array
next_index = (index+1) % len(vals)

system.tag.writeBlocking(['[.]prevVals.index','[.]prevVals'],[next_index,vals])

Then use your expression to calculate the average tag (3).
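If wiring an expression to an array-valued tag turns out to be awkward, one alternative (my sketch, not part of the original suggestion; it reuses the [.]runningAvg memory-tag name that appears later in this thread) is to finish the job at the end of the same change script:

# continuing from the change script above: average the ring buffer
avg = sum(vals) / float(len(vals))

system.tag.writeBlocking(['[.]runningAvg'], [avg])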

Hey, sorry, just got back to this. Regarding using Gateway events instead of tag event scripts: I get that it's better now, but the way I originally coded it used tag event scripts. I'm on a bit of a time crunch, so I won't change the larger scripts from the other forum thread at this point, but I will change them in the future for sustainability and performance. For this one I'll use Gateway events now, thanks! Why does Mass Balance History Storage need to have its history enabled if that's the only database I'm trying to write to, and I only need to keep the last 12 good values?

I don't know the details of your system, so I'll only say this choice may come back to bite you, if it isn't already.

That is a tag, even if the data type is dataset. Just writing to it using the tag write functions will not store its value in a database.

system.tag.queryTagHistory() requires the tag which you are requesting data for to be historized. This is why your dataset is empty.

If you choose to use a dataset tag to store your data, then you must use the dataset functions to interact with it.

I suggest that you look up working with Datasets in the manual. I'm away from my computer so I can't provide an example script of how to do this atm, but if no one has done so by the morning, I'll throw one together.

Okay, as promised, if you would like to do this without using Tag History, then a script similar to this is how I would achieve it.

  1. You will need the tag(s) whose values you want to store.
  2. Put this script into a project script library
def saveLastHourData():
   #read in all the tag values that you want to save,
   #as well as the dataset tag where you are saving them
   tagPaths = ["Path/To/Data/Tag","Path/To/Dataset/Tag"]
   tagValues = [qv.value for qv in system.tag.readBlocking(tagPaths)]

   #if you ensure that the dataset is always the last path,
   #then you can use that to separate it from the other tags
   dataRow = tagValues[:-1]
   dataSet = tagValues[-1]

   #if this is the very first row, then we need to construct the
   #dataset from scratch
   if not dataSet.rowCount:
      headers = tagPaths[:-1]
      dataSet = system.dataset.toDataSet(headers,[dataRow])
   elif dataSet.rowCount < 12:
      dataSet = system.dataset.addRow(dataSet,row = dataRow)
   elif dataSet.rowCount >= 12:
      dataSet = system.dataset.deleteRow(dataSet,0)
      dataSet = system.dataset.addRow(dataSet,row = dataRow)

   system.tag.writeBlocking([tagPaths[-1]],[dataSet])
  3. Configure a Gateway Scheduled Event to run each hour, and use a script similar to the following
yourLibraryName.saveLastHourData()

But then you would need to add one of these scripts for every tag that uses this functionality.

I use something similar to the script I posted most recently in a tag change script within a UDT; I've had zero issues with it, and I have over 50 instances of the UDT.

That may be so, and if it works for you, great, but I will never recommend putting this type of functionality inside a tag value change script. This script and the one you posted (to be clear, I don't think your script is poorly written) are unlikely to run in single-digit milliseconds. IMO, it is poor practice to put scripts of that nature in value change scripts, given how value change scripts execute and the limits placed on them.

You would only need one of these scripts for each dataset that you want to maintain.

A different period is easily accommodated, and the script can also easily be made generic, so a single script can service multiple use cases.
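For instance, a rough sketch of a generic version (function and parameter names are mine, not from the thread) that takes the value-tag paths, the dataset-tag path, and the row limit as arguments:

def appendToRollingDataset(valuePaths, datasetPath, maxRows=12):
   #read the value tags and the dataset tag in a single call
   tagValues = [qv.value for qv in system.tag.readBlocking(valuePaths + [datasetPath])]
   dataRow = tagValues[:-1]
   dataSet = tagValues[-1]

   if not dataSet.rowCount:
      #first execution: build the dataset from scratch
      dataSet = system.dataset.toDataSet(list(valuePaths), [dataRow])
   else:
      dataSet = system.dataset.addRow(dataSet, row=dataRow)
      #trim the oldest rows once the limit is exceeded
      if dataSet.rowCount > maxRows:
         dataSet = system.dataset.deleteRows(dataSet, range(dataSet.rowCount - maxRows))

   system.tag.writeBlocking([datasetPath], [dataSet])

The hourly scheduled event then becomes one call per dataset, e.g. yourLibraryName.appendToRollingDataset(['Path/To/Data/Tag'], 'Path/To/Dataset/Tag').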

Another option, for the case where the user prefers 'newest data at the top', is to pass the optional rowIndex=0 parameter, then modify the script to delete any rows beyond 'max':

	dataSet = system.dataset.addRow(dataset=dataSet, rowIndex=0, row=dataRow)
	maxRows = 12
	rowCount = dataSet.getRowCount()
	if rowCount > maxRows:
		rowsToDelete = range(maxRows, rowCount, 1)
		dataSet = system.dataset.deleteRows(dataSet, rowsToDelete)

I considered another option whereby the calculation is done within this script, prior to writing the data back to the tag. Then, the new value (and previous '12' aggregation calcs) are also stored in the dataset (in the next column). Regardless of 'latest @ top/bottom', the latest calc should always be in the same cell (er... [row, column]).
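A minimal sketch of that idea (mine, not from the thread; it assumes a single value tag, newest-at-top ordering, and a dataset widened to two columns, e.g. ['value', 'mean12']):

	#dataRow[0] holds this period's reading (single value tag assumed)
	newValue = dataRow[0]
	#average the new reading with up to the 11 most recent stored values
	kept = [dataSet.getValueAt(r, 0) for r in range(min(dataSet.getRowCount(), 11))]
	newMean = (newValue + sum(kept)) / float(len(kept) + 1)
	#store the reading and its aggregate side by side; the latest calc
	#then always lives at [0, 1]
	dataSet = system.dataset.addRow(dataset=dataSet, rowIndex=0, row=[newValue, newMean])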

New approach, not needing a list of the last 12 values at all if you only want to display the running average of the last 12 values.

Just use a tag change script on the value tag to write to the average tag:

avg = system.tag.readBlocking(['[.]runningAvg'])[0].value

newAvg = avg + ( ( currentValue.value - avg ) / 12 )

system.tag.writeBlocking(['[.]runningAvg'],[newAvg])

That's not actually a running average. It is a single-pole digital filter that decays towards new values (like an RC electrical network) based on update pace and divisor. Rearranged, newAvg = (11/12)*avg + (1/12)*currentValue, which is exponential smoothing with a weight of 1/12 on each new sample, not the mean of the last 12 samples. You absolutely must track prior values individually to have a "running average".

I tested this with multiple sets of data and got results equal to a manually calculated average, regardless of the values' scale.

Stick a single transient value in your series that is a million times larger and see what happens. Count how many non-extreme new values it takes to recover. Compare the output to what a true running average produces.
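A quick, throwaway way to see the difference (plain Jython, nothing Ignition-specific; the numbers are invented for illustration):

# twelve steady samples, one huge transient, then more steady samples
samples = [10.0] * 12 + [10.0e6] + [10.0] * 24

avg = 10.0              # single pole filter state
window = [10.0] * 12    # true running average window

for x in samples:
	avg = avg + (x - avg) / 12.0
	window.pop(0)
	window.append(x)
	print '%14.1f  %14.1f' % (avg, sum(window) / len(window))

# the true running average is back to 10.0 twelve samples after the spike;
# the filter output is still several hundred thousand above it at that point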


ok, ya. Scratch that


There are a couple of good posts comparing Gateway Events vs Tag Events on the forum, with (my pick for) the most notable thread:

Taking the above advice into consideration, and reading:

I would:

  1. Change 'this tag' to a dataset tag.
  2. Modify the existing script to append/insert the value to 'this tag' as a dataset.

then, either:

  • Add another expression tag, using the mean(dataset, columnIndex) expression function.
  • Modify the existing script to calculate a new mean from the latest 12 values, and store that value (and last 11 historical calcs) in another column of the same dataset.

While the thread pool event queue size per tag is of note, the one that makes me consider value change events a last option is this one:

So, not only is there a thread pool event queue per tag, but all tag event executions are also in a limited thread pool. This means that only 3 threads can be executing at one time. If you litter your tags with value change scripts, it isn't hard at all to hit this limit, and then the chance that you run into the per-tag pool queue size only increases.

Be careful with terminology. There is no thread pool per tag. There is an event queue per tag. By default, five possible entries per tag.

There is one thread pool for scripting for the entire tag subsystem, and one thread pool for expression tags for the entire tag subsystem. Each has three threads, by default.

"event queue" ā‰  "thread pool"

Yep. Thanks. Believe I corrected the terminology.

I think you'd be surprised how fast this runs:

Using:

	from java.lang.System import nanoTime as nt
	t=[nt()]
	index_num, val_arr = [readQV.value for readQV in system.tag.readBlocking(['[.]prevVals.index','[.]prevVals'])]
	
	index = int(index_num)
	vals = list(val_arr)
	
	vals[index] = currentValue.value
	
	next_index = (index+1) % len(vals)
	
	system.tag.writeBlocking(['[.]prevVals.index','[.]prevVals'],[next_index,vals])
	t.append(nt())
	system.util.getLogger('TEST').info('Exec time: {} us'.format((t[-1]-t[0])/1000.0))

Collecting 20 samples, I saw the averages range between 377 and 658 μs. I tried writeAsync too, but it was extremely comparable.

Also worth noting: the execution averages were actually very slightly lower (min 303 μs, max 612 μs) when I popped and inserted a value into the array (this would be my go-to) instead of using the index tracker:

vals = list(system.tag.readBlocking(['[.]prevVals'])[0].value)
vals.pop()
vals.insert(0, currentValue.value)

system.tag.writeAsync(['[.]prevVals'], [vals])