Hi guys.
I want to bulk-write the values of all my source tags (200 tags) to my destination tags (200 tags) on a 5-second cycle. The tags are all OPC tags.
Is there any way I can use a transaction group to do this data passing without storing any values in my database?
Use a script. Read all the values in one call to system.tag.readBlocking(), then write them all to the other tags with one call to system.tag.writeBlocking().
Consider using a project script module top-level variable to hold the two lists of tag paths as static values.
I have tried this approach, but with 3 sets of it (600 source tags to 600 destination tags in total, 200 tags per set) the writes take a long time (10-15 s), and it also affects another of my tag event scripts that uses valueChanged under different UDTs.
Don't use tag event scripts for this. Use a gateway timer event (part of the project) that runs every five seconds. That script reads all of your tags in bulk, then writes them in bulk, in the project script subsystem. (It doesn't interfere with tag events on the tags.)
By "in bulk", I mean you provide a list of tagpaths to the read and write operations. No looping.
In a project script named tagCopy, perhaps:
# Top-level lists of tagpaths
readPaths = ['[default]first/read/tag/path',
             '[default]second/read/tag/path',
             '[default]third/read/tag/path']
writePaths = ['[default]first/write/tag/path',
              '[default]second/write/tag/path',
              '[default]third/write/tag/path']
# The above lists are effectively constants

# This function does the work efficiently
def myCopyEvent():
	qValues = system.tag.readBlocking(readPaths)
	system.tag.writeBlocking(writePaths, qValues)
Then, in your Timer Event, you have the one-liner:
tagCopy.myCopyEvent()
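For the three 200-tag sets mentioned above, the same pattern still costs only one read and one write per cycle: concatenate the path lists once at the top of the module. A minimal sketch with hypothetical list names (set1ReadPaths, etc.; each would hold your ~200 real tagpaths):

```python
# Hypothetical per-set path lists; in practice each holds ~200 tagpaths.
set1ReadPaths = ['[default]set1/read/a', '[default]set1/read/b']
set2ReadPaths = ['[default]set2/read/a']
set1WritePaths = ['[default]set1/write/a', '[default]set1/write/b']
set2WritePaths = ['[default]set2/write/a']

# Concatenate once at module load, so every timer cycle is still a
# single readBlocking() call and a single writeBlocking() call.
readPaths = set1ReadPaths + set2ReadPaths
writePaths = set1WritePaths + set2WritePaths
```

Keeping all sets in one pair of lists avoids three separate round trips through the tag system per cycle.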
How does this script deal with a bad-quality tagpath in readPaths?
Will it trigger many errors and cause a timeout, degrading gateway performance?
No. It just returns bad quality for the tag value.