I connect to a Siemens S7-1500 PLC via OPC.
I try to use readBlocking(tagPaths), but sometimes it takes too much time (more than 1 second).
I want to know whether this function reads tags directly from the PLC, or whether Ignition reads all the tags from the PLC and stores them in memory, so that this function reads the tags from memory.
I believe when running anything against tags on a provider such as [default] it is reading from Ignition's tag store.
ReadBlocking will return you a PyList of QualifiedValues from what it currently has in memory for that provider.
In terms of it taking "more than 1 second", I am unsure what your use case is, but the performance of the readBlocking function depends somewhat on gateway memory/CPU allocation as well as the number of tags you are actually trying to read. Keep in mind, too, that if you are reading, say, 100 tags and then looping over them, sorting them, etc., that will also take time.
If you are having a specific issue other than the speed at which your gateway returns the results of your readBlocking call, I recommend adding those details to this post, or making another post with a different topic heading to get more assistance.
system.tag.readBlocking reads the QualifiedValue from the Ignition tag database. See the manual: system.tag.readBlocking | Ignition User Manual
If you want the actual live value from the PLC, you will need to use the system.opc.readValues function. See the manual: system.opc.readValues | Ignition User Manual
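To make the distinction concrete, here is a minimal sketch of both calls; the tag path, OPC server name, and OPC item path are placeholders and must be replaced with values from your own gateway configuration.

    # Read the most recent value Ignition already holds in its tag store
    storedValue = system.tag.readBlocking(["[default]Line1/Motor1/Speed"])[0].value

    # Ask the OPC server (and therefore the PLC) for the value right now
    liveValue = system.opc.readValues(
        "Ignition OPC UA Server",                # assumed server name
        ["ns=1;s=[DeviceName]Some/Item/Path"]    # assumed item path
    )[0].value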
As @Benjamin_Furlani noted, the response time for either is going to be extremely sensitive to gateway load, where the function is called, etc. If you are calling the function in a client scope, that call has to make a round trip to the gateway, which also adds network latency.
Note that readBlocking is a thread-blocking call, so if it is used on a button on a client screen, the screen will freeze while that read is happening. You can mitigate that freeze by using system.tag.readAsync, or a combination of system.util.invokeAsync and system.util.invokeLater.
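For example, here is a minimal sketch of a non-blocking read from a button script; the tag paths are placeholders.

    paths = ["[default]Line1/Motor1/Speed", "[default]Line1/Motor1/Current"]

    def onReadComplete(results):
        # results is a list of QualifiedValues, in the same order as 'paths'
        values = [qv.value for qv in results]
        print(values)

    # Returns immediately; the callback runs when the values arrive,
    # so the client UI thread never blocks waiting on the gateway.
    system.tag.readAsync(paths, onReadComplete)

If the callback needs to update Vision components, hand that work back to the UI thread with system.util.invokeLater.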
I have read hundreds of tags in milliseconds. Show the code that is taking too much time; there may be other reasons the execution is taking so long.
I tried the readAsync function, but it also causes a long delay. I reduced the maximum processor state of the computer to 99%, which makes the CPU run at 2.4 GHz; if I set the maximum processor state back to 100%, the script executes in less than 1 second.
The code looks like this:
self.view.custom.instance = utils.getTagPathAsDict(tagPath)
The function getTagPathAsDict reads all of the tags under tagPath using readBlocking.
There are 79 tags under tagPath.
This code may execute 21 to 28 times per second.
When the CPU runs at 2.4 GHz, the execution time gets longer and longer (1 second, 2 seconds, and more).
Do the scripts execute on one CPU core or on all cores?
This is just one of my projects; if I use this code in all of my projects, it will execute more than 200 times per second, which will cause more problems.
Post the actual function definition you're calling in the project script.
# 'system' is available in Ignition's scripting scope, so no import is needed
def getTagPathAsDict(tagPath_):
    """
    Recursively read every tag under tagPath_ into a nested dict,
    one readBlocking call per leaf tag.
    :param tagPath_: folder (or UDT instance) path to read from
    :return: dict keyed by tag name; folders become nested dicts
    """
    dict_ = {}
    for tag in system.tag.browse(tagPath_).getResults():
        keyName = str(tag['fullPath']).split('/')[-1]
        if str(tag['tagType']) != 'Folder':
            # one readBlocking call per tag -- this is the expensive part
            dict_[keyName] = system.tag.readBlocking([str(tag['fullPath'])])[0].value
        else:
            #name = getTagPathAsDict(str(tag['fullPath']) + "/info/name")
            dict_[keyName] = getTagPathAsDict(str(tag['fullPath']))
    return dict_
I wrote sample code for it; it's different from the real code, but the time-consuming operation is the same.
If several scripts read the same tags at the same time, will that take more time or not?
I tested the function; reading all of the tags under tagPath one by one into a dict is faster than system.tag.readBlocking([tagPath]).
import time

# time 100 reads of the parent path itself
sumList = []
for i in range(100):
    start_ = time.time()
    x = system.tag.readBlocking([tagPath])
    end_ = time.time()
    sumList.append(end_ - start_)
print(sum(sumList) / len(sumList))

# time 100 recursive browse-and-read passes over the same folder
sumList = []
for i in range(100):
    start_ = time.time()
    x = utils.getTagPathAsDict(tagPath)
    end_ = time.time()
    sumList.append(end_ - start_)
print(sum(sumList) / len(sumList))
The output is:
readBlocking cost 0.159990000725 s
reading all of the children tags one by one into a dict cost 0.0268899989128 s
Could you tell me the reason for the difference in execution time?
What is tagPath? In 8 years, I have never found this to be true.
There's literally no way browsing and reading many tags in sequence recursively is faster than an individual tag read.
What your sample script is doing is catastrophic for performance, and probably explains your issues on its own.
What are you actually doing that led you to this script, which completely ignores the benefits of a tag hierarchy and instead reads every single tag underneath a path in one go? What are you doing with this massive result dictionary, especially every ~35 milliseconds? (A rate that will basically never be consistently achievable even with a simple flat read, because of garbage collection stalls in Ignition.)
The scripting practices here don't make sense, and it's likely you've engineered yourself into a corner that's going to take significant effort to crawl out of.
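For reference, one common way out of that pattern is to browse once for the leaf paths and then issue a single readBlocking for all of them. A rough sketch is below; the function names are illustrative, it returns a flat dict keyed by tag name rather than the nested dict your function builds, and if the tag structure does not change you can cache the path list so the browse cost is not paid on every call.

    def collectLeafPaths(folderPath):
        # Recursively gather the full paths of all non-folder tags under folderPath
        paths = []
        for result in system.tag.browse(folderPath).getResults():
            if str(result['tagType']) == 'Folder':
                paths.extend(collectLeafPaths(str(result['fullPath'])))
            else:
                paths.append(str(result['fullPath']))
        return paths

    def readFolderAsDict(folderPath):
        # Read every leaf tag under folderPath in ONE readBlocking call
        paths = collectLeafPaths(folderPath)
        values = system.tag.readBlocking(paths)
        return {p.split('/')[-1]: qv.value for p, qv in zip(paths, values)}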
Thank you, I will do more testing on that. From my side, I just want to bind this to a custom parameter; reading the tags with a script every cycle is not a good solution.
Could you tell us the OPC tags' update time or update strategy? I have some tags in the PLC that change very fast, like current (LT#time type); if I bind these tags to a parameter, could it cause some issues?
Ignition is going to poll the PLC at a pace set in Ignition's tag group. It will only notice changes that often, no matter how quickly the signal changes in the PLC. If there is no change from one poll to the next, Ignition will avoid updating bindings with the unchanged value.
Polling a PLC very fast is difficult to achieve with ordinary protocols.
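If you want to see how often Ignition is actually picking up a fast-changing value, a rough sketch (the tag path is a placeholder) is to read the tag repeatedly and watch the QualifiedValue timestamp, which only moves when the tag group poll observes a new value:

    import time

    tagPath = "[default]Line1/Motor1/Current"   # placeholder path

    lastStamp = None
    for _ in range(50):
        qv = system.tag.readBlocking([tagPath])[0]
        stamp = qv.timestamp.time               # epoch millis of the last observed change
        if stamp != lastStamp:
            print("%s -> %s" % (qv.timestamp, qv.value))
            lastStamp = stamp
        time.sleep(0.05)                        # sample faster than the tag group rate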
I got it, thank you very much.