Communications & Scripting Query

So let's say I decide to monitor how long a falling temperature takes to drop below a predefined value by using a while loop as shown:

An event script will kick this off, as it knows the temperature is about to start falling from 1000°C.

```python
start = system.date.now()
doStuff = True
# "pathToTemp" is a placeholder tag path
while system.tag.read("pathToTemp").value > 500:
	# script should only run for ten minutes max...
	if system.date.secondsBetween(start, system.date.now()) > 600:
		doStuff = False
		break
	#time.sleep(0.5)

if doStuff:
	#do stuff
	cooldownTime = system.date.secondsBetween(start, system.date.now())
```

I get that using the sleep function in the while loop will drastically slow things down, but what would be the effect without it, with the tag being read over OPC from an Ethernet AB-PLC5-80C?

Would the many threads of tag reads make their way out onto the network and hammer the PLC?

I guess I just want to understand better how the Tag Group settings and other 'polling' type settings work with regard to things like scripted tag reads. I was under the impression that the Ignition OPC server would poll the PLC for tag data as set up, say once a second; read requests from within Ignition would return the value last read by the server, thus not propagating all reads out over the network.
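That subscription model can be illustrated with a toy sketch in plain Python (this is not Ignition API; the class and names are made up for the illustration). Only the scheduled poll touches the "device"; script reads just return the cached value:

```python
class SubscriptionCache:
    """Toy model of a subscription-based tag system.

    poll() stands in for the OPC server's scheduled device request;
    read() stands in for a script-level tag read.
    """
    def __init__(self):
        self.last_value = None
        self.device_requests = 0

    def poll(self, device_value):
        # Only the scheduled poll generates device traffic.
        self.device_requests += 1
        self.last_value = device_value

    def read(self):
        # Script reads return the cached value; no device traffic.
        return self.last_value

cache = SubscriptionCache()
cache.poll(998.5)                               # one scheduled poll
values = [cache.read() for _ in range(10000)]   # many script reads
print(cache.device_requests)                    # -> 1
```

Ten thousand script reads, one device request: under this model a tight script loop loads the gateway's CPU, not necessarily the PLC link.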

Use the preformatted text option to put your code in or surround your code with ``` to make it more readable.

If you’re planning to do this in a valueChange event script, don’t. Particularly if you plan on monitoring multiple tags this way. Tag Event Scripts have a dedicated thread pool, and this script would claim one thread until it finished; if there are multiple, you will tie up the thread pool and no other event scripts will be able to execute. Problems abound.
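To make the starvation concrete, here is a plain-Python sketch (not Ignition code; the pool size of 3 is an assumption for illustration, not a documented Ignition value) showing a quick event stuck behind long-running monitors:

```python
from concurrent.futures import ThreadPoolExecutor
import time

POOL_SIZE = 3  # stand-in for a small, fixed event-script thread pool

def blocking_monitor(seconds):
    # stands in for a while loop that polls a tag until a condition is met
    time.sleep(seconds)
    return 'done'

pool = ThreadPoolExecutor(max_workers=POOL_SIZE)
start = time.time()
# claim every thread in the pool with long-running monitors
blockers = [pool.submit(blocking_monitor, 0.5) for _ in range(POOL_SIZE)]
# an unrelated quick event now has to wait for a free thread
quick = pool.submit(time.time)
delay = quick.result() - start
print('quick event delayed %.1f s' % delay)
pool.shutdown()
```

The quick event does nothing but read the clock, yet it cannot run until one of the blocking monitors finishes and frees a thread.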

A better solution is to record a timestamp when the temperature begins to fall, then again when it hits your lower threshold.

Do this in a Gateway Event Script, probably a timer. Code would look something like this:

```python
# Gateway Timer script. State must persist between executions, so the
# last temperature, fall-start time, and falling flag live in memory tags.
testValues = system.tag.readBlocking(['pathToTemp', 'pathToLastTemp', 'pathToFallStart', 'pathToFallFlag'])
temp = testValues[0].value
lastTemp = testValues[1].value
fallStart = testValues[2].value
fallFlag = testValues[3].value

if temp < lastTemp and not fallFlag:
    fallStart = system.date.now()
    fallFlag = True
elif fallFlag and temp < 500:
    totalFallTime = system.date.secondsBetween(fallStart, system.date.now())
    system.tag.writeBlocking(['pathToTotalTime'], [totalFallTime])
    fallFlag = False

system.tag.writeBlocking(['pathToLastTemp', 'pathToFallStart', 'pathToFallFlag'], [temp, fallStart, fallFlag])
```

No need to loop it: the OPC server will read the data when it needs to, and you won’t add any extra load with the script.
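The same edge-detection idea, pulled out into a plain-Python function so the state transitions are easy to follow (names and the simulated cooldown are illustrative, not Ignition API):

```python
def update_fall_timer(temp, last_temp, falling, fall_start, now, threshold=500):
    """One timer tick: detect the falling edge, then, once the temperature
    crosses the threshold, return the total fall time in seconds."""
    total = None
    if temp < last_temp and not falling:
        falling, fall_start = True, now   # falling edge detected
    elif falling and temp < threshold:
        total = now - fall_start          # crossed the threshold
        falling = False
    return falling, fall_start, total

# Simulate a cooldown sampled once per "second": steady at 1000°C,
# then falling 5 degrees per tick.
temps = [1000, 1000] + [1000 - 5 * i for i in range(1, 120)]
falling, fall_start, result = False, None, None
for t in range(1, len(temps)):
    falling, fall_start, total = update_fall_timer(temps[t], temps[t - 1], falling, fall_start, now=t)
    if total is not None:
        result = total
        break

print(result)  # -> 100 (ticks from the falling edge to crossing 500)
```

The fall starts at tick 2 and the value first drops below 500 at tick 102, so the measured fall time is 100 ticks with no loop running between samples.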

Thanks for your reply.
Yeah, sorry about the formatting :slightly_smiling_face: my bad. Pretty sure the formatting buttons weren’t there; I don’t know, maybe they were. It’s been a long day…

A Gateway Event tag change script is used to trigger the monitoring process. There are indeed multiple tags being monitored; each monitored tag then calls a function in a shared gateway script via system.util.invokeAsynchronous() that does the calculation in a parallel thread and places the results in a DB table.

I accidentally left the sleep commented out, and since the script was rolled out I have, by chance, noticed anomalies in alarm timestamps for certain events in the alarm journal around the same time as execution (alarms that should have the same timestamp but are 10+ seconds apart, for example).
I am doing an investigation to determine whether flooding the gateway with all these reads could impact things like the alarm journal and alarm event detection.

That may be more dependent upon the hardware than the method. How many CPUs, how much RAM is available, how much RAM has been allocated to Ignition, and are there any other memory hogs (e.g. DB software) running on the system?

The Gateway is part of a redundant pair running in a VMware cluster on separate VMs. Small 2-node HA dedicated cluster with a vSAN and plenty of resources.

Each VM has 8 GB RAM allocated (since upgraded to 16 GB), with a 1024 MB initial / 2048 MB max heap allocated to Ignition (since upgraded to 2048 MB/4096 MB). 4 vCPUs each, and both running Ignition 8.0.15 (since upgraded to 8.1.3).

There are no other resource hungry apps on the master server. The historian runs on a different server.

I am aware that we have performance issues with the PLC5 in question (the last remaining one, thankfully, soon to be upgraded) as the load factor was rather high, at around 200% on a 2 s schedule. I have since created a few other Tag Groups and optimized tags to prioritize ‘safety’-related tags for alarms etc.
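For anyone following along, my understanding of that load factor figure (an assumption about how Ignition computes it, not a quote from the docs) is roughly the actual time a full poll takes divided by the scheduled rate, so 200% on a 2 s schedule means each pass is taking about 4 s:

```python
def load_factor_pct(actual_poll_ms, scheduled_ms):
    # assumed definition: mean time to complete a poll / scheduled rate
    return 100.0 * actual_poll_ms / scheduled_ms

print(load_factor_pct(4000, 2000))  # -> 200.0  (polls can't keep up)
print(load_factor_pct(1000, 2000))  # -> 50.0   (comfortable headroom)
```

Anything persistently over 100% means the device can’t service the request load at the configured rate, so data (and alarm evaluation that depends on it) arrives late.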

I guess that ultimately the alarm timestamp accuracy will always be subject to how quickly the Gateway processes the tag data unless timestamps are generated in the PLC; that is most likely out of the question given there are some 10K+ alarms. I notice that the default alarm ‘Timestamp Source’ config value for all my alarms is ‘System’; one wonders if I’m better off setting this to ‘Value’, as the documentation is quite vague on this. I’m guessing ‘Value’ is when the OPC server read the tag value that triggered the event, i.e. generated by the OPC server. In the case of the internal Ignition OPC server, would ‘System’ and ‘Value’ essentially be the same?

The Value is the OPC timestamp of the last change, which should be the most accurate. The System is the Gateway time, and I believe that is the time when the Gateway records the alarm, so there is a good chance they are different.
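A trivial sketch of why the two can drift apart (plain Python; the 0.3 s sleep stands in for a busy Gateway sitting between the value change and the alarm record):

```python
import time

value_timestamp = time.time()    # when the server saw the value change
time.sleep(0.3)                  # stand-in for Gateway processing delay
system_timestamp = time.time()   # when the alarm would be recorded
skew = system_timestamp - value_timestamp
print('timestamps differ by %.1f s' % skew)
```

Under load, that processing delay grows, which would match the 10+ second gaps described above when the Gateway was being hammered by the unthrottled loop.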

Perhaps @Colby.Clegg can give some more detail.