Does Gateway Events have a max number of running scripts?

I've got a project aiming to catch a trigger point from a PLC, then grab the telegram I need from other OPC points. According to Scripting architecture choices - General Discussion - Inductive Automation Forum, I chose to use Gateway tag change events, configured all the trigger point paths in it, and used readBlocking in the script body to read the telegram.

Here is the problem: the telegrams are not always ready when the trigger point is activated, so I wrote several read retries. But the total running time, including the script that processes the telegram, seems to be too long, and the single-thread queuing mechanism is keeping me from handling all the requests in time.

Right now I'm thinking about splitting all the trigger points into separate Tag Change entries. Is that feasible, given that there are a few dozen of them and each could fire every 50 ms? Would it exceed the gateway event thread limit, if there is one?

That is likely the root of your problem. Find a trigger that only fires when the telegram is ready. A tag event should not retry within itself. (Maybe put retries in a timer event that does cleanup.)

That problem may not be solved by changing the trigger tag: I already set the trigger tag after writing the telegram into place, but readBlocking still tells me my telegram is empty. I've put the trigger and the telegram in the same tag group, and set its mode to polled rather than subscribed, but the problem is still there.

Share more details. How are these telegrams being transmitted/received?

I'm confused what you mean by a "telegram"?

If you're trying to read a PLC tag after the trigger, I would always read it directly from the device instead of relying on the Ignition tag to have updated its value in time.
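To illustrate the direct-read approach, here is a minimal sketch. The PLC name, DB number, and offsets are hypothetical placeholders; only the path-building helper is concrete, and the Ignition API calls are shown as comments since they only exist inside the gateway:

```python
# Sketch: build OPC item paths and read them directly from the device.
# plcName, dbNumber, and the offsets below are hypothetical placeholders.
def build_item_paths(plcName, dbNumber, offsets):
    # e.g. "X16.0" -> "[PLC1]DB100,X16.0"
    return ["[{}]DB{},{}".format(plcName, dbNumber, off) for off in offsets]

paths = build_item_paths("PLC1", "100", ["X16.0", "W18", "S20.32"])
# Inside a gateway tag change script you would then read device-fresh values:
# qualifiedValues = system.opc.readValues("Ignition OPC UA Server", paths)
# telegram = [qv.value for qv in qualifiedValues]
```

This avoids the race where the subscribed Ignition tag has not yet polled the new PLC value when the trigger fires.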


That may help, let me try

Thanks for your help; there are no retries anymore. But other problems have come up:

  1. The system.opc.write function may sometimes fail, or rather perform a "phantom write": the write itself reports success, double-checked with a read-back, but the value on the PLC side stays the same. The issue disappeared after I added a 250 ms delay with time.sleep every time I read/write data.
  2. The read/write functions always take longer right after downloading a program to the PLC, and recover after a minute or so. Is there any solution to this?

There are many posts on this forum explaining why sleep() should not be used.

I fully recognize how blocking in scripts can bring the whole gateway down, but even system.opc.write told me that my write succeeded (which it actually had not). Optimized writes are already disabled, so I have no idea how to tackle this problem.

You're going to have to share your code for us to help further.

The logic is like this:

  1. In gateway events, the trigger signal is monitored, and the tag change script looks like this (the item-path list passed to readValues was truncated in the original post; itemPaths below is a placeholder for it):
if newValue.getValue() == True:
	config = system.tag.getConfiguration(str(event.tagPath.getParentPath()).replace('handshake/outHandshake', ''))
	dbNumber = str(config[0]['parameters']['DB Number'].value)
	plcName = str(config[0]['parameters']['PLC'].value)
	locationCode = str(event.tagPath.getParentPath()).split('/')[1]
	# itemPaths: the list of OPC item paths (omitted in the original post)
	infoResult = system.opc.readValues('Ignition OPC UA Server', itemPaths)
	project.mes.sendTelegram(str(event.tagPath), locationCode,
		infoResult[0].value,
		plcName,
		infoResult[1].value,
		infoResult[2].value,
		infoResult[3].value,
		infoResult[4].value, dbNumber)
  2. As written above, the telegram data is read and the self-developed function project.mes.sendTelegram is called:
def sendTelegram(tagPath, locationCode, offline, plcName, seqnr, telegramCode, timestamp, body, dbNumber = ''):
	# startDate = ...  (right-hand side lost in the original post)
	logger = system.util.getLogger('MES-' + locationCode)" " + telegramCode + " " + body)  # start of this message was garbled in the post
	retryNum = 0
	homePath = tagPath.split("/handshake")[0]
	handshakePath = tagPath.split("outHandshake")[0] + "inHandshake/"
	delay = 0
	import time
	time.sleep(0.25)  # generate a delay for correct writes to the PLC
	system.opc.writeValue('Ignition OPC UA Server', "[" + plcName + "]DB" + dbNumber + ",X16.0", False)
	time.sleep(0.25)  # generate a delay for correct reads from the PLC
	checkValue = system.opc.readValue('Ignition OPC UA Server', "[" + plcName + "]DB" + dbNumber + ",X16.0").getValue()
	while checkValue == True:
		logger.warn('Write problem')
		system.opc.writeValue('Ignition OPC UA Server', "[" + plcName + "]DB" + dbNumber + ",X16.0", False)
		time.sleep(0.25)  # generate a delay for correct reads from the PLC
		checkValue = system.opc.readValue('Ignition OPC UA Server', "[" + plcName + "]DB" + dbNumber + ",X16.0").getValue()

	# more logic about interactions with the Java instance

	# more logics about interactions with Java Instance

Without the time.sleep(0.25), it regularly happens that the value False is not actually written to the target address, while the extra readValue still tells me the data was written as False (no log entry with "Write problem" ever appeared).

The return from system.opc.readValue() is not a boolean. It is a quality object. You should be testing checkValue.good. Also, you aren't capturing and examining the quality object returned by system.opc.writeValue(). When that object's .good is true, you can be sure that the value reached the target device.
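To make the suggested quality checks concrete, here is a sketch that runs outside the gateway. The two mock classes are stand-ins for the objects Ignition returns; in a real script you would check the objects returned by system.opc.writeValue and system.opc.readValue directly:

```python
# Mock stand-ins for Ignition's quality / qualified-value objects, so the
# verification pattern can be shown in isolation.
class MockQuality(object):
    def __init__(self, good):
        self.good = good
    def isGood(self):
        return self.good

class MockQualifiedValue(object):
    def __init__(self, value, good):
        self.value = value
        self.quality = MockQuality(good)

def write_verified(write_quality, readback, expected):
    # Trust the write only when the write quality is good AND the read-back
    # has good quality and shows the expected value.
    if not write_quality.isGood():
        return False
    return readback.quality.isGood() and readback.value == expected
```

Comparing only the read-back value (as the original script does) silently passes when the read itself came back with bad quality; checking both the write result and the read quality closes that gap.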

I can't help noticing that you seem to be writing back to the boolean that triggered your event. Don't do that. It is extremely bad for handshake timing (racy) to write such a trigger from two directions (Ignition and PLC). Always use a separate acknowledgement tag. The PLC should be the entity that clears the trigger when it sees the acknowledgement, and your tag change event should be clearing the acknowledgement whenever the trigger is false. (The PLC must wait for acknowledgement to turn off before firing the trigger again.)
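A minimal sketch of that separate-acknowledgement handshake, with a plain dict standing in for the tag database and hypothetical tag paths; in Ignition the writes would be system.tag.writeBlocking calls:

```python
# In-memory stand-in for the tag database; the paths are hypothetical.
tags = {"Line1/trigger": False, "Line1/ack": False}

def on_trigger_change(new_value):
    # Ignition side of the handshake: only ever writes the ack tag.
    # The PLC alone writes the trigger, and clears it after seeing the ack.
    if new_value:
        # ... read and process the telegram here ...
        tags["Line1/ack"] = True    # tell the PLC that processing is done
    else:
        tags["Line1/ack"] = False   # trigger cleared by the PLC: reset the ack

# Sequence: PLC fires the trigger, Ignition acks, PLC clears the trigger,
# Ignition resets the ack, and only then may the PLC fire again.
on_trigger_change(True)
on_trigger_change(False)
```

Because each tag has exactly one writer, neither side can overwrite a state change the other side has not yet seen.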

May I suggest pre-formatting the path so you don't have to rebuild it every time? Much better for readability:

opcPath = "[{}]DB{},X16.0".format(plcName, dbNumber)
system.opc.writeValue('Ignition OPC UA Server', opcPath, False)

About checkValue: I already call .getValue() on it, so what I get should be just the value I want.
Thanks for your other advice, but it may be too late to rework the current mechanism. Your idea will be adopted if I get the chance to make a change.
And what about my second problem? I would be more than happy if you have any ideas.

Well, you just said you can't change the mechanism. Changing the mechanism is almost certainly the fix. At least, that's how I would fix it.