Missed gateway tag change events/split separated strings into individual tags

I have a set of transaction groups that write to a handshake tag that is OPC, 1 for success, 2 for failure.
There is a gateway event that is monitoring that handshake tag for a value change to run a script. I'm seeing many instances where that script does not run, as evidenced by the following:
The first line of the script writes system.date.now() to a datetime tag.
I would expect that the transaction group “last triggered” time and the script execution time would be similar on each execution of the transaction group.
I do not see that; I am seeing instances where the transaction group has run but the script did not.

The OPC handshake tag is set at 1000ms, direct, to a logix PLC.
Is this the right way to do this? Is there a better way to trigger a script at the end of a transaction group?
I’m on 8.0.9

You could put a tag script on the tag. I’m not sure that would be better.

Is there a difference in the execution engine/priority that would make this approach work better?
When does the tag change get “sensed” by the event engine in the event of an OPC write?
My impression is that the handshake is written immediately on group completion. Does the event engine detect that write directly, or does the value need to be written to the PLC and read back to be detected? If the latter, I must make sure it stays at a changed value for longer than my worst-case OPC/PLC/OPC round trip.

Perhaps I’m asking the wrong question here:

My transaction group stored procedure returns up to 60 strings, each of which represents a set of values separated by a character. Each string gets split by the script into individual tags, which are in an array of UDTs.

Is there a better way to split the strings apart and write them to the individual tags than a script? I have optimized the script to use a single write-all operation instead of multiple writes.
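For reference, the split-then-single-write approach described above can be sketched like this. The separator character, tag provider, and UDT member names are hypothetical; adjust them to your actual array-of-UDTs layout.

```python
SEP = "|"  # assumed separator character in each packed string

def split_records(strings, sep=SEP):
    """Split each packed string into its individual values."""
    return [s.split(sep) for s in strings]

def build_write_lists(records, base_path="[default]Line/Records"):
    """Flatten all records into parallel path/value lists so a single
    system.tag.writeBlocking(paths, values) call can write everything.
    The path pattern here is a placeholder for your UDT array structure."""
    paths, values = [], []
    for i, rec in enumerate(records):
        for j, val in enumerate(rec):
            paths.append("%s/%d/Value%d" % (base_path, i, j))
            values.append(val)
    return paths, values

# In the gateway script, after obtaining the 60 strings:
# records = split_records(stringValues)
# paths, values = build_write_lists(records)
# system.tag.writeBlocking(paths, values)
```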

Consider a scripted database operation and post-process all together, instead of a transaction group.


We were able to work around this by moving from a tag change script to a timed script and using explicit values for the handshaking of the script. In addition, the handshake tag was buried in a PLC UDT, and we had to split it out into a top-level tag to get reliability.
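A minimal sketch of that timed-script workaround: poll the handshake tag on a fixed-rate gateway timer script and act only on a changed, explicit value. The tag path, the globals-dict cache, and the process functions are assumptions; the core logic is factored into a testable function.

```python
def poll_handshake(read_tag, cache):
    """Return the new handshake value if it changed since the last poll,
    else None.  read_tag is a callable standing in for
    system.tag.readBlocking([path])[0].value so the logic is testable.
    Only the explicit values 1 (success) and 2 (failure) are acted on."""
    value = read_tag()
    if value in (1, 2) and value != cache.get('lastHandshake'):
        cache['lastHandshake'] = value
        return value
    return None

# In the gateway timer script body, something like:
# cache = system.util.getGlobals().setdefault('hsCache', {})
# v = poll_handshake(lambda: system.tag.readBlocking([hsPath])[0].value, cache)
# if v == 1:
#     processSuccess()   # hypothetical
# elif v == 2:
#     processFailure()   # hypothetical
```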

I have a transaction group that “misses” the trigger quite often. I do see some timeouts on the driver, which is an old SLC 5/05, so I have been attributing the failures to the age of the system and the tag scan classes. BUT it has continued no matter how I tweak things. What I have considered is creating logic in the PLC that would build a table of the last 10 or 20 event records and just read that in Ignition. I would be interested to read more about what you’re describing as a post-process.
My basic concept is to offload that trigger and record-storing functionality to the PLC instead of Ignition. BUT that defeats the purpose of Ignition’s functionality…

Ok. I use multiple techniques depending on the client’s needs and trust in hardware at various points along the chain.

In all cases, I avoid triggering on booleans, and I avoid writing any given value from more than one direction. I like to use small integers as triggers–increment to trigger, rolling over naturally, but skipping zero. Sometimes an LINT timestamp is a natural trigger (Logix UTC wallclock µs or Omron clock ns, perhaps).
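The increment-to-trigger pattern above can be illustrated with a tiny sketch: a small unsigned integer that rolls over naturally but never lands on zero, so zero can mean “startup / no trigger yet.” The 16-bit register width is an assumption; match your PLC's actual trigger register.

```python
def next_trigger(current, width=16):
    """Advance the trigger value, wrapping within an unsigned register
    of the given bit width and skipping zero on rollover."""
    nxt = (current + 1) % (2 ** width)
    return nxt if nxt != 0 else 1
```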

Case 1: Fire and Forget

Sometimes one just wants production data corresponding to events for non-critical purposes. Missed records aren’t the end of the world. Applicable to any reasonable pace the OPC driver can keep up with.

  • Use a standard transaction group.
  • No particular Database technology requirements.
  • Use trigger condition “Active on value change” and select “Prevent trigger caused by group start”.
  • Maybe bypass store and forward.
  • Set option “OPC data mode” to “Read”.
  • Maybe use write failure handshake to an alarm bit, reset by an operator.
  • PLC must ensure all group data is stable in all other registers before updating the trigger value. (An exception in Logix would be filling a complete data structure, including the new trigger, with a Copy Synchronous operation.)

Case 2: Low speed, high reliability, interlocked

This is suitable when recording piece production “must-have” QA data, where a line can stop for DB retries. Perhaps 20-30 pieces per minute. Use a transaction group as in case 1, with these adjustments:

  • Likely bypass store and forward. Note that if used, “write success” means the record made it to the S&F system, not to the final DB.
  • Likely use a clustered/high-availability database configuration.
  • Use write success handshake to a PLC bit that permits the next cycle to start. The PLC resets it, but only after dwelling through a couple of subscription intervals of the OPC driver, and dwells a couple more intervals before allowing a new trigger.
  • Use write failure handshake to a PLC bit that starts a retry with a new trigger. PLC resets with dwells as for success, possibly with longer delays for repeated failures.

Case 3: High speed, high reliability, interlocked

Similar to case 2, but when higher speeds are required. To get rid of handshake dwells in the success path, a unidirectional handshake is needed. I recommend handshaking by echoing the trigger value to another register in the PLC to indicate success. A bit as in case 2 to indicate failure (line stop) is still appropriate, with dwells.

  • Script the equivalent of a transaction group with the required handshake, in a tag change event. Something like this, tweaked to your needs:
#### In the event:
someScript.myTransaction(initialChange, newValue, event)

#### In someScript:
from java.lang import Throwable
logger = system.util.getLogger("SomeProject.someScript")

cache = {}
echoTagPath = '[someProvider]path/to/trigger/echo'
failTagPath = '[someProvider]path/to/failure/bit'
opcServer = "Ignition OPC UA Server"
opcItems = [... all the opc item paths to read together ...]
transactionSQL = """Insert Into "someTable" (t_stamp, ...  other columns ...)
	Values (CURRENT_TIMESTAMP, ... ? placeholders for values ...)"""

def myTransaction(initialChange, newValue, event):
	if initialChange:
		cache['lastEcho'] = system.tag.readBlocking([echoTagPath])[0].value

	# Check for zero, null, or duplicate trigger
	if not newValue.value or newValue.value == cache['lastEcho']:
		return

	# Set the cached last trigger early to prevent accidental retriggers
	# if the DB operations run long.  Also prevents retriggers after failure
	# unless the project is saved/restarted.
	cache['lastEcho'] = newValue.value

	# Obtain all of the data to record.  Check for all good.
	QVlist = system.opc.readValues(opcServer, opcItems)
	if all([qv.quality.good for qv in QVlist]):
		# extract all the values and submit the SQL
		params = [qv.value for qv in QVlist]
		try:
			# Possibly use system.db.runSFPrepUpdate() instead.
			system.db.runPrepUpdate(transactionSQL, params)
			system.tag.writeBlocking([echoTagPath], [newValue.value])
		except Throwable, t:
			# Log and set failure handshake
			logger.warn("Java Error recording transaction", t)
			system.tag.writeBlocking([failTagPath], [True])
		except Exception, e:
			# See https://www.automation-pros.com/ignition/later.py
			logger.warn("Jython Error recording transaction", later.PythonAsJavaException(e))
			system.tag.writeBlocking([failTagPath], [True])
	else:
		# Log failed OPC reads
		failed = [(item, qv) for (item, qv) in zip(opcItems, QVlist) if not qv.quality.good]
		logger.warnf("OPC Read Error(s) for transaction: %s", repr(failed))
		system.tag.writeBlocking([failTagPath], [True])
  • Should use a clustered/high-availability database configuration.
  • The PLC must treat echo == trigger as success, and not equal as a line pause. (Unless zero, startup perhaps.)
  • PLC should handle failures as in case 2.

Case 4: High speed, high reliability, buffered

This is identical to case 3 from the Ignition side, but fed from the PLC via a FIFO. I like to use an array of UDTs for the FIFO (moving subscripts, not copying the entire array) and a single tag of that UDT for the Ignition interface. The production line records into the FIFO while the PLC feeds Ignition asynchronously. Loading and unloading the FIFO would use Copy Synchronous instructions for data consistency. This approach allows for very high speed operations, especially where the line cannot stop instantly. This can also be deployed where a line cannot stop at all, with a large FIFO to cover significant Ignition/DB outages.

Case 5: High speed, high reliability, multi-buffered

This is identical to case 4, but the Ignition interface UDT is composed of a trigger and count plus a nested array of the UDT of the FIFO. Each Ignition OPC read picks up multiple records, and the SQL in the script would be tweaked to insert multiple rows at a time. This allows the production line to place records in the FIFO much faster than Ignition’s round trip. Use this approach when Ignition’s OPC driver is bottlenecked by latency.
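The multi-row insert tweak for case 5 can be sketched as follows: expand a single-row VALUES template to match however many records the OPC read picked up, and flatten the parameters to line up with the placeholders. The table and column names are placeholders, not from the original post.

```python
ROW_TEMPLATE = "(CURRENT_TIMESTAMP, ?, ?)"  # one row: t_stamp + two values

def build_multi_insert(records):
    """Build (sql, params) inserting all records in one statement.
    records is a list of tuples, one per FIFO record read this cycle."""
    rows = ", ".join([ROW_TEMPLATE] * len(records))
    sql = 'Insert Into "someTable" (t_stamp, colA, colB) Values %s' % rows
    # Flatten in record order to match the placeholder order.
    params = [v for rec in records for v in rec]
    return sql, params

# In the transaction script, using only the first `count` nested records:
# sql, params = build_multi_insert(recordList)
# system.db.runPrepUpdate(sql, params)
```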

Of course, with the scripted solutions, any additional processing can be inserted where needed, like the OP’s need to split PLC strings into separate values. That would just be another operation between the “All Good” check and the actual params list generation.
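To make that concrete, here is one way the split step could slot into the case 3 script, between the quality check and the params list: any packed string from the PLC is expanded in place before the SQL parameters are built. The separator character and the simple string check are assumptions.

```python
def expand_params(raw_values, sep="|"):
    """Expand any packed (separator-delimited) strings into their
    individual values, preserving order, so the result lines up with
    the ? placeholders in the insert statement."""
    params = []
    for v in raw_values:
        # Simplified type check; Jython tag values may also be unicode.
        if isinstance(v, str) and sep in v:
            params.extend(v.split(sep))
        else:
            params.append(v)
    return params

# Replacing the plain comprehension in the case 3 script:
# params = expand_params([qv.value for qv in QVlist])
```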