Ok. I use multiple techniques depending on the client’s needs and trust in hardware at various points along the chain.
In all cases, I avoid triggering on booleans, and I avoid writing any given value from more than one direction. I like to use small integers as triggers: increment to trigger, rolling over naturally, but skipping zero. Sometimes a LINT timestamp is a natural trigger (Logix UTC wall-clock µs or Omron clock ns, perhaps).
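For illustration, the increment-with-rollover trigger scheme can be sketched in plain Python (a hypothetical helper name; in practice this lives in the PLC as an integer add that overflows naturally):

```python
def next_trigger(current):
    """Next value for a 16-bit trigger counter: increments, rolls over
    naturally, and never lands on zero (so zero can mean 'idle')."""
    nxt = (current + 1) & 0xFFFF  # natural 16-bit rollover
    if nxt == 0:
        nxt = 1                   # skip zero
    return nxt
```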
Case 1: Fire and Forget
Sometimes one just wants production data corresponding to events for non-critical purposes. Missed records aren’t the end of the world. Applicable to any reasonable pace the OPC driver can keep up with.
- Use a standard transaction group.
- No particular database technology requirements.
- Use trigger condition “Active on value change” and select “Prevent trigger caused by group start”.
- Maybe bypass store and forward.
- Set option “OPC data mode” to “Read”.
- Maybe use write failure handshake to an alarm bit, reset by an operator.
- PLC must ensure all group data is stable in all other registers before updating the trigger value. (An exception in Logix would be filling a complete data structure, including the new trigger, with a Copy Synchronous operation.)
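The "trigger last" rule above can be illustrated with a toy Python sketch (hypothetical names; the real code is PLC logic such as ladder or structured text, where `write` stands in for a register write):

```python
def publish_record(write, payload, trigger_value):
    """Write every payload register first, then the trigger, so a reader
    that wakes on the trigger change never sees a half-updated record."""
    for name, value in payload.items():
        write(name, value)
    write('trigger', trigger_value)  # last write fires the group
```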
Case 2: Low speed, high reliability, interlocked
This is suitable when recording piece production “must-have” QA data, where a line can stop for DB retries. Perhaps 20-30 pieces per minute. Use a transaction group as in case 1, with these adjustments:
- Likely bypass store and forward. Note that if used, “write success” means the record made it to the S&F system, not to the final DB.
- Likely use a clustered/high-availability database configuration.
- Use write success handshake to a PLC bit that permits the next cycle to start. The PLC resets this bit, but only after dwelling through a couple of the OPC driver's subscription intervals, then dwells a couple more intervals before allowing a new trigger.
- Use write failure handshake to a PLC bit that starts a retry with a new trigger. PLC resets with dwells as for success, possibly with longer delays for repeated failures.
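As a rough illustration only, the success-handshake dwell can be modeled as a small state machine (hypothetical Python; the real logic belongs in the PLC, and `dwell_scans` stands in for "a couple subscription intervals"):

```python
class SuccessHandshake(object):
    IDLE, HOLD, RESET_DWELL = range(3)

    def __init__(self, dwell_scans=2):
        self.dwell_scans = dwell_scans
        self.state = self.IDLE
        self.count = 0

    def scan(self, success_bit):
        """Call once per interval. Returns (reset_success_bit, allow_new_trigger)."""
        if self.state == self.IDLE:
            if success_bit:  # Ignition set the success bit
                self.state, self.count = self.HOLD, self.dwell_scans
            return (False, False)
        if self.state == self.HOLD:
            self.count -= 1
            if self.count == 0:  # held through the dwell; reset the bit now
                self.state, self.count = self.RESET_DWELL, self.dwell_scans
                return (True, False)
            return (False, False)
        # RESET_DWELL: dwell again before permitting a new trigger
        self.count -= 1
        if self.count == 0:
            self.state = self.IDLE
            return (False, True)
        return (False, False)
```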
Case 3: High speed, high reliability, interlocked
Similar to case 2, but when higher speeds are required. To get rid of handshake dwells in the success path, a unidirectional handshake is needed. I recommend handshaking by echoing the trigger value to another register in the PLC to indicate success. A failure bit as in case 2 (line stop) is still appropriate, with dwells.
- Script the equivalent of a transaction group with the required handshake, in a tag change event. Something like this, tweaked to your needs:
#### In the event:

```python
someScript.myTransaction(initialChange, newValue, event)
```

#### In someScript:

```python
from java.lang import Throwable

logger = system.util.getLogger("SomeProject.someScript")
cache = {}
echoTagPath = '[someProvider]path/to/trigger/echo'
failTagPath = '[someProvider]path/to/failure/bit'
opcServer = "Ignition OPC UA Server"
opcItems = [... all the opc item paths to read together ...]
transactionSQL = """Insert Into "someTable" (t_stamp, ... other columns ...)
Values (CURRENT_TIMESTAMP, ... ? placeholders for values ...)"""

def myTransaction(initialChange, newValue, event):
    if initialChange:
        cache['lastEcho'] = system.tag.readBlocking([echoTagPath])[0].value
    # Check for zero, null, or duplicate trigger
    if not newValue.value or newValue.value == cache['lastEcho']:
        return
    # Set the cached last trigger early to prevent accidental retriggers
    # if the DB operations run long. Also prevents retriggers after failure
    # unless the project is saved/restarted.
    cache['lastEcho'] = newValue.value
    # Obtain all of the data to record. Check for all good.
    QVlist = system.opc.readValues(opcServer, opcItems)
    if all([qv.quality.good for qv in QVlist]):
        # Extract all the values and submit the SQL.
        params = [qv.value for qv in QVlist]
        try:
            # Possibly use system.db.runSFPrepUpdate() instead.
            system.db.runPrepUpdate(transactionSQL, params)
            system.tag.writeBlocking([echoTagPath], [newValue.value])
        except Throwable, t:
            # Log and set failure handshake
            logger.warn("Java Error recording transaction", t)
            system.tag.writeBlocking([failTagPath], [True])
        except Exception, e:
            # PythonAsJavaException is from the 'later' script module:
            # https://www.automation-pros.com/ignition/later.py
            logger.warn("Jython Error recording transaction", later.PythonAsJavaException(e))
            system.tag.writeBlocking([failTagPath], [True])
    else:
        # Log failed OPC reads
        failed = [(item, qv) for (item, qv) in zip(opcItems, QVlist) if not qv.quality.good]
        logger.warnf("OPC Read Error(s) for transaction: %s", repr(failed))
        system.tag.writeBlocking([failTagPath], [True])
```
- Should use a clustered/high-availability database configuration.
- The PLC must treat echo == trigger as success, and echo != trigger as a line pause (unless the echo is zero, as at startup, perhaps).
- PLC should handle failures as in case 2.
Case 4: High speed, high reliability, buffered
This is identical to case 3 from the Ignition side, but fed from the PLC via a FIFO. I like to use an array of UDTs for the FIFO (moving subscripts, not copying the entire array) and a single tag of that UDT for the Ignition interface. The production line records into the FIFO while the PLC feeds Ignition asynchronously. Loading and unloading the FIFO would use Copy Synchronous instructions for data consistency. This approach allows for very high speed operations, especially where the line cannot stop instantly. This can also be deployed where a line cannot stop at all, with a large FIFO to cover significant Ignition/DB outages.
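A toy Python model of such a FIFO with moving subscripts (hypothetical; the real thing is an array of UDTs in the PLC with head/tail index registers, loaded and unloaded with Copy Synchronous):

```python
class RecordFifo(object):
    """Fixed-size ring buffer: records never move; only the subscripts do."""

    def __init__(self, size):
        self.buf = [None] * size
        self.head = 0   # next slot to load
        self.tail = 0   # next slot to unload
        self.count = 0

    def load(self, record):
        """Production side: returns False if the FIFO is full."""
        if self.count == len(self.buf):
            return False
        self.buf[self.head] = record
        self.head = (self.head + 1) % len(self.buf)
        self.count += 1
        return True

    def unload(self):
        """Ignition-feed side: returns None when empty."""
        if self.count == 0:
            return None
        record = self.buf[self.tail]
        self.tail = (self.tail + 1) % len(self.buf)
        self.count -= 1
        return record
```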
Case 5: High speed, high reliability, multi-buffered
This is identical to case 4, but the Ignition interface UDT is composed of a trigger and count plus a nested array of the UDT of the FIFO. Each Ignition OPC read picks up multiple records, and the SQL in the script would be tweaked to insert multiple rows at a time. This allows the production line to place records in the FIFO much faster than Ignition’s round trip. Use this approach when Ignition’s OPC driver is bottlenecked by latency.
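The multi-row SQL tweak might look like this sketch (hypothetical helper and column names; the generated statement mirrors the `transactionSQL` pattern from case 3, with one placeholder row per record and a flattened parameter list suitable for `system.db.runPrepUpdate()`):

```python
def build_batch_insert(table, columns, records):
    """Build one INSERT covering all records read in a single OPC round trip."""
    row = "(CURRENT_TIMESTAMP, %s)" % ", ".join(["?"] * len(columns))
    sql = 'Insert Into "%s" (t_stamp, %s) Values %s' % (
        table, ", ".join(columns), ", ".join([row] * len(records)))
    # Flatten the per-record value lists into one ordered parameter list.
    params = [value for record in records for value in record]
    return sql, params
```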
Of course, with the scripted solutions, any additional processing can be inserted where needed, like the OP's need to split PLC strings into separate values. That would just be another operation between the "all good" check and the actual params list generation.