Transaction groups optimization

Any hot tips for optimizing transaction groups? I’m logging data (a few strings and INTs) from a ControlLogix PLC into SQL Server via a stored procedure at a rate of one record per second. Each record holds a carton’s weight and details.

I have a trigger bit that fires the transaction on its rising edge, and a write handshake that sets the result to 1 on success and 2 on failure. The OPC data mode is set to “Read”.

All of the executions succeed, but sometimes when I set the trigger high, the transaction doesn’t fire. I currently have a 1 s delay between triggers (which is a bit long for the carton rate), with the group update rate on the trigger at 100 ms.

I have multiple checkweighers whose carton barcodes and weights I’m buffering into an array in the PLC, plus an upload routine that pushes the records up to the DB. Perhaps I should have a separate buffer and upload routine for each checkweigher?
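For context, the buffering scheme I’m describing looks roughly like this (modelled in Python rather than ladder logic; the names are made up for illustration): each checkweigher appends finished cartons to a shared FIFO, and a single upload routine drains it toward the stored procedure.

```python
from collections import deque

# Hypothetical model of the PLC-side buffer: producers (checkweighers)
# append records, the upload routine pops the oldest one each cycle.
carton_buffer = deque()

def carton_complete(barcode, weight):
    """Called by a checkweigher when a carton has been weighed."""
    carton_buffer.append((barcode, weight))

def upload_next():
    """Upload routine: return the oldest buffered record, or None if empty."""
    return carton_buffer.popleft() if carton_buffer else None
```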

Reading some of these very useful forum posts, I understand I should shift the transaction groups into a separate project.

My questions are:

  1. Any hot tips for optimizing the performance?
  2. What performance can I expect?
  3. What is a realistic minimum tag group update rate I should use for the trigger?
  4. Is there any documentation on optimizing this and guidelines on realistic update rates?



Don’t use a boolean. Use a small integer that increments to trigger. Let it wrap around.

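The point of the incrementing trigger is that every new record changes the tag’s value, so back-to-back cartons can’t be missed the way a re-used boolean can be. A minimal sketch of the wrap-around (in Python; the limit assumes a 16-bit INT trigger tag):

```python
MAX_TRIGGER = 32767  # largest value of a 16-bit signed INT

def next_trigger(current):
    """Increment the trigger value, wrapping back to 0 before overflow."""
    return 0 if current >= MAX_TRIGGER else current + 1
```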


Thanks Phil, that makes sense: the tag only changes value once per transaction.
That made a significant difference.

The most important part is to have any given tag only be written from one direction. Never set something with Ignition that the PLC resets, or vice versa. There be dragons. If you need a handshake in the PLC, use another tag that Ignition can write the trigger value back to (echo).


Yep, totally agree and doing that already. Thanks.