'How to' transaction group

Hi all,

I want to map a Db table providing equipment active alarms into ignition tags.

The DB table contains around 5 (equipment) x 80 (alarms) = 400 rows.
The table always has all 400 rows; the Active field toggles between 0 and 1 to indicate an active alarm.

The idea is to have 5 instances of an ‘equipmentAlarm’ datatype. The datatype would contain 80 fields, one per alarm, and I want to drive the alarm condition from the ‘Active’ column in the DB. Other columns will be used to customize the alarm (DisplayPath, ActivePipeline, ActivationDelay, etc.)

EquipmentId AlarmId Active [… other columns]

Equipment_A Alarm_1 0
Equipment_A Alarm_2 0
Equipment_A Alarm_3 0
Equipment_A Alarm_4 0
Equipment_A Alarm_5 0
Equipment_A Alarm_6 1
Equipment_A Alarm_7 0
Equipment_A Alarm_8 0
Equipment_A Alarm_9 0
Equipment_A Alarm_10 0
Equipment_B Alarm_1 1
Equipment_B Alarm_2 1
Equipment_B Alarm_3 0
Equipment_B Alarm_4 0
Equipment_B Alarm_5 0
Equipment_B Alarm_6 0
Equipment_B Alarm_7 0
Equipment_B Alarm_8 0
Equipment_B Alarm_9 0
Equipment_B Alarm_10 0

Is there an efficient way to accomplish this with transaction groups? I don’t want to end up creating 400 transaction groups. The goal is to minimize the queries to the DB (max 1 per equipment).

Cheers!

Interesting. I would create a dataset tag with a query that grabs the entire table on every scan, then use a tag change event to fan the data out to your UDTs, which would be memory tags. Use system.tag.writeAll() to maximize the speed.
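A minimal sketch of that fan-out, assuming a hypothetical tag layout of `[default]<EquipmentId>/<AlarmId>/Active` and dataset rows shaped like the table above. The helper only builds the parallel path/value lists; the actual bulk write call (`system.tag.writeAll` on 7.x, `system.tag.writeBlocking` on 8.x) is shown in a comment because it only exists inside Ignition:

```python
# Sketch of the fan-out logic from the dataset tag to the UDT memory tags.
# Tag paths like [default]Equipment_A/Alarm_6/Active are an assumption about
# how the UDT instances are named, not taken from the original post.

def build_writes(rows, base_path="[default]"):
    """Turn dataset rows into parallel path/value lists for one bulk write."""
    paths, values = [], []
    for equipment_id, alarm_id, active in rows:
        paths.append("%s%s/%s/Active" % (base_path, equipment_id, alarm_id))
        values.append(bool(active))
    return paths, values

rows = [
    ("Equipment_A", "Alarm_6", 1),
    ("Equipment_B", "Alarm_1", 1),
    ("Equipment_B", "Alarm_3", 0),
]
paths, values = build_writes(rows)
# In a gateway tag change script you would then do a single bulk write:
#   system.tag.writeAll(paths, values)        # Ignition 7.x
#   system.tag.writeBlocking(paths, values)   # Ignition 8.x
```

One bulk write per scan keeps the tag subsystem overhead low compared with 400 individual writes.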

Yeah, I was testing a similar approach using a project script.

In fact, I ended up using just a dataset tag to query the alarms and a UDT with lookups against that dataset to bind the alarm properties.
I used a project script to add/remove tags (from that alarm UDT) to keep the tags in sync as rows are added/removed from the DB.
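As a rough illustration of that lookup binding, assuming one dataset tag per equipment (e.g. a hypothetical `[default]Equipment_A/Alarms`) so a single-column lookup on AlarmId is enough, an expression binding on a member's Active property could look like:

```
// Hypothetical binding for Alarm_6 inside the Equipment_A instance;
// lookup(dataset, lookupValue, noMatchValue, lookupColumn, resultColumn)
lookup({[default]Equipment_A/Alarms}, "Alarm_6", 0, "AlarmId", "Active")
```

In a real UDT you would parameterize the dataset path and alarm name with UDT parameters rather than hard-coding them per member.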
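A sketch of the sync step, under the assumption that you can fetch the current AlarmId rows from the DB and list the existing member tags (e.g. via `system.tag.browse` in Ignition). The diff itself is plain set arithmetic; the create/delete calls are left as comments since they only run inside Ignition:

```python
# Sketch of keeping UDT member tags in sync with the DB rows.

def diff_tags(db_alarm_ids, existing_tag_names):
    """Compare DB rows against existing tags; return (to_add, to_remove)."""
    db_set = set(db_alarm_ids)
    tag_set = set(existing_tag_names)
    return sorted(db_set - tag_set), sorted(tag_set - db_set)

to_add, to_remove = diff_tags(
    ["Alarm_1", "Alarm_2", "Alarm_3"],   # AlarmIds currently in the DB
    ["Alarm_1", "Alarm_3", "Alarm_99"],  # tags currently under the instance
)
# In Ignition the project script would then create the missing members with
# system.tag.configure(...) and drop the stale ones with
# system.tag.deleteTags(...).
```

Running this on a schedule (or from a tag change event on the dataset) keeps the tag tree matching the table without ever rebuilding it wholesale.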

thanks!