Tag group history: one-shot mode and duplicate records

I’m building a plant dashboard and need production counts by shift. There are multiple shift patterns, but they all start on the hour or at half-past, so I reckon the simplest way to get a responsive dashboard is to record each machine’s count every 30 minutes into a dedicated historian table.

I’ve created a tag group for this.

The driving expression is

(dateExtract(now(1000), "minute") = 0) || (dateExtract(now(1000), "minute") = 30)

and One Shot is set to true. I’m expecting it to fire once every 30 minutes.

To speed up development I have it firing every 5 minutes (0, 5, 10, 15, etc.) and am seeing duplicate records within the same minute:

# tagid, intvalue, floatvalue, stringvalue, datevalue, dataintegrity, t_stamp, from_unixtime(t_stamp/1000)
'2', '17866883', NULL, NULL, NULL, '192', '1646475001333', '2022-03-05 10:10:01.3330'
'2', '17866895', NULL, NULL, NULL, '192', '1646475056386', '2022-03-05 10:10:56.3860'

'2', '17866945', NULL, NULL, NULL, '192', '1646475301683', '2022-03-05 10:15:01.6830'
'2', '17866957', NULL, NULL, NULL, '192', '1646475357737', '2022-03-05 10:15:57.7370'

'2', '17867120', NULL, NULL, NULL, '192', '1646476201958', '2022-03-05 10:30:01.9580'
'2', '17867132', NULL, NULL, NULL, '192', '1646476259013', '2022-03-05 10:30:59.0130'

'2', '17867181', NULL, NULL, NULL, '192', '1646476501299', '2022-03-05 10:35:01.2990'
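A likely explanation for the pairs of rows: the driving expression is true for the entire target minute, not just at the boundary, so if the one-shot re-arms and re-evaluates within that minute it can fire a second time. A minimal sketch of the expression's behaviour (pure Python, using the 5-minute development setting and the timestamps from the duplicate rows above):

```python
from datetime import datetime

def driving_expression(ts):
    """Mirror of the Ignition driving expression: true whenever the minute
    is on a 5-minute boundary (the development setting)."""
    return ts.minute % 5 == 0

# Both samples from the duplicate rows fall inside the same "true" minute,
# so any re-evaluation of the one-shot inside that window can fire again.
first = datetime(2022, 3, 5, 10, 10, 1)
second = datetime(2022, 3, 5, 10, 10, 56)
print(driving_expression(first), driving_expression(second))  # True True
```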

Questions:

  1. What’s wrong with my tag group configuration?
  2. Is there a better strategy?

This is a job for a scheduled transaction group or a scheduled event. The tag historian is not really made to do synchronized snapshots, IMNSHO. Square peg, round hole.
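If you go the scheduled-event route, the event body boils down to "read the counts, insert one wide row." A hedged sketch, assuming a wide `manf_dashboard_summary` table; the tag paths and column names are hypothetical:

```python
# Hypothetical Gateway Scheduled Event body (cron: 0,30 * * * *).
# Only the SQL-building helper is shown runnable; the Ignition calls
# are sketched in comments.

def build_insert(table, columns):
    """Build a parameterized INSERT for one wide row of machine counts."""
    cols = ", ".join(columns)
    placeholders = ", ".join("?" for _ in columns)
    return ("INSERT INTO %s (t_stamp, %s) VALUES (CURRENT_TIMESTAMP, %s)"
            % (table, cols, placeholders))

# Inside Ignition the event would look roughly like:
# paths  = ["[default]Line1/Count", "[default]Line2/Count"]  # hypothetical
# values = [qv.value for qv in system.tag.readBlocking(paths)]
# system.db.runPrepUpdate(
#     build_insert("manf_dashboard_summary", ["line1_count", "line2_count"]),
#     values)
print(build_insert("manf_dashboard_summary", ["line1_count", "line2_count"]))
```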

Thanks, Phil. That was the direction I needed to go and get a grip on Transaction Manager.

I plan to retrieve the record for the time in question, allowing for a couple of seconds of t_stamp error, as follows:

SELECT t_stamp, {columnName} AS count
FROM manf_dashboard_summary
WHERE t_stamp >= :timestamp
ORDER BY t_stamp
LIMIT 1

The {columnName} query string will be generated by the script, so I’m not concerned about SQL injection.
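Even with a script-generated column name, a whitelist check is cheap insurance against {columnName} ever carrying unexpected text. A sketch under that assumption (the column names are hypothetical; the helper keeps an explicit ORDER BY so LIMIT 1 is deterministic):

```python
# Whitelist of columns the dashboard is allowed to query (hypothetical names).
ALLOWED_COLUMNS = {"line1_count", "line2_count", "line3_count"}

def build_count_query(column_name):
    """Interpolate a column name only after checking it against a whitelist."""
    if column_name not in ALLOWED_COLUMNS:
        raise ValueError("unexpected column: %r" % column_name)
    return ("SELECT t_stamp, %s AS count FROM manf_dashboard_summary "
            "WHERE t_stamp >= :timestamp ORDER BY t_stamp LIMIT 1"
            % column_name)

print(build_count_query("line1_count"))
```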

Does that look like a reasonable approach?

Meh. I would use a wide table. I would use OPC Read mode on your transaction group to ensure data is sampled after the trigger change. (Or system.opc.read*() in your scheduled script.)
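If you take the scheduled-script route, the OPC read calls return qualified values, and it's worth filtering on quality before inserting, so a comms hiccup doesn't write a stale or null count. A sketch with a stand-in for the qualified-value object (the real ones come back from the gateway):

```python
from collections import namedtuple

# Stand-in for Ignition's qualified value (value, quality, timestamp);
# the real OPC read calls return a list of objects shaped like this.
QualifiedValue = namedtuple("QualifiedValue", ["value", "quality", "timestamp"])

def good_values(qvs):
    """Keep only samples whose quality reports good."""
    return [qv.value for qv in qvs if str(qv.quality).startswith("Good")]

samples = [QualifiedValue(17866883, "Good", None),
           QualifiedValue(None, "Bad_NotConnected", None)]
print(good_values(samples))  # [17866883]
```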

Related discussions:

https://forum.inductiveautomation.com/search?q=historian%20tables%20tall%20wide