Transaction Group Querying SQLtags History

I’m looking for a little advice.

I’m setting up a history transaction group in which I would like to include some averages (for the last 5 minutes from the time the transaction group is running) queried from SQLtags history. However, I’ve noted that the history data will be partitioned into multiple tables.

Robert has an excellent post on parsing the partitions table:
viewtopic.php?f=81&t=6060&p=16522&hilit=query+sqlt_data#p16522
but I’m not sure how best to integrate logic like this into a transaction group that uses the expression language.

Things get a little complicated because the transaction group is also calculating the tag name that we will be looking up.

I thought I’d check in here before going too far into it…

Thanks for reading…

Unfortunately, I think it’s going to take a bit of work on our part to make things like this possible. There are a number of issues working against you, such as the fact that the “average” aggregation mode doesn’t quite work like you would expect from an aggregate. And then, as you noticed, there isn’t really a natural way to work SQLTags history into transaction groups.

I would say the best thing to do would be to write a script module that queried the tags you want (using system.tag.queryTagHistory, with a fixed return size of 10 results or so over your 5-minute range), averaged them together, and then wrote the result to the database. The script could be run on a timer, or called from a transaction group expression item (which itself could be triggered) using the “runScript” expression.
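
For reference, a rough sketch of what that script might look like is below. The tag path and the results table/column names here are just placeholders, not anything from your project, and you’d want to adjust the SQL for your schema:

from java.util import Calendar

# Build the 5-minute window ending now.
end = Calendar.getInstance()
start = end.clone()
start.add(Calendar.MINUTE, -5)

# Pull roughly 10 evenly spaced samples for the tag over that window.
data = system.tag.queryTagHistory(
    paths=["Path/To/rf_In"],
    startDate=start.getTime(),
    endDate=end.getTime(),
    returnSize=10)

# Average the samples ourselves (column 0 is t_stamp, column 1 is the tag).
total = 0.0
count = 0
for row in range(data.getRowCount()):
    val = data.getValueAt(row, 1)
    if val is not None:
        total += val
        count += 1

# Write the average out (placeholder table/column names, MySQL syntax).
if count > 0:
    system.db.runPrepUpdate(
        "INSERT INTO report_table2 (rf_in_avg, t_stamp) VALUES (?, NOW())",
        [total / count])

You could drop that into a gateway timer script, or wrap it in a project script function and call it from a triggered expression item with runScript.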

We definitely intend to improve this soon. I think the next step is to make sure aggregates work as expected (and add additional aggregates), and then perhaps add dataset support to items in groups. That way, you could have one item that queries history with an expression function, and other items that map elements of the resulting dataset to columns.

Regards,

Hi Colby,

Thanks for your reply. I had another thought that I’d like to run by you.

This occurred to me last night. It involves recording the data in multiple places, but simplifies things. What if I set up two transaction groups as follows?

[ul]
[li]Group 1 logs data to table 1 every 10 seconds.[/li]
[li]Group 2 runs every 5 minutes, queries the average of the values for the last 5 minutes in table 1, and logs it to table 2.[/li]
[/ul]

I don’t need long-term storage on these so I can set them up to only keep 1 year of data or less.

Does this sound like a workable plan?

For MySQL, does this look like correct query syntax for group 2?
SELECT AVG(rf_In) FROM report_table1
WHERE t_stamp > DATE_SUB(NOW(), INTERVAL 5 MINUTE)

Hi,

Yes, I think something like that is a good idea. I would probably just do the whole thing in SQL, unless you have something else you want to do with the averaged values (alerting, etc.).

That is:

INSERT INTO report_table1_avg (rf_in, col2)
SELECT AVG(rf_in), AVG(col2) FROM report_table1
WHERE t_stamp > DATE_SUB(NOW(), INTERVAL 5 MINUTE)

You could put this in a run-always expression item, in a group set to update the first row of a dummy table and run every 5 minutes, or you could run it with the system.db.runUpdateQuery function in a gateway timer script. And, if you’re not aware, MySQL has a handy “CREATE TABLE x LIKE y” statement to copy a table’s schema, so you can create your average table from the “raw” table.
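
For example, the gateway timer script version might look something like this sketch, assuming you’ve already created the average table (e.g. with CREATE TABLE report_table1_avg LIKE report_table1) and that it keeps the t_stamp column:

# Gateway timer script, set to fire every 5 minutes: roll the last 5 minutes
# of raw rows up into one averaged row. Column names follow the example above;
# the NOW() timestamp on the averaged row is an assumption about your schema.
system.db.runUpdateQuery(
    "INSERT INTO report_table1_avg (rf_in, col2, t_stamp) " +
    "SELECT AVG(rf_in), AVG(col2), NOW() FROM report_table1 " +
    "WHERE t_stamp > DATE_SUB(NOW(), INTERVAL 5 MINUTE)")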

Hope this helps,