I am working on a project that has many tags being recorded using the Tag Historian.
The end result I want is a button that gets clicked twice - once to start and once to stop - and I record the timestamp of each click. The script should then grab all the historical tag data from between the two clicks so I can export it to CSV and do other things with it.
Now, given that I already have historical tables in a MySQL database, my plan was to query the correct sqlt_data table, using the two button-click timestamps in my query to get the data.
My two problems:

- The t_stamp column is just a string of numbers, in a format I am not sure of. What is that format, so I can record the time of my button clicks the same way?
- In my gateway tag change script, how can I make sure that I am querying the correct sqlt_data_etc table, given that a new one is made each month? Also, how do I handle clicking the button to start in one month and to stop in another?
My coworker had another idea that would solve problem 2 - create a new table whose only job is to store the tag data between the button clicks, and after the CSV export and everything else, delete all the rows. Is that also possible, to send the tag historical data to two tables, sqlt_data_1_2019_04 AND another table (assuming the button was clicked to start)?
Tag historian stores timestamps the same way Java does - as a count of milliseconds since the "epoch", Jan 1, 1970. MySQL has a built-in `FROM_UNIXTIME` function - just do `FROM_UNIXTIME(t_stamp / 1000)` to retrieve the timestamp in a human-readable way.
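For example, a minimal sketch of tying your button clicks to that same format - the variable names and the `history_db` connection name are just placeholders, and the table name is only an example:

```python
# Record each click in the same millisecond-epoch format the historian uses.
startMillis = system.date.toMillis(system.date.now())   # first click
# ... later ...
stopMillis = system.date.toMillis(system.date.now())    # second click

# A query against one partition table can then compare t_stamp to those
# recorded values directly:
query = """
    SELECT tagid, floatvalue, FROM_UNIXTIME(t_stamp / 1000) AS ts
    FROM sqlt_data_1_2019_04
    WHERE t_stamp BETWEEN ? AND ?
    ORDER BY t_stamp
"""
rows = system.db.runPrepQuery(query, [startMillis, stopMillis], "history_db")
```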
However, as for actually querying the data out - it's a decidedly non-trivial task. Tag historian's DB structure is complicated, to say the least. Briefly:
- `sqlth_drv` stores 'driver' records - at least two per Gateway that's storing history into this database; one with a null provider and one with the actual name of the realtime tag provider (typically, `default`).
- `sqlth_partitions` defines which table(s) data is actually stored in for what provider, over a given timeslice (`start_time` and `end_time`).
- `sqlth_te` defines tags by their full path, and is where the `tagid` column in the actual data tables comes from. `scid` in this table refers to an entry in `sqlth_scinfo`, which itself ties back to `sqlth_drv` to uniquely identify a given tag, in a given realtime tag provider, on a given gateway - even if the path to certain tags is identical.
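To illustrate why this gets awkward, here's roughly what a script has to do just to find and read the right table(s) for a time range. This is only a sketch - the `history_db` connection name and the `startMillis`/`stopMillis` variables are placeholders, and it ignores driver/provider filtering entirely:

```python
# Find every partition table whose timeslice overlaps the requested range
# (start/stop are epoch milliseconds, as recorded from the button clicks).
partitions = system.db.runPrepQuery("""
    SELECT pname
    FROM sqlth_partitions
    WHERE start_time <= ? AND end_time >= ?
""", [stopMillis, startMillis], "history_db")

allRows = []
for part in partitions:
    # The table name can't be a bind parameter, so it has to be spliced into
    # the SQL text per partition - the 'dynamic table name' problem.
    data = system.db.runPrepQuery(
        "SELECT tagid, floatvalue, t_stamp FROM %s WHERE t_stamp BETWEEN ? AND ?" % part["pname"],
        [startMillis, stopMillis],
        "history_db")
    allRows.extend(data)
```

Even then, you'd still need joins against `sqlth_te` (and `sqlth_scinfo`/`sqlth_drv`) to turn `tagid` back into tag paths.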
If you want to easily query data out of the DB, you would be much better off using transaction groups to store the records you want in a dedicated table. The historian is, as a consequence of its flexibility, very complex. It's actually impossible, at least in every SQL flavor I'm familiar with, to write a single SQL query that accurately retrieves data from the historian - because you can't have dynamic table names in a SQL query.
Ok, so transaction groups seem to be the way to go. Based on what you said, it seems the best way to do this is to create a transaction group that turns on when the button is clicked, so I'm also storing all this data to my own custom table; when the button is clicked off, turn off the transaction group, and I can then grab everything from my table for the CSV and clear it.
My last question, then, would be: how would I turn transaction groups on/off programmatically in Ignition?
Don't think of it as turning off the entire group - leave the group always running (at whatever poll rate you need to log data into the database), and add a trigger tag. So, your action in the client will switch a tag's value (triggering the transaction group when it next runs) and then the group will log data until the trigger goes back to false.
There is a complication inherent here if you plan on having multiple clients that can perform this action at the same time - if you haven't already made considerations around that.
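As a rough sketch of what the client-side button actions could look like - the trigger tag path here is made up, and on 7.9 `system.tag.write` would take the place of `system.tag.writeBlocking`:

```python
# Boolean trigger tag watched by the always-running transaction group.
triggerPath = "[default]Export/CaptureActive"

# "Start" button: raise the trigger so the group begins logging on its next run.
system.tag.writeBlocking([triggerPath], [True])

# "Stop" button: drop the trigger; the group stops logging rows after this.
system.tag.writeBlocking([triggerPath], [False])
```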
Luckily it's only a single client user, so that should not be an issue. Thanks!
I just thought of one potential issue and I want to know if you can clarify that maybe it's not an issue.
The tag historian runs a default scan class at every 2 seconds. Now, say my transaction group gets triggered to turn on and I have it run every 2 seconds as well while it's on. Given there's no option on the historical transaction group to choose a scan class, and instead I have to set the timer to 2 seconds to "match" the polling rate of my historian scan class, they're not going to be on the same exact schedule, correct? So the timestamps in my sqlt_data_2_2019_04 could end with something like 1000, 3000, 5000, etc., and depending on when the button that triggers my transaction group was clicked, I could get something like 1235, 3235, 5235, etc., and nothing will match up.
Is this what could possibly happen? It may be fine - I have to ask my coworker - but it could confuse our customer: why do none of the t_stamps in my tag historian tables match up with what is supposed to be a subset of them?
You are assuming, even in your worst case, a level of clock synchronization among Ignition subsystems that simply isn't there. Tags in a scan class are read from devices at "best effort" timing, within the requested period. A two-second scan class could start a cycle at 12:02:02.189 and deliver actual values scattered pseudo-randomly over the next few hundred milliseconds. Or longer, if one or more devices have overloaded comms.
Meanwhile, your transaction group on a two-second interval could be executed at 12:02:02.305 and (if trigger is true) grab all of the tags at that point, in the middle of updates from the scan class. In other words, you have no guarantee that the values recorded in the transaction group are from the same "scan" - some tags are likely to be two seconds old, and in overload, potentially much older.
The closest you can get to coordinated read and record of multiple items is to use "OPC Read Mode" in your transaction group, with OPC Items instead of tags. Or use scripting with system.opc.readValues() followed by a system.db.* insert operation.
> In other words, you have no guarantee that the values recorded in the transaction group are from the same "scan" - some tags are likely to be two seconds old, and in overload, potentially much older.
Exactly what I thought might be an issue. If I have a list of timestamps (12:02:02.189, 12:02:04.197, 12:02:06.421, 12:02:08.624) and my button is supposed to produce a subset of that, but instead I get values (12:02:03.823, 12:02:05.131), it might raise the question of why these don't line up. However, I just confirmed this doesn't matter to the customer, thankfully.
Another issue I was concerned about was the extra load on the queries and database. I have 15 tags that use the tag historian, so 15 queries every 2 seconds, and now I am going to turn that into 30 queries every 2 seconds when my transaction group is triggered. I personally have no knowledge of the server setup, and I don't think that's too much of a load increase, but with more tags and a weak server setup it seems like it could start to slow things down pretty quickly.
Where would you end up putting this system.opc.readValues() script? I'm going to end up going with just a transaction group, given that I know it does not matter to our customer, but I would like to know more about Ignition as a whole.
Is there a reason why you are wanting to query the historian tables directly and not just use system.tag.queryTagHistory()?
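For reference, a rough sketch of what that scripted route could look like - the tag paths, the captured click dates, and the output location are all placeholders:

```python
# Pull history for the window between the two recorded clicks and write a CSV,
# with no direct queries against the partition tables.
paths = ["[default]Line1/Temperature", "[default]Line1/Pressure"]

ds = system.tag.queryTagHistory(
    paths=paths,
    startDate=startClick,    # Date captured on the first click
    endDate=stopClick,       # Date captured on the second click
    returnFormat="Wide"
)

csv = system.dataset.toCSV(ds)
system.file.writeFile("C:/exports/capture.csv", csv)
```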
I usually put them in gateway tag change event scripts, monitoring a trigger. For your application, it'd be a gateway timer event script.
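A rough sketch of what that timer script might look like - the OPC server name, item paths, trigger tag, table, and column names are all assumptions, and on 7.9 the tag read would be `system.tag.read(...).value`:

```python
# Gateway timer event script - only record while the trigger tag is high.
if system.tag.readBlocking(["[default]Export/CaptureActive"])[0].value:
    server = "Ignition OPC UA Server"
    itemPaths = [
        "ns=1;s=[Device]Line1/Temperature",
        "ns=1;s=[Device]Line1/Pressure",
    ]

    # One coordinated read of all items, instead of consuming tag values that
    # were updated at slightly different times by the scan class.
    qvs = system.opc.readValues(server, itemPaths)

    now = system.date.now()
    for path, qv in zip(itemPaths, qvs):
        system.db.runPrepUpdate(
            "INSERT INTO capture_data (item_path, value, t_stamp) VALUES (?, ?, ?)",
            [path, qv.value, now],
            "history_db"
        )
```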