My problem is this: if I want to join two SQL tables created by two separate transaction groups, the only method I can see is to join on t_stamp. If both groups are set to trigger every 60 seconds, the seconds component of t_stamp can land anywhere between 0 and 59, since each table records from the second it is saved. So I wrote a script that runs every 5 seconds (quicker for testing) and used this one common trigger to fire both transaction groups, but after running it for 10 minutes and then joining the two tables in SQL, I get gaps where the second group was recorded milliseconds after the first and its t_stamp rolled over to the next whole second. The actual seconds also occasionally drifted from x2, x7, x2, x7 to x3, x8 (the script was set to “fixed rate, 5 seconds”). I realise one transaction group would solve it, but I do not want data being recorded many times in multiple tables. To sum up: I want to be able to record/stream data to SQL and join any number of tables into one (for reports etc.). Is there a correct procedure for doing this?
Well, this does strike me as somewhat of an odd question, but I’m not exactly sure why. I feel like there’s something simple that I could be missing, so maybe someone else will offer their opinion, but one way that came to mind:
In each group, add an expression item, data type Int8, set to write back to a field (named whatever you want). Set the expression to:

toMillis(now()) / 60000
That is current time in UTC converted to Minutes. Since each of your groups run every minute, both tables will have corresponding values for the minute the record was stored in. Then you could join on this.
You could do this through querying as well, without adding additional items, but that would likely be very slow, as you would have to perform operations on the timestamp that would prevent the use of indexes. By doing it like this, you can add an index to this column in each table, and the joins will be very fast.
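To illustrate the minute-bucket idea outside Ignition, here is a small sketch in plain Python (the `epoch_minutes` helper is made up for illustration, not an Ignition function):

```python
import datetime

def epoch_minutes(dt):
    """Whole minutes since the Unix epoch: any two rows stored within
    the same minute get the same join key, regardless of seconds."""
    epoch = datetime.datetime(1970, 1, 1)
    return int((dt - epoch).total_seconds() // 60)

# Two groups that stored 20 seconds apart still share one key:
a = epoch_minutes(datetime.datetime(2024, 1, 1, 12, 30, 5))
b = epoch_minutes(datetime.datetime(2024, 1, 1, 12, 30, 25))
print(a == b)  # True
```

Because the stored value is a plain integer, an ordinary index on that column makes the join fast.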
Like I said, probably one of many ways, just the first that came to mind.
Perhaps this will clarify - say I want to make a bar graph from two separate tables; then I need to join them prior to “making the graph”. If the transaction groups run every 60 seconds, the seconds could be stripped off in the SQL query. If the transaction groups run every 5 seconds it becomes more difficult, especially when one records at 1, 6, 11, 16 etc. and the second records at 3, 8, 13, 18. The goal is to be able to join any number of tables that share the same trigger time range with a simple JOIN, allowing any data to be used for tables/graphs etc. without having to set up extra transaction groups or “complex” queries. The “common trigger” method explained above proved unreliable: even though both groups are triggered together, the second transaction group seems to be delayed by some milliseconds after the first, and therefore its t_stamp sometimes “ticked over” to the next second.
A method mentioned by an Integrator was for the trigger to grab the time and assign it to t_stamp for every group, ensuring that every record created by the 5 second trigger always had just one EXACT t_stamp. I will look into your method, but would appreciate feedback on the method I described above so I get it right when my first project gets under way. Thanks
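One way to guarantee that shared value, sketched in plain Python (the `common_timestamp` helper is illustrative, not part of Ignition), is to round the trigger time down to the start of the current interval, so every group fired by the same trigger stamps the identical second:

```python
import time

def common_timestamp(interval_seconds, now=None):
    """Round epoch time down to the start of the current trigger interval,
    so every group fired by one trigger records the identical t_stamp."""
    if now is None:
        now = time.time()
    whole = int(now)
    return whole - (whole % interval_seconds)

# Groups that actually execute 0.2 s and 0.9 s after the trigger
# still stamp the same second:
print(common_timestamp(5, 1000000003.2))  # 1000000000
print(common_timestamp(5, 1000000003.9))  # 1000000000
```

The rounding makes the stamp immune to the millisecond-level delays between groups that caused the “ticked over” problem described above.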
Tried the method mentioned by Colby and yes, it works OK, but only for a 60 second transaction group, right?
I would like a method that can be used for every transaction group at any seconds setting, e.g. 1, 30, 60, that enables any table to be joined in the future. Can you advise on this?
You say this is an odd request, is it not normal to try to join different tables from different transaction groups? I see this as flexible for any future reports etc.
I think this is sort of what you were getting at… What if you made a SQLTag with the current date in it (now()), add that to each of your groups, and disable the checkbox for store timestamp in the groups? You could then have some other boolean SQLTag that flips high every X seconds and have the groups trigger once each time that tag goes high?
I don’t know if this guarantees that they will always have the same timestamp, though. You may have to mess with the scan classes of the date and trigger tag. I do see what Colby is saying, though. It doesn’t seem like an odd request, but I can’t picture any time I’ve needed to do it yet either.
I think your method pretty much ties up with the method from the Integrator. He had a script (timer, 5 seconds) that set a bit high, then sleep() for a couple of seconds whilst the groups write to SQL, then set the bit low. As mentioned above, this provided a single t_stamp that was given to each group. This method looks like a solution to me, but I wanted to run it past the experts at Induction. As Colby mentioned earlier, some ways will prove slower than others.
My thoughts (yet again, yawn, lol): if this method is not used, then periodic logging tables have no way of being joined, and therefore cannot be “joined” on a bar graph should the need arise. With this method any data can be joined with no hassle and no need to foresee what reports will be asked for, as long as each group uses this method. Perhaps I am worrying about nothing?
You could also eliminate the need for a script by making your trigger tag have an expression of
if( dateExtract(now(), "second") % 5 = 0, 1, 0)
for each interval you’d like to use. You’d have to make sure your groups run at least every second so they don’t miss it.
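The expression above fires whenever the wall-clock second is a multiple of 5. In plain Python the same test looks like this (the `trigger_high` name is made up for illustration):

```python
import datetime

def trigger_high(dt, interval=5):
    """Python mirror of if(dateExtract(now(),"second") % 5 = 0, 1, 0):
    returns 1 on wall-clock seconds 0, 5, 10, ... and 0 otherwise."""
    return 1 if dt.second % interval == 0 else 0

print(trigger_high(datetime.datetime(2024, 1, 1, 0, 0, 10)))  # 1
print(trigger_high(datetime.datetime(2024, 1, 1, 0, 0, 12)))  # 0
```

Note the tag stays high for the whole second, which is why the groups need to evaluate at least once per second to catch it exactly once.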
I get what you’re saying about being able to ensure joining any historical table. It’s not a bad idea. It just hasn’t come up for me; usually I have wide tables where everything logically grouped is together in the table, or the various tables just need to be trended together which is based on a date range so it doesn’t matter, or I’m performing aggregations up to a common granularity (day, week, month) where the different timestamps are irrelevant.
Wide table I understand, but are you saying, in your third method, that your SQL query takes different tables, averages/totals (or whatever is required) their data, and then puts those results together for bar charts etc.? Have I made a stupid error when trying to put a couple of 24-hour totals together out of two tables with “out of sync” t_stamps? I tried joining the tables before carrying out the averages or totals.
To sum up, and not to waste your time - should I adjust my SQL methods to solve what is a noob mistake?
You can do something like this:
select a.day, a.value1_avg, a.value2_sum, b.value3_avg, b.value4_sum
from ( select to_char(t_stamp,'YYYYMMDD') day,
              avg(value1) value1_avg, sum(value2) value2_sum
       from table1
       where t_stamp between start_date and end_date
       group by to_char(t_stamp,'YYYYMMDD') ) a,
     ( select to_char(t_stamp,'YYYYMMDD') day,
              avg(value3) value3_avg, sum(value4) value4_sum
       from table2
       where t_stamp between start_date and end_date
       group by to_char(t_stamp,'YYYYMMDD') ) b
where a.day = b.day
The to_char part is Oracle syntax.
For SQL Server http://msdn.microsoft.com/en-us/library/ms174420.aspx
For MySQL use year(), month(), day() etc http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html
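To see the day-bucket join behaving as described, here is a self-contained sketch using Python's sqlite3 (the table and column names follow the query above; SQLite's strftime stands in for Oracle's to_char, and the sample rows are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("create table table1 (t_stamp text, value1 real, value2 real)")
cur.execute("create table table2 (t_stamp text, value3 real, value4 real)")
# Timestamps deliberately out of sync by a couple of seconds:
cur.executemany("insert into table1 values (?,?,?)",
    [("2024-01-01 10:00:01", 10.0, 1.0), ("2024-01-01 11:00:01", 20.0, 2.0)])
cur.executemany("insert into table2 values (?,?,?)",
    [("2024-01-01 10:00:03", 5.0, 7.0), ("2024-01-01 11:00:03", 15.0, 9.0)])

# strftime('%Y%m%d', ...) is SQLite's equivalent of Oracle's to_char
rows = cur.execute("""
    select a.day, a.value1_avg, a.value2_sum, b.value3_avg, b.value4_sum
    from (select strftime('%Y%m%d', t_stamp) day,
                 avg(value1) value1_avg, sum(value2) value2_sum
          from table1 group by day) a
    join (select strftime('%Y%m%d', t_stamp) day,
                 avg(value3) value3_avg, sum(value4) value4_sum
          from table2 group by day) b
      on a.day = b.day
""").fetchall()
print(rows)  # [('20240101', 15.0, 3.0, 10.0, 16.0)]
```

Each table is aggregated to day granularity first, so the mismatched seconds never enter the join at all.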
I hope this helps.
Oops, didn’t actually answer your question. The reason I lean towards the SQL solution is that I don’t know how absolutely confident you can ever be that everything gets the same timestamp value. You’d have to make sure the group executions, as well as the trigger and date SQLTags’ scan classes, don’t occasionally overlap and cause two groups to get different timestamps. If they were somehow ever off by a second, then nothing would end up in your graph. Maybe that would never happen, or maybe it would, but I’d rather just do the SQL than try to foolproof the logging mechanism if they are going to be in separate tables.
I intended to post something along the lines of what Dan posted, suggesting doing it in SQL. Conveniently, he did it for me.
Anyhow, going that route will offer more flexibility and be more predictable. I’m pretty sure I’ve done stuff like this myself, without thinking much about it.
I once did a project where we had many tables and wanted all of their timestamps to line up exactly, though we didn’t need to support multiple rates. In that case I accomplished it by having each group update the first row of its table (that is, the table only had one row). Then I wrote a stored procedure that would “snapshot” the data from each of those tables into a corresponding history table, but using a common timestamp instead of the TS stored in the “status” table. The stored procedure was then called every minute.
In that case, we were building a click-to-graph/click-to-table system, so there was another table that mapped between point name and table/column where the history was stored. Again, I suggest doing it in SQL, but I just wanted to mention one setup that came to mind.
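A minimal sketch of that snapshot pattern, using Python's sqlite3 in place of a real stored procedure (the status_a/history_a table names and sample values are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Each "status" table holds exactly one row that its group keeps overwriting
cur.execute("create table status_a (value real)")
cur.execute("create table status_b (value real)")
cur.execute("create table history_a (t_stamp integer, value real)")
cur.execute("create table history_b (t_stamp integer, value real)")
cur.execute("insert into status_a values (1.5)")
cur.execute("insert into status_b values (2.5)")

def snapshot(cur, now):
    """Copy the single status row of each table into its history table,
    stamping both with one common timestamp (the stored-procedure idea)."""
    cur.execute("insert into history_a select ?, value from status_a", (now,))
    cur.execute("insert into history_b select ?, value from status_b", (now,))

snapshot(cur, 1700000000)
rows = cur.execute("""select a.t_stamp, a.value, b.value
                      from history_a a join history_b b
                        on a.t_stamp = b.t_stamp""").fetchall()
print(rows)  # [(1700000000, 1.5, 2.5)]
```

Because the snapshot procedure hands out the timestamp itself, every history table is guaranteed to share it, and the join is trivial.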