Historical Data

Hi,

I would like to feed high speed historical data back into Ignition, so we could use Ignition as a base for viewing historical data.
I was thinking of an in-rack computer that would scan at 0.01 sec into a local db and push this data to the SQL Server used by Ignition.
To do this, I would need to know the database structure Ignition uses for storing data.
My question is:
Can you provide this structure, or do you have another way of doing this?

regards

Hi,

The first decision to make is whether or not storing the data in Ignition’s “tag history” format makes sense. You can pretty much store the data any way you want in the database, and get to it one way or another in Ignition. For rapid data like this, it might make more sense to store it in a wider table, with a column per tag, instead of purely vertically, as Ignition does.
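
Just to sketch what I mean by a wider table (the table and column names here are made up for illustration, not anything Ignition expects; the types are SQL Server flavored, so adjust for your database):

    -- one row per sample, one column per tag
    CREATE TABLE highspeed_data (
        t_stamp       DATETIME2(3) NOT NULL PRIMARY KEY,  -- sample time, millisecond precision
        motor_speed   FLOAT,
        line_pressure FLOAT,
        valve_pos     FLOAT
    );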

That said, if you want to go forward with storing it in the tag history tables, you can find some information about the table structure in the Programmer’s Guide (Appendix C), included with the Module SDK (available on the downloads page).

In short, I think you’ll need to do the following:

  1. Create a row in the sqlth_drv table for your “driver”.
  2. Create an entry in the sqlth_scinfo table with a name of “exempt”, and the id of the driver you just created as the drvid (by creating this “scan class” and using it for your tags, you avoid having to deal with the sqlth_sce table).
  3. Create your tags in the sqlth_te table, using the id of the scan class you just created as your scid.
  4. Insert the history into the correct data table, using the ids of the tags you just created.

To make things a bit easier, you may want to turn off data partitioning in the history provider settings.
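
Very roughly, the inserts for those steps would look something like the following. The table names come from Appendix C, but the exact column names and the magic values (datatype, querymode, dataintegrity) are what I see on my own install, so verify them against the Programmer’s Guide before relying on them. The data table name also depends on your partition settings; sqlt_data_1_2012_01 is just an example.

    -- 1. the "driver" row (name/provider strings are just examples)
    INSERT INTO sqlth_drv (name, provider) VALUES ('rackdriver', 'rackprovider');

    -- 2. the "exempt" scan class, pointing at the driver id from step 1
    INSERT INTO sqlth_scinfo (scname, drvid) VALUES ('exempt', 1);

    -- 3. a tag entry, pointing at the scan class id from step 2
    --    (datatype 1 = float and querymode 0 on my system; created/retired are unix ms)
    INSERT INTO sqlth_te (tagpath, scid, datatype, querymode, created, retired)
    VALUES ('rack1/speed', 1, 1, 0, 1325376000000, NULL);

    -- 4. the actual samples, pointing at the tag id from step 3
    --    (t_stamp is unix ms; dataintegrity 192 = good quality in my tables)
    INSERT INTO sqlt_data_1_2012_01 (tagid, floatvalue, dataintegrity, t_stamp)
    VALUES (1, 123.45, 192, 1325376000010);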

Hope this helps,

Can I use the trend component to show this high speed data in any format I choose in SQL Server?
For example, a wide table with one column per tag.

thanks

Yes, you can use the DB pens function on the Easy Chart if your data is stored in a certain format. Additionally, you can use a Classic Chart.

We have brought in some high density points using the Easy Chart, but we also limited the date ranges so that users could only pull a certain amount of data at a time and don’t overload the system with some ginormous request of data.

Hi,

I started looking at this, and I would need more information about the date/time represented as a bigint.

thanks

Yeah, I don’t believe you will be able to use a bigint as a timestamp on the Easy Chart. I guess on the Classic Chart it would work, but you would need to convert it to a timestamp in your query.

The bigint is Unix time format, so the number of milliseconds since Jan 1, 1970 (UTC). You actually can use it in the Easy Chart; you’ll just need to make the “time column” a function that converts it to a date. For example, with MySQL, “from_unixtime(t_stamp/1000)”. With SQL Server, you have to use dateadd to add the milliseconds to 1/1/1970.
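
Spelled out, with t_stamp being the bigint column (the SQL Server version is from memory, so double-check it on your install):

    -- MySQL: from_unixtime() expects seconds, hence the /1000
    -- (result comes back in the session time zone)
    from_unixtime(t_stamp / 1000)

    -- SQL Server: split into seconds + milliseconds so dateadd's
    -- int argument doesn't overflow on a raw millisecond value
    dateadd(ms, t_stamp % 1000,
        dateadd(ss, cast(t_stamp / 1000 as int), '1970-01-01'))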

Anyhow, all that said, the Classic Chart would be worth learning about. DB pens are less efficient in that the chart has to query each pen separately. While this is actually a bit easier with the historian tables, if you can get the data into a wider format, either through logging or a query, the Classic Chart lets you get everything at once (the Classic Chart has datasets… the first column is the timestamp, the other columns are the pens. It starts with one dataset, but you can add others).
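
So, using a made-up wide table like the one sketched earlier, the Classic Chart’s dataset could come from a single query:

    -- first column = timestamp, remaining columns become the pens
    SELECT t_stamp, motor_speed, line_pressure, valve_pos
    FROM highspeed_data
    WHERE t_stamp BETWEEN '2012-01-01 00:00:00' AND '2012-01-01 00:10:00'
    ORDER BY t_stamp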

Hope this helps,