Best practices for SQL bridge datalogging

Ok, so we log our time-series process data to an existing SQL database managed by our IT department. We have transaction groups logging to individual tables for each process, 25-75 tags per process, currently 10 tables all logging at a 1-second interval. We initially set it up this way because it was easy to track tags for each piece of equipment separately, but when I was speaking with our IT department they said it would be easier for them to use the data on their side if we aggregated all the time-series logs into a single table.

I know we could build a cross-reference to the existing tables the way they are set up currently, but it made me wonder whether there is a benefit to either scenario beyond preference or ease of writing queries: one large table vs. many small tables? (Also, I'm a newbie to SQL.)

You might want to consider using the Tag Historian module for this quantity/speed of data collection. The historian will “compress” the data and take care of database table management, and it integrates very well with the presentation tools available in the Vision module.

If you need access to specific 1-second readings (i.e. the actual reading at 12:31:06 AM), then I would suggest using a single table, as your IT department recommends. While this is controversial to many, a simplified table design includes date/time, reading name/index, and reading value columns. When new signals are added, all you need to do is determine the new reading name/index.

With this design, you will need to really pay attention to database performance and learn some “advanced” methods for getting data out of the database.
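The tall-table design described above can be sketched in a few lines. This is a minimal illustration using an in-memory SQLite database; the table and column names (`process_log`, `t`, `reading`, `value`) are hypothetical, not from any actual Ignition or IT-managed schema:

```python
import sqlite3

# In-memory SQLite database for illustration only; a real deployment
# would log to the plant's actual SQL server.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE process_log (
        t       TEXT NOT NULL,   -- timestamp of the reading
        reading TEXT NOT NULL,   -- reading name or index
        value   REAL,            -- the recorded value
        PRIMARY KEY (t, reading)
    )
""")

# One row per tag per sample interval.
rows = [
    ("2024-01-01 00:31:05", "Line1/Temp", 71.2),
    ("2024-01-01 00:31:06", "Line1/Temp", 71.4),
    ("2024-01-01 00:31:06", "Line1/Pressure", 3.01),
]
conn.executemany("INSERT INTO process_log VALUES (?, ?, ?)", rows)

# Fetching the actual reading at a specific second is a simple point query.
val = conn.execute(
    "SELECT value FROM process_log WHERE t = ? AND reading = ?",
    ("2024-01-01 00:31:06", "Line1/Temp"),
).fetchone()[0]
print(val)  # 71.4
```

Note the composite primary key on (timestamp, reading name): with one row per tag per second, that index is what keeps point lookups fast as the table grows.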


It’s easier to manage a DB that has fewer, larger tables, with arbitrary tag data consolidated into those tables. It’s easier to query when unique data has its own column in a table. To get the query convenience of the latter with the management convenience of the former, Ignition has its own tag historian that uses consolidated tables and can be enabled per tag, without transaction groups. The tag historian has the code (scripting functions and query bindings) to retrieve the data you want the way you want it (mostly) without worrying about the complex queries.

There are scripting functions that can be used to back-fill data from other sources into the tag historian, so you can move your existing history into the historian.

FWIW, I tend to favor the wide tables instead of the consolidated tables, especially if I need to record at sub-second intervals. Indices for consolidated tables are much more complex (and slower) than for wide tables.
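To make the wide-vs-consolidated trade-off concrete, here is a small sketch of both layouts side by side (SQLite, hypothetical table and tag names). The wide table needs only a timestamp index and holds one row per sample, while the consolidated (tall) table needs a composite index and holds tag-count times as many rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Wide table: one row per sample, one column per tag; the only index
# needed is the one on the timestamp.
conn.execute("""
    CREATE TABLE line1_wide (
        t        TEXT PRIMARY KEY,  -- one row per sample interval
        temp     REAL,
        pressure REAL,
        flow     REAL
    )
""")

# Tall/consolidated table: one row per tag per sample; the index must
# cover (reading, timestamp) and the table grows N times as fast.
conn.execute("""
    CREATE TABLE line1_tall (
        t       TEXT NOT NULL,
        reading TEXT NOT NULL,
        value   REAL,
        PRIMARY KEY (reading, t)
    )
""")

# Log 60 one-second samples of 3 tags into each layout.
for i in range(60):
    t = "2024-01-01 00:00:%02d" % i
    conn.execute("INSERT INTO line1_wide VALUES (?, ?, ?, ?)",
                 (t, 70.0, 3.0, 12.5))
    for name, v in [("temp", 70.0), ("pressure", 3.0), ("flow", 12.5)]:
        conn.execute("INSERT INTO line1_tall VALUES (?, ?, ?)", (t, name, v))

wide_rows = conn.execute("SELECT COUNT(*) FROM line1_wide").fetchone()[0]
tall_rows = conn.execute("SELECT COUNT(*) FROM line1_tall").fetchone()[0]
print(wide_rows, tall_rows)  # 60 180
```

At sub-second rates that 3x (here) row multiplier, and the larger composite index that comes with it, is exactly where the tall design starts to hurt.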


Thanks for the replies. We had looked at the historian module initially but chose the SQL Bridge because it seemed like we had more options for data logging, recipe management, and searching through the data. Sometimes our process engineers want to query specific data based on recipes or other attributes. I assume this is not possible with the historian data?

Well, you do lose the natural association of being in the same row of a table, filterable with just a WHERE clause. So you generally would have to query for the recipes/attributes to establish the time periods of interest, then separately query for the data of interest using Ignition’s history query script functions. If you dig into the tag historian’s table structures, you could construct combinations of unions, joins, and pivots to get the data you want.
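The two-step pattern described above looks roughly like this. In Ignition, step 2 would typically be `system.tag.queryTagHistory` rather than raw SQL; this sketch uses plain SQLite so it stands alone, and the table and column names (`recipe_runs`, `history`, etc.) are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Hypothetical recipe-run log: which recipe ran, and when.
    CREATE TABLE recipe_runs (recipe TEXT, start_t TEXT, end_t TEXT);
    -- Simplified stand-in for a consolidated history table.
    CREATE TABLE history (t TEXT, reading TEXT, value REAL);

    INSERT INTO recipe_runs VALUES ('RecipeA', '00:00:10', '00:00:20');
    INSERT INTO history VALUES ('00:00:05', 'Temp', 68.0);
    INSERT INTO history VALUES ('00:00:15', 'Temp', 72.5);
    INSERT INTO history VALUES ('00:00:25', 'Temp', 69.0);
""")

# Step 1: query the recipe/attribute table to establish the time
# periods of interest.
windows = conn.execute(
    "SELECT start_t, end_t FROM recipe_runs WHERE recipe = ?",
    ("RecipeA",),
).fetchall()

# Step 2: separately query the history data that falls inside those
# windows (in Ignition, a history query function would do this part).
values = []
for start_t, end_t in windows:
    values += [v for (v,) in conn.execute(
        "SELECT value FROM history WHERE reading = ? AND t BETWEEN ? AND ?",
        ("Temp", start_t, end_t),
    )]
print(values)  # [72.5]
```

Only the reading inside the recipe window comes back; the rows before and after it are filtered out by the windows found in step 1.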

Process engineers who are used to writing their own data-mining SQL will absolutely hate the tag historian, and will push such ad-hoc query tasks onto you or your IT group. Trade-offs.


This makes a lot of sense. So it sounds like the data can be queried either way; the historian really shines in its ease of use, with built-in charts and compression. The question, then, is whether to spend the time building a straightforward charting interface for the SQL Bridge, or building all the necessary queries for the historian. Thanks for the help.


It’s worth noting that this is exactly what the ‘DB Pens’ feature of the Easy Chart is designed around: working with ‘wide’ data in a single table, as long as there’s a useful timestamp column to index off of.


Yes. And if you create a cross-reference somewhere between tag and table/column/where clause, you can override the dropPens event to mimic the convenience of tag history pens. The somewhere can even be bits of text embedded into the tag’s documentation property. /-:

Until we can use custom properties for such things… (-:


This sounds interesting; it would definitely make things easy if it were built in. Any idea if something like this is planned for a coming update?

Do you mean built in to the EasyChart? No sign of such in v8. Sometimes I think the SQL Bridge is a red-headed step-child… /-: