Hey guys, looking for some assistance understanding the perks of using SQL Bridge instead of or as well as Tag Historian. I don’t quite see what its major selling points are. I’ve used both together and separately but in my mind if I’m just chasing data logging, the tag historian is the best option.
For instance, I’ve played around with writing an application that writes a huge amount of data to an L84ES GuardLogix PLC, about 60,000 DINTs. I can’t write to the array as a whole or to individually configured tags, because it’s far too cumbersome and a huge resource hog to have Ignition subscribe to all of that information at whatever scan rate. Instead I wrote some code that uses the system.opc functions and writes to the tags directly without defining them in the tag browser.
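For reference, the direct-write approach above can be sketched like this. The OPC server name, device name, and array layout here are hypothetical; adapt the item paths to your own OPC server and Logix addressing:

```python
# Sketch of bulk OPC writes without configured Ignition tags.
# Assumes a hypothetical device named "GuardLogix" exposing DINT arrays
# Block0[0..29], Block1[0..29], etc. The item-path format and server name
# are illustrative, not taken from the original post.

def build_write_lists(block_count, elements_per_block, values):
    """Build parallel item-path/value lists for system.opc.writeValues."""
    paths, vals = [], []
    i = 0
    for b in range(block_count):
        for e in range(elements_per_block):
            paths.append("ns=1;s=[GuardLogix]Block%d[%d]" % (b, e))
            vals.append(values[i])
            i += 1
    return paths, vals

paths, vals = build_write_lists(2, 30, range(60))

# Inside an Ignition script context you would then do something like:
#   quality = system.opc.writeValues("Ignition OPC UA Server", paths, vals)
# and check each returned QualityCode with .isGood().
```

Batching many item paths into a single `writeValues` call is generally much cheaper than one write per tag, since the driver can coalesce the requests.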
• Could SQL Bridge help me do this? Or,
• Is using the system.opc functions perhaps the most efficient way?
I’ve also sent this on to our friendly neighborhood Ignition support but thought I’d run it by you guys as well.
If the data collection is at fixed intervals, especially very fast intervals, then the wide tables produced by transaction groups are far superior. Transaction groups are also the only non-scripted way to use one trigger to snapshot a number of related values (use OPC Read mode!).
This is what I would do as well. The SQL Bridge can do DB->PLC transactions, say for recipe loads, but I’ve always found the scripted versions easier to work with and easier for my customers to maintain. Tags should be used where PLC values are needed on the UI, or need to be monitored for change events (always with the project-scoped change events).
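A scripted DB→PLC recipe load, as an alternative to a SQL Bridge transaction group, might look like the sketch below. The table name, column names, item paths, and server name are all hypothetical; the query and write functions are injected so the logic can be followed (and tested) outside Ignition, where you would pass `system.db.runPrepQuery` and `system.opc.writeValues`:

```python
# Sketch of a scripted DB->PLC recipe load (alternative to a SQL Bridge
# DB-to-OPC transaction group). Table, columns, and item paths are made up.

def load_recipe(recipe_id, run_query, write_values):
    """Read one recipe row and write its setpoints to the PLC.

    run_query    -> system.db.runPrepQuery in a real Ignition script
    write_values -> system.opc.writeValues in a real Ignition script
    """
    rows = run_query(
        "SELECT setpoint_a, setpoint_b, setpoint_c FROM recipes WHERE id = ?",
        [recipe_id],
    )
    if not rows:
        raise ValueError("No recipe with id %s" % recipe_id)
    row = rows[0]
    paths = [
        "ns=1;s=[GuardLogix]Recipe.SetpointA",
        "ns=1;s=[GuardLogix]Recipe.SetpointB",
        "ns=1;s=[GuardLogix]Recipe.SetpointC",
    ]
    # Returns the write qualities so the caller can implement handshaking.
    return write_values("Ignition OPC UA Server", paths, list(row))
```

Keeping the SQL and the write logic in one readable script is the maintainability advantage mentioned above; the trade-off is that you have to build your own handshaking, which transaction groups give you for free.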
I will also admit that I am biased in favor of wide-table data logging. Part of the reason is that I created a module specifically to handle caching of high-density wide-table datasets (for charting in particular). Also because I’ve had applications with high-density recording requirements that Ignition/Java couldn’t handle – wide-table logging from external code was required. (Ignition and Java have gotten faster – the next such task will be jython in Ignition, but still with wide tables.)
If you need time series data, use the historian. If you don’t, consider the SQL Bridge.
I’d rather store data in wide tables that will be used for back-end reports than try to query the tag historian (unless it’s driving a chart). It just makes building reports easier. I also consider the short-term and long-term impacts. During operation I may want high resolution for trending, with updates every second to the historian. But for long-term data storage, and for driving production reports, 1-minute or even 1-hour resolution is all I need to summarize production. Using the SQL Bridge and wide tables makes long-term storage for recalling production reports more manageable than the historian, which I may set up to hold only 6 months of data at that 1-second scan class (not getting into historian modes…).
I always use SQL Bridge for SQL-to-PLC transactions, usually recipe management functions. I may also use SQL Bridge for specific event tracking or operator entry; however, scripting may be a better option for some of this, depending on what you need to achieve.
If you could write to an array (or multiple arrays), then the SQL block groups work great. But you have to consider this as a design requirement at both the PLC and DB levels. When dealing with recipes or general program parameters, I’ll create a dedicated UDT in the PLC to receive the data; the resulting tag would look something like:
Each .AreaN.Array would require an SQL transaction for the array. In your example you would need 60 transaction groups to handle 60,000 tags (assuming an array structure), but they are pretty simple to configure.
SQL Bridge transactions groups also provide handshaking, which I find highly valuable when dealing with recipes to ensure the PLC received the data.
The 60,000 tags are in 2,000 30-element arrays, and I have many of them. Considering the load on the tag browser, I think I’ll stick with the system.opc functions, since if I deploy on a tag-limited license I won’t be affected. Some testing to be done and some requirements to be better defined, and we’ll be on our way.
I would consider some type of packing/unpacking mechanism. Pack multiple arrays into a single block transaction, say 30 arrays of 30 tags, so one transaction downloads 900 tags. An AOI in the PLC would then unpack and break the data out into the individual 30-tag arrays. It would cut your transaction count down to something fairly manageable, and the AOI would be quick to build. Of course, packing the data that way depends on how you query SQL for the data.
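The pack/unpack idea above can be sketched in plain Python so the layout is easy to verify. The block sizes come from the post (30 arrays of 30 elements); the PLC-side unpack would be done by the AOI, mirrored here as a function:

```python
# Sketch of the pack/unpack scheme: the gateway packs 30 arrays of 30 DINTs
# into one 900-element block for a single write; an AOI in the PLC unpacks
# it back into the individual arrays. Sizes are from the post above.

ARRAYS_PER_BLOCK = 30
ELEMENTS_PER_ARRAY = 30

def pack(arrays):
    """Flatten 30 arrays of 30 values into one 900-element block."""
    assert len(arrays) == ARRAYS_PER_BLOCK
    block = []
    for arr in arrays:
        assert len(arr) == ELEMENTS_PER_ARRAY
        block.extend(arr)
    return block

def unpack(block):
    """Mirror of the PLC AOI: split the block back into 30-element arrays."""
    return [block[i:i + ELEMENTS_PER_ARRAY]
            for i in range(0, len(block), ELEMENTS_PER_ARRAY)]
```

With 60,000 tags in 2,000 arrays, this would cut a full download from 2,000 array writes to roughly 67 block writes (2,000 / 30, rounded up).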