Many tags to log but few will be changing

I’d like to create a group that contains more than 300 OPC tags, but I don’t want to log all of their values every time the group fires. Only the tags that change should be logged, and they should be logged as one record per changed tag, rather than all tags in a single record. Is it possible to do this in FactorySQL?

I’m concerned about data duplication. There are thousands of tags for each wastewater plant (not all of them are logged), and potentially dozens of wastewater plants. I’d hate to create a history table that has hundreds of columns, where each time one of them changes, a new record is added in which most of the values are duplicates of the previous record.

Hi Tim,

Looking at FSQL, the “async update” checkbox on the group’s trigger tab should be what you are looking for.

As for the data duplication, I would highly recommend designing a fully normalized database schema ahead of time for such a large task. It might be somewhat of a p.i.t.a. now, but you’ll be glad you did later on if you have to support it. There are many database books that teach you how to do this, but it should be something similar to this (off the top of my head, so bear with me :wink:):


Plant table:
PlantID (PK)

Pump table:
PumpID (PK)

PlantPump table (ties pumps to plants):
PlantID (PK, FK)
PumpID (PK, FK)

PumpLog table (the history):
PumpID (PK, FK)
LogDate (PK)
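A runnable sketch of that normalized layout, using an in-memory SQLite database for illustration (the table and column names beyond the PK/FK columns listed above are my own guesses, and FactorySQL deployments would more typically target MySQL or SQL Server):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE plant (
    PlantID   INTEGER PRIMARY KEY,
    PlantName TEXT
);
CREATE TABLE pump (
    PumpID   INTEGER PRIMARY KEY,
    PumpName TEXT
);
-- Association table: which pumps belong to which plant.
CREATE TABLE plant_pump (
    PlantID INTEGER REFERENCES plant(PlantID),
    PumpID  INTEGER REFERENCES pump(PumpID),
    PRIMARY KEY (PlantID, PumpID)
);
-- History rows keyed by pump and timestamp; plant info is not repeated here.
CREATE TABLE pump_log (
    PumpID  INTEGER REFERENCES pump(PumpID),
    LogDate TEXT,
    Value   REAL,
    PRIMARY KEY (PumpID, LogDate)
);
""")
conn.execute("INSERT INTO plant VALUES (1, 'North WWTP')")
conn.execute("INSERT INTO pump VALUES (10, 'Influent pump 1')")
conn.execute("INSERT INTO plant_pump VALUES (1, 10)")
conn.execute("INSERT INTO pump_log VALUES (10, '2007-01-01 00:00:00', 42.5)")

# Join history back to the plant without ever duplicating plant data per record.
rows = conn.execute("""
    SELECT p.PlantName, pl.LogDate, pl.Value
    FROM pump_log pl
    JOIN plant_pump pp ON pp.PumpID = pl.PumpID
    JOIN plant p       ON p.PlantID = pp.PlantID
""").fetchall()
print(rows)  # [('North WWTP', '2007-01-01 00:00:00', 42.5)]
```

The point of the normalization is visible in the join: plant and pump descriptions live in one row each, and only the narrow log table grows.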

Hope that helps.


I think I know what TimF has in mind: a table like the following:



which would grow vertically, and only have the values that have changed. This comes up from time to time, but is not currently possible using only FactorySQL functions (note: in the next major release, block groups will support inserting just the changed rows of a block, which will indeed let you build something like this).
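Such a narrow, vertical history table might look like this (a sketch in SQLite; the column names t_stamp, TagID, and Value are my assumption, based on the structure being discussed):

```python
import sqlite3

# One row per *changed* tag per scan, rather than one wide row per scan.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tag_history (t_stamp TEXT, TagID INTEGER, Value REAL)")
conn.executemany("INSERT INTO tag_history VALUES (?, ?, ?)", [
    ("2007-01-01 00:00:00", 1, 10.0),  # tags 1 and 7 changed on this scan
    ("2007-01-01 00:00:00", 7, 3.2),
    ("2007-01-01 00:00:05", 7, 3.4),   # only tag 7 changed on this scan
])
# The 300+ unchanged tags contribute no rows at all.
changed = conn.execute(
    "SELECT TagID, Value FROM tag_history WHERE t_stamp = '2007-01-01 00:00:05'"
).fetchall()
print(changed)  # [(7, 3.4)]
```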

In the meantime, you can do one of a few things. You could just use the groups as they are, and minimize duplicate data using techniques like those HandledException suggested. Or, you could get a little creative, and perhaps accomplish this using a block group and a database trigger or a secondary group. I’ll explain…

You could have a block group with all of the tags in one “value” column. I would also make a copy of that item and change it to look at the “item path” property, with a string data type, and have it store the row id as well. The block group would be set to “update the first block” of a “status” table. You would then manually construct a history table with a schema similar to that of the status table (“CREATE TABLE block_history LIKE block_status” is a handy query to do that in MySQL).
The block group will only write to rows that have changed. Therefore, you could write a database trigger that executes on update, and inserts the new data into the history table.
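Here is a runnable sketch of that trigger approach, using an in-memory SQLite database for illustration (a FactorySQL install would typically sit on MySQL or SQL Server, where trigger syntax differs slightly; all table and column names here are my own):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- The "status" table the block group updates in place, one row per tag.
CREATE TABLE block_status (
    block_ndx INTEGER PRIMARY KEY,  -- the stored row id
    item_path TEXT,                 -- the string "item path" item
    value     REAL,
    t_stamp   TEXT
);
-- Same schema (in MySQL: CREATE TABLE block_history LIKE block_status).
CREATE TABLE block_history (
    block_ndx INTEGER,
    item_path TEXT,
    value     REAL,
    t_stamp   TEXT
);
-- Fires for each updated row, copying the new values into the history table.
CREATE TRIGGER status_to_history AFTER UPDATE ON block_status
BEGIN
    INSERT INTO block_history
    VALUES (NEW.block_ndx, NEW.item_path, NEW.value, NEW.t_stamp);
END;
""")
conn.execute("INSERT INTO block_status VALUES (0, 'Plant1/Pump1/Flow', 10.0, '00:00:00')")
conn.execute("INSERT INTO block_status VALUES (1, 'Plant1/Pump1/Amps', 4.2, '00:00:00')")

# The block group only writes to rows whose values changed:
conn.execute("UPDATE block_status SET value = 10.5, t_stamp = '00:00:05' WHERE block_ndx = 0")
hist = conn.execute("SELECT block_ndx, value FROM block_history").fetchall()
print(hist)  # [(0, 10.5)] -- only the changed row was copied
```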

If you didn’t want to use a db trigger, you could have a standard group in FactorySQL that runs fairly quickly, tracks the last execution time (look into the StoreVariable/GetVariable action item functions), and then runs a query like:

INSERT INTO block_history SELECT * FROM block_status WHERE t_stamp>{lastexec}

All in all, a bit more complicated, but if you’re sure you want that structure and you’re up for it, should work out.
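The StoreVariable/GetVariable pattern boils down to a filtered copy. Its effect can be sketched like this (SQLite in memory, with a simplified two-column status schema of my own; the `last_exec` value stands in for what the action item functions would persist between executions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE block_status  (block_ndx INTEGER PRIMARY KEY, value REAL, t_stamp TEXT);
CREATE TABLE block_history (block_ndx INTEGER, value REAL, t_stamp TEXT);
""")
conn.executemany("INSERT INTO block_status VALUES (?, ?, ?)", [
    (0, 10.0, "00:00:00"),
    (1, 4.2,  "00:00:07"),  # this row was updated after the last snapshot
])

last_exec = "00:00:05"  # previously saved with StoreVariable(), read back with GetVariable()
# The snapshot query: copy only rows stamped since the last run.
conn.execute(
    "INSERT INTO block_history SELECT * FROM block_status WHERE t_stamp > ?",
    (last_exec,),
)
copied = conn.execute("SELECT block_ndx FROM block_history").fetchall()
print(copied)  # [(1,)]
```

Note that comparing the timestamps as strings only works because they share one zero-padded format; with a real DATETIME column the database handles the comparison properly.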


[quote=“Colby.Clegg”]a table like the following:



which would grow vertically, and only have the values that have changed.[/quote]
yep, this is what I’m going for.

Another table would be the Tags table, where the TagId ties to a TagName field and maybe some other information. Optionally, I may have an Event table where an EventId would tie a collection of simultaneous tag updates together, but I’m not sure if that’s necessary. It could be that a simple t_stamp field is all that’s needed to provide that function. I need a t_stamp field though, either in this table or in the Event table.
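That lookup arrangement might be sketched like this (SQLite for illustration; the column set beyond TagId/TagName/t_stamp is my own guess):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Tags (
    TagID   INTEGER PRIMARY KEY,
    TagName TEXT                 -- plus units, description, etc.
);
CREATE TABLE TagHistory (
    t_stamp TEXT,                -- a shared t_stamp may be enough to group an "event"
    TagID   INTEGER REFERENCES Tags(TagID),
    Value   REAL
);
""")
conn.execute("INSERT INTO Tags VALUES (7, 'Plant1/Pump1/Flow')")
conn.execute("INSERT INTO TagHistory VALUES ('00:00:05', 7, 3.4)")

# The history stores only the small TagID; names come from the lookup table.
named = conn.execute("""
    SELECT t.TagName, h.Value
    FROM TagHistory h JOIN Tags t ON t.TagID = h.TagID
""").fetchall()
print(named)  # [('Plant1/Pump1/Flow', 3.4)]
```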

I’ll be trying out your suggestions this morning and see if I can make that work.


Yeah, I left a few things unsaid in my first post. Primarily, I suggested creating a tag path item exactly for the purpose of turning the status table into a lookup table. The second part to that, which I should have mentioned, is that when snapshotting the changes into the history table, you would only include the row index of the item, and not the path again.

You could definitely add an event id, though how you would wrap up all of the values into a particular event could be a bit tricky. The DB trigger that I first suggested probably wouldn’t work: it gets called for each row, not once for the overall transaction (as far as I’m aware; I could be wrong). A time-based approach would work better (the snapshot query run every so often), but you would still have a difficult time ensuring that every record of the event was copied, and that no other records were included. Of course, if you actually have some sort of bit that indicates an “event,” it becomes a lot easier, and you can just trigger off that to run the copy query.

Let us know how it goes…