Store OPC Timestamp To The Database Without Adding New Item

Add an option/feature in the Transaction Groups Action Tab and Group Tag Alerting Setup that will use the OPC server’s timestamp instead of the database timestamp.

I’m currently working on a project that uses the DNP3.0 communications protocol, which supports Unsolicited Report By Exception (URBE) messaging. Using URBE, I can set up the slave devices to send data, with a timestamp, to the master only when a change exceeds the deadband limit inside the slave. The slave devices are also set up to store timestamped data in the event of a comms loss to the OPC server. The OPC server never polls the slave for data, and it uses the DNP timestamp instead of the server’s timestamp.
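For anyone unfamiliar with the idea, report-by-exception with store-and-forward can be sketched in plain Python. This is a toy model of the behavior described above, not the actual DNP3 stack; all names and values are made up:

```python
import time

class UrbeSlave:
    """Toy model of a slave that reports only deadband-exceeding changes,
    stamping each event with its own clock (analogous to the DNP time)."""

    def __init__(self, deadband):
        self.deadband = deadband
        self.last_reported = None
        self.buffer = []  # events retained during a comms outage

    def sample(self, value, connected=True):
        # Report only when the change exceeds the deadband.
        if self.last_reported is None or abs(value - self.last_reported) > self.deadband:
            event = (time.time(), value)  # slave-side timestamp, not the server's
            self.last_reported = value
            if connected:
                return [event]
            self.buffer.append(event)  # store-and-forward while comms are down
        return []

    def reconnect(self):
        # Flush buffered events, timestamps intact, once comms are restored.
        events, self.buffer = self.buffer, []
        return events
```

The key point is that the timestamp is attached at the slave when the event happens, so a value buffered during an outage still carries its original event time when it finally reaches the master.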

Currently I have to add a second item to each tag to get the correct timestamp, and I have no way to use the OPC timestamp with the Tag Alert function.

In terms of using it as the group’s timestamp, you can already do this by de-selecting “store timestamp” for the group and pointing your secondary item to the “t_stamp” column instead. Since you would need to choose a representative item for the group’s timestamp anyhow, I don’t think creating one secondary item is all that much work.

In terms of using the timestamp in alerting, we’ll look into what we can do to make it available in the message. I don’t think it’s feasible to use that time for the active/clear times; however, we do have a feature coming down the road where you’ll be able to attach associated values to the alert that will then be available on query, which should let you do what you want.



I know it seems easy to add a secondary item when you have only a few groups, but my system contains over a thousand groups because I want to use a long table format in my database. I will probably have a thousand more groups before I’m finished, and not having to add a new item to each group would save a lot of development time.

I’m not sure what you mean when you say: “I don’t think it’s feasible to use that time for the active/clear times.” Do you mean it can’t be set up in the Ignition software, or that it doesn’t make sense to use that time? This feature is probably the most important to me and my client, because the whole idea is to know exactly when the alarm occurred in the field device.

Well, I’m looking at it like this: groups use data from all sorts of OPC servers, and it’s not all that uncommon to have data from multiple servers together in one group. So, if we were to add an option to use the “opc time” for the group, we’d have to let you select which item to get the time from. Adding another item is more work, but doesn’t seem like so much more work that it would warrant a new feature that would only be used very rarely.

Stepping aside from that for just a moment, you say you’ll have thousands of groups because you want to use “a long table format”. Is this something that the block group could be used for? If you could use a block group, it would help both the time it took you to configure it, and the run time performance. If you can’t currently use the block group, looking at addressing those issues might be a better way for us to go.

In terms of using the OPC time for the alert/clear, I probably shouldn’t have said that. I don’t have a 100% solid idea of how this playback of old data is going to work, but I suppose it could be possible to just use the timestamp of the data any time an alert event was generated. I had a particular case in mind that I didn’t think would work, but I now don’t think it’s really an issue.


The benefit of using the DNP protocol is retaining event data to the millisecond. In the power industry this is critical for keeping accurate Sequence Of Events (SOE) records. This is done per point, which means that basically every tag is its own group, since each group has to contain the OPC timestamp, DNP point value, and OPC quality. The impact on development time is huge when developing an application and creating a group for each tag: if there are 15,000 DNP tags, then there have to be 15,000 groups. This is because the database historical data table has basically only four columns: tagname, OPC timestamp, DNP value, and OPC quality. So the table is very long with very few columns. A particular timestamp is never associated with multiple points.
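For reference, the long/narrow table described above can be sketched with sqlite3. The table and column names here are illustrative, not the project's real schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# One row per event: the narrow four-column layout described above.
cur.execute("""
    CREATE TABLE soe_history (
        tagname  TEXT NOT NULL,      -- one row per point, not one column per point
        t_stamp  TIMESTAMP NOT NULL, -- the OPC/DNP event time, to the millisecond
        value    REAL,
        quality  INTEGER
    )
""")

# Each change event from the field lands as its own row.
cur.execute("INSERT INTO soe_history VALUES (?, ?, ?, ?)",
            ("Breaker_52A/Status", "2010-08-01 12:00:00.123", 1, 192))
conn.commit()

rows = cur.execute("SELECT tagname, value FROM soe_history").fetchall()
```

Because the schema never changes as points are added, the same table serves every tag; the cost is one group (or one row definition) per point, which is where the configuration burden comes from.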

In my opinion, almost everyone would prefer to use the OPC Timestamp vs the database timestamp. The OPC Timestamp will always be more accurate. This applies to historical values and alarming/alerting.

Thank you,
A concerned DNP user


I discussed this a bit more with rhoag off this thread, and agree that we need to handle this in some way or another. First, we need to address the alerts, because that functionality is simply missing.

In regards to the groups, I think there’s a deeper issue that needs to be addressed. The situation of 15,000 groups all operating the exact same way on different addresses is exactly what block groups are intended to handle in a more efficient manner, so I would like to see what we could do to make that possible. The feature to have block groups insert just the changed rows was included in Ignition. The big missing part is that block groups don’t currently support alerts. If they did, it should be possible to handle all of this with one block group containing three items, instead of 15,000 groups each with three items.

On a side note, you wouldn’t really create 15k groups by hand, would you? That seems like a very impractical model: what if changes need to be made? What if you accidentally lose everything? I would be looking at generating it through scripting, in which case it would be trivial to add a separate OPC timestamp item. Anyhow, that’s just me…
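As a sketch of what that scripted generation could look like, here is plain Python that just emits the group/item definitions as CSV rows. The point names are invented, and how the resulting definitions would actually be imported into Ignition is left open:

```python
import csv, io

# Hypothetical tag list; in practice this would come from a points database
# or an export of the DNP3 device configuration.
dnp_points = ["Site01/Breaker_52A", "Site01/Breaker_52B", "Site02/Recloser_R1"]

def build_group_rows(points):
    """Emit one row per group item: each group gets a value item,
    a quality item, and the extra OPC-timestamp item."""
    for point in points:
        for suffix, column in (("", "value"),
                               (".Quality", "quality"),
                               (".Timestamp", "t_stamp")):
            yield {"group": point, "item": point + suffix, "column": column}

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["group", "item", "column"])
writer.writeheader()
writer.writerows(build_group_rows(dnp_points))
```

Once the definitions are generated as data, adding the extra timestamp item to every group is one line in the loop rather than 15,000 manual edits.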

At any rate, the important point here is that we’re going to get something figured out for you guys one way or another. It will be in the 7.1 time frame, which should be less than a month away (we can certainly get you something to test with sooner, though).



I know this is a busy time for your company and I do appreciate your support. We will be looking forward to testing your solution.



The “timestamp source” option has been added to alerts for 7.0.9, allowing you to use either the system time (the default, which behaves like the current version) or the value’s time for alerting.

We’re continuing to work on the other improvements we talked about.


It’s great you guys got alerts to support OPC timestamps. It was a great feature to add!

Are you guys still planning on implementing the option to use OPC Timestamps on logged/history tags? If so, any tentative dates for when this feature will be available?

Yes, we’re still planning on handling this in some way soon. History tags are a bit of an interesting problem: just a few revisions ago they actually did store the OPC timestamp. However, due to how we calculate quality when querying (using timestamps that come from the system, indicating when the scan classes ran), we ran into problems when the OPC servers reported different times. So, we changed it to store the system time.

We likely need to add [an option for] an additional timestamp for data time to the history log tables. This would probably occur within the next month.


Just wanted to check in on the ETA for this feature.

Any updates would be appreciated.


Eek, sorry I let this thread die. So many things come up that it’s sometimes easy to lose track of feature requests like this.

Let’s see where we stand: OPC timestamp on alerts, and alerting in block groups were both implemented, so those should handle those cases.

Using OPC time for historical tags isn’t currently possible; that would be next up. Due to the complexities of the SQLTags history system, I think this would be an overall setting in the historian, something like “use device time for timestamps”. The data would then be stored with the system time AND the OPC time, but only the latter would be used for querying and in the returned results; the former would be used for calculating quality.
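A rough sketch of the dual-timestamp layout being described, using sqlite3 with invented table and column names (this is not Ignition's actual history schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical dual-timestamp history table: the system time drives the
# quality calculation, the device (OPC) time drives queries and results.
cur.execute("""
    CREATE TABLE tag_history (
        tagid     INTEGER,
        sys_ts    REAL,  -- when the scan class stored the value
        device_ts REAL,  -- the OPC server's timestamp for the value
        value     REAL
    )
""")
cur.execute("INSERT INTO tag_history VALUES (1, 100.0, 99.2, 42.0)")

# Queries filter and order on the device time; quality/staleness checks
# would compare sys_ts against the expected scan-class schedule instead.
row = cur.execute(
    "SELECT device_ts, value FROM tag_history "
    "WHERE device_ts BETWEEN 0 AND 200 ORDER BY device_ts").fetchone()
```

Storing both columns sidesteps the earlier problem: differing server clocks no longer corrupt the quality calculation, yet the returned timestamps still reflect when the event actually happened in the field.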

We should be able to accomplish that in the next round of minor feature requests that we put in, which I’m hoping to get done within the next 2-3 weeks.


After discussing it some more, we’ve come up with the current plan:

  1. For the immediate term, use the group-based approach you currently have, and come up with an easy way to make sure each item is configured correctly with the extra timestamp item. A group-level option wouldn’t work very well because you would still need to designate a particular item to look at; this would be unnecessarily confusing for the normal use cases, and wouldn’t save you much time on config.

  2. In the near future (the next feature release, 7.2, around late October), make sure that block groups have all the necessary features to support what you want to do, and find a quick way to convert what you have over to them. The purpose for this is really to lay the foundation for future growth of your project: to increase efficiency when running the project, and make it easy to add new sites.



Thanks for the update. I just spoke with Rob on what you guys discussed and it sounds like a great plan.

I would like to throw out another idea. With Ignition 7.x came historical logging with support for table partitioning (I think the historical logging feature uses the system timestamp, not the OPC timestamp). This is an excellent feature; however, with larger tag-count systems (over 100,000 tags) this table will still get very big very fast.

What about implementing historical logging groups within the built-in historical logging feature? The history groups would basically allow projects to configure which tags are stored in additional history tables. The user would configure the logging deadband and also which group the tag is stored in. I realize this is very similar to what Rob and you are proposing, but this feature may benefit your architecture for larger projects. For example, say you have 100 sites. Rather than have all of this data go into only one built-in table, allow users to create their own groups/tables to store the history on a per-site basis. I’m assuming the built-in history storage could then be configured to use the OPC timestamp or the system timestamp (configurable).

If this isn’t clear let me know and I’ll give you guys a call to discuss.



Yeah, the big problem with SQLTags history right now is that you’re only querying based on the tag path and time, which isn’t good when you want to group or browse by site, equipment, etc.

By hosting it under a block group, you can store any columns you want and then insert only the changed rows. Whether a row has “changed” is determined against the item’s deadband, but as far as I can tell all of your values are boolean.
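The changed-row test described here can be sketched like this (a toy Python version; with boolean values a deadband of zero means any change counts):

```python
def changed_rows(previous, current, deadband=0.0):
    """Return only the tags whose value moved by more than the deadband
    since the last scan; new tags always count as changed."""
    changed = {}
    for tag, value in current.items():
        old = previous.get(tag)
        if old is None or abs(value - old) > deadband:
            changed[tag] = value
    return changed
```

For example, with `previous = {"a": 0, "b": 1}` and `current = {"a": 0, "b": 0, "c": 1}`, only `b` (changed) and `c` (new) would be inserted, which is what keeps the narrow table from filling with unchanged values on every scan.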

Your idea of partitioning based on site is interesting, but honestly it is probably better done (and can be done today) through the database. MySQL, for example, can easily partition tables based on a column value, such as the site id. The different partitions can even be stored on different physical disks, I believe.
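As a sketch of what that could look like on the MySQL side, here is illustrative DDL built as a string (check the MySQL partitioning manual for the exact syntax and version support; note that older MySQL versions don't store sub-second precision in DATETIME, so millisecond SOE times may need a separate column):

```python
def partition_ddl(table, sites):
    """Build a MySQL-style CREATE TABLE with one LIST partition per site id.
    Illustrative only; not executed against a real server here."""
    parts = ",\n    ".join(
        "PARTITION p_site{0} VALUES IN ({0})".format(s) for s in sites)
    return (
        "CREATE TABLE {t} (\n"
        "  tagname VARCHAR(255),\n"
        "  t_stamp DATETIME,\n"
        "  value   DOUBLE,\n"
        "  quality INT,\n"
        "  site_id INT\n"
        ")\nPARTITION BY LIST (site_id) (\n    {p}\n)"
    ).format(t=table, p=parts)

ddl = partition_ddl("soe_history", [1, 2, 3])
```

The application keeps writing to one logical table, while the database transparently routes each row to its site's partition, which can live on its own disk.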

Now, there was once another idea floating around that might be in line with what you’re suggesting: having historical groups register their tags with SQLTags history. That is, if a group logs history, that history could be worked into the SQLTags history query system, so that you’re defining the physical storage of the data, but are getting to it through the same mechanism as other historical tag data.

The latter item you were referring to is exactly what I was trying to say. When I said “logging group” I was referring to a physical table to store/organize the data.

I agree the retrieval of the data would be challenging to manage, but I’m sure you guys could handle it!

Keep up the good work guys

Has this workaround/functionality been added to 7.2.x? In the past you were planning on modifying the Historical transaction group to handle this type of feature.

Any updates would be appreciated.

I intended to post soon with an update here. In 7.2.3 (or possibly 7.2.4; either way, there will be a dev build up within a week), we’ll release an update to SQLTags history that lets you store the OPC time. This should make it very easy for you to log your different events and back history.

I believe that using that, and perhaps a small amount of additional database work, you should be able to accomplish everything you want. It will certainly make it very easy to store the history for new sites, and will be very efficient. The only difficulty I can see might be in setting up the query screens you want, but we can work on that.


Hi All,

It seems that in 7.2.3 we have the option to choose the OPC timestamp in the SQLTags History configuration window.
If this option is chosen, the last-change timestamp is correct (in the SQL Tag browser panel), but the data stored in the database still carries the system timestamp (tested by dropping a table and letting it repopulate with historical tags).

Tried the 7.2.4 beta; same behaviour.

Also, I don’t know (didn’t search, sorry) if this possibility will be added to the Transaction Groups…

Kind regards,