Run time vs Logged t_stamp

I have multiple PLCs connected to the Ignition server. The PLCs are all Allen-Bradley, but some are SLC 500s and some are CompactLogix. The processes we run have a run timer that the operator resets, and the PLC counts in tenths of a minute. We have about 200 tags on the large PLCs to record and log every 30 seconds. The data log has its own time stamp, and we use this data to validate our tags and our process.

I have noticed many times that the logged run time will skip a reading and the next row will carry the added value. In other words, a new row is created every 30 seconds, and quite often a row will show a time stamp 30 seconds after the previous one but a run time unchanged from the last record, with the next row then jumping a full minute. I will attach a screen shot.
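Here is a small sketch of the pattern I am seeing, using a made-up table and column names (the real historian schema is different), just to show how we spot the stalled rows:

```python
import sqlite3

# Made-up table and column names -- the real historian schema differs.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE process_log (t_stamp INTEGER, run_time REAL)")

# One row every 30 s; the run timer counts in tenths of a minute, so a
# healthy log advances 0.5 per row. Here the 60 s row repeats the old
# value and the 90 s row jumps a full minute's worth.
rows = [(0, 10.0), (30, 10.5), (60, 10.5), (90, 11.5), (120, 12.0)]
conn.executemany("INSERT INTO process_log VALUES (?, ?)", rows)

# Flag rows whose run_time failed to advance from the previous row.
stalled = conn.execute("""
    SELECT cur.t_stamp
    FROM process_log AS cur
    JOIN process_log AS prev ON prev.t_stamp = cur.t_stamp - 30
    WHERE cur.run_time = prev.run_time
""").fetchall()

print(stalled)  # [(60,)]
```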

My biggest concern is that if I cannot trust it to keep accurate records with a simple clock, I cannot trust it to record accurate readings at all. I do not think this is a hardware issue, because some of the other readings, recorded to seven decimal places, also fail to change from row to row, which is highly unlikely.

Thanks in advance,
Kurt

I’m not sure I completely understand your setup here, but what rate is the scan class you’ve got these tags in? Are you using SQLTags history? What is the historical rate? What creates the row?

Thank you for your reply,

We are using the Ignition SQLTags historian to create rows in a SQL Server database. There are three different scan classes these values are written with: the (default) direct scan class @ 30000 ms has 8818 total subscribed items, the (default) leased fast scan class @ 10000 ms has 318, and the (default) leased slow scan class @ 30000 ms has 19.

The (default) scan class @ 30000 ms is the one we are using to create these data-logging entries that show the anomaly: it writes the row, but not accurately. Is 8818 too large a number of items to handle? I am rather certain we are only actively using a third of those tags to write to our database.

Thanks for your help,
Kurt

Hi Kurt,

Historical scan classes must be slower than the OPC scan class of the values or you get exactly what you see: snapshots with values one cycle old. The issue is that OPC can’t guarantee the arrival of fresh data at precisely the scan class rate as it depends on the network and the PLC response times. A slightly delayed OPC response could arrive just after the historical snapshot, where it normally arrives before.
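As a toy illustration of that race (the timings and values are made up, not Ignition internals), watch what one late subscription arrival does to a historian that snapshots on exact 30-second boundaries:

```python
# OPC updates nominally arrive every 30 s, but network/PLC latency
# jitters the arrival. The historian records whatever value it holds at
# each exact 30 s boundary, so one late arrival makes a row repeat the
# old value and the next row jump by two cycles.

snapshots = []
latest = 0.0  # last value delivered by the subscription
for cycle in range(5):
    fresh = (cycle + 1) * 0.5             # run timer, tenths of a minute
    delay = 31.0 if cycle == 2 else 29.0  # cycle 2's update is late
    arrival_time = cycle * 30 + delay
    snapshot_time = (cycle + 1) * 30      # historian fires on the boundary
    if arrival_time <= snapshot_time:
        latest = fresh                    # update beat the snapshot
        snapshots.append(latest)
    else:
        snapshots.append(latest)          # stale value gets recorded
        latest = fresh                    # fresh one lands just too late

print(snapshots)  # [0.5, 1.0, 1.0, 2.0, 2.5]
```

Note the repeated 1.0 followed by a jump straight to 2.0 – exactly the "0 change, then a full minute" pattern in your log.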

As for your driver load – it’s almost trivial. I’ve successfully run thousands of tags at half-second intervals from Logix processors. I suggest you set your default scan class to 15 or 20 seconds. Note though that the values recorded will be the most recent arrival.

If you want your PLC data to be snapshotted just before the DB write, use a SQL Bridge transaction group with the OPC items themselves (not the SQLtags), and use the “Read” OPC data mode in the group’s advanced options.

Thank you for your quick response Phil,

We are using transaction groups to pre-load data before sending it to the SQL server; I will have to look more into using the OPC items themselves. I am pretty sure that we are using tags so we can rewrite the names of the columns before they are sent to SQL.

Would simply setting the default scan class to a faster rate solve this problem? As I understand it, the program is reading and writing at the same time in certain instances? That would explain why it happens at random times and not on a set interval.

Thanks,
Kurt

[quote=“krausch123456789”]I am pretty sure that we are using tags so we can rewrite the names of the columns before they are sent to SQL.[/quote]
Each item you record in your transaction group is given an explicit column name – no need to stage through another table. If you’re dragging from the OPC browser, it’ll suggest a target name, but you don’t have to keep it.

[quote=“krausch123456789”]Would simply setting the default scan class to a faster rate solve this problem? As I understand it, the program is reading and writing at the same time in certain instances? That would explain why it happens at random times and not on a set interval.[/quote]
A faster scan class will ensure the data is fresher before the write, but it won’t synchronize the data snapshot with the recording.

Thanks Phil,

Is there a way to keep the traffic on the PLC constant at a 30-second read time and also write to the SQL table on a 30-second interval without a missed row? Would changing the OPC data mode from Subscribe to Read do this? I am running version 7.6.4.

I would change the read time (scan class), but I think the data would show odd values if I changed it to something that does not divide evenly into 30, and if I changed it to a scan class that does divide into 30 there is still a chance the read/write overlap would happen again.

A slower write time would not be a big deal at all. I would rather have the write time be affected than the data it is writing be off.

Thanks again,
Kurt

[quote=“krausch123456789”]Is there a way to keep the traffic on the PLC constant at a 30-second read time and also write to the SQL table on a 30-second interval without a missed row? Would changing the OPC data mode from Subscribe to Read do this? I am running version 7.6.4.[/quote]
If the OPC items aren’t referenced anywhere else, I presume so. If they are also assigned to SQLTags, I’m not sure how the transaction group and the SQLtag scan class will interact. Sounds like something to test :slight_smile:

[quote=“krausch123456789”]I would change the read time (scan class), but I think the data would show odd values if I changed it to something that does not divide evenly into 30, and if I changed it to a scan class that does divide into 30 there is still a chance the read/write overlap would happen again.[/quote]
Any time you rely on subscribed mode in a scan class, the scan class’s period (plus a little) will be your uncertainty in any given read value. So if the data really has to be snapshotted together at 30-second intervals, I don’t see an alternative to OPC data mode == Read. The key here is “snapshotted together”: only a transaction group in Read mode will do this.
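To make “snapshotted together” concrete, here is a toy contrast of the two modes (values and timings are made up, not Ignition internals): in subscribe mode each item’s last update can come from a different scan cycle, so one row may mix cycles, while read mode polls every item at execution time.

```python
# Two items logged together: the run timer and some other analog value.

def run_timer(t):
    return round(t / 60.0, 1)      # tenths of a minute

def temperature(t):
    return 100.0 + t / 100.0       # some other logged analog value

# At the 90 s group execution, the run timer's latest subscription update
# is still the 60 s one (it arrived late); the temperature's is current.
last_update = {"run_timer": 60.0, "temperature": 90.0}

subscribe_row = {
    "run_timer": run_timer(last_update["run_timer"]),
    "temperature": temperature(last_update["temperature"]),
}

read_row = {                                   # Read mode: every item is
    "run_timer": run_timer(90.0),              # sampled at the same
    "temperature": temperature(90.0),          # instant
}

print(subscribe_row)  # mixes the 60 s and 90 s cycles in one row
print(read_row)       # coherent 90 s snapshot
```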

Thanks Phil,

I definitely have something to try now. I really appreciate your help.

Kurt