Fast tags seem to be interfering with default tag group tags

I have several tags that the customer requires to log at 20 ms. We did an update from 8.1 to 8.3.4 on Saturday/Sunday and everything looked OK, but around 4 pm on Monday it seems we lost data. As a band-aid fix for the tags that were no longer logging, I had to move them from the default tag group to the faster tag group; otherwise they reported their values as Bad_Failure.

How do I get my tags to stop reporting as Bad_Failure when they were just fine before? Is a 20 ms logging rate unrealistic for tags?

Depends on the driver and total tag count, but typically that is unreasonably fast. (That is 50 samples per second, so it will load down a driver 50x compared to the 1-second default.)

I would guess that you really weren't achieving the 20ms pace in v8.1.

Undoubtedly. I use a scripted transaction and a PLC ring buffer to achieve that rate of data collection.

Regarding this, what is a realistic tag rate to achieve? And how would I implement a PLC ring buffer on my PLC that is collecting this data and pushing it to Ignition to be logged via a transaction group?

I don't have generic implementation advice. The clients I've helped with ring buffers got it as part of large $$ projects.

Is this for power systems frequency logging, by chance?

Not currently; it is for pressure (psi) and load cell (load) recording. However, I will be implementing power system frequency logging and other power monitoring later on.

Grand. If you don't have a PQM on site already: I have achieved 10 ms logging of events with Janitza hardware, which does the heavy lifting and FTPs me a ~80 KB file per 30 seconds' worth of data.

How would I go about verifying how fast the tags are actually logging so I can use that information to troubleshoot further?

You mentioned transaction groups a few replies back. If that is your current architecture, then check the DB t_stamp column for the relevant tags.
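One hedged way to do that check, assuming you can pull the t_stamp column (epoch milliseconds) into a script: compute the deltas between consecutive timestamps and look at the average. The sample values below are illustrative, not from any real table.

```python
# Sketch: estimate the actual logging interval from t_stamp values
# pulled out of the transaction group's table.

def logging_intervals_ms(t_stamps):
    """Return the deltas (ms) between consecutive timestamps."""
    return [b - a for a, b in zip(t_stamps, t_stamps[1:])]

stamps = [1000, 1100, 1199, 1301, 1400]  # hypothetical t_stamp values
deltas = logging_intervals_ms(stamps)
avg = sum(deltas) / float(len(deltas))
print(avg)  # 100.0 -> these rows landed ~100 ms apart, not 20 ms
```

If the average interval is well above the tag group's configured rate, the group is not keeping pace.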

Thank you for the recommendation. I found that the tags were only logging roughly every 100 ms instead of 20 ms. After talking with my engineer about it, I changed the tags to log every 100 ms, rebooted, and everything seems back to normal now.

Exact implementation details will vary based on the PLC brand. I don't actually use transaction groups; instead, I run a script on a gateway timer event that uses system.opc.readValues() to pull the data from the buffer.

In general, a ring buffer is just an array in which you track the head and tail of the data. Make sure the array has enough elements to hold the amount of data that accumulates between reads.

For instance, say you want Ignition to scan at a 5-second rate and you want to collect a sample every 20 ms: you would need an array that holds more than 250 elements, and you'd probably want to double that. Realistically, I would probably do a 100-element array and collect the data every 1 second.
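A minimal pure-Python sketch of the head/tail bookkeeping described above, under the assumption that in a real system the array and write pointer live in PLC memory and the gateway script only reads them; the class and names here are illustrative, not anyone's actual implementation:

```python
# Ring-buffer sketch. The PLC writes samples and advances the head;
# the Ignition-side script drains everything between tail and head.
# Size the array larger than (scan interval / sample interval) so the
# writer never laps the reader.

class RingBuffer:
    def __init__(self, size):
        self.size = size        # e.g. 100 elements for 1 s of 20 ms samples
        self.data = [0] * size
        self.head = 0           # next write position (PLC side)
        self.tail = 0           # next read position (reader side)

    def write(self, value):
        """PLC side: store a sample and advance the head, wrapping around."""
        self.data[self.head] = value
        self.head = (self.head + 1) % self.size

    def drain(self):
        """Reader side: return all samples written since the last drain."""
        out = []
        while self.tail != self.head:
            out.append(self.data[self.tail])
            self.tail = (self.tail + 1) % self.size
        return out

buf = RingBuffer(8)
for v in [10, 20, 30]:
    buf.write(v)
print(buf.drain())  # [10, 20, 30]
print(buf.drain())  # [] -- nothing new since the last drain
```

In the PLC-plus-Ignition version, the drain step would be the gateway timer script reading the array and head pointer over OPC, then computing which elements are new.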

If you need historical data, then once you've collected it you can use system.historian.storeDataPoints() to store it, assuming you're using the Ignition historian.


We log everything to an MS SQL 2022 database.

If you are running custom SQL tables (i.e., not using the Ignition historian), then you would just construct a SQL insert prep statement sized to the number of records pulled, and then execute the insert via system.db.runPrepUpdate.
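A hedged sketch of constructing that multi-row prepared insert; the table and column names are made up for illustration, and in Ignition you would hand the resulting query and args to system.db.runPrepUpdate rather than printing them:

```python
# Build a parameterized multi-row INSERT for however many samples were
# drained from the buffer this scan. Table/column names are hypothetical.

def build_insert(table, columns, rows):
    """Return (query, flat_args) for a multi-row prepared INSERT."""
    row_ph = "(" + ", ".join(["?"] * len(columns)) + ")"
    query = "INSERT INTO %s (%s) VALUES %s" % (
        table, ", ".join(columns), ", ".join([row_ph] * len(rows)))
    args = [v for row in rows for v in row]  # flatten for the prep call
    return query, args

rows = [(1700000000000, 12.5), (1700000000020, 12.7)]  # (t_stamp, psi)
q, args = build_insert("pressure_log", ["t_stamp", "psi"], rows)
print(q)     # INSERT INTO pressure_log (t_stamp, psi) VALUES (?, ?), (?, ?)
print(args)  # [1700000000000, 12.5, 1700000000020, 12.7]
# In Ignition: system.db.runPrepUpdate(q, args)
```

Building one statement per scan (rather than one insert per sample) keeps the round trips down when the buffer hands you 50+ rows at a time.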
