I have several tags that need to log at 20ms per the customer. We did an update from 8.1 to 8.3.4 on Saturday/Sunday and everything looked OK. At 4pm on Monday it seems we lost data. As a "band-aid fix" for the tags that were no longer logging, I had to move them from the default tag group to the faster tag group; otherwise they would report their values as Bad_Failure.
How do I get my tags to stop reporting as Bad_Failure when they were just fine before? Is a 20ms logging rate unrealistic for tags?
Depends on the driver and total tag count, but typically that is unreasonably fast. (That is 50 samples per second, so it will load down a driver 50x compared to the 1-second default.)
I would guess that you really weren't achieving the 20ms pace in v8.1.
Regarding this, what is a realistic tag rate to achieve? And how would I implement a ring buffer on my PLC that collects this data and pushes it to Ignition to be logged via a transaction group?
Not currently; it is for pressure (psi) and load cell (load) recording. However, I will be implementing power system frequency logging and other power monitoring later on.
Grand. If you don't already have a PQM on site: I have achieved 10ms logging of events with Janitza hardware, which does the heavy lifting and FTPs me a ~80KB file per 30 seconds' worth of data.
Thank you for the recommendation. I found that the tags were only logging roughly every 100ms instead of 20ms. After talking with my engineer about it, I changed the tags to log every 100ms, rebooted, and everything seems back to normal now.
Exact implementation details will vary based on the PLC brand. I don't actually use transaction groups; instead, I run a script on a gateway timer event that uses system.opc.readValues() to pull the data from the buffer.
In general, a ring buffer is just an array where you track the head and tail of the data. Make sure the array has enough elements to hold the amount of data you want.
For instance, say you want Ignition to scan at a 5-second rate and collect a sample every 20ms: you would need an array that holds more than 250 elements, and you'd probably want to double that. Realistically, I would probably do a 100-element array and collect the data every 1 second.
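The head/tail bookkeeping described above can be sketched in plain Python (the same logic works in Jython inside a gateway timer script). Everything here is illustrative, not an Ignition API: in practice the array contents and the PLC's head index would come from system.opc.readValues() calls against your PLC tags, and the tail would be remembered between scans.

```python
# Hypothetical sketch of draining a PLC ring buffer from the Ignition side.
# `head` is the PLC's next-write index; `tail` is the next index Ignition
# has not yet consumed. Both wrap around the end of the fixed-size array.

def drain_ring_buffer(buffer, head, tail):
    """Return (samples, new_tail): all samples written since the last scan."""
    samples = []
    while tail != head:
        samples.append(buffer[tail])
        tail = (tail + 1) % len(buffer)
    return samples, tail

# Example: a 10-element buffer where the PLC has written slots 8, 9, 0, 1
# since our last read (tail=8) and its head now sits at 2.
buf = list(range(100, 110))
samples, new_tail = drain_ring_buffer(buf, head=2, tail=8)
```

Note that this sketch does not detect overruns: if the PLC laps Ignition before the next scan, old samples are silently overwritten, which is why the sizing advice above doubles the expected per-scan sample count.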
If you need historical data, then once you've collected it you can use system.historian.storeDataPoints() to store it, if you're using the Ignition historian.
If you are using custom SQL tables rather than the Ignition historian, then you would just construct a SQL insert prepared statement sized to the number of records pulled, and execute the insert via system.db.runPrepUpdate().
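One way that insert construction might look, as a sketch: the table name `pressure_log`, its columns, and the datasource name are hypothetical placeholders, and only the final (commented-out) line is an actual Ignition call. The query-building part is plain string assembly, so it runs anywhere.

```python
# Build a multi-row prepared INSERT sized to however many samples came out
# of the ring buffer on this scan. Table/column names are hypothetical.

def build_insert(table, columns, rows):
    """Return (query, flat_args) for a multi-row prepared INSERT statement."""
    row_placeholder = "(" + ", ".join("?" for _ in columns) + ")"
    query = "INSERT INTO %s (%s) VALUES %s" % (
        table,
        ", ".join(columns),
        ", ".join(row_placeholder for _ in rows),
    )
    # Prep statements take a flat argument list, so flatten the rows.
    args = [value for row in rows for value in row]
    return query, args

# In the gateway timer script, after draining the buffer:
# rows = [(t_stamp, psi), ...]  # one tuple per sample pulled this scan
# query, args = build_insert("pressure_log", ["t_stamp", "value"], rows)
# system.db.runPrepUpdate(query, args, "MyDatasource")
```

Batching all of a scan's samples into one insert like this keeps the database round-trips at one per scan rather than one per 20ms sample.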