My issue started earlier this month and is still persisting. A report template I created a couple of months ago is now missing large chunks of data daily. It varies by which tag is having its history logged, but for the most part a great deal of the data is missing and will not show up on the daily report I email out to the customer.
Here is what support and I initially worked through:
- Showed the support person the overall functionality of XLReporter, since that is the platform being used for trending and logging values into my tables. I also pointed out that some of the same tags are repeated multiple times within my summary values table so XLReporter can run different calculations (max, min, and average) on a single tag.
- Verified that each tag within the Ignition Designer has history enabled. So far all of them are set to "on change," with varying max and min sample times depending on the tag. I was told that having them all set to an "Auto" deadband style, as opposed to something like "Digital," could be causing the issue.
- Checked the Query Browser for any recent history logging. One tag had not stored anything since June 16th (sounds about right…), and the partition size had been changed to one year.
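For anyone who wants to reproduce that last check outside the Query Browser, here is a rough sketch of the kind of SQL involved, run against an in-memory SQLite database standing in for the real historian. The table and column names (`sqlth_te` for tag metadata, `sqlt_data_<driver>_<partition>` for samples) follow Ignition's default historian schema, but the partition name, tag paths, and timestamps below are made up for illustration.

```python
import sqlite3
import time

# Stand-in for the historian database; in practice you would run the
# SELECT below against the real SQL server that Ignition logs to.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# sqlth_te = tag metadata; sqlt_data_1_2025_01 = one sample partition
# (the partition name is illustrative).
cur.execute("CREATE TABLE sqlth_te (id INTEGER PRIMARY KEY, tagpath TEXT)")
cur.execute("CREATE TABLE sqlt_data_1_2025_01 "
            "(tagid INTEGER, floatvalue REAL, t_stamp INTEGER)")
cur.executemany("INSERT INTO sqlth_te VALUES (?, ?)",
                [(1, "plant/flow_total"), (2, "plant/pid_output")])
now_ms = int(time.time() * 1000)
day_ms = 24 * 60 * 60 * 1000
cur.executemany("INSERT INTO sqlt_data_1_2025_01 VALUES (?, ?, ?)",
                [(1, 10.5, now_ms - 30 * day_ms),  # stale: nothing stored for ~30 days
                 (2, 99.0, now_ms - day_ms)])      # healthy: stored yesterday
# The actual check: the latest stored timestamp per tag, so stale tags stand out.
cur.execute("""
    SELECT te.tagpath, MAX(d.t_stamp) AS last_stored
    FROM sqlth_te te
    JOIN sqlt_data_1_2025_01 d ON d.tagid = te.id
    GROUP BY te.tagpath
""")
age_days = {path: (now_ms - last) / day_ms for path, last in cur.fetchall()}
for path in sorted(age_days):
    print("%s: last stored %.0f day(s) ago" % (path, age_days[path]))
```

Any tag whose latest `t_stamp` is days old while its neighbors are current is a strong hint the problem is on the storage side rather than in the report template.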
They decided to call back in the morning, and in the meantime I found something in the Ignition Gateway web interface that raised some red flags: in the Overview and Performance tabs, the database appeared to be approaching full capacity. Support then realized the earlier ticket they had created was related to this new one, because after the local cache size was increased, the memory and CPU usage immediately went down.
They also attempted to create a new schema for further testing, but I did not have permission to do so on that specific PC.
Here are some things I was told to try that could resolve the issue:
- Regenerate the data cache by completely stopping the Ignition service and renaming the SQL folder within the Ignition/data/datacache folder, so the gateway rebuilds it and starts logging data again.
- Create a new database connection. The downside is that this generates a separate store-and-forward engine for that connection, and I would not be able to access previous tag history without altering the tables.
- Increase the max records for the Store & Forward engine… except that, depending on how fast it fills up, this could just be a temporary fix.
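On that last point, a quick back-of-the-envelope calculation makes the "temporary fix" concern concrete: a Store & Forward record quota only buys a bounded window of time at a given logging rate before data starts being dropped. The figures below are hypothetical, a sketch only.

```python
# How long a Store & Forward max-records quota lasts once the database
# stops accepting data, at a given overall logging rate.
def hours_until_full(max_records, records_per_second):
    return max_records / float(records_per_second) / 3600.0

# Hypothetical figures: a 1,000,000-record quota absorbing ~50 records/s
# of "on change" traffic survives roughly 5.6 hours.
print("%.1f hours" % hours_until_full(1_000_000, 50))
```

So unless the underlying database capacity problem is fixed, raising the quota mostly just delays the same failure.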
So my question is: how can we prevent this from happening? It has happened once before, and I was unable to do much because the customer has direct access to the SQL server. I'm pretty sure they just increased the size of the local cache at that time, and now we have run into the issue again.
Does anyone have any other tips for resolving this? There is only so much I can do in this case, and I want to test a few methods to see whether I've been going about this the right way.
Here is how the table looks currently. Obviously, the whole table should be logging data with the exception of the system total values for the PIDs.