Totalizers, historian and reporting

Sorry… this is a bit long.

Many of our customers have devices that totalize data in various ways. Some totalize in the device itself and we read the value into the PLC over comms, others send a totalizing pulse to the PLC and the PLC simply increments a value, and still others are calculated roughly in the PLC to generate the totalizer value.

Many of these values represent a production value for the end user, and they reset them manually either on the device or from Ignition via a pushbutton etc… So at any point, the total value can be reset back to 0 via operator interaction, or possibly a PLC download.

The hurdle we are trying to jump over is how to properly historize this data so we can easily report a total across a time range for a specific value. One of the benefits we try to bring to our customers is a standard suite of reports for a market sector we know well, and we back that set of reports with database configuration for that plant. That way the reports are standard, but the way the data is grouped and which sensors/totalizers are included all sit in a database.

For the totalization, we have tried the following with varying levels of success.

  1. Log the raw value of the totalizer into the historian, then use the database query language and window aggregates to calculate the differences between consecutive data points and sum them for reporting. This is extremely messy with the native Ignition historian.

  2. Log the raw value of the totalizer into the historian and, when the data needs to be reported on, use a custom tag history aggregate to calculate the differences between the values and sum up the total. The tag aggregate method is a little obfuscated and can be difficult to troubleshoot.

  3. Use a gateway timer script that looks at all the current totalizer values, compares them to the previous values, then uses system.tag.storeTagHistory to log the difference between the previous and current. When pulling the data back out in a report we can use system.tag.queryTagCalculations to sum up the data (see the first sketch after this list).

  4. Log the raw value of the totalizer into the historian, and then when the data needs to be reported on, query the historian using system.tag.queryTagHistory to pull the raw data, then process the dataset with a Python script that calculates the differences between the values and sums them up to get the total (see the second sketch after this list). I’m worried that this method will soak up a bunch of memory if the number of data points gets large, e.g. querying data across an entire year.

  5. This idea is in progress and not tested, but the theory is to calculate the difference between the current and previous values in a tag change script in a UDT, and log that resulting value. This keeps the logic in a UDT so any tag of that instance will just automagically work.

  6. Use transaction groups or a gateway timer script to log the data directly to a dedicated table, so that window functions in the database can calculate the totalization. A transaction group would have to be built more manually; a script could be database backed.
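
To make option 3 concrete, here is a minimal sketch of the gateway timer script. The history provider name ("HistorianDB"), the realtime provider ("default"), the tag paths, and the "_Previous"/"_Delta" naming are assumptions for illustration, not anything we actually run.

```python
# Minimal sketch of option 3 as a gateway timer script (Jython).
# "HistorianDB", the tag paths, and the "_Previous"/"_Delta" memory tags are
# assumptions; adjust to your own project. Not production code.
TOTALIZERS = ["Plant/BeltScale1/Totalizer", "Plant/BeltScale2/Totalizer"]

readPaths = ["[default]" + p for p in TOTALIZERS]
prevPaths = ["[default]" + p + "_Previous" for p in TOTALIZERS]  # memory tags

current = system.tag.readBlocking(readPaths)
previous = system.tag.readBlocking(prevPaths)

histPaths, histValues = [], []
prevWritePaths, prevWriteValues = [], []

for i, base in enumerate(TOTALIZERS):
    cur, prev = current[i], previous[i]
    if not cur.quality.isGood():
        continue  # skip bad reads and keep the old "previous" for next time
    if prev.quality.isGood():
        # A drop below the previous reading is treated as a reset: log 0.
        delta = cur.value - prev.value if cur.value >= prev.value else 0
        histPaths.append(base + "_Delta")
        histValues.append(delta)
    prevWritePaths.append(prevPaths[i])
    prevWriteValues.append(cur.value)

if prevWritePaths:
    system.tag.writeBlocking(prevWritePaths, prevWriteValues)
if histPaths:
    # Store the interval deltas; reports only ever have to sum these.
    system.tag.storeTagHistory("HistorianDB", "default", histPaths, histValues)
```

Pulling a total back out for a report then reduces to a single Sum calculation over the stored deltas:

```python
# Summing the logged deltas for the last 30 days (hypothetical tag path).
end = system.date.now()
start = system.date.addDays(end, -30)
totals = system.tag.queryTagCalculations(
    paths=["[default]Plant/BeltScale1/Totalizer_Delta"],
    calculations=["Sum"],
    startDate=start,
    endDate=end)
```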

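For option 4, the reporting-side processing would look roughly like this. The tag path and the one-year range are only illustrative, and this is exactly the case where the raw dataset can get large before it is reduced.

```python
# Minimal sketch of option 4: pull raw history and reduce it in a script.
# The tag path is hypothetical; returnSize=-1 returns the values as stored.
end = system.date.now()
start = system.date.addDays(end, -365)

raw = system.tag.queryTagHistory(
    paths=["[default]Plant/BeltScale1/Totalizer"],
    startDate=start,
    endDate=end,
    returnSize=-1)

total = 0.0
prev = None
for row in range(raw.getRowCount()):
    value = raw.getValueAt(row, 1)   # column 0 is the timestamp
    if value is None:
        continue                     # missing/bad samples can come back as None
    if prev is not None and value >= prev:
        total += value - prev        # normal accumulation between samples
    # value < prev means the totalizer reset; that interval contributes 0
    prev = value
```
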
Are there any other options that I’m missing? We are trying to get a solution that is easy to implement and allows the easiest data retrieval across date ranges. We also need the solution to tolerate the totalizer resetting back to 0… hence the reason to calculate the difference and then sum up.


In my opinion, there isn’t any good method that can prevent bogus values from a PLC download. If the current accumulated value is 1234 and the user downloads an old copy with a value of 2345, how do you handle that? Perhaps a transaction group?

My personal go-to is to store values in tag history and use the difference over an interval, which looks like your option #2. The method I use does allow for rollover or a reset to 0.

The theory is that we can use the previous value, along with its quality, to determine whether it is valid or not.

Basically, the logic that we have used is as follows:

Check 1
Is the previous quality good and the current quality good? If so, compare the values. If the previous is less than the current, log the difference and store the current as the previous; otherwise log 0 and store 0 back as the previous. Also store the current quality back as the previous quality.

Check 2
If the previous quality is not good but the current quality is good, OR the previous is good but the current is not, then discard the previous and make the current the previous. This is how we detect coming back from a bad state.

Check 3
If both the previous and current qualities are not good, do nothing.
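
As a rough sketch only, those three checks could look something like this in a tag value-change script. The memory tags holding the previous value/quality and the history-enabled Delta tag are assumptions for illustration, not my exact implementation.

```python
# Sketch of checks 1-3 in a tag value-change script (Jython). All tag paths
# here are hypothetical; "Delta" would be the history-enabled tag that gets
# summed for reporting.
prevValPath = "[default]Plant/BeltScale1/PreviousValue"   # memory tag
prevQualPath = "[default]Plant/BeltScale1/PreviousGood"   # Boolean memory tag
deltaPath = "[default]Plant/BeltScale1/Delta"             # history enabled

prevVal, prevGood = [qv.value for qv in
                     system.tag.readBlocking([prevValPath, prevQualPath])]
currVal = currentValue.value
currGood = currentValue.quality.isGood()

if prevGood and currGood:
    # Check 1: both readings are trustworthy.
    if prevVal <= currVal:
        delta, newPrev = currVal - prevVal, currVal
    else:
        # Totalizer dropped (reset or download): log 0 and store 0 back.
        delta, newPrev = 0, 0
    system.tag.writeBlocking(
        [deltaPath, prevValPath, prevQualPath], [delta, newPrev, currGood])
elif prevGood != currGood:
    # Check 2: one side just went bad or just recovered. Discard the stale
    # previous and start tracking from the current reading.
    system.tag.writeBlocking([prevValPath, prevQualPath], [currVal, currGood])
# Check 3: both qualities bad -- do nothing.
```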

For this method the idea is to offload the final processing for reporting onto the SQL database, so that large queries are done in that environment and only the results come back to Ignition. But it does require processing on the tag side…

I’m not certain about other PLC brands because we only use AB. However, what we do is use a GSV to track the AuditValue in the controller. This allows us to detect when a download has occurred and then act accordingly.

You could just set the data value to 0, simulating an operator interaction, which is already handled. Or you could get more sophisticated and load a stored value so that the data value remains where it was.

At the very least this lets us determine if the data change was due to some operator/PLC expected interaction or an unintentional change due to download.

I had actually never thought about that. That is a really good idea. Matter of fact, setting to zero on first scan might be the easiest if you always want to reset to zero.

Yeah, unless you don't want a reset after a power cycle.

The biggest thing is a download for us.

Many of the devices, typically belt scales, just output a pulse; we one-shot a counter, and then it is reset either at a predetermined time or manually. A power outage resetting it back to 0 would be annoying for the operator.

@lrose Can you expound on the AuditValue and how you use it in the PLC? We primarily use ControlLogix as well… however, our solution for this also needs to be flexible enough for other PLCs.

Sure.

You can use a GSV to get the AuditValue attribute from the controller. This value will change (firmware 20+) when specific events occur. Some of those events may change the data value you are interested in and some may not. For instance, it can be configured to change when an online change is finalized. The AuditValue is a DINT.

We monitor this from the SCADA system, and on a change we first verify whether the data value has changed from what is expected (usually a historical value). If it is different, you can take whatever step is appropriate for your application.

If we are only interested in the increment, then we will use any change of this value at the device level to set the count to zero. This is then handled in the logic, knowing that the value will only ever increment and a jump to zero is not a true change. It should be said that in this case we handle the rollover and force it to 1 instead of 0.
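
On the Ignition side, the monitoring can be a value-change script on a tag that mirrors the AuditValue (read into a controller tag via the GSV). The sketch below is only an illustration; the tag paths, the LastKnownTotal tag, and the choice to restore the stored value are all assumptions.

```python
# Sketch of a value-change script on the mirrored AuditValue tag. The paths
# and the restore behaviour are assumptions for illustration.
if not initialChange:
    totPath = "[default]Plant/BeltScale1/Totalizer"
    lastKnownPath = "[default]Plant/BeltScale1/LastKnownTotal"  # hypothetical

    current, expected = [qv.value for qv in
                         system.tag.readBlocking([totPath, lastKnownPath])]
    if current != expected:
        # The AuditValue changed and the totalizer no longer matches what we
        # last saw: most likely a download. Either write 0 (the operator-reset
        # path that is already handled) or restore the stored value so the
        # accumulation carries on from where it was.
        system.tag.writeBlocking([totPath], [expected])
```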

Information around this can be found here, in Chapter 4.

I can’t speak to if other PLCs have a similar function.

Thanks I appreciate your help.

I’m thinking that in the long run, leveraging this together with my option 3 or 5 above will result in the most reliable logging of the data. I just wish there was an easier way to deal with this type of data inside Ignition without all of this. Maybe it is time to learn Java and create a module for this… sigh.
