I don’t know how to log tag data continuously when the value of the tag doesn’t change (remains constant). When the value of the tag changes, the data gets logged, but how do I log tag data if the value remains constant?
In the History settings of the tag you can set the Max Time Between Records. This will force a write to the historian even if the value hasn’t changed.
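To make the interaction between the Historical Deadband and Max Time Between Records concrete, here is a toy Python model of the decision made at each evaluation. The class and names are made up for illustration; this is not Ignition’s actual implementation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class HistorianSim:
    """Toy model of deadband logging with a Max Time Between Records
    override. Made-up names; not Ignition's actual code."""
    deadband: float
    max_time_ms: int
    last_value: Optional[float] = None
    last_logged_ms: Optional[int] = None

    def evaluate(self, now_ms, value):
        """Return True if this evaluation would write a record."""
        if self.last_logged_ms is None:
            log = True                                  # first sample always logs
        elif abs(value - self.last_value) > self.deadband:
            log = True                                  # change exceeded the deadband
        elif now_ms - self.last_logged_ms >= self.max_time_ms:
            log = True                                  # flat value, but max time elapsed
        else:
            log = False
        if log:
            self.last_value, self.last_logged_ms = value, now_ms
        return log
```

With a constant value, the max-time branch still produces a record every `max_time_ms`; without it, a flat tag would never log again after its first sample.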
Use a transaction group if you have more than one tag you are looking to monitor. You can set a timed interval at which it will run.
I have a similar question, but I want to log only based on a time interval, irrelevant of the tag value change.
I’m assuming that I can set the Historical Deadband to some ginormous number and set the Max Time to my interval value.
Would this work?
I don’t really want to go down the transaction group road, as this is a rework of existing content and it would mean setting up new SQL tables, writing new queries to get the data from those tables, setting up the TGs, and modifying considerable screen content. That is much more work than we have the time/money for.
I have done a little testing.
What I did was set up an Expression tag whose expression is toMillis(now(100)). The idea is that the tag’s value increases about every 100 ms, so there should be numerous tag value changes inside the 1 second interval that I want to log. Historical logging is set up with a Historical Deadband of 5000 and a Max Time Between Records of 1000 ms.
On the Easy Chart where the data is being displayed, the Aggregation Mode is set to Time Weighted Average, in hopes of getting only one value, averaged over the 1 second interval.
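For reference, a time-weighted average weights each stored value by how long it was in effect over the window, rather than treating every sample equally. A minimal sketch of the idea (my own simplification, not Ignition’s implementation):

```python
def time_weighted_average(samples, start_ms, end_ms):
    """samples: sorted list of (timestamp_ms, value), with a sample at or
    before start_ms. Each value is weighted by how long it was held."""
    total = 0.0
    for i, (t, v) in enumerate(samples):
        t0 = max(t, start_ms)                 # value takes effect at its timestamp
        t1 = min(samples[i + 1][0], end_ms) if i + 1 < len(samples) else end_ms
        if t1 > t0:
            total += v * (t1 - t0)            # value * duration it was in effect
    return total / (end_ms - start_ms)
```

So a tag that sat at 0 for half the window and 10 for the other half averages to 5, regardless of how many raw samples were stored.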
When the chart’s data is saved to XLS, what I actually see is an apparent interval of about 1400 ms.
I am attaching an image showing the configuration and the resulting data.
The data that we actually want to log, visually represent, and export to XLS is flow data, where the sensors are set up to provide the data in gallons per minute. We want to log values in gallons per second via scaling; with the data logged every second, each record represents a value in gallons. The customer wants to export this to Excel and, using some analysis/comparisons there with other data, use it for their intent.
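The unit math described above is just a scaling step: (gallons per minute) ÷ 60 gives gallons per second, and a 1 second logging interval makes each record a gallon count. A trivial helper to illustrate (a hypothetical function, not part of the project):

```python
def gpm_to_gallons_per_interval(gpm, interval_s):
    """Scale a gallons-per-minute reading to gallons delivered over an
    interval: (gpm / 60) gallons per second, times the interval length."""
    return gpm / 60.0 * interval_s
```

At 120 GPM, each 1 second record is 2 gallons; summing 60 such records recovers the per-minute total.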
This is all predicated on us being able to log these values every second and present that data on 1 second intervals.
So my question is, can this be done with the built in tag historical logging?
If so, can someone provide me with a “how to” on that?
Missing in your screenshot are the history settings on the chart itself - not inside the pen, but in the chart’s properties:
It is configured just like the screen cap you provided.
I assume these are the defaults. I did not setup this page originally, but I doubt these parameters have been changed, especially since they match your values.
Those are the defaults, yes - and also likely the source of your problem. The ‘Fixed’ with a count of 300 means that no matter the timespan visible on the chart, the history system will only return 300 points to actually display on the chart. You can try changing the aggregation mode to ‘natural’ (meaning return one data point for every [possibly theoretical] scan class execution) - so one data point per one second, or ‘raw’ (meaning return every actual data point in the database for the chart’s timespan).
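Roughly, ‘Fixed’ with a count of 300 means the visible timespan is carved into 300 windows and one point comes back per window, no matter how many raw records exist. A toy sketch of that idea (not the actual query engine, which does real aggregation per window; this just keeps the last raw sample in each):

```python
def fixed_resolution(samples, start_ms, end_ms, count=300):
    """Toy 'fixed' return size: split the timespan into `count` equal
    windows and keep one (the last) raw sample per window."""
    width = (end_ms - start_ms) / count
    buckets = {}
    for t, v in samples:
        if start_ms <= t < end_ms:
            buckets[int((t - start_ms) // width)] = (t, v)  # last sample wins
    return [buckets[k] for k in sorted(buckets)]
```

An hour of 1 second data (3600 records) collapses to 300 points this way, which is why the chart can’t show 1 second resolution in ‘fixed’ mode over longer spans.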
I tested with both the Natural and Raw Resolution modes.
This is how the data presented once exported from the Easy Chart to Excel:
In both cases it looks like the chart is providing data samples every 10 seconds, not 1 second (or 1000 ms).
So I’m still at an impasse. Ideas?
Your tag is still using the ‘Default Historical’ scan class - a 10 second rate. This is admittedly confusing, but setting a ‘max time between records’ does not force the historian to ‘backfill’ records - it only ensures that upon a historical evaluation, even if a value would not otherwise be logged at that point in time, one will be logged if the ‘max time between records’ has elapsed.
Said another way - the most important setting in the history configuration is the ‘historical scan class’ - because the historical scan class is what drives the evaluation of all the other settings.
For what you are trying to do, you should use a 1s historical scan class, and set the ‘max time between records’ to 1 execution. However, at that point you’re barely using Ignition’s historian, and may want to look at transaction groups for your purposes - the historian excels at compressing linear trends, while SQLbridge/transaction groups are perfect for regulatory compliance/fixed time schedule exports. Transaction groups also make much easier database tables for external tools/regular humans to use.
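The key point above - that the historical scan class gates every other setting - can be shown with a toy model (made-up names, not Ignition code) where logging can only happen on a scan-class tick and ‘max time between records’ is counted in executions:

```python
def logged_timestamps(scan_period_ms, max_execs_between, duration_ms,
                      value_changed=lambda t: False):
    """Toy model: the historian only *evaluates* on scan-class ticks, so no
    setting can produce records faster than the scan class period."""
    out, since_last = [], max_execs_between   # force a log on the first tick
    for t in range(0, duration_ms + 1, scan_period_ms):
        since_last += 1
        if value_changed(t) or since_last >= max_execs_between:
            out.append(t)
            since_last = 0
    return out
```

With a 1 s scan class and a max of 1 execution you get a record every second; leave the tag on the default 10 s scan class and the records can never be closer than 10 s apart, whatever the other settings say.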
Yep, that gives me the results I am looking for.
As previously noted, I’d prefer to stay away from changing the logging scheme to use Transaction Groups, as this involves existing content. Using TGs would require new tables in SQL, for which new queries would have to be written. Additionally, there are other tag quantities being logged that are included in the chart; using TGs would require some way to marry those to data sources on screen, and the screen rework, even if all the tags were logged via TGs, would require more effort than there is time or money (essentially none) to perform. Adding a new historical scan class and adjusting the logging and chart configuration will take a fraction of the time, and it does seem to provide the needed functionality.
I’m trying to change the existing content now to log every minute (it turns out that the data coming in is gallons per minute, and the customer is OK with a granularity of 1 minute). To that end I created a 1 minute historical scan class, configured the logging for the flow tags to use this scan class, set the deadband to 9999 (I don’t know if this is necessary), and set the max time value to 1 minute.
I set the resolution mode of the chart to natural.
There are other trends in this chart for tags that use the default, 10 second, historical scan class.
When I export the chart data to Excel, all of the values now show up in a single tab of the spreadsheet (before, in my testing with 1 sec historical data, I was getting two tabs: one for the 10 sec data and one for the 1 sec data), so all the data is shown at 10 second intervals, as opposed to some at 10 seconds and the flow data at 60 seconds.
Is my approach correct?
If so, any idea what is going wrong that I’m not getting the resolution on the flow data that I’m expecting?
Here’s a visual of what I’ve got. Note that the table is something I threw on a temporary screen just so I could isolate one of the flow tags and see how it was logging.
That table is configured as follows:
One more point. I noticed that after changing all the tags in the chart to the 1 minute historical scan class, the data for all of them started populating so that the export has 1 minute intervals. I’m confused as to why, when I had some tags at the default (10 sec) and some at 1 sec, the export had 2 datasets, one for the 10 second data and one for the 1 second data. Why does it not behave this way when I have 10 second and 1 minute data logging in the chart?