I have a tag change script that logs tag values to a database. It's set up to run on each tag's tagChange script.
Of course, this is fine for sensor values that are constantly changing, but some tags stay at the same value for long periods and only change occasionally. How can I set up what would effectively be a "maximum sample time", so that I get a database entry at least every, say, 3 seconds?
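For reference, the per-tag script is essentially this (table, column, and datasource names are placeholders, not my real schema):

```python
# Sketch of the current per-tag "Value Changed" event script.
# "tag_log" and "MyDatasource" are placeholder names.
def valueChanged(tag, tagPath, previousValue, currentValue, initialChange, missedEvents):
	if initialChange:
		return  # ignore the initial subscription event at startup
	system.db.runPrepUpdate(
		"INSERT INTO tag_log (tag_path, tag_value, t_stamp) VALUES (?, ?, CURRENT_TIMESTAMP)",
		[str(tagPath), currentValue.value],
		"MyDatasource"
	)
```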
1. Put the script into the project's Gateway Scripts.
2. Call it from the tagChange event.
3. Create a Gateway Events Scheduled script and call it again from there (see the sketch below).
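Roughly like this, as a sketch only; the library script name (`tagLogger`), function, table, and datasource here are placeholders:

```python
# Project library script, e.g. "tagLogger" (all names are placeholders).
def logTagValue(tagPath, db="MyDatasource"):
	"""Read a tag and insert its current value; callable from any gateway event."""
	qv = system.tag.readBlocking([tagPath])[0]
	system.db.runPrepUpdate(
		"INSERT INTO tag_log (tag_path, tag_value, t_stamp) VALUES (?, ?, CURRENT_TIMESTAMP)",
		[tagPath, qv.value],
		db
	)

# Tag "Value Changed" event -- just delegate:
#	if not initialChange:
#		tagLogger.logTagValue(str(tagPath))

# Gateway Events > Scheduled script -- same call, for each tag you log:
#	for path in ["[default]Machine1/Counter", "[default]Machine1/Running"]:
#		tagLogger.logTagValue(path)
```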
I did this for a production system where we wanted to log machine counter values on run/stop transitions but also at shift changes (and the shift patterns were flexible). We scheduled it to run every 15 minutes.
But why every 3 s? It seems excessive for a value that isn't changing. Why not an hour, for example?
Let's see your query. There must be an SQL way of retrieving the last record before the start time, UNION the records in the period of interest, UNION the first record after the end time.
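Something along these lines should work (untested; table/column/datasource names are assumed, the SQL flavor is MySQL/Postgres-ish, and I'm grabbing the nearest row on each side of the window):

```python
def boundedHistory(path, start, end, db="MyDatasource"):
	"""Rows inside [start, end] plus one bracketing row on each side of the window."""
	query = """
		(SELECT tag_path, tag_value, t_stamp FROM tag_log
		  WHERE tag_path = ? AND t_stamp < ?
		  ORDER BY t_stamp DESC LIMIT 1)
		UNION ALL
		(SELECT tag_path, tag_value, t_stamp FROM tag_log
		  WHERE tag_path = ? AND t_stamp >= ? AND t_stamp <= ?)
		UNION ALL
		(SELECT tag_path, tag_value, t_stamp FROM tag_log
		  WHERE tag_path = ? AND t_stamp > ?
		  ORDER BY t_stamp ASC LIMIT 1)
		ORDER BY t_stamp
	"""
	return system.db.runPrepQuery(query, [path, start, path, start, end, path, end], db)
```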
This isn't specific to Vision. Wherever Perspective charts time series that don't come from the historian, the same issue is present: if there's no data for a tag in the charted span, there's no trace.
For historian pens in Analog deadband style, historian queries inject the most recent sampled value as a virtual row, even if not yet recorded. (See the historian's docs for Analog deadband style.)
For historian pens in discrete mode, most of my clients specify a relatively short (one-minute-ish) "max time between samples" to ensure there are chart traces for tight zooms.