Vision Easy Chart - Correctly show a value change

I have some data being trended using an EasyChart in Vision.
The data is set to only log on change; the problem I'm finding is that the trend will only show the logged data points as-is.
For example, if a tag (let's say a tank level) is at 1ft all day, then rises to 2ft around 6pm, the trend will connect a straight line between 1ft and 2ft, making it seem like the tank was gradually being filled throughout the day, like so:

How do I make the chart accurately represent the fact that the tank was at 1ft up until 6pm, like so:

Without knowing how you have your easy chart set up right now, it's going to be wild guesses on what you need to do. How you gather your data also matters. You said you're logging it on change, but what did you set your deadband to before it is allowed to log?

From what you're showing, it looks like it's doing a tag history interpolation. I would assume you're not using a tag history resolution mode of Raw and that allow tag history interpolation is set. With that, it will use the resolution you set to return values. If it only returns a 1 at the start of your time range and a 2 at the end, because only 2 points were returned, then it will interpolate how the change was made, which causes the straight line between the two.

If you have other points in there that show it starting to change, then it would give you what you describe you're looking for. Without knowing more about how the data is logged and how your easy chart is set up, though, it really is a guessing game. If you're only logging in 1 foot increments, then your tag settings would be the issue. If your history has more values logged in the range you're showing, then you need to play with the resolution/mode to give your chart the resolution you're looking for.

Thanks for the reply!

The logging is on-any-change (deadband = 0), with a minimum time of 10 seconds between samples.
The above trends are showing all that’s stored in the database:

Timestamp, Value
2020-10-12 00:00:00, 1
2020-10-12 18:00:00, 2
*This is just test data using a memory tag

I did play around with the “Tag History Resolution Mode”, interpolation, and various aggregates on the pen but couldn’t achieve the desired results.

I do get why the chart is showing a straight line; looking purely at the data points of 1 and 2, a straight line makes sense.
But the extra information that the tag is only logged on change changes the implication of what the values 1 and 2 really mean.

Either set your pen style to ‘digital’ to get a staircase plot, or set a maximum time between samples so that the interpolation periods are sufficiently small.

First option is best for set points, while the second option is best for measurements.

I have tried that, and it works in this very specific scenario, but doesn’t look right when the analog value gradually changes.

I just addressed this in my edit, actually. The line pen just draws a straight line between points. If you want the pen to do something different, you need to add more points in between.

I see, must have replied before the edit.

The “set a maximum time between samples” sounds like what I want, but where do I find that option?
EDIT: Is that referring to the tag’s logging deadband? I don’t want to have excessive duplicate data points in the data log for performance’s sake.

Can I insert my own data point via scripting?

The “set a maximum time between samples” sounds like what I want, but where do I find that option?

It’s the “Max time between samples” property in the tag editor under history.

EDIT: Is that referring to the tag’s logging deadband? I don’t want to have excessive duplicate data points in the data log for performance’s sake.

All rendering between actual logged values is a lie, regardless of whether you do a line or digital plot, so it really comes down to how accurate you want the interpolation to be. More accuracy required means more data points so that there is less interpolation. Less accuracy required means you can reduce the number of points you log.

Can I insert my own data point via scripting?

You could in theory write a script that parses the internal Ignition historian tables looking for value changes, checks what the previous value was and inserts a fake point some pre-defined time prior to the current value. There are a lot of technical challenges to overcome in doing this, but the biggest challenge would probably be explaining to the client or to maintenance how it works and why it’s set up like that. In realtime mode, the chart is going to show it at 2ft for that whole time then suddenly rewrite the last few hours of history at 6pm, so it will degrade trust in the system.

Are you sure logging more data actually causes the performance impact you’re anticipating?

Yes, there are a few thousand tags in the system, and if they generate data points, say, every 10 seconds instead of on change, then each tag would produce 8,640 samples a day, for a total of at least 8,640,000 daily samples, which is a bit excessive. I know the database can handle it, but having 3,153,600,000 yearly samples for only 1,000 tags is too much, especially considering the overhead is just to address a simple data point insertion into a trend.
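As a sanity check on those numbers, here is the arithmetic spelled out (assuming a flat 10-second rate across exactly 1,000 tags):

```python
# Samples generated by fixed-rate logging instead of on-change logging
SECONDS_PER_DAY = 24 * 60 * 60        # 86,400 seconds in a day
SAMPLE_INTERVAL = 10                  # one sample every 10 seconds
TAGS = 1000

samples_per_tag_per_day = SECONDS_PER_DAY // SAMPLE_INTERVAL   # 8,640
daily_total = samples_per_tag_per_day * TAGS                   # 8,640,000
yearly_total = daily_total * 365                               # 3,153,600,000

print(samples_per_tag_per_day, daily_total, yearly_total)
```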

I don’t think trust would be degraded. The trend would show a flat line at 1ft because the tag is realtime and the tank is indeed at 1ft. Then when it goes to 2ft, the line would jump up to the 2ft grid line, indicating that the tank’s level just changed.
If anything, trust would be degraded when going back to look at a historical log of 12am to 6pm and seeing a linear increase in the tank’s level, implying it was slowly filling up, whereas in reality there was a sudden fill-up around 6pm.

Funnily enough, this problem has come up because the client asked why there’s linear interpolation on their tanks during a period of time where the level didn’t change.

I feel like this kind of interpolation should be built into Ignition, as my trends above show a huge difference in what really happened to the tank.
Regardless, I will start digging into what’s needed for the scripting.

Right now you're testing this using test data that you said was put in through a memory tag, which did a sudden jump from 1ft to 2ft. I'm assuming your actual sensor has better resolution? If so, then you will end up with a lot of entries based on your settings. If an analog sensor provides the data, I would bet there would be at least a little variation each time the tag is scanned, which would mean you get an entry every time it scans.

This is where the deadband setting is important: how big of a change do you want before it logs an entry? With the example you gave, if you had your deadband set to 0.1, then you would have had entries at 1.1, 1.2, 1.3, 1.4, … up to 2.0. This would provide the resolution you're looking for. Am I wrong in assuming your sensor will provide more resolution than the memory tag you used for testing?

Sorta, the data is coming from analog sensors, but the resolution won't necessarily create a fine trend, for two reasons:

  1. The tanks can be relatively static for a while and then follow up with a quick discharge. Additionally, the sensor isn't 100% accurate and has a fair amount of noise. To prevent excessively large data logs from the noisy sensor, the deadband is set to approximately 1/4 of a foot, which often results in a flat line from a single sample while the tank is static.
  2. The data is transmitted over a radio system that has large delays in reads, so the samples don't catch fine changes well due to the delays and timing involved with the radio.

This is asking the software to figure out what your system is doing with no data. There isn't a way for it to do that. How would it know if it was a steady increase or a sudden jump with only 2 points to go off of?

Even logging on change, if your system only shows the sudden jump, then the software has no way to know it happened as a jump when you don't have the resolution to tell it that's what happened. It uses the data you provide to take a best guess at how to display it, but with only 2 points, unless the pen is digital, it has to assume there was a slope between them. Without data to show how big that slope is, it has to assume a direct line between the two points.

Because we have the tag configured to only accept samples every minute, it can deduce from the tag's configuration that any change that occurred more than a minute after the previous one was “sudden” and shouldn't get a linear line, whereas any change that occurred within about a minute (with some padding to account for network latency) would be considered gradual and use the standard interpolation.
*This assumes the network is functioning and that the gap of no new values (between 12am and 6pm) is caused by not meeting the sample requirements, as opposed to network loss.

I’ve implemented this in other SCADA systems before, not necessarily with relative ease, but the results work well and have yet to have a client who’s confused by it.

Even if it is providing data at 1/4 of a foot, that should still give more than 2 points. With your second trend, where you showed how you wanted it to go up, I would still expect you to end up with 5 points total, at 1.0, 1.25, 1.5, 1.75, and 2.0. This would still show more of a curve: going from 1.0 to 1.25 you would still have a line directly between the points, but the shot up the rest of the way would be more accurate because of the added points.
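To illustrate, here is a minimal sketch of on-change logging with a digital deadband (a hypothetical illustration, not Ignition's actual historian logic): a reading is logged only when it moves at least the deadband away from the last logged value.

```python
def deadband_log(readings, deadband=0.25):
    """Keep a reading only when it differs from the last logged
    value by at least `deadband` (a simple digital deadband)."""
    logged = []
    last = None
    for value in readings:
        if last is None or abs(value - last) >= deadband:
            logged.append(value)
            last = value
    return logged

# A gradual fill from 1.0 ft to 2.0 ft in 0.05 ft steps is thinned
# down to quarter-foot entries: [1.0, 1.25, 1.5, 1.75, 2.0]
ramp = [1.0 + i * 0.05 for i in range(21)]
print(deadband_log(ramp))
```

With a truly sudden jump, though, the sensor would only ever hand the historian 1.0 and then 2.0, which is exactly the two-point case in question.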

To be honest, this sounds like you want the software to solve a hardware issue, which is unrealistic. If the level transmitter isn't accurate and is affected by a lot of noise, you can filter some of the noise out using deadbands, but the software can't make up for a lack of accuracy in the sensor that provides its data. Data will only ever be as good as the source that provides it.

The delays in data you mention shouldn't be a big issue most of the time. I would assume they are consistent, and as long as everyone understands the delay and correct data is getting there, it shouldn't be a problem. If the delay also causes data to not make it, it can create gaps in the data, causing trends to look incorrect. But gaps in the data are also not something you can expect the software to fill in with 100% accuracy. It can make assumptions, but it can't determine what the value should have been with 100% accuracy.

If you logged it every minute, then yes, it could deduce that. But how does the easy chart know how you're logging it? It looks at the available data and displays it. It tries to interpret how to connect the dots, but it still only uses the data it's provided. It would have to have an entry each minute filling in the gaps to do what you're asking for.

I figured since the EasyChart is part of Ignition and the tag it's rendering is part of Ignition, it could look up the tag's configuration and use the configured sample information. Or have a manual entry of an “interpolation time minimum” or something.

I think this horse has been beaten bloody, I will have to look into the scripting side to get the custom data point inserted.

I could see that turning into a mess. Since it's looking at historical data, it would have to track what your tag configuration was at the time of the entry; otherwise it could misinterpret data if you change your tag configuration.

You can script something, but I would look at the sensor you're using to see if there is anything you can do to get better data from it. Even if it's every 1/4 foot that you see a change, that would still provide better data for most of it.

Fair enough, incorporating the configuration into the pen itself would also suffice.

Regardless of the accuracy of the instrument, we don't want a ton of samples, only significant ones; a tank fluctuating between 8 and 8.25ft all day isn't meaningful and just clutters up the trending and the database.

What you’re basically saying is that you want the easy chart to figure out, for each point: “if the tag was configured this way at this time, then the change can only have occurred between times x and y, so I'm going to treat the period from x to y as uncertain and all other times as certain, then plot the uncertain periods with linear interpolation and the certain ones as a flat line.”

This is a pretty complex way to make a staircase chart look a bit smoother. What if you have your tag group set up to be driven? Or Leased? What if the tag group or tag rate changes? Does the historian then need to log the tag group configuration along with the data so that the easy chart can look backwards and figure out how to act at various different time points. What happens if your deadband type is analog instead of digital?

I understand the complexity of looking up the tag's time-based configuration changes and agree that it's excessive, but I mentioned that just having a static property configured in the trend's pen itself would suffice.
I'm not looking for hyper-precision; just, when looking at an entire day's/month's worth of data, I don't want to see incorrect lines spanning hours or days where the value wasn't changing.

Similar to how I did other systems, I'd imagine the implementation for a historical trend would be as simple as:

  1. Iterate over the data points being added to the trend.
  2. If the time between the current sample and the previous sample is greater than 1 minute (a configurable property for the pen):
     insert a data point at the current sample's timestamp minus 1 minute, using the previous sample's value.
  3. Continue with the existing logic.

Making it real-time would require a little extra work.
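As a rough sketch of those steps (plain Python here; in an Ignition gateway or client script you would presumably build the sample list from a historian query first — the function name and the 1-minute threshold are my own, not an Ignition API):

```python
from datetime import datetime, timedelta

def insert_gap_points(samples, max_gap=timedelta(minutes=1)):
    """Given [(timestamp, value), ...] sorted by time, insert a synthetic
    point carrying the previous value forward whenever the gap to the
    next sample exceeds max_gap, so a chart draws a flat line up to the
    moment of the change instead of a long linear ramp."""
    out = []
    prev = None
    for ts, value in samples:
        if prev is not None and ts - prev[0] > max_gap:
            # Synthetic point: previous value, max_gap before the new sample
            out.append((ts - max_gap, prev[1]))
        out.append((ts, value))
        prev = (ts, value)
    return out

# The thread's example: 1 ft logged at midnight, 2 ft logged at 6 pm
data = [
    (datetime(2020, 10, 12, 0, 0, 0), 1),
    (datetime(2020, 10, 12, 18, 0, 0), 2),
]
print(insert_gap_points(data))
# A synthetic (17:59, 1) point now precedes the (18:00, 2) sample
```

Feeding the filled list to the chart (rather than writing fake rows into the historian tables) sidesteps the trust problem discussed above, since the stored data stays untouched.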