Strange chart behaviour

Hello everyone,

I have had a problem for a few days. Some of my charts and the records of the historian tags in the database don’t make sense at all. To explain my problem, here is an example: tag TT105. If you have a look at the chart, you see that the value is frequently zero, which makes the chart go “up and down”. However, if I look at the chart drawing live, the value is never zero and only appears like this after a few minutes. Moreover, if I look in the database, I see that the value is never zero there either. This problem affects most of my charts except a few (one example below).

You can also find a screenshot of the settings of the tag and of the scan class I use.

(Screenshots: Capture2 and Capture3, showing the tag settings and the scan class settings)

Does anyone see what the problem could be ?

Thank you in advance,

@nagewaza1009,
Can you try something? In the advanced settings for the chart, there should be an option to ‘Disable Tag History Cache’. Checking that will mean that your database and charts have to “work” harder to render data, but it may help keep data appearance accurate. Also, which version of Ignition are you using?

@PGriffith,

Thank you for your answer. I forgot to mention it, but I am using Ignition v7.9.4. Unfortunately, I believe this option only exists on Easy Charts.

Also, everything has been fine for the two years I have been using Ignition (and the problem appeared overnight), so I guess it probably stems from human error (I recently created new scan classes, so maybe I clicked somewhere by mistake)…

Maybe this can be a clue: the charts are all running in realtime mode and only the time-weighted average ones are affected. Changing to closest value seems to solve the problem, but I would prefer to find a solution to the initial problem, as I am also computing averages with queryTagCalculation and I fear that it may affect them as well.
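For context, the scripted averages look roughly like this minimal sketch (the tag path and time window are placeholders, and I am calling the system.tag.queryTagCalculations scripting function):

```python
# Sketch of a scripted time-weighted average; the tag path and time
# range below are placeholders, not my real configuration.
end = system.date.now()
start = system.date.addHours(end, -1)

# "Average" is the time-weighted aggregate, so if the historian marks
# scan class executions as stale, this result could be dragged down
# the same way the charts are.
result = system.tag.queryTagCalculations(
    paths=["[default]Process/TT105"],
    calculations=["Average"],
    startDate=start,
    endDate=end,
)
print result.getValueAt(0, 1)  # column 0 is the path, column 1 the calculation
```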

Moreover, if I display the data in an Easy Chart, I can clearly see that the chart changes from a line to points, as you can see in the screenshot below.

What’s the load factor on your PLCs (check the gateway status section)? Also, in either the tag history query (if you’re using a classic chart and querying via a binding or scripting) or in the easy chart settings, set the ‘validate scan class executions’ flag to false, meaning you don’t want the gateway to validate scan class executions. In normal operation, the historian inserts a record into a table to indicate that a particular scan class’s execution took longer than it should have, which means that data from that scan class may be stale. With default settings, any queries will reference that table and not display data from “stale” scan class executions.
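If you’re querying via scripting, it would look roughly like this (a sketch only; the tag path and time range are placeholders):

```python
# Sketch: querying tag history with scan class validation disabled.
end = system.date.now()
start = system.date.addHours(end, -8)

data = system.tag.queryTagHistory(
    paths=["[default]Process/TT105"],
    startDate=start,
    endDate=end,
    returnSize=300,
    aggregationMode="Average",   # time-weighted average
    validateSCExec=False,        # don't drop data from "stale" scan class executions
)
```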

I have different devices but they all have a load factor between 40% and 55%.

I did that and it makes most of the gaps disappear (and especially the returns to zero). This seems to prove that the problem comes from the scan class itself. I tried changing the stale timeout but it seems to have no effect at all.

Another problem with that solution is that my charts look very blocky now, so quick changes in the values result in an unrealistic display.

@PGriffith,

After further research, I found out that the load factor of one of the devices is more than 300%.

Looking at the executions of the scan classes, I discovered that two scan classes I created recently have an incredibly high number of executions. These are “van 1 pac a prod ecs” and “van 1 pac b prod ecs”. The two others in the list starting with “van1” were created at some point but have since been deleted (supposedly, yet their number of executions is still increasing).

These scan classes are driven scan classes and were used by 10-15 Tags as both “normal” and historical scan classes.

This is probably linked, but I cannot figure out how to completely erase the two previous scan classes and reduce the number of executions for the two others. Do you have any idea?

It looks to me like the scan rate is not what you intended:
For example, your Fast Rate is shown as 1 000 ms. I think the system is interpreting this as 1 ms, ignoring the zeros following the space.
Try entering the Fast Rate as 1000 and the Slow Rate as 10000 (no spaces in the numbers).
If these are temperatures you are monitoring, even 1000 ms is far faster than necessary; in most industrial processes, temperatures don’t change that fast, and a scan rate of 3000 - 5000 ms is usually fast enough.
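To illustrate what I mean (this is not Ignition’s actual parsing code, just a demonstration of how a locale-sensitive parse can drop everything after the space):

```python
# Illustration only: a locale-aware number parse stops at the first
# character it can't consume, so "1 000" can be read as just 1.
from java.text import NumberFormat
from java.util import Locale

nf = NumberFormat.getInstance(Locale.US)
print nf.parse("1000")    # -> 1000
print nf.parse("1 000")   # -> 1 (parsing stops at the space)
```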

@harold.bruulsema,

Thank you for your answer!
Unfortunately, when you validate the scan class with the rate entered as 1000 (without a space), it automatically changes it to 1 000… Also, the default scan class displays its rates with spaces.

However, I think you are absolutely right when saying that the system is interpreting this as 1 ms (actually 10 ms, because I only recently changed it from 10 000 ms to 1 000 ms to run some tests). The only difference between my new scan class and the others is that this one is in Direct Mode.

Sounds like a bug related to your locale. You might want to contact support to confirm.

Perhaps try 999 for now?


Thank you for the tip. 999 works for now, but I will have to find a better solution to be able to scan at 10000 ms later.

Sure. How can I do that?

For now, I deleted the new scan classes and all the tags linked to them. However, the load factor is still extremely high sometimes. I get this kind of log entries:

Any idea what could cause that?

https://support.inductiveautomation.com/
