I've opened a case with support to see if they have any way to make it work. The ignoreBadQuality setting sounds like what I want, but I'm not sure why it doesn't seem to do anything. Maybe it only removes bad values if your date range includes some good ones? If the range has no good values, does it just give you the last value, bad or good?
If this were working, I'm not sure whether it would be best to POST the last known good value or to not POST anything for that time.
It's still using the old 7.9 quality codes, so it changed the 512 or 524 to 600 (Unknown). You have to use 0 to make it Bad.
if({[.]Apparent Power (Primary) OPC} = {[~]MemoryTags/forceQualityCheck},
	forceQuality({[.]Apparent Power (Primary) OPC}, 0),
	{[.]Apparent Power (Primary) OPC})
Not sure if logging the quality as 0 in the tag history database is now an issue, or why that changed, but when I run the system.tag.queryTagHistory script in the Script Console, it returns an empty dataset for the last few minutes if the tag's quality has been bad for longer than that. I think that is what I want. When I set the start time back far enough to cover when it went bad, I see the bad value returned if ignoreBadQuality is off or 0. If I then set ignoreBadQuality on, the bad value is not returned.
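For reference, this is roughly the Script Console call I'm testing (the tag path and time range here are placeholders for mine):

end = system.date.now()
start = system.date.addMinutes(end, -5)
ds = system.tag.queryTagHistory(
	paths=["[default]Meters/Apparent Power (Primary) OPC"],  # placeholder path
	startDate=start,
	endDate=end,
	ignoreBadQuality=True)
print(ds.getRowCount())  # 0 while the tag has been bad for the whole range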
But when I run the same script in a project library script, it not only gives me a value for the same tag that has been marked bad quality, it also gives me a value for a tag that currently has historization turned off.
I'm using the Discrete Deadband Style. I'm wondering if it's a bug, or if there is a reason the ignoreBadQuality option should not work when used in a project library script vs. the Script Console.
I've done my best to make them the same and they do not behave the same. I thought there were a few scenarios where scripts work in one and not the other.
In this case, it's a gateway timer calling the script. Aren't they run from the same scope, and isn't that often the issue?
No, the Script Console is designer scope, a variant of Vision Client scope, not gateway scope at all. Any project library script called from designer scope runs in designer scope, while one called from gateway scope runs in gateway scope. That's why it is called a library script.
If you need to test something intended for gateway scope, call it from a gateway message handler, and invoke that handler remotely (e.g., from the Script Console) with system.util.sendRequest(). Messages need to cross the network between scopes.
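A minimal sketch of that pattern; the project name, handler name, and payload key below are made up for illustration. The gateway message handler (defined under the project's Gateway Events):

def handleMessage(payload):
	# Runs in gateway scope, same as a gateway timer script would.
	return system.tag.queryTagHistory(
		paths=[payload["path"]],
		rangeHours=1,
		ignoreBadQuality=True)

Then, from the Script Console:

# "MyProject" and "queryHistory" are placeholder names
result = system.util.sendRequest("MyProject", "queryHistory",
	{"path": "[default]Meters/Apparent Power (Primary) OPC"})
print(result)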
Support has been looking into it, but they are saying this is normal: system.tag.queryTagHistory will always return a value for a tag. The one thing they are looking to fix is that when using ignoreBadQuality it still gives you the bad value. They say to add includeBoundingValues as a workaround, but that just sets the value to 0.0 when returned.
I feel like the Script Console output is better: it doesn't create a row in the output dataset when the value was bad, when there is no value, or when historical logging for the tag is now off. If you need these historical values for a trend or report, wouldn't you want no value returned? If I just get 0.0, how do I know whether that is a real value for that time or a bad value changed to 0.0?
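For clarity, the workaround they suggested amounts to this (same placeholder path as above):

ds = system.tag.queryTagHistory(
	paths=["[default]Meters/Apparent Power (Primary) OPC"],
	rangeMinutes=5,
	ignoreBadQuality=True,
	includeBoundingValues=True)
# Rows for the bad period still come back, just with the value set to 0.0.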
To get the library version of the script to work and ignore values with bad quality, you have to set returnSize to -1. Unfortunately, this adds many rows to the returned dataset. I found some posts, and it looks like there is no way to fix this: it creates a row each time it finds a value for any tag, so you get many rows with the same timestamp and values for only some tags. A few rows even show a different value for the same tag with the same timestamp.
That's normal behavior for the wide return format when timestamps for specific values aren't exact. You get repeats of prior values in the affected columns to avoid nulls.
Simply do not use wide return format when trying to retrieve non-interpolated or "as stored" values.
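For example, something along these lines (placeholder path again) returns one row per stored value and avoids the padded wide rows:

ds = system.tag.queryTagHistory(
	paths=["[default]Meters/Apparent Power (Primary) OPC"],
	rangeHours=1,
	returnSize=-1,         # "as stored", no interpolation
	ignoreBadQuality=True,
	returnFormat="Tall")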
I have the new tags based off a UDT and they are working. The expression tag marks quality bad if the value is 999, but now I see that not all bad values from the field devices are 999. It looks like some points have implied decimals, so they show up as 99.9. I've not seen 9.99 or 0.999, but they may also be out there.
This seems like a poor way to do this, but they have many systems set up this way. I don't like it, as I'd think there could be a real value at 99.9 or 9.99 that would be removed.
In the expression tag, I was simply checking if the value from the OPC tag was equal to a memory tag with a current value of 999.
if({[.]Apparent Power (Primary) OPC} = {[~]MemoryTags/qualifiedValue},
qualifiedValue({[.]Apparent Power (Primary) OPC}, 'bad', 515, {[~]MemoryTags/qualifiedValue_BadQualityMessage}),
{[.]Apparent Power (Primary) OPC})
I was just going to make them strings and remove the "." from the string, but the floats come into Ignition as 99.900001526, not 99.9. Currently I'm using the indexOf function to see if the number is found at the start of the value from the OPC tag.
if(indexOf(replace(toStr({[.]Apparent Power (Primary) OPC}), ".", ""), toStr({[~]MemoryTags/qualifiedValue})) = 0,
qualifiedValue({[.]Apparent Power (Primary) OPC}, 'bad', 515, {[~]MemoryTags/qualifiedValue_BadQualityMessage}),
{[.]Apparent Power (Primary) OPC})
This seems to be working and covers 999, 99.9, and 9.99, but not 0.999. Just wondering if there would be a better way to do this.
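One alternative I've sketched (untested; the tolerances are guesses scaled to each variant) is to compare numerically against each implied-decimal form, which would also cover 0.999:

if(abs({[.]Apparent Power (Primary) OPC} - 999) < 0.001
	|| abs({[.]Apparent Power (Primary) OPC} - 99.9) < 0.0001
	|| abs({[.]Apparent Power (Primary) OPC} - 9.99) < 0.00001
	|| abs({[.]Apparent Power (Primary) OPC} - 0.999) < 0.000001,
	qualifiedValue({[.]Apparent Power (Primary) OPC}, 'bad', 515, {[~]MemoryTags/qualifiedValue_BadQualityMessage}),
	{[.]Apparent Power (Primary) OPC})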
Fix the source data to be unambiguous. Seems to me it is time to push back against the crap the existing system is producing. You can't always get what you want with a software magic wand.