I'm having an issue where, despite history being set to 'On Change' and the value of a tag seemingly not changing, it is HAMMERING the database, storing the same value over and over.
Multiply this by 70000 tags and Houston, we have a problem!
The tag is part of a UDT. There are 3 OPC tags under sources... and these are then read by the Derived tags via jsonGet / jsonSet.
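For context, a rough Jython equivalent of what each derived tag's jsonGet read expression is doing (the tag paths here are made up for illustration, not our real ones):

source_path = "[default]Plant/PIT_101/Sources/OUT"    # hypothetical Document source tag
qv = system.tag.readBlocking([source_path])[0]        # QualifiedValue: value, quality, timestamp
doc = system.util.jsonDecode(str(qv.value))           # treat the Document value as JSON
print doc["Q_AH"], qv.quality, qv.timestamp           # the member the derived tag exposes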
The tag we're going to look at is Q_AH, part of the OUT source.
It's a discrete tag that is either true or false. I have set the config with the params below:
Update: If I set Deadband mode to Absolute with a value of 0.2 (assuming that for a discrete tag it treats it as a transition from 0 -> 1 -> 0, so the change of 1.0 exceeds the 0.2 deadband while a re-report of the same value does not), it seems to produce the expected behaviour.
This then opens two more questions: does 'On Change' not account for an update of the timestamp with no value change (assuming the jsonGet is running at the I/O provider speed)? And is a deadband the correct way to do it for discrete tags?
UPDATE: The plot thickens! I have another tag that is still going for it in one of the UDTs. This one is 'derived' from one of the other tags via an expression (see config below). It's still logging even with the deadband! What algorithm is this historian using?!
Thanks @Duffanator - I've just found the post below from Vision which also agrees with that logic, but it's not changed anything so far and it's still logging once every few seconds. There doesn't seem to be an 'Ignore bad quality' box either!
Can you give us a little more detail about the source tag? Namely:
Is the timestamp changing on that tag as well?
What type of tag is the source? (OPC, DB, Memory, etc.)?
Is the source tag subscribed/polling?
The derived tag should be generating a subscription to that underlying tag (including the value, quality and timestamp) and processing history accordingly.
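If it helps, a quick Designer Script Console check for all three of those questions at once - read the source Document tag and the derived tag together and compare value, quality and timestamp on each (paths are placeholders, swap in your own):

paths = [
    "[default]Plant/PIT_101/Sources/OUT",   # source Document tag (placeholder path)
    "[default]Plant/PIT_101/Q_AH",          # derived tag (placeholder path)
]
for path, qv in zip(paths, system.tag.readBlocking(paths)):
    print path, "->", qv.value, "|", qv.quality, "|", qv.timestamp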
The source tag is configured as per the image below. It's a Document tag retrieved via OPC from a Siemens S7-1500 PLC. It sits within a global DB (we don't have the option to enable PUT/GET due to security, so we can't use the Siemens driver).
The source is set to Leased @ 2000 / 500 and subscribed, with optimistic writes set to true (the tag itself is read only).
The timestamp on it is changing in the diagnostics (I don't have history on for the Document tag). A snippet of the returned raw value is below:
[{
"Q_PV": 1.0153937,
"Q_AHH": false,
"Q_AH": false,
"Q_AL": false,
"Q_ALL": false,
"Q_BAD": false,
"Q_SIM_ON": false,
"Q_CHNL_DISB": false,
"Q_STATUS_WORD": 1837056
}, Good_Overload("Sampling has slowed down due to resource limitations."), Tue Jun 11 20:07:43 BST 2024 (1718132863618)]
After spamming refresh, I notice it is switching occasionally between 'Good' and 'GoodOverload' - could this have an effect? Both seem to still have the required data.
I calculated the difference between some of the consecutive timestamps in your database table snippet and they were all multiples of 5 seconds. Not sure if that means anything. Everything else looks OK to me at a glance... Shot in the dark: try changing this from 0 hours?
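For reference, the timestamp check was nothing fancier than this (the values here are made up, not your actual rows):

# consecutive t_stamp values (epoch ms) copied out of the history table snippet
t_stamps = [1718132863618, 1718132868618, 1718132873618, 1718132883618]
for earlier, later in zip(t_stamps, t_stamps[1:]):
    print "%d ms (%.1f s)" % (later - earlier, (later - earlier) / 1000.0)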
The only value I can see in your screenshots that matches that 5 second timestamp increment mentioned above is here. I haven't played much with these particular settings but could be worth investigating.
After a couple of hours of trying, I haven't been able to replicate exactly what you have reported. The only thing that comes close is that I am seeing multiple values getting stored to history when all of the following occur:
There is more than one derived tag subscribing to the source tag within Ignition
Those derived tags have a Tag Group with Optimistic Writes enabled
Ignition writes a value to one of the derived tags.
This can cause a race condition that cascades throughout all the linked tags.
At this point I would advise contacting support so that they can work with you and see your entire setup and how things are put together. It sounds like there is something configured in a manner that I have yet to replicate to get the results you are seeing.
Had a super awesome call going through and debugging the problem with Ryan at support, and thought I'd post a quick follow-up in case anyone else comes across this post with the same issue.
It all seems to boil down to tag quality codes.
My setup uses an S7-1500 PLC, connecting to the device using its onboard OPC UA server. This OPC UA server sometimes reports the quality code GoodOverload (link for detail) if its processing has been taking a bit too long.
Why does this matter?
It seems that if you're operating 'on the fringe' between Good and GoodOverload, the quality switch causes the tag to log, specifically if it is a discrete tag, no matter what deadband is configured. In the backend, Ignition doesn't (currently) store a different quality code in the database when this happens (code 192 is stored for both Good and GoodOverload, even though the latter should arguably be something else, like 2xx).
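If you want to confirm the flapping on your own tags, here is a rough sketch of a 'Value Changed' tag event script that logs only the quality transitions (the logger name and the guard against the first event are just my choices, nothing official):

def valueChanged(tag, tagPath, previousValue, currentValue, initialChange, missedEvents):
    logger = system.util.getLogger("QualityFlapWatch")
    # only shout when the quality code changes, e.g. Good <-> Good_Overload
    if previousValue is not None and str(previousValue.quality) != str(currentValue.quality):
        logger.info("%s quality %s -> %s (value unchanged: %s)" % (
            tagPath, previousValue.quality, currentValue.quality,
            previousValue.value == currentValue.value))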
The temporary solutions to this are:
Absolutely hammer the OPC server so it sits in GoodOverload all the time. Sounds stupid, right? Well, it turns out that in the first project we did, the PLC was always in GoodOverload, and that's why we never observed this issue there!
Optimise the tags to ensure the quality remains Good. Ironically, doing this is exactly how we found the issue - guess we need to go and do some more!
Set the historical logging mode, even on discrete tags, to Analogue (see the bulk sketch just below). This will mean graphs are kinda funky, so it's not the nicest workaround (curveStepAfter in Perspective helps), but it might just save you from 86 million new records in 2 days like we have!
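For that third option, a hedged sketch of doing it in bulk with system.tag.configure rather than editing thousands of tags by hand. The instance paths are placeholders, and the property name 'historyDeadbandStyle' / value 'Analog' are what the tag editor showed on my 8.1 system - verify them against your own version before running anything like this:

udt_instances = [
    "[default]Plant/PIT_101",   # hypothetical UDT instance paths
    "[default]Plant/PIT_102",
]
for instance in udt_instances:
    system.tag.configure(
        basePath=instance,
        tags=[{"name": "Q_AH", "historyDeadbandStyle": "Analog"}],
        collisionPolicy="m",    # merge: only touch the property listed here
    )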
Long term, hopefully this gets fixed in one of the Ignition releases. I've added a feature request here, and if it's causing you bother too, be my guest and upvote it.
That's all from me!
Kudos to Ryan from support and everyone here for helping try to replicate and figure this one out
We have the status flipping between Good <=> Good_Overload with a Siemens CPU 1515. This adds a lot of changes in history. Did you find a longer-term solution to avoid this?
If you are overloading the PLC, all of your functionality assumptions about value reporting pacing go out the window. Overload is never an acceptable production situation, IMNSHO.
The CPU communication load was set to 20%; we have adjusted this parameter to the maximum of 50%, and now it's OK. The CPU cycle time stays around 20 ms, which is fine for our kind of usage.
Hopefully the long-term solution will be the Siemens driver in Ignition 8.3, so we don't have to rely on the slow and heavy onboard OPC UA server.
The solution we implemented short term was to switch to using KEPServer. We couldn't slow the OPC UA server to a speed that was still acceptable for site operations, so we had to make this change.
TL;DR: don't use OPC UA servers with Siemens if you want anything remotely fast, or if you have more than a handful of tags. Sorry!
We need to use an external tool, SiOME, alongside TIA Portal to configure items to be accessible with a string node ID (the default is a numeric node ID).
New firmware (> 3.1) supports a virtual IP for the OPC UA server, so we don't need to connect to the server of each CPU in the case of a redundant CPU.
In 8.3, S7+ will be a good alternative. I hope it can manage redundant CPUs too.