Historian DB Hammering Tags

Hi all,

I'm having an issue where, despite a tag being set to 'On Change' and its value seemingly not changing, the historian is HAMMERING the same value into the database.
Multiply this by 70000 tags and Houston, we have a problem!

The tag is part of a UDT. There are 3 OPC tags under sources... and these are then read by the Derived tags via jsonGet / jsonSet.
The tag we're going to look at is Q_AH, part of the OUT source.
It's a discrete tag that is either true or false. I have set the config with the params below:

With 'On Change' and a discrete datatype, I'd expect it to only log when the value changes. But the database disagrees!
See a snapshot below:

[screenshot: database table snapshot showing the same value stored repeatedly]

What am I doing wrong here?
I've read the history docs so many times and still can't seem to get this right! :slight_smile:

Managed to tank the VM it was on today with over 10 billion items in the store and forward! Whoops!

Thanks!
Alex

Update: If I set Deadband mode to Absolute with a value of 0.2 (assuming for a discrete it's treating it as a transition from 0 -> 1 -> 0), it seems to produce the expected behaviour.

This then opens two more questions: does 'On Change' not account for an update of timestamp with no value change (assuming the jsonGet is running at the I/O provider speed)? And is a deadband the correct way to do it for discrete tags?
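For my own sanity, the behaviour I'd expect from an Absolute deadband on a 0/1 tag is roughly the filter below (a plain Python sketch of my mental model, not Ignition's actual historian code):

# Rough illustration of how I picture an absolute deadband treating a
# discrete (0/1) signal. This is only my mental model, NOT Ignition's code.
def filter_absolute_deadband(samples, deadband):
    stored = []
    last = None
    for value in samples:
        # Store only if the value has moved further than the deadband
        # from the last stored value.
        if last is None or abs(value - last) > deadband:
            stored.append(value)
            last = value
    return stored

# A discrete tag whose timestamp keeps updating but whose value barely moves:
print(filter_absolute_deadband([0, 0, 0, 1, 1, 0, 0], 0.2))   # -> [0, 1, 0]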

Edit: This has not worked. It's still hammering.

UPDATE: The plot thickens! I have another tag that is still going for it in one of the UDTs. This one is 'derived' from one of the other tags via an expression (see config below). It's still logging even with the deadband! What algorithm is this historian using?! :rofl:

Any time I store history on discrete tags with "On Change", I set the Historical Deadband to 1.

What does your tag group configuration look like?

Hiya!

Thanks @Duffanator - I've just found the post below (from Vision) which also agrees with that logic - but it hasn't changed anything so far and it's still logging once every few seconds :confused: There doesn't seem to be an 'Ignore bad quality' box either!

@bschroeder see below :slight_smile:

Thanks both!

Can you give us a little more detail about the source tag? Namely:

  • Is the timestamp changing on that tag as well?
  • What type of tag is the source (OPC, DB, Memory, etc.)?
  • Is the source tag subscribed/polling?

The derived tag should be generating a subscription to that underlying tag (including the value, quality and timestamp) and processing history accordingly.
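If it helps, a quick way to see exactly what is changing on the source tag is to read it a few times from the Script Console and compare the value, quality and timestamp between reads. Something like the sketch below (the tag path is a placeholder, substitute your own):

# Read the source tag a handful of times and print value / quality / timestamp
# so you can see which of them is actually changing between reads.
import time

path = "[default]Area/Device/Sources/OUT"    # placeholder path - use your real one

for i in range(5):
    qv = system.tag.readBlocking([path])[0]
    print qv.timestamp, qv.quality, qv.value
    time.sleep(1)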

Garth

1 Like

Sure! :slight_smile:

The source tag is configured as per the image below. It's a Document tag retrieved via OPC from a Siemens S7-1500 PLC. It sits within a Global DB (we don't have the option to enable PUT/GET due to security, so we can't use the Siemens driver).
The source is set to Leased @ 2000/500 and Subscribed, with optimistic writes set to true (this is set to read-only).

The timestamp on it is changing in the diagnostics (I don't have history turned on for the Document tag). A snippet of the returned raw value is below:

[{
    "Q_PV": 1.0153937,
    "Q_AHH": false,
    "Q_AH": false,
    "Q_AL": false,
    "Q_ALL": false,
    "Q_BAD": false,
    "Q_SIM_ON": false,
    "Q_CHNL_DISB": false,
    "Q_STATUS_WORD": 1837056
}, Good_Overload("Sampling has slowed down due to resource limitations."), Tue Jun 11 20:07:43 BST 2024 (1718132863618)]

After spamming refresh, I notice it occasionally switches between 'Good' and 'GoodOverload' - could this have an effect? Both seem to still carry the required data.
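To try and quantify how often it flips, I've been running a quick and dirty Script Console loop that only prints when the quality string changes (the path and timings below are arbitrary, just something I threw together):

# Quick-and-dirty quality flap counter for the source Document tag.
# Prints a line each time the reported quality changes.
import time

path = "[default]Area/Device/Sources/OUT"    # placeholder path
last_quality = None
flips = 0

for i in range(300):                         # roughly 5 minutes at ~1 read/sec
    quality = str(system.tag.readBlocking([path])[0].quality)
    if quality != last_quality:
        flips += 1
        print system.date.now(), quality
        last_quality = quality
    time.sleep(1)

print "quality changes seen:", flips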

Thanks!

I calculated the difference between some of the consecutive timestamps in your database table snip and they were all multiples of 5 seconds. Not sure if that means anything. Everything else looks ok to me at a glance... Shot in the dark, try changing this from 0 hours?
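(If you want to check those gaps without eyeballing the table, something like the script below should print the spacing between the raw stored samples. The tag path is a placeholder, and I'm assuming returnSize=-1 returns the values as they were stored.)

# Pull the raw stored history for the tag and print the gap between
# consecutive samples, to confirm the ~5 second pattern.
ds = system.tag.queryTagHistory(
    paths=["[default]Area/Device/OUT/Q_AH"],   # placeholder path
    rangeHours=1,
    returnSize=-1                              # values as stored, I believe
)

for row in range(1, ds.getRowCount()):
    prev_ts = ds.getValueAt(row - 1, 0)        # t_stamp column (java.util.Date)
    curr_ts = ds.getValueAt(row, 0)
    print (curr_ts.getTime() - prev_ts.getTime()) / 1000.0, "seconds since previous sample"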

Or maybe try giving this a numeric deadband mode (like Absolute, 0.0001 to match the Q_AUTO tag):

The only value I can see in your screenshots that matches the 5-second timestamp increment mentioned above is here. I haven't played much with these particular settings, but it could be worth investigating.

More info on optimistic writes here if it helps:

This is all just me spitballing; take with a grain of salt.

Thanks @Bushmatic!
I'll have a play and report back in the morning! (On UK time)

After a couple of hours of trying to replicate this issue, I haven't been able to reproduce exactly what you have reported. The only thing that comes close is that I am seeing multiple values getting stored to history when all of the following occur:

  1. There is more than one derived tag subscribing to the source tag within Ignition
  2. Those derived tags have a Tag Group with Optimistic Writes enabled
  3. Ignition writes a value to one of the derived tags.

This can cause a race condition that cascades throughout all the linked tags.

At this point I would advise contacting support so that they can work with you and see your entire setup and how things are put together. It sounds like there is something configured in a manner that I have yet to replicate to get the results you are seeing.

Garth

1 Like

Hi all!

Had a super awesome call with Ryan at support going through and debugging the problem, and thought I'd post a quick follow-up in case anyone else comes across this post with the same issue.

It all seems to boil down to tag quality codes.
My setup uses an S7-1500 PLC and connects to the device using its onboard OPC UA server. This OPC UA server sometimes reports the quality code GoodOverload (link for detail) if its processing has taken a bit too long.

Why does this matter?

It seems that if you're operating 'on the fringe' between Good and GoodOverload, the quality switch causes the tag to log, specifically if it is a discrete tag, no matter what deadband is configured. On the back end, Ignition doesn't (currently) store a different quality code in the database when this happens (code 192 for both Good and GoodOverload, even though it should be something else like 2xx).
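If you want to confirm the same thing on your own system, a quick query against one of the historian partition tables shows which integrity code each row was stored with. The sketch below is what we used, but the partition table name, tag id and datasource name are all specific to our setup, so adjust them for yours:

# Count stored rows per dataintegrity code for one tag in one partition table.
# Table name, tag id and datasource name are from OUR system - adjust for yours.
rows = system.db.runPrepQuery(
    "SELECT dataintegrity, COUNT(*) AS cnt "
    "FROM sqlt_data_1_2024_06 "
    "WHERE tagid = ? "
    "GROUP BY dataintegrity",
    [1234],          # tag id from the sqlth_te table
    "History"        # historian datasource name
)

for row in rows:
    print row["dataintegrity"], row["cnt"]   # we only ever see 192 in here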

The temporary solutions to this are:

  1. Absolutely hammer the OPC server so it sits in GoodOverload all the time. Sounds stupid, right? Well, it turns out that in the first project we did, the PLC was always in GoodOverload, and that's why we never observed this issue there!
  2. Optimise the tags to ensure the quality stays Good. Ironically, doing this is exactly how we found the issue, so I guess we need to go and do some more!
  3. Set the historical logging mode, even on discrete tags, to Analog. This will mean graphs look kinda funky, so it's not the nicest workaround (CurveStepAfter in Perspective helps), but it might just save you from 86 million new records in 2 days like we have :slight_smile:

Long term, hopefully this gets fixed in one of the Ignition releases. I've added a feature request here, and if it's causing you bother too, be my guest and upvote it :slight_smile:

That's all from me!
Kudos to Ryan from support and everyone here for helping try to replicate and figure this one out :sunglasses:

Alex out!

5 Likes

We have the status flipping Good <=> Good_Overload with a Siemens CPU1515. This adds a lot of changes to the history. Did you find a longer-term solution to avoid this?

The obvious long-term solution is #2. :man_shrugging:

If you are overloading the PLC, all of your functionality assumptions about value reporting pacing go out the window. Overload is never an acceptable production situation, IMNSHO.

1 Like

We have the overload with only 1032 OPC UA items in an S7-1515R CPU :sob:

Slow down your tag groups.

The CPU communication load was set to 20%; we adjusted this parameter to the maximum of 50%, and now it's OK. The CPU cycle time stays around 20 ms, which is fine for our kind of usage.

2 Likes

Hi @mazeyrat,

Hopefully the long-term solution will be the Siemens driver in Ignition 8.3, so we don't have to rely on the slow and heavy onboard OPC UA server.

The solution we implemented short term was to switch to using KEPServer. We couldn't slow things down to a rate the onboard OPC UA server could handle that was still acceptable for site operations, so we had to make this change.

TL;DR: don't use the onboard OPC UA server on Siemens PLCs if you want anything remotely fast, or have more than a handful of tags. Sorry!

Yes, OPC UA on Siemens is a bit complex...

We need to use an external tool, SiOME, alongside TIA Portal to configure items so they are accessible with a string node ID (the default is a numeric node ID).
Newer firmware (> 3.1) supports a virtual IP for the OPC UA server, so we don't need to connect to the server of each CPU in the case of a redundant CPU.

In 8.3, S7+ will be a good alternative. I hope it can manage redundant CPUs too.

1 Like