OPC driver consuming a lot of CPU

Maybe someone could help me out on this.

I have written a driver that extends AbstractTagDriver. I add tags (approx. 15k) dynamically as data comes in, if the tag does not yet exist. Otherwise I set the tag value.

When the driver is running I noticed that the thread “[my driver] - ScheduledTagReads” eats a lot of CPU cycles.

Also, when I disable the driver in the gateway interface, it takes several minutes to stop. During the stop process I notice that the same CPU-hungry thread, “[my driver] - ScheduledTagReads”, is the one causing the delay (hang). Sometimes the thread won’t exit at all, so the driver stopping process effectively hangs. I also see a lot of “clock drift, degraded performance” log lines.

As I mentioned, there are around 15k tags, updating at a rate of maybe 10 tags/sec.

Since this is a subscription model, I wonder where these ScheduledTagReads come from. Given this much CPU load, I assume there must be some kind of loop reading tags?

The basic plot of my tag feeder is:

tag = findDriverTag(tagPath);

if (tag == null) {
    tag = new DriverTag(tagName, uaType);
    addDriverTag(tag);
}

tag.setValue(value, status, time);

Except for the high load and the problems described, the driver works as expected. I also noticed that there is a “buildNode()” method. I haven’t used it in my driver because I don’t understand its purpose. Should I also call buildNode() in addition to addDriverTag()?

A short example (or sketch) of how the AbstractTagDriver class works would be much appreciated.

I have a feeling that I may be missing something.

Can anyone of the developers shed light on this?

Thank you very much.

It looks like the task that continually runs and calls getValue() on your DriverTag instances is scheduled at a VERY fast rate…

Are you doing any blocking work in getValue() or just returning whatever the current value is?

Hi, sorry for the delay, I was unable to access the forum for a couple of days…

I’ll explain my situation a bit further.

I wrote a driver for our custom TCP protocol. The basic plot is as follows:

  1. A connection to the PLC is established.
  2. The PLC starts to send data to the driver spontaneously (no polling).
  3. As data arrives, the driver dynamically adds driver tags with “addDriverTag” (if necessary).
  4. Otherwise it sets the value, which is a property of myDriverTag (a DynamicDriverTag).
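Not the actual driver code, but steps 3–4 above can be sketched like this, using a plain ConcurrentHashMap as a stand-in for the driver’s internal tag registry (a real driver would call addDriverTag and use a DynamicDriverTag subclass from the SDK instead of the SimpleTag stand-in here):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal stand-in for a driver tag; a real driver would extend DynamicDriverTag.
class SimpleTag {
    final String path;
    volatile Object value;

    SimpleTag(String path) {
        this.path = path;
    }

    void setValue(Object value) {
        this.value = value;
    }
}

class TagFeeder {
    // Stand-in for the driver's tag registry.
    private final Map<String, SimpleTag> tags = new ConcurrentHashMap<>();

    // Called whenever the PLC pushes a value: create the tag on first
    // sight, then just update its value on subsequent messages.
    void onData(String tagPath, Object value) {
        SimpleTag tag = tags.computeIfAbsent(tagPath, SimpleTag::new);
        tag.setValue(value);
    }

    SimpleTag find(String tagPath) {
        return tags.get(tagPath);
    }
}
```

Using computeIfAbsent avoids the check-then-add race of a separate find/null-test/add sequence if device data is handled on more than one thread.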

So to answer your question…

getValue() just reads a property.

I guess my question should be:

What is the right approach if I am not polling data from the PLC, but the PLC spontaneously sends data whenever it has something new?

There is an open-source Ignition driver (https://github.com/chi-Ignition/generic-tcp-driver) that is made for custom protocols without polling. Maybe the source code will answer some of your questions.

I think if I reduce the rate at which those scheduled reads happen, the CPU usage will drop. I’ll let you know when that’s done and you can give it a try.

@chi

Thank you very much for sharing your driver.

I’ve shamelessly based parts of my code on yours, actually. Otherwise I would have had a really hard time writing one from scratch.

Your driver is quite involved for me, since this is my first venture into Ignition driver writing and Java. I wanted to start simple and learn the required knowledge step by step.

I think I am basically doing the same in my driver, with a few exceptions.

The main difference being that instead of implementing the Driver interface, I am extending AbstractTagDriver.

@both
My driver works, but with 15k subscriptions (tags) it is really CPU hungry.

I’ve seen there is a buildNode function in AbstractTagDriver. Do I have to build UA nodes myself or is it done automatically in addDriverTag?

I can see/browse/use my OPC tags in Ignition without calling buildNode. So I wonder what is the purpose of it?

Is AbstractTagDriver the right class to extend for this purpose? I found it easier to use than implementing the Driver interface. But is it OK to do so?

If only 10 tags are changing per second, shouldn’t the ScheduledRead only read those values? I must be missing something… :scratch:

AbstractTagDriver takes care of dealing with buildNode() calls, among other things, so you don’t need to worry about that.

Unfortunately AbstractTagDriver is very naive when it comes to scheduling - it polls all your registered tags for their current values at a fixed and very fast rate, which is why you are seeing high CPU usage. It doesn’t matter that only a handful are updating every second.

The rate needs to be lowered to something more reasonable, but it will still be a very naive polling mechanism. If you want to avoid the polling altogether, you’ll need to stop using AbstractTagDriver and instead implement the more complicated Driver interface.
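This is not the SDK’s actual code, but the scaling difference between the two models can be sketched generically: fixed-rate polling does work proportional to the total tag count every cycle, while a push model does work proportional to the change rate.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.BiConsumer;

// Generic illustration: polling cost scales with the number of tags,
// push cost scales with the number of changes.
class PollingVsPush {
    final Map<String, Object> tags = new ConcurrentHashMap<>();
    int reads = 0; // counts value reads performed

    // Polling: every cycle touches EVERY tag, changed or not.
    void pollCycle(BiConsumer<String, Object> publish) {
        for (Map.Entry<String, Object> e : tags.entrySet()) {
            reads++;
            publish.accept(e.getKey(), e.getValue());
        }
    }

    // Push: only the tag that actually changed is touched.
    void onChange(String path, Object value, BiConsumer<String, Object> publish) {
        tags.put(path, value);
        reads++;
        publish.accept(path, value);
    }
}
```

With 15k tags and a fast poll cycle, the polling loop performs 15k reads per cycle even though only ~10 values per second actually change; the push model performs ~10 reads per second.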

OK, Kevin, thank you.

That’s the piece of information I was looking for.

I’ll have a look at chi’s sources again and modify my driver accordingly.

Thanks to both of you.

I’ve managed to rewrite my driver to implement the Driver interface as you suggested, with success. Now the CPU load (as reported by uptime) drops from more than 2 to almost 0. So I am very happy :smiley:

There is just another thing I am not sure how to handle.

When I disable the driver and then re-enable it in the gateway, the subscriptions to OPC tags are not re-established. I can imagine why this is happening…

For now my solution (workaround?) is to go through all tags with a script and set the OPC item path to something like “xxx” and then back to the real path. However, this procedure takes several minutes to complete.

Anyway, if you could give me a hint on how to solve this properly, I would really appreciate it.

So when you implement Driver you need to get the DriverSubscriptionModel from the DriverContext you’re given and add a ModelChangeListener so you receive subscription updates.

The SubscriptionModel lets you query the currently subscribed items and the ModelChangeListener delivers updates to the currently subscribed set.

You should also remove your ModelChangeListener during shutdown().
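In code, that wiring looks roughly like the sketch below. The SubscriptionModel, ModelChangeListener, and item types here are simplified stand-ins, not the real SDK signatures (the actual API delivers subscription-item objects rather than plain strings and differs between SDK versions), so treat this as the shape of the pattern, not a drop-in implementation:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// --- Simplified stand-ins for the SDK types mentioned above ---
interface ModelChangeListener {
    void itemsAdded(List<String> itemAddresses);
    void itemsRemoved(List<String> itemAddresses);
}

class SubscriptionModel {
    private final List<ModelChangeListener> listeners = new ArrayList<>();

    void addModelChangeListener(ModelChangeListener l) { listeners.add(l); }
    void removeModelChangeListener(ModelChangeListener l) { listeners.remove(l); }

    // The real model is driven by the OPC UA server; here we just fan
    // out notifications to listeners for illustration.
    void notifyAdded(List<String> addrs) {
        for (ModelChangeListener l : listeners) l.itemsAdded(addrs);
    }
}

// Driver skeleton: register the listener at startup, remove it at shutdown.
class MyTcpDriver implements ModelChangeListener {
    private final SubscriptionModel model;
    final Set<String> subscribed = new HashSet<>();

    MyTcpDriver(SubscriptionModel model) {
        this.model = model;
        model.addModelChangeListener(this); // receive subscription updates
    }

    @Override
    public void itemsAdded(List<String> itemAddresses) {
        subscribed.addAll(itemAddresses);
    }

    @Override
    public void itemsRemoved(List<String> itemAddresses) {
        itemAddresses.forEach(subscribed::remove);
    }

    void shutdown() {
        // Important: deregister so the stopped driver no longer
        // receives model updates.
        model.removeModelChangeListener(this);
    }
}
```

The key points are the two bookends: add the listener when the driver starts and remove it in shutdown(), so a disabled driver does not linger as a stale listener on the model.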

I’ve already done all that. The Driver interface logic is borrowed from chi’s open-source TCP driver.

But when I disable and enable the module, all the tags show bad quality, and the tag values don’t change. However, the values are not null. This led me to conclude that the subscriptions were broken. The logs show a lot of this:

Client tried to access unknown item ‘[Status]/StateIndAutonum’

But this is normal after shutdown, I guess.

Anyway, this is not such a big deal since in production you will hardly ever reinstall a driver. It would be nice to know why this is happening though.

Ah - so you’re uninstalling and reinstalling the module, is that correct? Not simply disabling and re-enabling the device?

I’ll have to take a look at this on Monday when I’m back in the office.

Hi,

The message ‘Client tried to access unknown item…’ should only be shown by the tag manager when a client tries to read or subscribe to an item that does not exist in your driver. My guess is that the subscriptions are OK, but your implementation does not re-add these items after a restart.
In an earlier post you mentioned that tags are added dynamically as data comes in. How do you preserve those tags across a restart?

Hi,

Those tags are not preserved. On every reconnect the device sends the state of all tags that it supports. For every tag the driver receives from the device, it checks for its existence; if it doesn’t exist yet, it adds it dynamically to the tag database. Do you think there is a problem with this strategy?

Thank you for your time.

The problem I see is that the tags do not exist in your driver until the device is connected. So all subscription or read requests will of course fail, because the subscribed items do not yet exist.
If I remember right, you should not accept subscriptions before the device is connected, and you have to re-evaluate all subscriptions after the device is connected. This definitely adds an additional layer of complication.
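One way to picture that re-evaluation is the self-contained sketch below. All names here are made up for illustration; a real driver would hook this logic into its ModelChangeListener and its connect/disconnect callbacks rather than use these stand-in methods:

```java
import java.util.HashSet;
import java.util.Set;

// Sketch: park subscription requests that arrive before the device has
// registered its tags, and re-evaluate them as tags become known.
class PendingSubscriptions {
    private final Set<String> knownTags = new HashSet<>();
    private final Set<String> pending = new HashSet<>();
    final Set<String> active = new HashSet<>();

    // Called when the device announces a tag after (re)connecting.
    void tagRegistered(String path) {
        knownTags.add(path);
        if (pending.remove(path)) {
            // Late-arriving tag: activate the subscription that was waiting for it.
            active.add(path);
        }
    }

    // Called when a client subscribes; the item may not exist yet.
    void subscribe(String path) {
        if (knownTags.contains(path)) {
            active.add(path);
        } else {
            pending.add(path); // park it instead of failing outright
        }
    }
}
```

The design choice is simply to treat “unknown item” as “not known yet” instead of a permanent failure, so subscriptions made while the device is disconnected become live once the tag shows up.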

OK, I think I understand now.

After the subscription initialization phase, subscriptions are not retried if they fail on the first attempt. So if an OPC node is created after the subscription phase, Ignition has no means of knowing about it. Is that correct?

Should I add those missing (dynamic) nodes in the Driver’s buildNode() implementation?

OK, now that I add all tags before subscriptions are enabled (like in chi’s TCP driver), everything is fine.

Thank you both for the precious tips.