Is there a comprehensive guide on how to make the most of asynchronous calls and thread management in Ignition? I make a lot of asynchronous calls because my code crunches a lot of SQL data, which can take a while. I realize that locking the UI isn't as much of an issue in Perspective as it was in Vision, but I still like to run the more complicated calls asynchronously. In fact, I use Python decorators to easily switch my functions to run asynchronously, so now I don't hesitate to do it. However, someone in the office mentioned that Ignition will only run three threads, and that any additional asynchronous calls get put in a pool, waiting for execution. If so, what is the point of making an asynchronous call? I looked in the documentation and found very little detail on this topic.
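For reference, the decorator pattern I'm describing is roughly this. A minimal sketch: the `run_async` and `crunch_sql_data` names are just illustrative, and it assumes the usual Ignition scripting scope where `system.*` is available without imports.

```python
# Sketch of an "async" decorator that wraps system.util.invokeAsynchronous.
import functools

def run_async(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # Bind the arguments now, then hand the call off to a new thread.
        def task():
            func(*args, **kwargs)
        # invokeAsynchronous returns a handle to the thread, not the result.
        return system.util.invokeAsynchronous(task)
    return wrapper

@run_async
def crunch_sql_data(query):
    # Long-running SQL work goes here; any return value is discarded.
    rows = system.db.runPrepQuery(query, [])
    # ... process rows ...
```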
That someone is mixing things up. Tag events (defined directly on tags) run in a thread pool of size three.
There's no limit to asynchronous threads, except that they consume CPU time without constraint.
Running asynchronously adds thread-switching overhead, and each new thread needs to start up a Jython interpreter context. You should not be running asynchronously for most tasks. Consider deliberately creating fixed-size thread pools for specific purposes, and delegating tasks to those.
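A minimal sketch of that "dedicated pool" approach, using java.util.concurrent from Jython (the `sqlPool` and `submitCrunch` names are illustrative; in practice you'd keep the pool somewhere persistent so it isn't recreated on every script call):

```python
# Sketch: a small dedicated thread pool for SQL-crunching tasks,
# built on java.util.concurrent (available to Jython in Ignition).
from java.util.concurrent import Executors

# Create once and reuse; recreating a pool on every call defeats the purpose.
sqlPool = Executors.newFixedThreadPool(4)  # pool size chosen for illustration

def submitCrunch(query):
    def task():
        rows = system.db.runPrepQuery(query, [])
        # ... process rows ...
    # submit() returns a Future you can use to wait on or check the task.
    return sqlPool.submit(task)
```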
So, question for the group (not just Phil, who is definitely not a representative sample of our customer base) -
What would (1) make sense and (2) be useful?
Some kind of project-scoped thread pool on the gateway, with a system.util.pushToTaskQueue(payload)?
Some kind of system.util.threadPool([parameters]) function that internally wraps a thread pool? Or maybe a project/gateway resource where you configure a thread pool, and then you retrieve it by name with system.util.threadPool([name]) and push work into it?
I'm not interested in serializing code state or replacing the SFC module, but threads are (relatively) heavyweight, so it'd be good to have a nicer, idiomatic way to use them than just invokeAsynchronous. Plus, scheduled thread pools let you run things on fixed delays, making for a very easy replacement for folks' use of time.sleep.
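To illustrate that last point: the system.util.* functions above are only hypothetical, but the underlying idea can be sketched today with a ScheduledExecutorService from java.util.concurrent. The `poll` function below is just a placeholder for whatever periodic work would otherwise live in a sleep loop.

```python
# Sketch: a scheduled thread pool instead of a `while True: ...; time.sleep(30)` loop.
from java.util.concurrent import Executors, TimeUnit

scheduler = Executors.newScheduledThreadPool(1)

def poll():
    # Periodic work here, e.g. checking a table for new rows.
    pass

# Run poll() every 30 seconds without parking a thread in time.sleep.
scheduler.scheduleAtFixedRate(poll, 0, 30, TimeUnit.SECONDS)
```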
Hmmm. David and I are in this topic, but how many others will this question reach?
I like this one; I think it's a good trade-off between ease of use and discouraging willy-nilly creation of thread pools.
In general, how would I create a thread pool? Say I was going to run 20 process-heavy Python functions all at once, no more and no less.
There's an example in my later.py helper script that makes a single thread pool. Do similar, but set it to 20.
{ I probably ought to simplify that example a bit, and replace system.util.persistent() with system.util.globalVarMap(). }
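For the "exactly 20 at once" case, the shape is roughly this (a sketch along the lines of the later.py example, not a copy of it; `heavyFunction` stands in for one of the 20 process-heavy functions):

```python
# Sketch: run 20 process-heavy functions concurrently on a fixed pool of 20 threads.
from java.util.concurrent import Executors, TimeUnit

pool = Executors.newFixedThreadPool(20)

def makeJob(n):
    def job():
        heavyFunction(n)  # placeholder for one of the 20 heavy functions
    return job

for n in range(20):
    pool.execute(makeJob(n))

# Optionally wait for everything to finish, then tear the pool down.
pool.shutdown()
pool.awaitTermination(10, TimeUnit.MINUTES)
```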
I would vote for this one. I think it fits well with the current ecosystem.
Might be nice to have the ability to target a Vision client scope as well.