Jython multi-threading

Am I correct in saying that a call to InvokeAsync will run on just one thread of the CPU?

Therefore, if I had an intensive script that had 5 heavy functions, but all 5 functions could run at the same time, my best bet for minimum overall execution time is to call 5 Asyncs at the same time?

Yes.
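Something like this, for example. A minimal sketch, assuming Ignition script scope (where `system` is available without an import); `makeTask` and the `sleep` are just stand-ins for your five heavy functions:

```python
import time

logger = system.util.getLogger("asyncDemo")

def makeTask(n):
    def task():
        time.sleep(5)  # placeholder for one of the five heavy functions
        logger.info("task %d finished" % n)
    return task

# Fire all five off at once; each call gets its own thread.
for i in range(5):
    system.util.invokeAsynchronous(makeTask(i))
```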


Thanks Phil. I take it it would be rather silly to call 20 Asyncs on a 16-thread CPU? Not that I'm going to or need to, but what would happen? Would it queue 4 until a thread becomes available? And would it be dangerous because the calls might affect the main GUI thread?

No, threads aren’t tied or limited to CPUs. You may notice a slowdown but all will run effectively at once.


Interesting, what is it limited by?

Technically… the stack size, but in practice you’ll probably find the effective limit sooner…

OK (as in, my ignorance doesn’t know what you’re referring to…)

Overall system (gateway computer) spec will help though, right?

I have two test workstations here: one i9-9900K and one i7-5960X. You've piqued my curiosity now and I will run the same script on both, with script execution time logging.
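(For anyone curious, script execution time logging can be as simple as something like this; `heavyScript` here is just a stand-in for the real test script:)

```python
import time

logger = system.util.getLogger("perfTest")

def heavyScript():
    total = 0
    for i in xrange(5000000):  # stand-in for the real workload
        total += i

start = time.time()
heavyScript()
logger.info("heavyScript took %.3f s" % (time.time() - start))
```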

The number of threads depends on Java and the underlying OS, not Jython, and I think we need not be overly concerned about it as developers as long as we are implementing something reasonable with it. It also depends on whether you are using it on the client side or the server side, lest you block the server itself due to a bug in your code or by running compute-intensive or blocking code. Of course we should know the system limitations, but they are practically unlimited for reasonable requirements.


i9: 8 cores, 16 threads, 16 GB RAM, Windows 10 Pro, Ignition 7.9.13, Java 1.8.0_221-b11
i7: 8 cores, 16 threads, 32 GB RAM, Debian 9 (Stretch) headless, Docker, Ignition 7.9.13, Java 1.8.0_242-b08

Server side, Gateway scope

If you're interested in that topic, it's called CPU scheduling. It's effectively the responsibility of the OS, but applications can have an influence on it (especially in how internal threads are scheduled, as in the JVM).

Basically the OS will try to distribute the CPU time evenly over the different threads that need it, without waiting too long, and without causing too many context switches (context switches have become even more costly due to Spectre, Meltdown and similar CPU bugs). Those requirements are effectively contradictory: the longer you wait to switch, the less evenly you can distribute the time and the longer threads have to wait.

In Linux, these are all options you can configure to get your ideal behaviour.

But the default settings should be generic enough to give good performance in most cases. You really need to study those options a lot and test them by trial and error to get better performance out of them.


Good to know because the final destination will be the Linux box.

I'll post some results of the execution times between the two; the i9 has a higher clock speed, which might make up for Windoze issues.

In case this wasn't clear, it's actually a rabbit hole. It's nice that you can do those configurations. But unless it's really needed (like for supercomputer applications where every added bit of performance has high value), you're likely to lose more time configuring it than you will ever gain from it.


Also note that unless your individual tasks are truly compute-only, running them all simultaneously will allow the OS to use more of the idle time around file and network operations.

If they are compute-only and you're disturbing the rest of the system, consider using a thread pool executor to limit the number of simultaneous threads.
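From Jython that can be done with the standard Java executors. A rough sketch, with the loop body standing in for real compute-only work:

```python
from java.util.concurrent import Executors, TimeUnit

def makeJob(n):
    def job():
        total = 0
        for i in xrange(10000000):  # stand-in for compute-only work
            total += i
    return job

# Cap the work at 4 threads; the other jobs queue until a worker frees up.
pool = Executors.newFixedThreadPool(4)
try:
    for i in range(20):
        pool.execute(makeJob(i))  # Jython coerces the function to a Runnable
finally:
    pool.shutdown()                              # stop accepting new work
    pool.awaitTermination(10, TimeUnit.MINUTES)  # wait for queued jobs to finish
```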

They will call a store-and-forward DB insert, lots of rows.

That’s IO intensive, in which case the CPU won’t be your bottleneck, but your IO devices (network, disk, …) will.
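A rough sketch of that shape: push the inserts into one async task so the caller isn't held up by the IO wait. The table, columns, and the "history" datasource name below are all placeholders:

```python
# Ignition script scope: `system` is available without an import.
rows = [(i, i * 1.5) for i in range(1000)]  # placeholder data

def makeInsertJob(rowData):
    def job():
        for seq, value in rowData:
            # Store-and-forward insert; ["history"] is an assumed datasource name.
            system.db.runSFPrepUpdate(
                "INSERT INTO my_table (seq, value) VALUES (?, ?)",
                [seq, value],
                ["history"])
    return job

system.util.invokeAsynchronous(makeInsertJob(rows))
```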
