When I looked into this online a few weeks back, I found there were generally two recommendations:
One connection per core*
Or two connections per core, plus an additional two connections under some conditions*
The reasoning was that a smaller pool reduces the amount of task switching, so everything runs more smoothly.
Today, though, the DB was faulting for a gateway I know of, and when the pool size was raised from 8 to 64, the faults stopped. That machine's CPU has only 8 cores.
Was the increase a causal fix, or a coincidence?
Other things were going on at the same time, so I can't say for certain that the extra connections were what made the difference.
Still, 64 is a lot more than the 8 that had been recommended by a few sources online.
So I want to ask: what is the truth about pool sizes/connection counts?
How should the number of connections be determined?
(Testing to see whether the DB faults is one way, but not a good one.)
You can set the connection pool size to -1, which allows it to make as many connections as it needs. Then you can monitor it and see how many connections it actually uses.
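If you do that, it helps to measure from the database's side rather than guessing. Here's a minimal sketch of the idea as an Ignition gateway timer script, assuming a PostgreSQL backend and a datasource named "MyDB" (both of those are my assumptions); on MySQL you'd read the Threads_connected and Max_used_connections status variables instead.

```python
# Sketch only: log how many connections are open on the database server,
# so you can watch the peak under real load instead of guessing.
# Assumes PostgreSQL and a hypothetical Ignition datasource named "MyDB".
def logConnectionUsage(db="MyDB"):
    # pg_stat_activity has one row per open connection to the server
    current = system.db.runScalarQuery("SELECT count(*) FROM pg_stat_activity", db)
    system.util.getLogger("PoolSizing").info("open connections: %s" % current)
```

Run it every minute or so for a day or two of normal operation; the high-water mark you observe is worth more than any rule of thumb.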
It matters where you evaluate this. The point of these rules of thumb is to avoid crushing the database, not Ignition. In other words, the core count to consider is the database server's, not the Ignition server's. (If they are the same system in production, you've seriously screwed up.)
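For a rough starting point, a widely quoted heuristic (HikariCP's pool-sizing guidance spells it out, and it sounds like the second rule you found) is pool size ≈ (db cores * 2) + effective spindle count. The numbers below are purely illustrative:

```python
# Rough starting point only: a widely quoted heuristic, not a guarantee.
# Use the *database* server's core count, per the point above.
def starting_pool_size(db_cores, effective_spindles):
    # effective_spindles ~ disks that can seek independently;
    # a single SSD/NVMe volume is commonly treated as 1
    return db_cores * 2 + effective_spindles

# Example: an 8-core DB server on one SSD volume -> 17, nowhere near 64
print(starting_pool_size(8, 1))
```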
And you must consider what kind of workload you have. If the database's cores are mostly waiting on I/O from your disks, then more connections might be workable. But if that I/O wait is being made worse by too many queries hitting too many different parts of the disks at once, then fewer connections should be used.
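One way to tell which situation you're in is to ask the database what its active sessions are waiting on. A sketch, again assuming PostgreSQL (9.6+, which exposes wait events in pg_stat_activity) and the made-up "MyDB" datasource:

```python
# Sketch: group active backends by what they are currently waiting on.
# A pile of 'IO' waits points at the disks rather than the cores.
rows = system.db.runQuery(
    "SELECT coalesce(wait_event_type, 'Running') AS waiting_on, count(*) AS backends "
    "FROM pg_stat_activity WHERE state = 'active' GROUP BY 1", "MyDB")
log = system.util.getLogger("PoolSizing")
for row in rows:
    log.info("%s: %s" % (row["waiting_on"], row["backends"]))
```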
Meanwhile, look for long-running queries. Ignition will show you the offenders on the status page. You should go to great lengths to kill those off, so that queries run in a few seconds at most, and mostly in a small number of milliseconds.
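On the database side, the pg_stat_statements extension makes the worst offenders easy to rank, if it's installed (again assuming PostgreSQL; the column is mean_exec_time on 13+ and mean_time on older versions):

```python
# Sketch: list the ten statements with the worst average runtime.
# Requires the pg_stat_statements extension; "MyDB" is a made-up datasource name.
rows = system.db.runQuery(
    "SELECT calls, mean_exec_time, query FROM pg_stat_statements "
    "ORDER BY mean_exec_time DESC LIMIT 10", "MyDB")
log = system.util.getLogger("SlowQueries")
for row in rows:
    log.info("%s ms avg over %s calls: %s" % (row["mean_exec_time"], row["calls"], row["query"]))
```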
Finally, if stuck, ask a real DBA to study the situation and optimize the DB.