Database connections and pool sizes

I have inherited an Ignition system that has been pieced together over the years. It has many long-running queries (more than a few take around 30 seconds), but they work. As the company grows and more Vision clients are opened, I have needed to increase the pool size to prevent database connections from dropping. I would like to fix some of these queries and even move some to gateway tags to reduce the load, but who has time for that? My question is: why would I not just give myself a really large pool to handle all the queries? What is my real resource cost per connection? My database is a MySQL server running on a machine by itself, and it only supports Ignition.


This sounds like the client load for charting high-density real-time trend data from transaction groups. If that’s what it is, I solved this by writing a caching module. (-:

Seriously, though, lots of long-running queries often means your database server is underpowered and/or starved of RAM, causing result sets to be tossed out of the database’s own cache. It can also mean your original designer didn’t set up intelligent indices on the tables involved, causing the database to read each entire table every time even the smallest subset is queried. You should have your database log the long-running queries so you can examine the query plan generated for them.
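If it helps, this is roughly what that looks like on MySQL (the 5-second threshold and the log path below are just examples, and you'll need the appropriate privileges to set globals):

```sql
-- Capture any statement that takes longer than 5 seconds
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 5;
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';  -- example path

-- Then take one of the logged statements and run EXPLAIN on it
-- to see the query plan MySQL generates for it.
```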

Long-running queries are a huge red flag – don’t just leave it like this.

I think the reason the queries take so long is that they are full of joins and nested queries. Some of the queries themselves are almost two pages long. I do plan on fixing them, but for right now they work and I have more urgent things to fix. My main question is about the connection pool and practical limits to its size. I have found 10 connections per open Vision client works great, and at any given time I expect there may be 5 clients open. So I currently have a pool size of 50 and it's working great. Obviously this is not the best solution, but the SQL server seems to handle it fine. The CPU does get pegged sometimes, but never for too long, and there seems to be plenty of RAM. Mostly I am curious what the costs of the pool size are, so I have a realistic understanding of where this should fall on my priority list. Any light on the subject is appreciated.

It’s likely that increasing your connection pool won’t do anything positive for you, and may even hurt performance. The only way you’ll know for sure is to run tests on your own system. :\

There are some minor costs in CPU and RAM for keeping more connections around. But the real cost will be on your DB. Up to a point, more connections allow things to go faster. This is because large DB queries usually alternate between being I/O bound and CPU bound, especially when you have large joins as you do. When you have more connections, while query1 is doing CPU intensive work, query2 can do its I/O intensive work, so things go faster.

However, when you have too many active connections, each connection tries to get some time. This means there’s context switching – query1 doesn’t get all its CPU work done before the context switches and query3 gets CPU time. This context switching comes at a high cost, and can make all the queries slower instead of faster. Most of the rules of thumb I’ve seen suggest your connection pool of 50 is already larger than optimal for 99% of the DB setups I see for Ignition, but you can certainly try increasing it to see if your performance improves.
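If you want to see how many of those 50 connections are actually doing work at any moment (as opposed to sitting idle in the pool), MySQL can tell you – something like this, assuming your account has the PROCESS privilege:

```sql
-- Current connection count and the historical peak since the server started
SHOW STATUS LIKE 'Threads_connected';
SHOW STATUS LIKE 'Max_used_connections';

-- Which connections are actively executing statements right now
SHOW FULL PROCESSLIST;
```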

Like Phil said, the real gain here will be working on your queries and DB structure. Looking at the execution plan of your longest running queries and checking for things like full table scans will only take a short time and yield big improvements. I’ve had 30 second queries with a page full of joins go to milliseconds after examining the query execution plan and making a simple change to the db structure.
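As a rough sketch of what that workflow looks like (the table and column names here are made up for illustration, not from your system):

```sql
-- Ask MySQL how it plans to execute the slow query
EXPLAIN
SELECT h.t_stamp, h.value
FROM history h
JOIN machines m ON m.id = h.machine_id
WHERE m.line = 'Line 3';

-- If the plan shows type = ALL (a full table scan) on a large table,
-- an index on the join/filter columns is often that simple change:
CREATE INDEX idx_history_machine ON history (machine_id, t_stamp);
```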


Thank you Kathy. That was helpful information.