Gateway leaks database connections

We’re currently replicating this issue and looking into it. We’ll let you know when we narrow down what exactly is going on.

Any news on this?

Thanks.

Sorry, this ticket is still pending. This forum thread is attached to it so we’ll update here when we learn something.

To reproduce this bug, I set up MySQL server v5.5.31 on an Ubuntu v12.10.1 VM. I’m using Ignition v7.5.8 with the MySQL Connector/J JDBC driver v5.1.23. I then added a database and made 9 connections to this DB in Ignition. With the MySQL service running, I ran “service mysql stop”, and the connections went from valid to faulted. Then I ran “service mysql start” to bring the MySQL service back up, and the connections appeared valid again. Is there anything different I should be doing to reproduce this bug? Otherwise, I’m not seeing a bug here.

That depends on what you mean by “made 9 connections”. The terminology used in the Ignition control panel is not strictly correct, which may cause confusion when discussing this bug. What the control panel calls a ‘database connection’ is really a pool of connections (perhaps ‘connector’ would be a better term?). To avoid ambiguity I will use the terms ‘pool’ and ‘connection instance’ from here on.

A pool using the default settings will allocate up to 8 connection instances. When the bug is triggered, the pool somehow loses track of one of these, and although it still claims it will allocate up to 8, it will now allocate fewer than that. As long as the pool still has at least one working connection instance available, it will show as valid.
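For anyone following along outside Ignition, here is a rough sketch of what such a pool looks like at the DBCP level (the discussion below points at DBCP configuration as the culprit). This is not Ignition’s actual code; the URL, credentials and class name are placeholders, and it assumes commons-dbcp 1.x and Connector/J 5.1 on the classpath.

```java
import org.apache.commons.dbcp.BasicDataSource;

/**
 * Rough sketch (not Ignition's internals) of what one Gateway
 * "database connection" corresponds to: a pool that hands out
 * up to maxActive connection instances.
 */
public class PoolSketch {
    public static BasicDataSource newPool() {
        BasicDataSource pool = new BasicDataSource();
        pool.setDriverClassName("com.mysql.jdbc.Driver");  // Connector/J 5.1
        pool.setUrl("jdbc:mysql://localhost:3306/test");   // placeholder URL
        pool.setUsername("ignition");                       // placeholder credentials
        pool.setPassword("secret");
        pool.setMaxActive(8);       // default-like limit: up to 8 connection instances
        pool.setMaxWait(5000);      // ms to wait for a free instance before failing
        pool.setTestOnBorrow(true);
        pool.setValidationQuery("SELECT 1");
        return pool;
    }
}
```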

The only way to find out whether a pool has been affected is to start as many transactions as possible, wait for all the timeout errors, and count how many were successful (see the test code posted previously).
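The test code referenced above isn’t reproduced in this thread, but a minimal stand-alone equivalent might look like the sketch below: keep borrowing connections until the timeout error appears and count the successes. This is an illustration against a plain DBCP pool, not the original test script; the class and method names are made up for the example.

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import org.apache.commons.dbcp.BasicDataSource;

/** Counts how many connection instances a pool can actually hand out. */
public class PoolProbe {
    public static int countUsable(BasicDataSource pool) {
        List<Connection> held = new ArrayList<Connection>();
        try {
            // Keep borrowing until the pool gives up ("timeout waiting for idle object").
            while (held.size() < pool.getMaxActive()) {
                held.add(pool.getConnection());
            }
        } catch (SQLException timedOut) {
            // Expected once no more instances are available.
        } finally {
            for (Connection c : held) {
                try { c.close(); } catch (SQLException ignored) {}
            }
        }
        // A healthy pool reaches maxActive; a leaking pool falls further short
        // after every restart of the database service.
        return held.size();
    }
}
```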

Both ‘service mysql restart’ and ‘mysqladmin shutdown’ reliably trigger the problem (affected pools lose 1 connection each time). However, it seems that only pools in use are affected. I tried this repeatedly with an open client and only a few pools were affected (the default one, one explicitly in use, and one with logging enabled in an unrelated project). Once a client was opened for another project, the pool for that project also started losing connections.

NOTE: it has previously been observed that a gateway with no open clients can leak on all pools.

We are currently using Ignition v7.5.5 (b1255).

We are still working on mocking this up and will post back here when we have some more information on the issue.

With the default settings, connection pools occasionally lose connections on the Gateway during a MySQL restart. It seems the problem is with the DBCP configuration and not with how Ignition handles MySQL restarts. When trying to retrieve a connection, an exception is thrown internally that reads “timeout waiting for idle object.” Try setting the max wait to -1, which makes the wait time indefinite; setting max wait to -1 worked for me. Also, have you tried setting “testOnBorrow” to true when configuring DB connections in the Ignition web interface? You’ll probably have to add the ‘selectionQuery=“SELECT 1 FROM DUAL”’ as well.

commons.apache.org/proper/common … ation.html
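For reference, the suggested settings map onto DBCP roughly as follows. This is a sketch against DBCP’s BasicDataSource rather than the Ignition web interface; the class name is made up for the example.

```java
import org.apache.commons.dbcp.BasicDataSource;

public class SuggestedSettings {
    /** Applies the workaround suggested above to a DBCP pool. */
    public static void apply(BasicDataSource pool) {
        pool.setMaxWait(-1);                            // -1 = wait indefinitely for a free instance
        pool.setTestOnBorrow(true);                     // validate each instance before handing it out
        pool.setValidationQuery("SELECT 1 FROM DUAL");  // the suggested 'selectionQuery'
    }
}
```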

I presume the selectionQuery is what is labelled “Validation Query” in the Ignition settings; this is “SELECT 1” by default. (For anyone else who is curious: “DUAL” is a dummy table name. In MySQL it isn’t necessary (“SELECT 1” works fine), but it exists for compatibility with other databases.)

testOnBorrow is true by default and the problem persists, although this should really be all that is needed to solve it.
testOnReturn is false by default but setting it to true doesn’t help either.

“timeout waiting for idle object” is the correct and expected exception when the pool has no free connections.

Setting maxWait to -1 does not solve the problem; it merely hides it, making transaction requests wait until another transaction has finished instead of running concurrently as they are supposed to. The pool is still leaking connections, and once it runs out it will fail.
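To make the “hides it” point concrete, here is a minimal sketch (illustrative names, placeholder URL and credentials, assuming commons-dbcp 1.x): with maxWait set to -1, a borrow from an exhausted pool simply blocks until some other caller returns an instance, so requests queue up instead of failing fast.

```java
import java.sql.Connection;
import org.apache.commons.dbcp.BasicDataSource;

/** Shows how maxWait = -1 turns pool exhaustion into indefinite queuing. */
public class MaxWaitDemo {
    public static void main(String[] args) throws Exception {
        final BasicDataSource pool = new BasicDataSource();
        pool.setDriverClassName("com.mysql.jdbc.Driver");
        pool.setUrl("jdbc:mysql://localhost:3306/test");  // placeholder URL
        pool.setUsername("ignition");                      // placeholder credentials
        pool.setPassword("secret");
        pool.setMaxActive(1);   // deliberately tiny pool to force contention
        pool.setMaxWait(-1);    // never time out; block until an instance is free

        final Connection first = pool.getConnection();     // takes the only instance
        Thread second = new Thread(new Runnable() {
            public void run() {
                try {
                    // Blocks here indefinitely instead of throwing
                    // "timeout waiting for idle object".
                    Connection c = pool.getConnection();
                    c.close();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });
        second.start();
        Thread.sleep(2000);     // the second borrower is still waiting...
        first.close();          // ...until the first instance is returned
        second.join();
        pool.close();
    }
}
```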

As previously described, the idle connection evictor doesn’t help either. It seems the pool is completely incapable of identifying these failed connections.