Bug: two database connections to the same PostgreSQL database with different schemas

Context

Due to some issues with pre-processing discrete and backfilled data, I needed to have some of our data historized separately. We are using PostgreSQL with one database per customer, so we decided it would be a good idea to:

  • Create a separate schema inside that same database (called unprocessed_data)
  • Create a second database connection in Ignition, pointing to that same database but with currentSchema=unprocessed_data as an extra database connection property (see the JDBC sketch after this list).
  • Pre-processing would not be enabled on this second database connection, so that we could avoid all of the reported issues.
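
For reference, the second connection above is roughly equivalent to the following plain JDBC setup (a minimal sketch; the host, database name, and credentials are placeholders, not our real values):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class SecondConnectionSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.setProperty("user", "ignition");        // placeholder credentials
            props.setProperty("password", "changeme");
            // The same extra connection property as configured in Ignition:
            props.setProperty("currentSchema", "unprocessed_data");

            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/customer_db", props)) {
                // The driver puts unprocessed_data on the search_path, so unqualified
                // object names on this connection should resolve there.
                System.out.println(conn.getSchema()); // expected: unprocessed_data
            }
        }
    }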

Problem

Ignition does not create the necessary tables when the second database connection is used. This obviously causes all historization to fail, with the following logs visible:

Error forwarding data
ERROR: relation "sqlth_drv" does not exist

I spent a lot of time debugging this, thinking it was a permissions issue on my database in the new schema. But it turns out the issue lies in the way Ignition handles its historian state internally.

Reproduction steps

  1. Create a PostgreSQL database
  2. Create a second schema inside that database, e.g.: raw_data
  3. Create a user with all the necessary permissions on both schemas (see the setup sketch after these steps)
  4. In Ignition, create a database connection to that database without any extra connection properties
  5. Create a tag in Ignition and enable history using the history provider of your database connection
  6. Validate that the necessary tables were automatically created in the public schema.
  7. So far so good.
  8. Create another database connection to that database, but with currentSchema=raw_data for the Extra Connection Properties
  9. Create another tag in Ignition, and enable history using the history provider of the second database connection
  10. Observe that the tables were not created in the raw_data schema, and the "Error forwarding data" log shows up with "ERROR: relation "sqlth_drv" does not exist".
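
For completeness, here is the setup sketch for steps 2 and 3 (raw_data, hist_user, and the credentials are only example names, and the exact privileges you need may differ in your environment):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ReproSetupSketch {
        public static void main(String[] args) throws Exception {
            // Connect as a role that owns the database (placeholder credentials).
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/customer_db", "postgres", "postgres");
                 Statement st = conn.createStatement()) {
                st.execute("CREATE SCHEMA IF NOT EXISTS raw_data");
                st.execute("CREATE USER hist_user WITH PASSWORD 'changeme'");
                // The history user must be able to create and use tables in both schemas.
                st.execute("GRANT USAGE, CREATE ON SCHEMA public TO hist_user");
                st.execute("GRANT USAGE, CREATE ON SCHEMA raw_data TO hist_user");
            }
        }
    }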

Note that this is not an issue with the extra connection property itself. When you have only a single database connection to that database (with currentSchema set), it works perfectly: the tables are created and the history is written to the custom schema.

I have found one forum post from somebody with the same problem, but their workaround is not an acceptable solution when you have a database for every customer. It might be fine if you are only working with a single database, but we don't have the resources to maintain that workaround for every new customer.

I have also reported this issue through a support ticket, but I wanted to document this bug on the forum as well for any other poor developer breaking their head over this.

Look at PostgreSQL schema search settings. IIRC, if an unqualified table is not found in the current schema, certain others will be searched. This mechanism would cause Ignition to find the sqlt* tables in the public schema, and therefore not create new ones.

(Not an Ignition bug.)
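
If you want to check that, a quick JDBC sketch like the one below shows exactly what the server searches for unqualified names on that connection (connection details are placeholders; to_regclass is just one convenient way to ask where an unqualified name currently resolves):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class SearchPathCheck {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/customer_db?currentSchema=raw_data",
                    "hist_user", "changeme");
                 Statement st = conn.createStatement()) {
                // The schemas the server will search for unqualified names:
                try (ResultSet rs = st.executeQuery("SHOW search_path")) {
                    rs.next();
                    System.out.println("search_path = " + rs.getString(1));
                }
                // Where an unqualified sqlth_drv currently resolves (null if nowhere):
                try (ResultSet rs = st.executeQuery("SELECT to_regclass('sqlth_drv')::text")) {
                    rs.next();
                    System.out.println("sqlth_drv resolves to " + rs.getString(1));
                }
            }
        }
    }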

According to the documentation, that's exactly what setting the currentSchema connection parameter should be doing:

currentSchema ( String ) Default null
Specify the schema (or several schema separated by commas) to be set in the search-path. This schema will be used to resolve unqualified object names used in statements over this connection.

I can also confirm that running SHOW search_path; using a Named Query with that database connection will output only my custom schema, without any fallback to the public schema. So something is still going wrong somewhere else, maybe in Ignition, maybe in PostgreSQL.

Consider using a generic JDBC admin tool, like DataGrip, to replicate your connection settings and JDBC driver choice outside Ignition. (It could be a PostgreSQL JDBC bug.)

The support team also thinks the issue might be JDBC driver related. Here's their full response:

I worked through the issue thoroughly and was able to replicate the problem. I tested several potential solutions, including creating a new role specific to the second schema, adjusting the search path, using currentSchema, and applying other workarounds, but unfortunately, the issue persisted.

After further investigation, it appears that the root cause is likely related to the PostgreSQL JDBC driver. Possibly, the getTables() method does not automatically limit results to a specific schema, even if currentSchema is set in the connection properties. Instead, it retrieves all tables accessible to the connected user. This can create issues when trying to restrict results to a specific schema, as the JDBC driver does not inherently enforce this based on the currentSchema setting. During my replication, I downloaded and tried the latest drivers available from PostgreSQL, and the issue persisted regardless.

I also came across several forums discussing similar issues with PostgreSQL in various third party applications, confirming that this is a known behavior. Additionally, I reviewed past internal tickets that describe the same issue, where our developers have determined it to be an underlying limitation with PostgreSQL and the JDBC driver.

I truly wish I had better news, but based on all the information, I recommend reaching out to PostgreSQL for further clarification on this matter. I hope this helps and have a wonderful weekend!

Unfortunately, even on a basic level I cannot reproduce the issue.

Here's what I tried:

  • Create a secondary schema foo in a database, so that I have public and foo schemas.
  • Connect to the database using DBeaver (which uses the JDBC driver), with the currentSchema property set in the Driver properties of the connection settings to foo.
  • Create a table with a qualified name:
    create table if not exists public.asdf ()
    -> The table gets created in the public schema, as expected.
  • Create a table with an unqualified name, expecting it to use the currentSchema set to foo:
    create table if not exists asdf ()
    -> The table gets created in the foo schema, also as expected.

If creating a table with an unqualified name works, I'm assuming Ignition uses some other way to determine whether a table already exists, and that something is going wrong there, either in Ignition or in the JDBC driver.

Perhaps Ignition is querying the information_schema, because it is supported across databases?
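
If that guess were right, the failure mode would look something like the sketch below (purely hypothetical, I have no idea what Ignition actually runs; connection details are placeholders): an existence check on information_schema.tables without a table_schema filter reports public.sqlth_drv even though the connection's search_path only contains raw_data, so the table would be considered "already there".

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class InformationSchemaGuess {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/customer_db?currentSchema=raw_data",
                    "hist_user", "changeme");
                 // No table_schema filter, so matches from every schema are returned:
                 PreparedStatement ps = conn.prepareStatement(
                         "SELECT table_schema, table_name FROM information_schema.tables "
                         + "WHERE table_name = ?")) {
                ps.setString(1, "sqlth_drv");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1) + "." + rs.getString(2));
                    }
                }
            }
        }
    }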

It looks like we're using DatabaseMetaData::getTables to list the tables: https://docs.oracle.com/en/java/javase/17/docs/api/java.sql/java/sql/DatabaseMetaData.html#getTables(java.lang.String,java.lang.String,java.lang.String,java.lang.String[])

AFAICT we pass null for the catalog and schemaPattern args, and it doesn't seem like the underlying PGSQL JDBC driver implementation cares about the connection string property, so maybe this is where things break down.
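
A small sketch of the difference (not the actual Ignition code, just the raw JDBC calls, with placeholder connection details): with null for schemaPattern the driver reports public.sqlth_drv even on a connection whose currentSchema is raw_data, whereas passing the connection's schema restricts the result to raw_data.

    import java.sql.Connection;
    import java.sql.DatabaseMetaData;
    import java.sql.DriverManager;
    import java.sql.ResultSet;

    public class GetTablesSketch {
        static void dump(ResultSet rs) throws Exception {
            while (rs.next()) {
                System.out.println(rs.getString("TABLE_SCHEM") + "." + rs.getString("TABLE_NAME"));
            }
        }

        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/customer_db?currentSchema=raw_data",
                    "hist_user", "changeme")) {
                DatabaseMetaData md = conn.getMetaData();

                // schemaPattern = null matches every schema, so an existence check
                // like this also finds public.sqlth_drv and skips table creation:
                dump(md.getTables(null, null, "sqlth_drv", new String[] {"TABLE"}));

                // Restricting the pattern to the connection's schema behaves as expected:
                dump(md.getTables(null, conn.getSchema(), "sqlth_drv", new String[] {"TABLE"}));
            }
        }
    }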

Would it make sense on your side to include a field in the postgres database connection form to specify a schema, which would in turn lead to the currentSchema property being set on the connection, and allow you to set the schemaPattern internally when using the getTables method?

I think the most robust method is to run SELECT * FROM sqlt_whatever WHERE 0=1 on every standard table, and catch the exception to trigger creation.
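
A sketch of that probe approach (not Ignition's implementation; I'm using sqlth_drv as the example table name, and assuming the connection is in autocommit mode, since a failed probe inside an open transaction would abort it):

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class ProbeSketch {
        /** True if the unqualified table name resolves on this connection's search_path. */
        static boolean tableExists(Connection conn, String table) {
            // table is a fixed internal name such as "sqlth_drv", never user input.
            try (Statement st = conn.createStatement()) {
                // Zero-row probe: fails with "relation ... does not exist"
                // only when the table is not visible on the search_path.
                st.executeQuery("SELECT * FROM " + table + " WHERE 0=1").close();
                return true;
            } catch (SQLException e) {
                return false;
            }
        }
    }

The caller would then run its table creation whenever the probe returns false.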

It might, assuming this is actually the problem. I'm just kind of guessing here.

At this point I don't think we have driver-specific settings available to us, but :person_shrugging: