Adding JDBC Driver for Databricks in Azure

Yes, our team has connected Ignition to Azure Databricks. We are running Ignition 8.1.45.

  1. Download the latest JDBC driver from Databricks and extract the files from the ZIP: https://www.databricks.com/spark/jdbc-drivers-download
  2. Log in to the web interface of your Ignition gateway.
  3. Go to Config>>Databases>>Drivers and click on Create new JDBC Driver.
  4. Give it a name like "Databricks".
  5. In the Classname field, enter "com.databricks.client.jdbc.Driver".
  6. In the JAR Files section, click "Choose File" and select the DatabricksJDBC42.jar file you extracted in step 1.
  7. Driver Type = "Generic"
  8. Leave the Default URL, instructions, and other default values as they are.
  9. Default Validation Query = SELECT 1
  10. Default Translator = POSTGRES. (Note: a custom Ignition translator could be built to match Databricks' ANSI SQL more closely, but POSTGRES works for most queries.)
  11. Click on Save Changes.
  12. Go to Config>>Databases>>Connections and click on Create New Database Connection.
  13. Give it a name like "Databricks".
  14. JDBC Driver = "Databricks"
  15. Connection URL: log in to your Databricks workspace, then:
    A) Go to Compute>>SQL Warehouses.
    B) Select the SQL Warehouse you want to use for compute.
    C) Click on Connection Details, then copy the JDBC URL string into the Connection URL field in Ignition.
  16. SUPER IMPORTANT: at the very end of the connection string, after the final semicolon, add "EnableArrow=0;" (see the example URL after this list). This disables the driver's Apache Arrow result serialization, which does not play nicely with the Java 17 runtime that Ignition runs on. SEE: THIS URL for details.
  17. Username = token (literally the word "token").
  18. Password = [Your Personal Access Token (Developer Access Token) from your Databricks workspace]
  19. Set the Validation Timeout to at least 900000 (ms) so the keep-alive query is only sent every 15 minutes instead of the default 10 seconds.
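
For reference, a finished connection URL might look like the line below. This is only an illustration: the hostname and warehouse ID are made-up placeholders, so use the JDBC URL copied from your own SQL Warehouse Connection Details, with EnableArrow=0; appended at the end.

```
jdbc:databricks://adb-1234567890123456.7.azuredatabricks.net:443/default;transportMode=http;ssl=1;AuthMech=3;httpPath=/sql/1.0/warehouses/abcdef1234567890;EnableArrow=0;
```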

That's all you should need to connect. Make sure your SQL statements use the full [catalog].[schema].[table_name] reference, as in the test sketch below.
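
If you want to sanity-check the driver JAR, URL, and token outside of Ignition first, a minimal standalone JDBC test like the sketch below can help. It is only a sketch: the hostname, warehouse ID, token, and catalog/schema/table names are placeholders, and it assumes DatabricksJDBC42.jar is on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DatabricksJdbcTest {
    public static void main(String[] args) throws Exception {
        // Same driver class entered in the Ignition driver config (step 5).
        Class.forName("com.databricks.client.jdbc.Driver");

        // Same URL pasted into Ignition, with EnableArrow=0; appended (step 16).
        // Hostname and warehouse ID are placeholders.
        String url = "jdbc:databricks://adb-1234567890123456.7.azuredatabricks.net:443/default;"
                + "transportMode=http;ssl=1;AuthMech=3;"
                + "httpPath=/sql/1.0/warehouses/abcdef1234567890;"
                + "EnableArrow=0;";

        // Username is literally "token"; the password is your personal access token.
        String user = "token";
        String pat = "dapiXXXXXXXXXXXXXXXXXXXX"; // placeholder PAT

        // Fully qualified [catalog].[schema].[table_name] reference, as noted above.
        String sql = "SELECT * FROM my_catalog.my_schema.my_table LIMIT 5";

        try (Connection conn = DriverManager.getConnection(url, user, pat);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            int rows = 0;
            while (rs.next()) {
                rows++;
            }
            System.out.println("Connection OK, rows returned: " + rows);
        }
    }
}
```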