Best DB connection architecture for supporting a future change?

I am developing a system for a client that will read data from one set of tables (“In Tables”) and write data to another set of tables (“Out Tables”). (And due to some requirements, I think the best way of implementing those reads and writes is via Jython scripting.)

For testing purposes, the system will initially be implemented on-site on server “A”, which has a DB server hosting both the “In Tables” (always) and the “Out Tables” (for this test only). However, the final destination will be server “B”, with the In Tables remaining on server “A”, and I currently have no idea where the “Out Tables” will end up being hosted.

In order to minimize the pain of moving the solution from one server to another, does it make sense to develop the test version with two different DB Connectors (one for the In Tables and one for the Out Tables), even though on the test system they will point to the same DB server? That way, when the move occurs, I would only need to edit the details of the DB connections.

Or will this introduce issues that I am not aware of? And if so, what would be a better way of doing things?

It makes sense to me.

I would make two configurations that are identical in all but name, maybe calling the first connection “Out_DB” and the other “In_DB”.

Now, for your test system, both configurations will point to the same database. In all your scripting, I would take care to label which database you are writing to or reading from by filling in the database argument in your runQuery and runPrepUpdate calls, something like the sketch below.
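For example (the table and column names here are placeholders, not anything from your system), the read/write pattern might look roughly like this:

```python
# Read from the "In" connection and write to the "Out" connection.
# "In_DB" and "Out_DB" are the connection names suggested above;
# LakeInfo and LakeHistory are placeholder tables for illustration only.
rows = system.db.runQuery("SELECT LakeId, Level FROM LakeInfo", "In_DB")

for row in rows:
    system.db.runPrepUpdate(
        "INSERT INTO LakeHistory (LakeId, Level) VALUES (?, ?)",
        [row["LakeId"], row["Level"]],
        "Out_DB",
    )
```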

Then when you switch over, all you would have to change is the DB connection configuration on the gateway, and all your scripting can remain intact.

Absolutely the right approach, IMNSHO.


I figured as much, but I have a client who A) complains about everything, B) wants the world, C) can’t make a decision about what they want, and D) likes to make last-minute changes.

So I need to be flexible and robust in my solution :roll_eyes:


A follow-on question. Modifying the example taken from the documentation:

level = system.db.runScalarQuery("SELECT Level FROM LakeInfo WHERE LakeId = 'Tahoe'", "MySQLConnector")

In my case this query would be attached to an OPC variable’s Value Changed script buried in a UDT definition. There will be more than one OPC variable making DB calls, so rather than having multiple instances of a magic string saying 'MySQLConnector', is it possible to define this as a constant attached to the UDT definition? And what is the best way to do this?

Heh. Not a fan of tag events, so no help for you on this one.

Peter, something to keep in mind if the connection becomes “remote.” Use the Store and Forward version of the script functions as they will handle the disconnections that will eventually come with remote systems, unless you think you have a design that can tolerate data loss. Sounds like you might just be syncing data that would update at the next available time, so maybe not as critical, but if data loss is a concern, then the SF version of the functions will come in handy.
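If it comes to that, the SF call is close to a drop-in replacement. A minimal sketch, reusing the placeholder table and “Out_DB” connection name from earlier in the thread (note that runSFPrepUpdate takes a list of connection names and returns whether the record was accepted into the store-and-forward queue, not a row count):

```python
# Store-and-forward insert: the gateway queues the record if "Out_DB" is
# unreachable and forwards it once the connection comes back.
# LakeHistory is a placeholder table; "Out_DB" is the connection name used above.
accepted = system.db.runSFPrepUpdate(
    "INSERT INTO LakeHistory (LakeId, Level) VALUES (?, ?)",
    ["Tahoe", 1897.5],
    ["Out_DB"],
)

if not accepted:
    # The record never made it into the store-and-forward queue, so log it.
    system.util.getLogger("LakeSync").warn("Record rejected by store-and-forward")
```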

The DB servers I will be talking to are all on solid networks, and potentially running in VMs on the same physical hardware.

FWIW my solution to the DB connector issues was:

  1. Created a “DbContext” parameter on the UDT which points to the DB Connector that I want to use
  2. Created a read-only “DbContext” string variable on the UDT, with the value coming from the parameter
  3. In my script I read the DbContext variable
  4. The value of the DbContext variable is then fed into the DB calls I am making.

Thus the DbContext is defined in one location for all instances of the UDT. This does create an extraneous variable on each UDT instance, but I can live with that. Alternatively, I could have made a self-contained variable outside of all UDT instances that contained the DbContext, but I felt it wasn’t a good idea to divorce the context from the systems that were using it.
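The Value Changed script ends up looking something like this sketch (the table and column names are placeholders; the only part that matters is how the connection name is resolved, and it assumes Ignition 8’s system.tag.readBlocking):

```python
# Value Changed event script on an OPC tag inside the UDT.
# tagPath and currentValue are supplied by the tag event itself.

# Derive the path of the sibling DbContext member tag from this tag's own path.
parentPath = str(tagPath).rsplit("/", 1)[0]
dbName = system.tag.readBlocking([parentPath + "/DbContext"])[0].value

# Placeholder write; the real queries run against the Out Tables.
system.db.runPrepUpdate(
    "INSERT INTO OutTable (TagName, Value) VALUES (?, ?)",
    [str(tagPath), currentValue.value],
    dbName,
)
```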