I’m currently writing maintenance documentation for what is officially my first Ignition deployment in the wild. The system is a data concentrator that monitors multiple identical machines and writes a consistent set of data to two different DB tables - one table has one row per machine and is updated in real time with various state values; the second table archives other data on the completion of that machine’s current instruction, and has one row per instruction. There is no UI on this project; it just sits on a server and quietly does its job.
I’m documenting how someone can go about modifying the project, either by adding new machines or by adding additional data points to each machine. I’ve optimized the project for ease of adding a new machine, but now I am second-guessing the overall architecture, because adding a new data point involves editing multiple disparate parts of the project.
So I’m throwing this open for suggestions on how (or whether) my architecture could be improved for ease of future maintenance. The current architecture does work, and I have no budget for a redesign, but I am looking to see what I could do better in the future. I’m even curious to know if all this could have been better done with a custom module (not that I know how to build one - but knowing the benefits of going down that path would be interesting).
OK … with that preamble over, here is a summary of the current architecture:
- Each OPC UA connection to a machine has a well defined name.
- A single UDT defines everything known about a machine.
- Each instance of the UDT expects to read data from an OPC connector with the same name as its own instance name.
- Data is written into two separate tables in the DB through one of two Named Queries: an Update query writes the real-time data into the “real time” table, and an Insert query writes the archive data into the “archive” table.
- A Gateway timer script iterates over every instance of the UDT, marshals the status data of each machine, and sends it to the Update query via parameters (a sketch of this pattern appears after the note below).
- A value change script (on the Instruction number) in the UDT marshals the archive data and sends it to the Insert query via parameters.
Note that I chose to use Named Queries so that the SQL for the update and insert would be visible and in a well defined location in the project. I could have used Transaction Groups to define the SQL, but they (as of 8.1.3) apparently can’t be parameterized - so I would end up with one Transaction Group per machine per query type, which means that adding a new data item would involve individually editing every transaction group.
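For illustration, the timer script follows roughly this pattern. This is a minimal sketch only - the folder path, UDT member names, Named Query path, and project name below are placeholders, not the real names in the project:

```python
# Minimal sketch of the timer-script pattern (Jython). All names below
# (folder, UDT members, Named Query path, project) are placeholders.

FOLDER = "[default]Machines"   # assumes this folder holds only the machine UDT instances

for machine in system.tag.browse(FOLDER).getResults():
    name = str(machine["name"])
    base = str(machine["fullPath"])

    # Read the status members of this instance in one blocking call.
    members = ["State", "CycleCount", "Alarm"]
    qvs = system.tag.readBlocking(["%s/%s" % (base, m) for m in members])
    values = [qv.value for qv in qvs]

    # Marshal into Named Query parameters; the query is an UPDATE keyed
    # on the machine name.
    params = {
        "machineName": name,
        "state":       values[0],
        "cycleCount":  values[1],
        "alarm":       values[2],
    }

    # In gateway scope (8.1) the project name is the first argument.
    system.db.runNamedQuery("MyProject", "RealTime/UpdateStatus", params)
```

The value change script on the Instruction number follows the same marshal-and-call pattern, just targeting the Insert Named Query instead.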
With this design, adding a new machine involves:
- Create the OPC connector with the defined name
- Add an instance of the UDT with the same name
And everything happens automagically.
However, if I want to add a new data item to the system, I need to:
- Add the item to the variables of the UDT
- Edit the value change script buried on one specific UDT variable to marshal the value of the new item to the Insert Named Query
- Edit the GW timer script to marshal the value of the new item to the Update Named Query
- Edit the Insert Named Query to receive the new parameter and map it to the new SQL column
- Edit the Update Named Query to receive the new parameter and map it to the new SQL column
I am unhappy with all the fiddling needed to add a new data item, e.g. manually editing two different scripts in two different locations, and then manually editing the two different Named Queries.
I could possibly see building dynamic SQL in the scripts in order to eliminate the Named Queries, but I’m not sure whether that is a responsible idea.
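To illustrate what I mean, the dynamic-SQL idea might look something like the sketch below: a single list of (UDT member, DB column) pairs drives both the tag reads and the generated statement, and the values still go through a prepared statement rather than string concatenation. All of the member names, column names, table name, and database name here are made up:

```python
# Sketch of the dynamic-SQL alternative (Jython). Member names, column
# names, table name, and database name are illustrative only.

STATUS_COLUMNS = [
    ("State",      "state"),
    ("CycleCount", "cycle_count"),
    ("Alarm",      "alarm"),
]

def updateRealTime(machineName, basePath):
    paths  = ["%s/%s" % (basePath, member) for member, _ in STATUS_COLUMNS]
    values = [qv.value for qv in system.tag.readBlocking(paths)]

    # Build "SET col = ?, ..." from the same list, so adding a data item
    # only means adding one tuple above (plus the new DB column itself).
    setClause = ", ".join("%s = ?" % col for _, col in STATUS_COLUMNS)
    query = "UPDATE real_time SET %s WHERE machine_name = ?" % setClause

    # Values are still bound as prepared-statement parameters.
    system.db.runPrepUpdate(query, values + [machineName], database="mydb")
```

The Insert query for the archive table could be generated the same way from a second column list.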
Another improvement could be a UI somewhere that defines the mapping of which UDT item goes to which table, so that adding a new UDT item to the queries would only require configuration rather than editing. (I believe this would also require going down the dynamic SQL route - and we never paid for either the Vision or Perspective modules!)
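Even without Vision or Perspective, the mapping could live in a dataset memory tag that is editable in the Designer’s tag editor, which would make adding an item pure configuration. A rough sketch, with the tag path and dataset column names assumed:

```python
# Sketch: read the member-to-column mapping from a dataset memory tag.
# Tag path and dataset column names ("member", "column") are assumptions.

def loadMapping(mapTagPath="[default]Config/RealTimeMapping"):
    ds = system.tag.readBlocking([mapTagPath])[0].value
    return [(ds.getValueAt(r, "member"), ds.getValueAt(r, "column"))
            for r in range(ds.getRowCount())]
```

The timer and value change scripts would then feed that list into the same dynamic-SQL builder shown above, so a new data item would mean: add the UDT member, add the DB column, and add a row to the dataset.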
What other suggestions would people make?