In my day job I work on multiple, unrelated projects writing PLC code that is shipped to customer sites. I have 2 actual PLCs in my office that I use for testing (each at a static IP address), but each project only involves a single PLC. So when I want to test project A, I load A's code into a PLC and test away. When I move on to project B, I load that project's code into the same PLC and code away.
While each project is unique, there is an underlying PLC architecture that is the same across multiple projects, with consistent tag names [1]. However, there are also many unique aspects to each project.
[1] For what I work on, there is no naming consistency imposed across projects, even though there is supposedly a naming guideline "somewhere". While a single client might have consistent names across all the projects at their site, a different client could have a slightly different name for the same functionality, depending on who wrote the code. Yeah, I know. But it's out of my control and is what I have to deal with.
My desire is to set up an Ignition test rig that lets me easily construct screens to facilitate my testing, while supporting both the common architecture and the uniqueness of each project. Right now I am trying to decide on the appropriate Ignition architecture to meet my goals.
So far I have considered architectures like these:
Big ball of mud
As there is only 1 PLC in play at a time, at a fixed IP address, use a single PLC connection. Then simply load every tag and every asset from every project into a single Ignition project, draw up all the screens for each project, add a very high-level navigation system, and hope for the best. This architecture has lots of problems: test environments can't be separated and archived with the matching PLC code; because all tags are read through a single gateway, there will always be tags that can't be read (whichever project isn't currently loaded in the PLC); and the GW backup includes all tags from all projects.
You get a VM, You get a VM, Everyone gets a VM
Spin up a base VM with a minimal set of common assets in it, and then clone the VM for every new project and customize as needed. This allows me to separate the projects for archiving purposes. I have experimented with an Ubuntu Server VM image and have gotten the VM's size down to under 6 GB. I'm sure this could be improved on, but that doesn't change the fact that I would be spinning up complete VMs (although I would only run a single VM at a time), or that software maintenance is now N times harder, where N is the number of VMs I have.
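To make the cloning step concrete: with VirtualBox as the hypervisor (just an example; any hypervisor with a clone facility works the same way, and the VM names here are placeholders), the per-project clone is a one-liner:

```
# Clone the maintained base image into a fresh, project-specific VM
# ("IgnitionBase" and "ProjA-Test" are placeholder names).
VBoxManage clonevm "IgnitionBase" --name "ProjA-Test" --register

# Run it headless and point the Designer at its gateway as usual.
VBoxManage startvm "ProjA-Test" --type headless
```

So the mechanics are easy enough; it's the N-times maintenance burden that bothers me.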
Load 'em if you've got 'em
Have a single VM, and each time I swap projects, I simply load the matching GW backup into the VM and away I go. In hindsight (after writing this question), this may be the cleanest architecture for what I want to achieve, as I now only have to do software maintenance on a single VM.
Docker to the rescue?
This is an area where I have zero practical experience, but from what I have seen on IU this may be a possibility. I spin up a single VM/server, and have a Docker container for each project that I will test, plus a base project to hold common assets. Software maintenance is now reduced to a single system, and I can separate my projects. The only downside I can see is that I'd need to edit text files to spin up only the required Docker container (unless there is some sort of fancy Docker meta user control that I don't know of; see the sketch below).
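From my reading so far, Docker Compose "profiles" might be exactly that control: define every project's gateway once in a single compose file, then pick which one starts from the command line, with no file editing per swap. Here is a minimal, untested sketch of what I'm imagining, using the official inductiveautomation/ignition image (the project names, password, and 8.1 tag are just placeholders):

```yaml
# docker-compose.yml (untested sketch): one gateway service per project.
# Each service carries a Compose "profile", so only the selected one starts.
services:
  proj-a:
    image: inductiveautomation/ignition:8.1
    profiles: ["proj-a"]
    ports:
      - "8088:8088"   # both services map the same host port; fine, since only one runs at a time
    environment:
      ACCEPT_IGNITION_EULA: "Y"
      GATEWAY_ADMIN_PASSWORD: "password"   # placeholder
    volumes:
      - proj-a-data:/usr/local/bin/ignition/data   # per-project gateway state, survives restarts

  proj-b:
    image: inductiveautomation/ignition:8.1
    profiles: ["proj-b"]
    ports:
      - "8088:8088"
    environment:
      ACCEPT_IGNITION_EULA: "Y"
      GATEWAY_ADMIN_PASSWORD: "password"
    volumes:
      - proj-b-data:/usr/local/bin/ignition/data

volumes:
  proj-a-data:
  proj-b-data:
```

Swapping projects would then just be:

```
docker compose --profile proj-a up -d     # start project A's gateway
docker compose --profile proj-a down      # stop it before switching
docker compose --profile proj-b up -d     # start project B's gateway
```

If that holds up in practice, it would give me the single-system maintenance of the one-VM option with the per-project separation of the VM-per-project option.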
My ignorance is my problem
There is some other architecture that I have overlooked that solves this in a simple and elegant manner. But I don't know what I don't know.
Does anyone have any advice/suggestions for me about this topic?