Ignition architecture for PLC testing?

In my day job I work on multiple, unrelated projects writing PLC code that is shipped to customer sites. I have 2 actual PLCs in my office that I use for testing (each at a static IP address), but each project only involves a single PLC. So when I want to test project A, I load A's code into a PLC and test away. Then when I move on to project B, I load that project's code into the same PLC and carry on.

While each project is unique, there is an underlying PLC architecture that is the same across multiple projects, with consistent tag names [1]. However, there are also many aspects that are unique to each project.

[1] For what I work on there is no naming consistency imposed across projects, even though there is supposedly a naming guideline "somewhere". While a single client might have consistent names across all the projects at their site, a different client could have a slightly different name for the same functionality, depending on who wrote the code. Yeah, I know. But it's out of my control and is what I have to deal with.

My desire is to set up an Ignition test rig that would let me easily construct screens to facilitate my testing, while supporting both the common architecture and the uniqueness of each project. Right now I am trying to decide on the appropriate Ignition architecture to meet those goals.

So far I have considered architectures like these:

Big ball of mud
As there is only 1 PLC in play at a time, at a fixed IP address, use a single PLC connection. Then simply load every tag and every asset from every project into a single Ignition project, draw up all the screens for each project, add a very high-level navigation system, and hope for the best. This architecture has lots of problems, like not being able to separate test environments and archive them with the matching PLC code. In addition, because every project's tags live on the single gateway but only one PLC is connected at a time, there will always be tags that can't be read, and the GW backup includes all tags from all projects.

You get a VM, You get a VM, Everyone gets a VM
Spin up a base VM with a minimal set of common assets in it, and then clone the VM for every new project and customize as needed. This allows me to separate the projects for archiving purposes. I have experimented with an Ubuntu server VM image, and have gotten the VM's size down to under 6 GB. I'm sure this could be improved on, but that doesn't change the fact that I would be spinning up complete VMs (although only a single VM at a time), or that software maintenance is now N times harder, where N is the number of VMs that I have.

Load 'em if you've got 'em
Have a single VM, and each time I swap projects, I simply load the matching GW backup into the VM and away I go. In hindsight (after writing this question) this may be the cleanest architecture for what I want to achieve, as I only have to do software maintenance on a single VM.
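To make that concrete, I imagine the swap could be scripted as something like the sketch below. This is only a sketch built on my assumptions: the backup folder, the one-.gwbk-per-project naming convention, the gwcmd path, and the restore/restart options would all need to be verified against `gwcmd --help` on the actual install; worst case, the same swap is a manual restore through the gateway web UI.

```python
#!/usr/bin/env python3
"""Swap the single test gateway between per-project gateway backups.

Sketch only: BACKUP_DIR, GWCMD, and the gwcmd options are assumptions --
confirm them against `gwcmd --help` for the installed Ignition version.
"""
import subprocess
import sys
from pathlib import Path

BACKUP_DIR = Path("/srv/ignition-backups")   # hypothetical: one <project>.gwbk per project
GWCMD = "/usr/local/bin/ignition/gwcmd.sh"   # hypothetical default install path

def restore_project(project: str) -> None:
    backup = BACKUP_DIR / f"{project}.gwbk"
    if not backup.is_file():
        sys.exit(f"No gateway backup found for '{project}': {backup}")
    # Assumed gwcmd options: stage the backup for restore, then restart the
    # gateway so it picks the backup up. Verify both against gwcmd --help.
    subprocess.run([GWCMD, "--restore", str(backup)], check=True)
    subprocess.run([GWCMD, "--restart"], check=True)
    print(f"Gateway is restarting with '{backup.name}'.")

if __name__ == "__main__":
    if len(sys.argv) != 2:
        sys.exit(f"usage: {sys.argv[0]} <project-name>")
    restore_project(sys.argv[1])
```

The matching .gwbk would then get archived alongside each project's PLC code, which also covers the archiving problem from the first option.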

Docker to the rescue?
This is an area where I have zero practical experience, but from what I have seen on IU this may be a possibility. I spin up a single VM/server, and have a Docker container for each project that I will test, plus a base project to hold common assets. Software maintenance is now reduced to a single system, and I can separate my projects. The only downside I can see is that I'd need to edit text files to spin up only the required container. (Unless there is some sort of fancy Docker meta user control that I don't know of.)
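For what it's worth, the text-file editing might be avoidable with a small wrapper like the one below, which stops whatever project container is running and starts (or creates) the requested one. It's a sketch under my assumptions: the project names are placeholders, and the official inductiveautomation/ignition image tag, data-volume path, and environment variables should be checked against that image's documentation. Docker Compose "profiles" apparently offer a built-in way to do the same selection without editing the compose file.

```python
#!/usr/bin/env python3
"""Bring up the Ignition container for one project and stop the others.

Sketch only: project names are placeholders, and the image tag, volume
path, and environment variables are assumptions taken from the official
inductiveautomation/ignition image docs -- verify before use.
"""
import subprocess
import sys

PROJECTS = ["projectA", "projectB", "projectC"]   # hypothetical project list
IMAGE = "inductiveautomation/ignition:8.1"        # pin to the real target version

def run(*args: str) -> str:
    return subprocess.run(args, check=True, capture_output=True, text=True).stdout

def activate(project: str) -> None:
    # Stop the other projects' gateways so only one owns port 8088.
    for other in PROJECTS:
        if other != project:
            subprocess.run(["docker", "stop", f"ignition-{other}"],
                           capture_output=True)   # ignore "no such container"
    name = f"ignition-{project}"
    existing = run("docker", "ps", "-a", "--format", "{{.Names}}").split()
    if name in existing:
        run("docker", "start", name)
    else:
        # One named volume per project keeps each gateway's state separate,
        # which is what lets the projects be archived and swapped independently.
        run("docker", "run", "-d", "--name", name,
            "-p", "8088:8088",
            "-v", f"{name}-data:/usr/local/bin/ignition/data",   # assumed data path
            "-e", "ACCEPT_IGNITION_EULA=Y",                      # assumed env vars
            "-e", "GATEWAY_ADMIN_PASSWORD=password",
            IMAGE)
    print(f"{name} should be reachable at http://localhost:8088")

if __name__ == "__main__":
    if len(sys.argv) != 2:
        sys.exit(f"usage: {sys.argv[0]} <project-name>")
    activate(sys.argv[1])
```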

My ignorance is my problem
There is some other architecture that I have overlooked that solves my problem in a simple and elegant manner. But I don't know what I don't know.

Does anyone have any advice/suggestions for me about this topic?

Where the physical client systems have different hardware specs, I give them their own VMs. (I also give them virtual networks matching the plant, so a GW backup can just be dropped in.)

Where physical client system hardware is consistent, a single VM compatible with many gateway backups is reasonable.

I would treat docker and docker compose as local assets, not buried in another VM. Treat the containers as VMs for quick and dirty testing. It can be challenging to get docker containers to fully mimic a client site, though, unless they are also using docker.

(I am finally playing with docker for some module development scenarios.)


Typically the systems I work on have consistent/similar hardware specs. One of the PLCs I have is rack-based while the other is rackless. I can run either system without any actual I/O (and I don't have any I/O cards in my office anyway). The only hardware consideration for the racked PLC is whether a different CPU is spec'd for the project. In that case I just hack the test project to use the CPU that I have.

The reason I was talking about (docker in) VMs is that I want to move the Ignition setup off my work laptop and onto a local TrueNAS server (which is FreeBSD-based). So I would need to spin up a VM of some sort to host Ignition, because I don't want to install the Linux binary compatibility layer on the server's base FreeBSD, and you can't install that layer into a FreeBSD jail. (See here for a discussion of Ignition on FreeBSD from almost 10 years ago.)

But I don't need to replicate any client Ignition system, because there aren't any. All the HMIs that our clients use are CIMPLICITY-based, and doing what I want to do in CIMPLICITY would be a Herculean task.

So it's looking like the single machine that loads GW backups is the way to go.