IA Webinar on Ignition+Docker (April 2023) Q&A

Greetings all! We recently did a webinar where @joe, @kgamble, and I discussed and demoed some ways to leverage Ignition and containers for faster development. I'm posting those questions and our answers here for everyone's benefit! Here are a couple of links to get started with:

And now the Q&A:


Webinar Q&A

Design-related question: What is the reason to have one gateway for the backend and another for the frontend?

A(kcollins): There are many possible reasons for architecting your system like this. Load-balancing tag and history operations to the backend and client servicing to the frontend is a common one. See more about this on the Architectures page of our website.

A(jdolivo): This can also be useful in a multi-environment (e.g., Dev, Test, Prod) setup where you want to decouple changes made to views from changes to device connections.

A(kgamble): This can also enable you to create a true scale-out architecture: when you have an unexpectedly high number of clients, you can spin up identical copies of the frontend to help balance the resource load. This is how modern web applications like social networks scale to meet resource demands.


As a total novice on Docker, I see references to images in the yml files. Where are the images sourced, and where do they live locally so that Docker is able to access them?

A(kcollins): Images (as in "disk images") originate from something called a registry. One of the more common registries is Docker Hub, and it is the implied default registry for many image references. For example, the image inductiveautomation/ignition:8.1.27 is actually docker.io/inductiveautomation/ignition:8.1.27. You can pull images from a number of other registries, including ones you host yourself. Once an image has been "pulled" (i.e. downloaded and extracted), it lives within the filesystem of your host system for use by the container runtime (Docker Engine).
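As a quick illustration of those equivalent references (the tag is just an example):

```bash
# Short form -- the registry defaults to docker.io (Docker Hub):
docker pull inductiveautomation/ignition:8.1.27

# Fully-qualified form of the same image reference:
docker pull docker.io/inductiveautomation/ignition:8.1.27

# List the images now stored locally by Docker Engine:
docker image ls
```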


How do you handle Ignition updates?

What is the process to upgrade Ignition to a new minor revision on a docker container?

A(kcollins): Upgrading an Ignition container can be done rather easily if you've made sure to map a volume to your Ignition data folder. Once you've taken and preserved a fresh GWBK (still a good idea!), simply update the image reference (changing the tag from 8.1.26 to 8.1.27, for example) and recreate your container (with the same volume that holds your gateway state). Note: upgrading from versions prior to 8.1.26 may require some special steps, since we recently changed the image to default to non-root operation to better align with good security defaults. More information on that here.
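To illustrate, a minimal Compose sketch of that upgrade flow (service/volume names are just examples; the data path follows the image docs, but verify it for your version):

```yaml
services:
  gateway:
    # To upgrade, bump only this tag (e.g. 8.1.26 -> 8.1.27) and re-run
    # `docker compose up -d`; the container is recreated against the same volume.
    image: inductiveautomation/ignition:8.1.27
    ports:
      - "9088:8088"
    volumes:
      # Gateway state lives in this volume, so it survives recreation.
      - gateway-data:/usr/local/bin/ignition/data

volumes:
  gateway-data:
```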

A(jdolivo): At 4IR, we use deployment pipelines to automate the process of taking backups before doing upgrades.


How do you define the resources needed by a docker container server for Ignition projects?

A(kcollins): Resource allocation for a server wouldn't be much different than working with a standard Linux server. After all, containers are simply processes on your Linux host (in the case of Docker Desktop, that Linux host is a virtual machine). If you run something like ps aux on your host after you've launched a few containers, you'll see the related processes there on your host.

By default, containers share all the CPU/memory/disk resources of the host, but you can apply additional constraints per-container to restrict the CPU usage and memory allocations if needed.
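For example, with plain docker run (values are illustrative, not sizing guidance):

```bash
# Constrain a container to at most 2 CPUs and 4 GiB of memory:
docker run -d --name ignition-gw \
  --cpus=2 --memory=4g \
  -p 9088:8088 \
  inductiveautomation/ignition:8.1.27
```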

A(jdolivo): When using an orchestrator like Kubernetes, you can also set explicit requests (and limits) that will be used to schedule a workload on a particular host.
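A sketch of what that looks like in a Kubernetes container spec (again, the values are illustrative):

```yaml
# Fragment of a Pod/Deployment container spec:
containers:
  - name: ignition-gateway
    image: inductiveautomation/ignition:8.1.27
    resources:
      requests:      # used by the scheduler to place the workload on a node
        cpu: "1"
        memory: 2Gi
      limits:        # hard caps enforced at runtime
        cpu: "2"
        memory: 4Gi
```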


How do you handle backups? Do you backup each container?

A(kcollins): Every gateway I spin up always gets configured with Scheduled Backups as one of the first steps. One common pattern I've used successfully is to set up a bind-mount of a path on your host into your container and then point the scheduled backups at that container folder. This way you get GWBKs piped back out to your host.
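A minimal sketch of that bind-mount pattern (paths are illustrative; with the newer non-root image you may need to adjust ownership of the host folder):

```yaml
services:
  gateway:
    image: inductiveautomation/ignition:8.1.27
    volumes:
      - gateway-data:/usr/local/bin/ignition/data
      # Bind-mount a host folder, then point Scheduled Backups at /backups
      # inside the gateway so GWBKs land on the host.
      - ./backups:/backups

volumes:
  gateway-data:
```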


What resources do you recommend for a beginner learning to use containers with Ignition?

A(kcollins): We've recently put up a guided Elective Studies course for Ignition with Docker on Inductive University. It walks you through more of the basics and would be a good precursor to some of the content discussed in the webinar.


Do you support any Ignition version as a container?

A(kcollins): We've had an official Inductive Automation Ignition image since version 8.1.0. Features have been added along the way, though, so keep that in mind if something doesn't work as expected with some of the older versions. I've been maintaining an unofficial "community" image for quite some time that has tags for 7.9 and 8.0 up through the latest 8.1.x versions. You can also inspect how that image is built by checking out the GitHub repo here.


The image that is being loaded, is that a gateway backup? Or the image of a VM that has a gateway installed?

A(kcollins): No VMs were harmed (or used) in the making of this demo. :grin: In the webinar demo solution, you'll see project1-demo as our basic example. There we're not loading a supplementary gateway backup--instead we're just launching a brand-new Ignition gateway, equivalent to a fresh traditional installation of Ignition. You can see some patterns in the project2-scaleout and project3-iiot examples where the solution actually restores a baseline GWBK as part of the launch of the container. This pattern can be very useful for pre-loading baseline functionality into your own customized solution.
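A sketch of that restore-on-launch pattern, assuming a base.gwbk sitting next to the Compose file (the -r runtime argument is described in the image docs):

```yaml
services:
  gateway:
    image: inductiveautomation/ignition:8.1.27
    volumes:
      # Mount the baseline backup into the container...
      - ./base.gwbk:/restore.gwbk
    # ...and have the entrypoint restore it on first launch.
    command: -r /restore.gwbk
```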


How are TLS certs managed and can they be updated automatically?

A(kcollins): We didn't have time to cover this in the demo solution, but one tool that I like to use for development and local TLS certs is Smallstep CA. The server component can run as a container and it has a CLI for easy management/creation of certificates. In the webinar demo, I had it tied into a wildcard cert that I created for *.localtest.me and mapped that certificate/key into the Traefik configuration. On my host system, I simply added the Root/Intermediate CA certificates (that were generated as part of setting up my step-ca container) to the OS keystore; with that, everything shows up as "Secure". :+1:

Traefik can also manage certificates with an ACME server (such as Let's Encrypt) automatically, which can be very handy (and somewhat magical :star_struck:)
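For reference, a minimal static-configuration fragment for Traefik's ACME integration (the resolver name, email, storage path, and challenge type are placeholders to adapt):

```yaml
# traefik.yml (static configuration) fragment:
certificatesResolvers:
  letsencrypt:
    acme:
      email: you@example.com
      storage: /letsencrypt/acme.json
      httpChallenge:
        entryPoint: web
```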

A(jdolivo): At 4IR, we use the open-source cert-manager tool to obtain Let’s Encrypt certificates using a DNS challenge.

A(kgamble): Coincidentally, I happened to be working on automating this for local development. I recently made a pull request to add more functionality for this into our public traefik-proxy repository, using a lot of the functionality that Kevin mentioned! Enabled TLS Encryption by keith-gamble · Pull Request #1 · design-group/traefik-proxy · GitHub (Go judge my code and leave comments!)


Are the licensing & 3rd-party modules (i.e. MQTT Distributor, etc.) persisted when doing an update by simply changing the image path and re-issuing the docker compose up command?

A(kcollins): Licensing in Docker Containers is a bit different from traditional installations. You can roughly equate starting a new container to spinning up a new virtual machine, installing a minimal OS on it, and then downloading and installing Ignition. That sequence trips up a 6-digit license key and you'll likely experience that if you recreate a container. The answer to that is our 8-digit leased activation licensing that will persist across container lifecycle events. Check out time index 25:48 of my ICC 2022 session for a brief overview.


Does the port for the container default to 9088? Asking based on the docker training that used port 9088

A(kcollins): Typically, you'll always have the container listen on the default port 8088/tcp (and others). The container itself has a virtual NIC attached to whatever network you attach it to (such as the default Docker bridge network or the proxy network we defined). You can then choose to "publish" that port to your host--at that point you have to pick a unique port there that isn't already in use. The example docker run command on our Docker Hub page (and our docs page) chooses 9088, but you can use anything you want on the "host" side.
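That mapping looks like this on the command line (9088 is arbitrary; any free host port works):

```bash
# host port 9088 -> container port 8088
docker run -d -p 9088:8088 --name my-ignition \
  inductiveautomation/ignition:8.1.27
```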


Where is the traefik docker-compose?

A(kcollins): In the webinar demo solution, we stood up Traefik independently of the other solutions so it could persist. It is located here.


How much disk space does each container use?

A(kcollins): Containers share the read-only layers of their image, so launching one doesn't duplicate the image contents; only files that are created or modified accumulate space in the container's writable layer. This can be very efficient versus allocating large blocks of storage for an entire virtual machine, for example.
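Two handy commands for checking this on your own host:

```bash
# Show each container's writable-layer size (plus virtual size incl. image):
docker ps --size

# Summarize disk usage across images, containers, and volumes:
docker system df
```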


How do you enable the gateway network, and approvals, in the YAML?

A(kcollins): This can't be fully automated with the currently available configuration options (environment variables or other runtime/JVM/wrapper arguments). The techniques for achieving this usually involve a custom certificate authority and issued certs/keys (not self-signed generated ones) for the Ignition gateways.


Are there any restrictions connecting to field equipment (PLCs), or, as long as networks are set up well, do Docker containers behave just like host installs?

A(kcollins): By default, containers wouldn't have any issue routing out (through the host) to devices that the host can reach. In certain situations you do have to be conscious of the IP CIDRs that Docker allocates for the networks it creates (such as that proxy network), but those can be customized if needed.
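For example, you can pin the subnet when creating a user-defined network so it never collides with plant-floor IP ranges (the subnet value is illustrative):

```bash
docker network create --subnet 172.30.0.0/16 proxy
```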


What's the recommendation when running Ignition in Production with redundant Gateways?

A(jdolivo): I recommend ensuring that both gateways are running on different hosts, whether that means bare-metal instances or virtual machines, rather than running two containers on the same host via a Docker Compose file like we showed in the webinar. This can be achieved manually or via a multi-host orchestrator like Docker Swarm, Nomad, or Kubernetes.


With licensing for Ignition running on Docker working similarly to Ignition Maker licensing, is it still required to deactivate a license before upgrading Gateways?

A(kcollins): When using 8-digit leased activation, the session state is stored in your data volume, so there is no concern with upgrades--it will pick back up where it left off!

A(kgamble): If you were to license a container with a 6-digit key, such as a developer license, you would need to deactivate and reactivate it any time you rebuild the container--hence the addition of the 8-digit keys to simplify that requirement.


First, a plug for the Inductive University course on Docker containers--a great, easy start on Ignition/Docker. Question: all of the containerized architectures I've seen are based around the idea of a single client operating an integrated environment with one or more facilities, gateways, edge gateways, etc. But what ideas are there for how integrators could leverage Docker for DevOps? For example, iterating on a standard product offering, then instantiating that across dozens of development environments representing dozens of standalone facilities, each operating some version of that standard offering running on Ignition?

A(jdolivo): This is challenging but achievable using a combination of project inheritance, version control, and CI/CD deployment pipelines. We are supporting exactly this use case with some integrators and end users at 4IR. Please reach out if you’d like to discuss further!


Can you use the Designer with the container and save and run projects, without changing the live project?

A(kcollins): With the Ignition gateway running in a container, you'd typically be using GUI tools such as Designer on your host machine (connecting in either through the reverse proxy address, like in our demo, or the published port that maps into the container). At that point it would work like any other Ignition installation. Part of the joy of containers, though, is how quickly you can spin up another instance of Ignition alongside an existing one and perhaps do some isolated testing there.

A(jdolivo): As far as not changing the live project, this is where you could have a multi-environment setup (e.g., Dev and Production) and could manage changes to those via version control. Check out the presentation I did on this from ICC 2022: Git Serious: Hybrid Cloud Deployment with DevOps | Inductive Automation


What if an update fails--are downgrades as simple as upgrades? Just change the Docker image version down and rerun compose up?

A(kcollins): This is where you'd want to ensure you have a gateway backup, as with any other upgrade. If you launch the Ignition container image against an existing volume of an older version, it will perform the upgrade as expected. However, if you shut that down and try to launch an older version of the container image against an existing volume from a newer version, the container will refuse to start. There are some promising developments in the container world that might make it fairly easy to push your volume to a container registry in the future, which could be a helpful way to snapshot your volume prior to an upgrade. In K8s there are also conventions for volume snapshots that might be leveraged as well.

A(kgamble): This is part of where the linked developer Docker image can come in handy: the way it abstracts your project contents from your gateway config allows you to roll back gateways without needing to worry about internal database schema changes. This can obviously fall apart if you have gateway config you need to roll back as well, but if it's primarily project content, tags, and theming, then it's easy to do with that image.
GitHub - design-group/ignition-docker: A preloaded Ignition Docker Development Environment


How are changes to projects captured? Is git being used inside the containers? Or do you map the projects somewhere local and use git on the host?

A(jdolivo): You can either map the projects / data directory to a folder on the host, or run git inside the container. At 4IR, we use the latter option in production.
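A sketch of the first option, bind-mounting just the projects folder so git can run on the host (the in-container path follows the official image layout, but verify it for your version):

```yaml
services:
  gateway:
    image: inductiveautomation/ignition:8.1.27
    volumes:
      # Projects appear as plain folders on the host, ready for `git init`:
      - ./projects:/usr/local/bin/ignition/data/projects
```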

A(kgamble): With the linked developer Docker image, relevant project content is pushed outside of the container and into the host filesystem. This allows you to track your project with Git without the container even needing to know it's being version-controlled. Your host does all of the heavy lifting, which helps create some backwards compatibility with the official image: everything is pulled together in a format that can be packaged up for a production deployment on the official image.
GitHub - design-group/ignition-docker: A preloaded Ignition Docker Development Environment


Why do you use Traefik over NGINX?

A(kcollins): Traefik has some nice capabilities for dynamic route configuration based on container state, tying into the underlying Docker Engine socket. I've got nothing against NGINX--it works great too--but I'm not as familiar with any available auto-discovery/creation of routes there, which is what made this demo pretty powerful. If you've got more information, please share!
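For a flavor of that dynamic configuration, here's roughly what container labels look like with Traefik's Docker provider (router/service names and the hostname are illustrative):

```yaml
services:
  gateway:
    image: inductiveautomation/ignition:8.1.27
    labels:
      - "traefik.enable=true"
      # Traefik discovers this route via the Docker socket; no proxy restart needed.
      - "traefik.http.routers.gateway.rule=Host(`gateway.localtest.me`)"
      - "traefik.http.services.gateway.loadbalancer.server.port=8088"
```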

A(jdolivo): At 4IR, we’re using NGINX in production in tandem with the open-source ingress-nginx controller.


Most of the applications using Docker that I have seen have a frontend that is the same on all instances of the containers and (typically) a single backend database (or cluster) that all containers access. This allows for "load balancing" across all the frontend containers. With Ignition, we have so much dependency on "state" that it would seem impossible to really use Ignition in this load-balanced environment. Is there thought being put into this application of containers in a load-balancing setup with Ignition (considering tag states, history, MQTT, alarms, etc.)?

A(kcollins): Yes.


Can we run multiple versions of Ignition on Docker?

A(kcollins): Yes, you can run separate containers, each running whatever version you'd like!

A(kgamble): Leveraging the examples during the webinar, you can even run multiple different versions in the same Docker Compose stack!
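A minimal sketch of such a mixed-version stack (tags and host ports are illustrative):

```yaml
services:
  gw-new:
    image: inductiveautomation/ignition:8.1.27
    ports:
      - "9088:8088"
  gw-old:
    image: inductiveautomation/ignition:8.1.17
    ports:
      - "9089:8088"
```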

Let us know if you've got any further questions or follow-ups! Thanks!


Great webinar! A few questions from a novice (I probably should have used Google beforehand):

  1. Some of the Docker images on Docker Hub have Docker Official Image / Verified Publisher badges. Why doesn't Ignition have one?

  2. If I'm using WSL2 with Docker Desktop, will my Docker images be located in the Linux folder structure?

  3. And if I'm using WSL2, does that mean Docker runs natively? Or is it still a Linux VM, as with Docker Desktop for Windows?

There are some extra steps here to integrate with the Verified Publisher program. We'll get there!

When using WSL2 integration with Docker Desktop (what you should be doing on Windows), the Docker images will still be stored "behind the scenes" within the Docker Engine running on that Linux VM. The filesystem integrations that are provided allow you to easily bind-mount to/from files on your host, whether using the WSL2 filesystem of a given distribution, or using the regular Windows filesystem.

It's still a Linux VM at the end of the day. I believe there is deeper integration with WSL2 as it pertains to the VM resources--I think the resource allocation selections (CPU/Memory/Disk) that you make on Docker Desktop under macOS are not there on WSL2. Also of note is that "Docker Desktop" is available on Linux, but still runs on a managed VM. You can install Docker Engine directly on Linux though--that is where you're running natively.


And that is the only way you should run Docker containers in production.


As I'm in the process of dockerizing my dev setup, that webinar was really interesting.

I've had a functional setup for a while, but I'd like to make it better, and one of the things I wanted to fix was the multiplication of exposed ports getting out of control. The reverse proxy solution seems great, but I'm struggling to get it to work. I've read Traefik's docs and tried a bunch of things, but nothing would work--I also don't have much time for this; it's really not a priority...
I'll try with the 'proxy' example you have on GitHub as soon as I can, but I might have questions about the whole thing.

Why? What would be the benefit of running gateway containers on 2 separate VMs?
And what would be the point of containerizing gateways if we're going to put them in VMs anyway?


Putting the containers on 2 separate nodes means a hardware failure is less likely to take out both gateways at the same time. As for running them within VMs versus bare metal, it isn't uncommon for container workloads to run on [Linux] virtual machines, within AWS EC2, for example.


Hi @kcollins1 and all IA team,

It's great to have this follow-up place to discuss technical webinars (or not). Keep going this way for the community!


Agreed - thanks for setting it up this way.
