Hardware recommendations to run Ignition in containers

Hello All,
I have been an Ignition developer since roughly 2017, but that experience has been almost strictly on hardware running a Windows OS. Within the past 1-2 years I have dabbled with running Ignition on Linux hardware. Most recently, I worked with my IT Infrastructure counterpart to deploy Ignition in AKS (Azure Kubernetes Service), and it has been exciting to see how well it works. I am nowhere near where I need to be to really understand containerized deployments, but I can easily follow along with the IA training to deploy Ignition via Docker Desktop through the CLI or Docker Compose. Because of this, I am struggling to choose the right piece of hardware to accomplish what I want to do. I have been trying to take some formal Docker and Kubernetes training, but I am not finding enough time to stay engaged with it. I still plan to dive into it, but I figured it is probably best to reach out to this community for advice/help. I have also sent a line out to some of my trusted integrators.
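
For context, the Compose files I have been working from look roughly like the sketch below. The image tag, port mapping, and password are placeholders, not a recommendation:

```yaml
# Minimal sketch of a single Ignition gateway in Docker Compose.
# Image tag, host port, and password are placeholders; adjust to your setup.
services:
  gateway:
    image: inductiveautomation/ignition:8.1.33
    ports:
      - "9088:8088"                      # host:container for the gateway web UI
    environment:
      ACCEPT_IGNITION_EULA: "Y"
      GATEWAY_ADMIN_PASSWORD: changeme   # placeholder; use a secret in practice
      IGNITION_EDITION: standard
    volumes:
      - gateway-data:/usr/local/bin/ignition/data   # persist gateway state
volumes:
  gateway-data:
```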

Essentially, I would like to be able to run something similar to AKS, but on physical devices, and centrally manage it all at scale. I also want minimal touch required for onboarding/loading Ignition, so our maintenance and controls teams can support it and it is not just an "Engineering" support item. I have been told I can just run Kubernetes on bare metal, but I have never understood how to manage all the device OSes and container deployments at scale. We are currently testing a blackbox device that is managed by the vendor's software for onboarding/firmware updates/connection to Azure (it can also connect to AWS). I am able to deploy Ignition to this device through Azure IoT Hub; the only time the process gets complicated is with 3rd party modules. Kevin Collins offered great suggestions on how to get around some of the issues, but I don't even know where to begin with that fix, unfortunately.
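
For anyone who hasn't hit the 3rd party module issue, the basic idea I tried is sketched below (the module filename and host path are made up). The bind-mount part is easy; the part Kevin's suggestions address, which I haven't figured out yet, is handling the module's certificate/license acceptance non-interactively:

```yaml
# Sketch only: bind-mounting a 3rd party module into the official image.
# "ThirdParty.modl" is a made-up filename. A 3rd party module typically
# still needs its certificate/EULA accepted on the gateway, which this
# mount alone does not solve.
services:
  gateway:
    image: inductiveautomation/ignition:8.1.33
    environment:
      ACCEPT_IGNITION_EULA: "Y"
    volumes:
      - ./modules/ThirdParty.modl:/usr/local/bin/ignition/user-lib/modules/ThirdParty.modl
      - gateway-data:/usr/local/bin/ignition/data
volumes:
  gateway-data:
```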

Below are some of the requirements/desires for the hardware/software:
-Underlying device OS is Linux based (seems the best for container deployments)
-Device OS is managed by an OEM and they provide firmware updates when needed (somewhat nice to have)
-Device to be managed by a Management System for firmware/os patching/onboarding/etc
-Container deployments on each device can be managed by a central system either from the hardware vendor or another software vendor
-Device is industrial rated because it will be in a panel out on the refrigerated plant floor
-Device must have a minimum of 4GB of RAM (8GB preferred for buffer and growth)
-Device must have a minimum of 60GB onboard storage (could probably be less because it is only for short buffering of MQTT data when we lose connection to our central cloud gateway)

Thanks in advance to anyone taking the time to read this post or offer suggestions. Please feel free to ask any questions; I don't believe I am doing anything proprietary with this solution. If anyone has thoughts but is not comfortable sharing them in the forum, please feel free to PM me separately.

I was looking into these to run Ignition before IT insisted on Windows.
TS-h886 | Intel® Xeon® D desktop QuTS hero NAS with four 2.5GbE ports, designed for real-time SnapSync data backup and virtual machine applications | QNAP (US)
It seems like their OS meets your requirements and is set up for Docker, but it isn't industrial-rated hardware.

Hey Andrew,
Thanks for sharing that, they look pretty impressive. I should have been more precise in my original post: the devices will be somewhat standalone. Essentially, I have at least one Ignition gateway running on the local subnet for every production line, rather than one host supporting many VMs from a single place. We ditched the "data center" concept for Ignition gateways and deployed them at the source, but now we want to monitor and manage them at scale as if they are part of the same ecosystem. Running the gateways on the source subnet lets us do distributed machine control much better.
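
To make the "Kubernetes on bare metal" idea a little more concrete, I picture each per-line gateway ending up as something like the Deployment below, assuming a lightweight distribution such as k3s on the device (names and tag are made up). What I still can't picture is the layer above this that manages the device OS and the clusters across dozens of boxes:

```yaml
# Illustrative sketch of one per-line gateway as a Kubernetes Deployment.
# Assumes a small edge cluster (e.g. k3s) on the device; names are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ignition-line1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ignition-line1
  template:
    metadata:
      labels:
        app: ignition-line1
    spec:
      containers:
        - name: gateway
          image: inductiveautomation/ignition:8.1.33
          ports:
            - containerPort: 8088   # gateway web UI
          env:
            - name: ACCEPT_IGNITION_EULA
              value: "Y"
```

Persistence (a PersistentVolumeClaim for the gateway's data directory) is omitted here for brevity but would be needed in practice.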

I didn't want to muddy the water too much with what I have already tested, because that can sometimes steer the conversation toward making that solution work rather than listening to what others have done. But for the sake of showing this a little better visually, I have included some examples of what I tested below.

OnLogic
For anyone that knows OnLogic, they have a fabulous array of PCs for just about any use case. Their pricing is pretty decent and their devices just run well (in my experience). We had been using an OnLogic model running Windows for years, so we tested basic container deployments on these, and it works. However, when I spoke to OnLogic support about finding a management tool for the Linux OS and container deployments, they were not able to provide specific examples of how to get there. I don't fault them for it; they are a hardware supplier, and they do that well. To be honest, if we can make OnLogic work as the hardware and come up with a management system for the OS and containers, that would be the route I go instead. I am just clueless when it comes to that side of things.

Cisco IC-3000
This device is managed by Cisco FND (Field Network Director), but it has been a little too clumsy and the hardware has some very high lead times. The solution works, though, if someone wants to chat with me offline about it.

CloudRail.Box.Max
This device is based on the Welotec EG500 but is a locked-down version for better security and specific applications. Their device management software works pretty slick, and I can integrate back to Azure or AWS to push container deployments. They also provide a way to hook up IO-Link masters and then transmit that data to the cloud. They have taken all of the manual work and built it right into the tool.

Welotec EG500
This device is a much more open platform than the CloudRail.Box.Max, and Welotec provides management software for it as well. I have not been able to get one in for testing just yet but will hopefully test soon.
