Basic Segmented Network Architecture?

Hi,

I'm an automation engineer at my company and I am trying to design a new network for existing and future OT equipment. I'm familiar with some of the network engineering concepts like subnets, VLANs, routing, NAT, Purdue Model, CPwE, etc. but I haven't done anything like GNS3 labs or CCNA training, so my understanding of everything is still very rough. However, I do know that layering and segmenting a network provides benefits in both cybersecurity and broadcast traffic reduction, so I'm trying to get there.

I've been reading a lot of the Ignition system architecture articles, and I'm trying to achieve a basic, organized, segmented network where devices in a machine cell (switch, PLC, VFDs & sensors, and maybe local HMI/SQL) can communicate with each other but not to devices outside their cell. However, I do want our site's central Ignition gateway (running on a VM with virtual NICs) to communicate with all machine cells, so that Perspective sessions may be run everywhere.

As an example for the sake of discussion, I'm thinking of segregating each machine cell (physically or virtually) into its own VLAN and /27 subnet (subnet mask 255.255.255.224, with 30 usable IP addresses). I can fit eight /27 subnets into a /24 subnet (255.255.255.0), so let's just say I have 8 machines that I'd like to connect to Ignition. I know how I might create these separate /27 subnets for each machine, but I'm not sure how to allow a central Ignition gateway to talk to all 8 of them simultaneously. Here is a diagram and chart of what I am thinking (only 3 of the machine cells are shown):
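Also, to double-check the /27 math, here's a quick sketch using Python's standard ipaddress module (the 192.168.10.0/24 block is just a placeholder for this example):

    # Split one /24 into eight /27 machine-cell subnets and list what each one gets.
    # 192.168.10.0/24 is a placeholder; substitute whatever private block is chosen.
    import ipaddress

    site_block = ipaddress.ip_network("192.168.10.0/24")

    for i, cell in enumerate(site_block.subnets(new_prefix=27), start=1):
        hosts = list(cell.hosts())  # 30 usable addresses per /27
        print(f"Machine cell {i}: {cell}  "
              f"gateway {hosts[0]} (by convention)  "
              f"usable {hosts[1]}-{hosts[-1]}  "
              f"broadcast {cell.broadcast_address}")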

I know that Ignition supports multiple NICs, but will I be required to create a new NIC every time I want to talk to a new machine subnet? Planning for the future, I may have a lot more than 8 machines in total that I'd like to connect, so I was wondering if having that many NICs is a bad idea due to the computational overhead.

Is there another practical way to isolate machines from each other, but not from a central Ignition gateway? Am I missing a concept like inter-VLAN routing, VLAN tags/trunk ports, etc.? Also, have I correctly placed the DMZ, or am I missing a layer in the Purdue Model? Is a DMZ just a firewall, or how do you implement a DMZ in the real world? I'm not sure; my diagram shows Ignition, SQL, and all corporate laptops in the same layer, but maybe there needs to be another firewall between Ignition and the laptops?

Finally, I am thinking that each machine cell might get an industrial PC running both a back-end Ignition gateway (just for I/O tag provider and historian purposes) and a SQL database (for things like storing machine parameters and recipes). Is it the right idea (for cybersecurity) to spread out the eggs into different baskets and not store all machine databases in the site's central database? Apologies if I misused any terms or if I'm super far off on anything I said.

Thanks so much everyone. Any help is greatly appreciated,
-Austin

Number one, don't use publicly assigned subnets in a private network; "20.0.0.x" is not for use on a private network. Second, use your router. You are already defining a gateway address on each network; you would then define ACLs and routes on the central router that is listed as that gateway. Set the Ignition server(s) up with only one NIC and put it on a server VLAN and subnet. Each PLC would have its gateway address configured, so the router would see the request from the server to reach the PLC and route it via the gateway address to the PLC, and the PLC would respond back to the server via the gateway. Set the ACLs up so the router only accepts traffic to and from the server subnet, and you are up and running.
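To make that concrete, here's a rough sketch of the permit/deny decision those ACLs would make, modeled in Python purely for illustration (the subnets are made-up placeholders; on the real hardware this would be ACL entries on the L3 router/firewall, not code):

    # Illustration only: the decision a per-cell ACL on the central router would make.
    # Cells may talk to the Ignition server subnet and nothing else; the server may
    # talk to any cell. Traffic that stays inside one cell never reaches the router.
    import ipaddress

    SERVER_SUBNET = ipaddress.ip_network("192.168.100.0/24")         # Ignition server VLAN (placeholder)
    CELL_SUBNETS = [ipaddress.ip_network(f"192.168.10.{i * 32}/27")  # eight /27 machine cells (placeholders)
                    for i in range(8)]

    def permitted_by_router(src_ip: str, dst_ip: str) -> bool:
        src, dst = ipaddress.ip_address(src_ip), ipaddress.ip_address(dst_ip)
        server_to_cell = src in SERVER_SUBNET and any(dst in c for c in CELL_SUBNETS)
        cell_to_server = dst in SERVER_SUBNET and any(src in c for c in CELL_SUBNETS)
        return server_to_cell or cell_to_server  # everything else hits the implicit deny

    print(permitted_by_router("192.168.100.10", "192.168.10.5"))   # server -> cell 1 PLC: True
    print(permitted_by_router("192.168.10.5", "192.168.10.40"))    # cell 1 -> cell 2: False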


Thank you! This sounds way better than the multi-NIC method. I've got a few L2 and L3 switches with Cisco firmware to play with, so I think I'll be able to test out the routes and ACLs.

Edit: Thank you for letting me know about the allowed private IP address ranges. I'm noting that private networks should only use these address ranges (quick sanity check below the list).

  • 10.0.0.0 - 10.255.255.255
  • 172.16.0.0 - 172.31.255.255
  • 192.168.0.0 - 192.168.255.255
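A quick way to verify an address against these ranges is Python's ipaddress module (the addresses below are just examples):

    # Check an address against the three RFC 1918 private ranges listed above.
    import ipaddress

    RFC1918 = [ipaddress.ip_network(n)
               for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

    def is_rfc1918(addr: str) -> bool:
        ip = ipaddress.ip_address(addr)
        return any(ip in net for net in RFC1918)

    print(is_rfc1918("192.168.1.10"))  # True
    print(is_rfc1918("20.0.0.5"))      # False: publicly assigned, don't use it internally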

Pro-Tip: Don't use switches for routing. They are very slow. Use your DMZ firewall for the routing; firewalls are normally built for fully hardware-offloaded routing. Also, the firewall will give you really nice access to the routing rules and to statistics on usage, failed requests, etc.

Also:

This is normally where I would put Ignition Edge, and no database. Cache any recipes you need inside Edge if needed, and use the main gateway as the path to the SQL server.
Combined with this, you really don't ever want to run SQL and Ignition on the same server; they use resources very differently. Add in EAM for managing these Edge gateways and you have a nice, secure system that uses the Gateway Network with TLS to pipe all the traffic through the router to the main gateway. If the network or the main gateway drops for any reason, data logging and HMI functions can happily keep running off the Edge gateway for the machines.


Also, this is a go-to DMZ design for OT systems.


This is a high-level design of what we like to do. We put all the PLCs on 2 networks: one on the OT network and one on the I/O network. The I/O network is essentially air-gapped and not accessible from any other network. Our Ignition servers and local HMI clients also connect directly to this OT network. Essentially, this is done so that there's no firewall between the PLCs and the Ignition servers, and therefore no chance for a firewall issue to prevent control of the systems.

Then in the DMZ, we keep the local database/historian; an engineering server/workstation/VM (not shown) that engineers can RDP into for access to programming software, with firewall rules that allow it a direct connection through the firewall; and an RDP server for remote users to connect into if they want to run an Ignition client. You could also set this up as a front-end server, but since we use Vision and we want remote access for other users to be easy, they just need an RDP connection and can launch the client from that connection rather than having to have a Vision client installed on their computer.


Ah, my mental model of a firewall has been shaped too much by the Windows software firewall. So the DMZ firewall is a physical (Layer 3?) device that can also perform routing, and we will need 2 of them to create the DMZ at the company site, in addition to the existing IT firewall between the company and the internet. Thank you also for the clear diagram. It looks like the goal of a DMZ is to create a sort of data/firewall "airlock" between the IT systems and the OT systems.

It'll be good to look into the Gateway Network and EAM if we decide to get Edge or I/O gateways. I had a SQL DB next to each Edge gateway, since a lot of the Ignition system architecture diagrams showed it that way, but now that you said no DB, I'm thinking differently. In a hypothetical cybersecurity breach, machine recipe or tag historian data isn't particularly useful to someone looking to steal data for ransom, so I probably don't need to bury it inside the machine cells. It's more important that we can keep running the plant on the OT side alone, which is the point of the Edge gateway cache you mentioned. In that case, I'll keep all the databases on the central database server within the DMZ instead.

Thanks for providing the Vision version! If you have your Ignition gateway sitting under the DMZ in the OT network itself, RDP/VPN makes total sense for accessing Vision clients from outside the OT network.

If I wanted to place my Ignition gateway inside a DMZ instead, but still account for the DMZ firewall potentially preventing control of systems, would it be a good idea to use a redundant gateway pair, with the master gateway inside the DMZ and the backup gateway below it in the OT network? I guess this is similar to the example diagrams above, where a main Ignition gateway sits inside the DMZ while Edge gateways sit inside individual machine cells.

That's because current guidance from the United States cybersecurity agency indicates the servers needed for production, including databases, belong in the plant network, not in the DMZ. (IIUC, these agency recommendations for general industry are legal obligations for critical public utilities in the U.S.)

Companies fond of Microsoft Windows-based products want to put them into the DMZ instead, in a tacit acknowledgement that those products aren't secure enough to be placed in the plant network.


There's no reason to put the Ignition servers in the DMZ, in my opinion. The only things in the OT network should be those critical to the operation of the facility/plant. The reasoning is that, with constantly evolving threats and with IT managing the firewalls at the plants I work with, even redundant firewalls don't remove the risk of a bad rule or update bringing down the plant. The database for historical data isn't critical to the operation of the plants I'm at, but if it stores recipe data, etc. that Ignition needs to push into the PLCs to keep operations going, I would probably move that DB into the OT network as well.

The DMZ is for data that isn't critical to operations; it's a "locked down" area of the overall network where somewhat untrusted users can get the access they need. I say somewhat untrusted because we don't trust them enough to be connected to the OT network, but they need just enough access to get the data/reports/etc. that they need. Anything beyond the firewall separating the DMZ is essentially untrusted, as the IT network shouldn't be considered any more trusted/secure than the internet.


Phil, do you have a link to this guidance? I'm currently working up a recommended architecture and I'd like to reference it. Unfortunately, this customer is 100% an MSFT environment, with no chance of changing their minds, so I'd like to make sure I'm recommending as secure and usable an architecture as possible.

Perhaps I could convince my company to switch production to Linux (I'm currently using Perspective on PopOS for one HMI), but I'm not sure if my eventual, inevitable replacement will be comfortable maintaining Linux systems. If it were up to me (and maybe it is), I would love to run Ignition gateways on headless Ubuntu servers haha.

We only have one physical server. Is it possible to run some VMs on the OT network, underneath the DMZ, while other VMs run on the IT network above the DMZ? Is this another problem solved by routing? Otherwise, I might be back to placing DBs next to the Edge gateways in each machine cell. Thanks!

DHS/CISA presented a few (couple?) times at ICC and handed out hardcopies of confidential guidance. Most of that is supposedly online--I found it once but the website keeps changing. Start here:

Update:

Found it here:

TL/DR: See the diagram on page 17.


While not ideal, yes, you can run VMs on separate VLANs and even separate physical networks by connecting multiple ports on the host to the various network switches. You just have to be aware that a misconfigured VM can end up on a network you didn't intend, or can bridge across 2 networks and bypass the firewall if someone simply adds a 2nd NIC to a VM and puts it on both networks (called dual homing).


Gotcha. I'll have to look at databases on a case-by-case basis and determine where they need to be, but I'll keep production-critical ones down in the OT network. And I'll probably keep databases that facilitate "business operations" in the DMZ and not on the IT network.

For Ignition's placement, I put it in the DMZ because my company uses Ignition for both production and business operations. I am thinking of migrating production applications down to the OT network, but we're also highly integrated with a cloud ERP system (SaaS), and we need some operations that turn on motors to also send POST or GET requests to our ERP's REST API. For this reason, I was thinking placing Ignition in the DMZ was a requirement for us, but is it possible to have Ignition talk to the internet via an outbound rule or TLS or something while being on the OT network? Thanks!