Basic Segmented Network Architecture?

Hi,

I'm an automation engineer at my company, and I'm trying to design a new network for existing and future OT equipment. I'm familiar with some network engineering concepts (subnets, VLANs, routing, NAT, the Purdue Model, CPwE, etc.), but I haven't done anything like GNS3 labs or CCNA training, so my understanding of everything is still very rough. However, I do know that layering and segmenting a network provides benefits in both cybersecurity and broadcast traffic reduction, so I'm trying to get there.

I've been reading a lot of the Ignition system architecture articles, and I'm trying to achieve a basic, organized, segmented network where devices in a machine cell (switch, PLC, VFDs & sensors, and maybe local HMI/SQL) can communicate with each other but not to devices outside their cell. However, I do want our site's central Ignition gateway (running on a VM with virtual NICs) to communicate with all machine cells, so that Perspective sessions may be run everywhere.

As an example for the sake of discussion, I'm thinking of segregating each machine cell (physically or virtually) into separate VLANs and /27 subnets (subnet mask 255.255.255.224, with 30 usable IP addresses). Eight /27 subnets fit into a /24 (255.255.255.0), so let's say I have 8 machines that I'd like to connect to Ignition. I know how I might create these separate /27 subnets for each machine, but I'm not sure how to allow a central Ignition gateway to talk to all 8 of them simultaneously. Here is a diagram and chart of what I am thinking, though (only 3 machine cells are shown):
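
That /27 arithmetic can be sanity-checked with Python's standard `ipaddress` module (the 192.168.10.0/24 block below is just a placeholder, not a recommendation):

```python
import ipaddress

# Split a private /24 into eight /27 machine-cell subnets.
site = ipaddress.ip_network("192.168.10.0/24")
cells = list(site.subnets(new_prefix=27))

print(len(cells))                # 8 subnets fit in the /24
cell_1 = cells[0]
print(cell_1.netmask)            # 255.255.255.224
print(cell_1.num_addresses - 2)  # 30 usable host addresses (minus network/broadcast)
```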

I know that Ignition supports multiple NICs, but will I be required to create a new NIC every time I want to talk to a new machine subnet? Planning for the future, I may have a lot more than 8 machines in total that I'd like to connect, so I was wondering if too many NICs is a bad idea due to computational expenses.

Is there another practical way to isolate machines from each other, but not from a central Ignition gateway? Am I missing a concept like inter-VLAN routing, VLAN tags/trunk ports, etc.? Also, have I correctly placed the DMZ, or am I missing a layer in the Purdue Model? Is a DMZ just a firewall, or how is one implemented in the real world? I'm not sure; my diagram shows Ignition, SQL, and all corporate laptops in the same layer, but maybe there needs to be another firewall between Ignition and the laptops?

Finally, I am thinking that each machine cell might get an industrial PC running both a back-end Ignition gateway (just for I/O tag provider and historian purposes) and a SQL database (for things like storing machine parameters and recipes). Is it the right idea (for cybersecurity) to spread out the eggs into different baskets and not store all machine databases in the site's central database? Apologies if I misused any terms or if I'm super far off on anything I said.

Thanks so much everyone. Any help is greatly appreciated,
-Austin

Number one, don't use publicly assigned subnets on a private network; "20.0.0.x" is not for use on a private network. Second, use your router. You are defining a gateway address on each network, so you would then define ACLs and routes on the central router that is listed as the gateway. Set the Ignition server(s) up with only one NIC, and put it on a server VLAN and subnet. Each PLC would have the gateway address configured in it; the router would see the request from the server to the PLC and route it via the gateway address, and the PLC would respond to the server the same way. Set the ACLs up so the router only accepts traffic to and from the server, and you are running.
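
As a hedged sketch (not verified configuration), the per-cell ACL approach above might look something like this in Cisco IOS-style syntax; all VLAN numbers and addresses are hypothetical, and exact commands vary by platform:

```
! SVI per machine cell; the L3 router/switch is each cell's default gateway
interface Vlan110
 ip address 192.168.10.33 255.255.255.224   ! cell 1, a /27
 ip access-group CELL1-IN in
!
interface Vlan100
 ip address 192.168.10.1 255.255.255.224    ! server VLAN (Ignition at .10)
!
! Cell 1 may talk only to the Ignition server; everything else is dropped
ip access-list extended CELL1-IN
 permit ip 192.168.10.32 0.0.0.31 host 192.168.10.10
 deny   ip any any
```

Repeating that pattern per cell VLAN isolates cells from each other while leaving them all reachable from the single server subnet.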

3 Likes

Thank you! This sounds way better than the multi-NIC method. I've got a few L2 and L3 switches with Cisco firmware to play with, so I think I'll be able to test out the routes and ACLs.

Edit: Thank you for letting me know about the allowed private IP address ranges. I'm noting that private networks may only use these addresses (the RFC 1918 ranges):

  • 10.0.0.0 - 10.255.255.255
  • 172.16.0.0 - 172.31.255.255
  • 192.168.0.0 - 192.168.255.255
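
Those ranges come from RFC 1918, and Python's standard `ipaddress` module can check membership, which is a handy sanity check when picking addresses (the sample addresses are arbitrary):

```python
import ipaddress

# The first three fall inside the RFC 1918 private ranges;
# 20.0.0.5 is publicly routable address space.
for addr in ["10.0.5.7", "172.20.1.1", "192.168.10.33", "20.0.0.5"]:
    print(addr, ipaddress.ip_address(addr).is_private)
```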

Pro-Tip: Don't use switches for routing. They are very slow. Use your DMZ firewall for the routing, they are normally built for full hardware offloaded routing. Also, the firewall will give you really nice access to the rules for routing and the statistics of usage and failed requests etc.

Also:

This is normally where I would put Ignition Edge, and no database. Cache any recipes you need inside Edge, and use the main gateway as the path to the SQL server.
Combined with this: you really don't ever want to run SQL and Ignition on the same server; they use resources very differently. Add in EAM for managing these Edge gateways and you have a nice, secure system, using the Gateway Network with TLS to pipe all the traffic through the router to the main gateway. If the network or the main gateway drops for any reason, data logging and HMI functions can happily keep running off the Edge gateway for the machines.

1 Like

Also, this is a go-to reference for DMZ design for OT systems.

2 Likes

This is a high-level design of what we like to do. We put all the PLCs on 2 networks: one for the OT network and one for the I/O network. The I/O network is essentially air-gapped and not accessible from any other network. Our Ignition servers and local HMI clients also connect directly to the OT network. Essentially, this is done so that there's no firewall between the PLCs and the Ignition servers, and therefore no chance for a firewall issue to prevent control of the systems.

In the DMZ, we keep the local database/historian; an engineering server/workstation/VM (not shown) that engineers RDP into for access to programming software, with firewall rules allowing it a direct connection through the firewall; and an RDP server for remote users to connect to if they want to run an Ignition client. You could also set this up as a front-end server, but since we use Vision and we want remote access to be easy for other users, they just need an RDP connection and can launch the client from there rather than having a Vision client installed on their computer.

3 Likes

Ah, my understanding of what a firewall is was too anchored to the Windows software firewall. So the DMZ firewall is a physical (Layer 3?) device that can also perform routing, and we will need 2 of them to create the DMZ zone at the company site, in addition to the existing IT firewall between the company and the internet. Thank you also for the clear diagram. It looks like the goal of a DMZ is to create a sort of data/firewall "airlock" between IT systems and OT systems.

It'll be good to look into Gateway Networks and EAM if we decide to get Edge or I/O gateways. I had a SQL DB next to each Edge gateway, since a lot of the Ignition system architecture diagrams showed it that way, but now that you said no DB, I'm thinking differently. In a hypothetical cybersecurity breach, machine recipe or tag historian data isn't particularly useful to someone looking to steal data for ransom, so I probably don't need to bury it inside the machine cells. It's more important that we can keep running the plant on the OT side alone, which is the point of the Edge gateway cache you mentioned. In that case I'll keep all the databases on the central database within the DMZ instead.

Thanks for providing the Vision version! If you have your Ignition gateway sitting under the DMZ in the OT network itself, RDP/VPN makes total sense for accessing Vision clients from outside the OT network.

If I wanted to place my Ignition gateway inside a DMZ instead, but still account for the DMZ firewall potentially preventing control of systems, would it be a good idea to use a redundant gateway pair, where I place the master gateway inside the DMZ, and the backup gateway underneath inside the OT network? I guess this is similar to the example diagrams above, where a main Ignition gateway sits inside the DMZ, while Edge gateways sit inside individual machine cells.

That's because current guidance from the United States cybersecurity agency indicates the servers needed for production, including databases, belong in the plant network, not in the DMZ. (IIUC, these agency recommendations for general industry are legal obligations for critical public utilities in the U.S.)

Companies fond of Microsoft Windows-based products want to put them into the DMZ instead, in a tacit acknowledgement that those products aren't secure enough to be placed in the plant network.

1 Like

There's no reason to put the Ignition servers in the DMZ, in my opinion. The only things in the OT network should be those critical to the operation of the facility/plant. The reason for this is that, with constantly evolving threats and with IT managing the firewalls at the plants I work with, even with redundant firewalls nobody wants a bad rule or update to bring down the plant. The database for historical data isn't critical to the operation of the plants I'm at, but if it stores recipe data, etc., that Ignition needs to push into the PLCs to keep operations going, I would probably move that DB into the OT network as well.

The DMZ is for data that isn't critical to operations, and a "locked down" area of the overall network where somewhat untrusted users can get the access they need. I say somewhat untrusted because we don't trust them enough to be connected to the OT network, but they need just enough access to get the data/reports/etc. that they need. Anything beyond the firewall separating the DMZ is essentially untrusted, as the IT network shouldn't be considered any more trusted/secure than the internet.

3 Likes

Phil, do you have a link to this guidance? I'm currently working up a recommended architecture and I'd like to reference it. Unfortunately, this customer is a 100% Microsoft environment. There's no chance of changing their minds, so I'd like to make sure I'm recommending an architecture that is as secure and usable as possible.

Perhaps I could convince my company to switch production to Linux (I'm running Perspective on Pop!_OS for one HMI currently), but I'm not sure my future, inevitable replacement will be comfortable maintaining Linux systems. If it were up to me (and maybe it is), I would love to run Ignition gateways on headless Ubuntu servers, haha.

We only have one physical server. Is it possible to run some VMs on the OT network, underneath the DMZ, while other VMs run on the IT network above the DMZ? Is this another problem solved by routing? Otherwise, I might be back to placing DBs next to the Edge gateways in each machine cell. Thanks!

DHS/CISA presented a few (couple?) times at ICC and handed out hardcopies of confidential guidance. Most of that is supposedly online; I found it once, but the website keeps changing. Start here:

Update:

Found it here:

TL/DR: See the diagram on page 17.

4 Likes

While not ideal, yes, you can run VMs on separate VLANs and even separate physical networks by connecting multiple ports on the host to the various network switches. Just be aware that a misconfiguration can put a VM on a network you may not want it on, or can bridge across 2 networks and bypass the firewall if someone simply adds a 2nd NIC to a VM and puts it on both networks (called dual-homing).

1 Like

Gotcha. I'll have to look at databases on a case-by-case basis and determine where they need to be, but I'll keep production-critical ones down in the OT network. And I'll probably keep databases that facilitate "business operations" in the DMZ and not on the IT network.

For Ignition's placement, I put it in the DMZ because my company uses Ignition for both production and business operations. I am thinking of migrating production applications down to the OT network, but we're also highly integrated with a cloud ERP system (SaaS), and we need some operations that turn on motors to also send POST or GET requests to our ERP's REST API. For this reason, I was thinking that placing Ignition in the DMZ was a requirement for us, but is it possible to have Ignition talk to the internet via an outbound rule, TLS tunnel, or something similar while sitting on the OT network? Thanks!

This is my humble opinion.

  • The iDMZ should be the place for communication between IT and OT hosts, but via intermediate hosts. Those intermediate hosts should exist for each of the services you want to interface (historian data, human interactive remote access, ERP integration, real-time data collection, etc.).
  • Don't expose PLCs/Level 1 to any iDMZ host. No host in the iDMZ should be able to communicate directly with the PLCs.
  • If possible, always use encrypted protocols when connecting to iDMZ hosts (outbound and inbound connections).
  • The iDMZ is not part of your core elements to protect; the iDMZ hosts are beyond the perimeter you protect. You should design your DMZ so that you can run your plant without the iDMZ hosts (accepting some kind of degraded mode). Think of the island-mode concept: if there is an attack on the IT corporate network, you should be able to easily and temporarily break the IT/OT connection by switching off all the interfaces in the iDMZ, isolating your OT assets.
  • I don't see the rationale for putting the Ignition Gateway in the DMZ, nor the historian, especially if your Gateway is your SCADA/Level 2 system.
  • The same goes for engineering workstations: they are part of the crown jewels for attackers, so they should be inside the OT perimeter you protect.
  • In my opinion, the fact that the Gateway exposes the same port (http/https) to the engineering workstation as it does to the SCADA clients is a challenging vulnerability that Inductive Automation may improve in the future. The Security Zones feature is good, but maybe not enough.
  • How do you expose your Ignition data to the IT corporate network? My suggestion would be to put an NGINX reverse proxy in the iDMZ, so your IT web clients interact with the proxy rather than with the Gateway. For historian data you can put a mirror historian in the iDMZ, and likewise for other services (ERP integration, MQTT data to the cloud, ...).
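
To make that last bullet concrete, here is a minimal sketch of such an iDMZ reverse proxy as an NGINX server block, assuming a hypothetical Gateway at 192.168.10.10:8043 on the OT side; the hostname and certificate paths are placeholders, and the exact paths to expose should be verified against your Ignition version:

```
server {
    listen 443 ssl;
    server_name scada-proxy.example.com;

    ssl_certificate     /etc/nginx/certs/proxy.crt;
    ssl_certificate_key /etc/nginx/certs/proxy.key;

    # Relay only the Perspective client paths; gateway config pages and
    # the Designer stay unreachable from the IT side.
    location /data/perspective/ {
        proxy_pass https://192.168.10.10:8043;
        proxy_set_header Host $host;
        # Perspective sessions use websockets for live updates
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location / {
        return 403;
    }
}
```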

I'm actively doing this design without much experience with Ignition. The above is my current draft, and I'm very willing to receive feedback. I'm learning from all of you in this thread. Thanks

1 Like

Your thoughts track pretty well with the ICS-CERT document I posted.

1 Like

Thanks for sharing, Sergio! So it looks like the consensus here is that Ignition should not live inside the DMZ, but underneath it on the OT network as best practice. I took a look at the CISA document Phil posted, and it has a diagram of the Purdue Model. I'm guessing that a site-central Ignition gateway should be placed on Level 3, while Ignition Edge or local HMI & I/O gateways could sit on Level 2 in each cell.

I was under the impression that the DMZ was a zone with 2 barriers between IT and OT, with certain holes to allow IT and OT to communicate, but it looks like those holes can be made smaller if Ignition is on the OT network. And it makes sense to use the DMZ to cut off all connection between IT and OT and still be able to run. I had just thought we would seal off the IT-DMZ barrier, instead of the OT-DMZ barrier (or both).

I am still concerned about moving Ignition into OT, as it would isolate Ignition from the internet, which we need for integration with our cloud ERP (the code was developed when we assumed a basic flat Ignition architecture, rather than a multi-level network). I know you mentioned putting the NGINX reverse proxy in the iDMZ instead of Ignition, but does this allow Ignition on OT to send something like an API POST request directly to the cloud?

I do a lot of programming for our HMIs that involves moving data around (to different databases or to a cloud database). Let's say I have a packaging machine whose PLC has a tag that counts each time a case is produced. I then program a tag change script on that tag (or a manual button script if there is no counter) using Ignition's system.net.httpClient to send the data to the cloud, and 1 case is added to ERP inventory. At the same time, I may also write a similar version of the data to a local database (which is good for tracking any failed POST transactions). Doing it this way lets us store everything in the cloud as a single source of truth, and it simplifies the code for readability and troubleshooting.
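
For reference, a hedged sketch of that pattern: the payload builder below is plain Python (testable anywhere), while the commented lines show how it might be wired into an Ignition 8.x tag change script with `system.net.httpClient`. The ERP URL, database name, table, and machine ID are all hypothetical:

```python
import json

def build_case_event(machine_id, case_count):
    """Build the JSON body for the (hypothetical) ERP inventory endpoint."""
    return {"machine": machine_id, "cases": case_count}

payload = build_case_event("PKG-01", 1)
print(json.dumps(payload))

# Inside an Ignition tag change script it might look like:
#
# client = system.net.httpClient()
# response = client.post(
#     "https://erp.example.com/api/inventory",      # hypothetical endpoint
#     data=build_case_event("PKG-01", newValue.value),
# )
# # Mirror the event locally so failed posts can be spotted and replayed:
# system.db.runPrepUpdate(
#     "INSERT INTO case_events (machine, cases, posted) VALUES (?, ?, ?)",
#     ["PKG-01", newValue.value, 1 if response.good else 0],
#     database="localdb",                            # hypothetical DB connection
# )
```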

Would this code still work if I placed Ignition into the no-internet OT network and connected Ignition to IT via a reverse proxy? Or does this change require me to store everything in a local database on OT, mirror it to a database in the DMZ, and then write a "sync program" in the DMZ (maybe another Ignition gateway) that periodically checks for and sends unsent data from the DMZ database to the cloud? Also, if I rely on getting data from the cloud (like production schedules or part SKUs), do I also need to periodically sync in the other direction, from cloud > DMZ db mirror > OT db?
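
That store-and-forward idea can be sketched with a "posted" flag on an outbox table. This uses SQLite only so the sketch is self-contained; `post_to_cloud` stands in for whatever actually delivers the data, and all names are hypothetical:

```python
import sqlite3

def make_queue(conn):
    # Outbox table: rows start unposted and are flagged after delivery.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS outbox ("
        " id INTEGER PRIMARY KEY, payload TEXT, posted INTEGER DEFAULT 0)"
    )

def enqueue(conn, payload):
    conn.execute("INSERT INTO outbox (payload) VALUES (?)", (payload,))

def sync_unsent(conn, post_to_cloud):
    """Run periodically: push unsent rows, marking each only on success."""
    rows = conn.execute(
        "SELECT id, payload FROM outbox WHERE posted = 0"
    ).fetchall()
    for row_id, payload in rows:
        if post_to_cloud(payload):  # returns True on a successful delivery
            conn.execute("UPDATE outbox SET posted = 1 WHERE id = ?", (row_id,))
    conn.commit()

conn = sqlite3.connect(":memory:")
make_queue(conn)
enqueue(conn, '{"machine": "PKG-01", "cases": 1}')
enqueue(conn, '{"machine": "PKG-01", "cases": 1}')
sync_unsent(conn, post_to_cloud=lambda p: True)
remaining = conn.execute(
    "SELECT COUNT(*) FROM outbox WHERE posted = 0"
).fetchone()[0]
print(remaining)  # 0 once everything has been delivered
```

Rows that fail to post simply stay at `posted = 0` and are retried on the next cycle, which is the behavior a DMZ-side sync gateway would need.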

I've written a few sync scripts before and sometimes they are not pretty. I think this is the hardest problem to figure out for us. Thanks so much.

That might be too strong. There are many Microsoft fans in the control systems world. See again my comment #9.

Consider putting a tight proxy in the DMZ, with strict rules in each direction. Or, perhaps, put another Ignition gateway in the DMZ to be a rendezvous point.

You should do this anyways, to handle WAN breakage, whether failure-induced, or deliberate isolation in a crisis.

Your OT gateway should accumulate data in a local database, which then streams through the DMZ when the WAN is up. (I prefer actual DB streaming replication over scripts, but requirements vary.)

1 Like

Hi Austin, I'm not able to help you with your questions. Just let me add some more generic considerations.

  • Those iDMZ + Purdue Model architectures (like the one Phil shared, or a similar one in ISA-62443-2-1 (99.02.01)-2009, Figure A.8) are generic designs, meant to also apply to critical infrastructures such as power plants, water utilities, etc. Those domains do not have the inbound IT interface from the ERP that is fundamental and key in some manufacturing spaces. So this is clearly our main challenge, and I'm not sure there is consensus in the OT security community about how to implement this interface.
  • My proposal for the NGINX reverse proxy was just to relay/broker the web service, i.e. for IT corporate Perspective client connections. We should implement in the iDMZ as many brokers as there are services we need to expose (e.g. RDP interactive remote access, SQL database data, OPC UA, DNP3, malware protection updates, file transfer, etc.). For the ERP interface, if you can host your code on an intermediate host in the iDMZ, and that host communicates with both Ignition and the ERP cloud server, you will be compliant with the iDMZ concept, won't you?

Not sure if this is helpful.

1 Like