Hello,
We have 'Controls Networking Standards' we've used to segment traffic among the multiple pieces of equipment in a system. Typically, we've had the PLC, HMI, drives, and I/O of each piece on its own class B segment, 172.25.X.X. For example, 172.25.110.x, 172.25.120.x, 172.25.130.x, etc. (255.255.0.0). I'm not a strong networking expert, so I can provide more detail, as well as our standards document, on request.
Now we're starting to use Ignition HMI clients connected to a Gateway, which communicates with our PLCs: 4 clients (5, counting one client on the gateway computer), talking to multiple PLCs, typically 3, with hundreds of tags across all of them.
Is there further information, best practices, etc. for optimizing network traffic on different segments with this setup? I've dug around a little here and elsewhere without stumbling on it yet. We're looking to revise our standards for this new setup. Thanks much.
Ignition clients do not talk directly to your PLCs, and do not have to be in a subnet that can reach the PLCs. All such traffic routes through OPC tags in your Ignition gateway. Multiple clients looking at the same OPC tags do not generate any more PLC traffic.
You may have nothing to do, besides making sure the Ignition gateway itself can reach the PLCs. It is not unusual for the Ignition server to have multiple NICs for different subnets.
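For illustration, here's a minimal sketch of what a dual-homed gateway server could look like, using netplan; the interface names and addresses are hypothetical placeholders, not anyone's actual standard:

```
# /etc/netplan/01-ignition.yaml (sketch; interfaces/addresses are hypothetical)
network:
  version: 2
  ethernets:
    eno1:                          # business LAN, where the HMI clients connect
      addresses: [10.0.5.20/24]
      routes:
        - to: default
          via: 10.0.5.1
    eno2:                          # controls network, where the PLCs live
      addresses: [172.25.110.5/16]
      # no default route here; the gateway reaches the PLCs directly
```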
OK, thanks @pturmel . I knew the clients didn't communicate directly with the PLCs; the concern was primarily with the gateway's traffic to the multiple different PLCs. The gateway will reach the PLCs, but what's best traffic-wise? Now, multiple NICs for the multiple subnets: I like that, that's a good one.
Meh. Standardizing all of the address ranges seems like a pointless exercise. Consider, instead, standardizing a hierarchy of fully-qualified host names. Let a combo DNS+DHCP server hand out fixed addresses for the names it knows, and a range of dynamic addresses for "uncommissioned" devices. Then, later, when a device needs to be replaced, just give the replacement the same name and it will magically assume the IP address.
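As a minimal sketch of that with dnsmasq (the hostnames and addresses here are invented for illustration):

```
# dnsmasq.conf fragment (sketch; all names/addresses are hypothetical)
domain=plant.example

# Fixed leases keyed on the name the device presents: a replacement
# device configured with the same hostname gets the same IP address.
dhcp-host=plc1,172.25.110.10
dhcp-host=hmi1,172.25.110.20

# Dynamic pool for uncommissioned devices.
dhcp-range=172.25.0.200,172.25.0.250,12h
```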
For devices that are not DHCP-capable and therefore need a fixed address, still give them a name in the DNS, so they'll play nice with the rest.
Also consider not configuring an IP Gateway address on devices in your control network, other than carefully managed devices that handle software update tasks. Make all external access into your control network originate in your DMZ, via a NAT firewall. Control devices can respond to the NAT firewall without using a configured gateway. Have that NAT firewall not use a common router address and not respond to pings or routing packets.
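To make the reply-without-a-gateway behavior concrete: when the firewall source-NATs connections entering the control network to its own control-side address, a control device sees a peer on its own subnet and answers via ARP, so it never consults a routing table. A minimal iptables sketch, assuming (hypothetically) eth0 faces the DMZ and eth1 faces the control network:

```
# Source-NAT everything entering the control network so it appears to
# come from the firewall's own control-side address.
iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE

# Only connections originating on the DMZ side may cross.
iptables -P FORWARD DROP
iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
iptables -A FORWARD -i eth1 -o eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Stay quiet on the control side: drop ping requests.
iptables -A INPUT -i eth1 -p icmp --icmp-type echo-request -j DROP
```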
A key advantage of this approach, when pushed down into your equipment vendors, is that the actual IP addresses become easily changeable. This means that your vendors can configure their DNS+DHCP kit to be compatible with their infrastructure for testing and acceptance, and then you just need to fix that one configuration file when you bring the equipment into your plant.
{ I recommend the dnsmasq tool in Linux, as it accepts a folder of text config files in git-friendly "hosts"-like format, and picks up runtime changes. You can put each production line's devices in its own config file in that kit. }
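For instance, assuming a reasonably recent dnsmasq (for the hostsdir option) and invented names, the per-line files might look like:

```
# dnsmasq.conf fragment
hostsdir=/etc/dnsmasq.hosts.d    # watches this folder; picks up edits at runtime

# /etc/dnsmasq.hosts.d/line110 -- one hosts-format file per production line
172.25.110.10   plc1.line110.plant.example
172.25.110.20   hmi1.line110.plant.example
172.25.110.30   drive1.line110.plant.example   # fixed-IP device, DNS entry only
```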
Thanks much again @pturmel . There is a lot of good stuff here, and a bit to digest : ) . We will discuss it internally, and I may be back with some follow-up questions.