Network Setup Best Practices

Does anyone have any information/guides on best practices for setting up the network that Ignition and all other industrial Ethernet equipment will be connected to? Basically, I’m looking for a guide on how to set up the servers, switches, Ethernet cabling, UPSs, etc., in such a way as to maintain the highest possible uptime and reliability.

One concern my company has about moving away from an HMI such as FactoryTalk ME, which isn’t reliant on a central server since it runs in its own runtime, is that a single point of failure (such as a switch or cable between the PLC and the Ignition server) could cause us to lose all controls related to that PLC, forcing an emergency shutdown with a hard-wired switch.

Thanks!

I wouldn’t use it as an HMI from the main server. Consider using Ignition Edge Panel for that.

There are plenty of resources on the Web for network topologies, if one wants to use their Google-fu. That said, take a look at the architecture examples on the main site:

https://inductiveautomation.com/ignition/architectures

Cisco has a few architectures they recommend. But as @JordanCClark mentioned, there are several places where you can find info on industrial networks.

https://literature.rockwellautomation.com/idc/groups/literature/documents/td/enet-td001_-en-p.pdf

Yes, there are lots of resources online. Another consideration: FactoryTalk ME hardware can fail too. Failures should be infrequent, but the same applies to quality hardware used with Ignition.

In my experience, a central HMI server in a plant serving up HMIs to multiple machines can work very well and is much more maintainable than the hodgepodge of proprietary HMI hardware it replaces. With shielded, armoured cables installed in appropriate locations (where they won’t get hit by machinery), quality tested terminations, and quality switches, network failures ought to be rare, probably more rare than other hardware failures. I would not recommend relying on internet connectivity for control purposes (serving up the HMI from a server at another location).

Consider static IP addresses for control devices if there’s a chance the DHCP server they would otherwise rely on might be down when the plant should be up (for example, a DHCP server managed by IT in an office that opens at 09:00, while production restarts at 05:00 after a power failure during the night).
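
For example, on a Linux-based Ignition server you could pin the address with netplan. This is just a sketch; the interface name, addresses, and file path below are placeholders, and older netplan versions use `gateway4` instead of a default route entry:

```yaml
# /etc/netplan/01-plant.yaml (hypothetical) -- static IP for an Ignition server
network:
  version: 2
  ethernets:
    eno1:                       # placeholder interface name; check with `ip link`
      dhcp4: false              # never wait on a DHCP server that may be down
      addresses: [10.10.20.5/24]
      routes:
        - to: default
          via: 10.10.20.1       # plant LAN gateway, if any
      nameservers:
        addresses: [10.10.20.1]
```

Apply with `sudo netplan apply`. The same idea applies to the PLCs and client PCs: anything needed for control at power-up should not depend on DHCP.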

I’ll check out some of the information already published on how to maintain network uptime. I know there’s lots of information out there on network topologies, but I suppose what I was more curious about is whether anything more comes into play in an industrial environment versus an office building.

Our current setup is a single Ignition server (we’ll be adding a redundant server in the near future) connected to ~4 industrial PCs running Ignition clients. Our plant network is an isolated LAN, all hard-wired, in which all connected components have static IPs. We also have UPSs for the server and Ethernet switches.

witman, you make a good point that all hardware could potentially fail.

My main goal right now is to gather information on plant network reliability so I can assure our older techs that moving to a server-based system, if properly set up, does not introduce any more risk of failure than a client runtime on the HMI itself, like FactoryTalk ME. That sounds like it’s true, assuming we use quality equipment such as shielded/armored cables, switches, and servers.

Thanks for the input.

Make sure all of your switches support Rapid Spanning Tree Protocol (RSTP) and have it turned on. With RSTP on, you can wire multiple paths from machine switches to infrastructure switches without the broadcast storms that unmanaged loops cause.
In a Rockwell environment, also make sure the switches support Internet Group Management Protocol (IGMP) snooping, and have it turned on.
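
On a managed switch this is usually just a few lines. A hypothetical Cisco IOS-style sketch (command names vary by vendor and model, so treat this as an illustration, not a recipe):

```
! Hypothetical Cisco IOS-style config; verify commands for your switch model.
spanning-tree mode rapid-pvst    ! run RSTP (Rapid PVST+ on Cisco)
ip igmp snooping                 ! constrain EtherNet/IP multicast to subscribed ports
ip igmp snooping querier         ! needed on an isolated LAN with no multicast router
```

On an isolated plant LAN, don’t forget the querier: without an IGMP querier somewhere, snooping tables age out and multicast can flood or stop entirely.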

I always try to make sure the IT department divides the network into smaller subnets. This way I can keep all of the industrial equipment away from the office network. This has worked well for me.

We are in the process of doing that now.

I run my local clients just fine from the main gateway. Then, for a virtually insignificant price, I have the “Local Client Fallback Gateway” on the local machine. Just copy the project and tags over from the big server to the local client, configure it, and you have immediate fallback to the local client if someone snips the cord.

I hope this info is helpful for you.
Have you looked at building a redundant network (to maintain the highest possible uptime and reliability)?
Here is some reference material on connecting all of the network devices in a ring via ERPS (Ethernet Ring Protection Switching) with industrial Ethernet switches.

I’ll keep an eye on this subject. If you have further questions, feel free to let me know.

I would also like to learn what network architectures/technologies are available for redundant networks that could eliminate single points of failure in the network connection.

If you have any documents or websites about this, please share them.

A couple of primary searches for you:

Popular on manufacturing networks: Device Level Ring (DLR)

Popular on wider networks: Rapid Spanning Tree Protocol (RSTP)

Thank you for the fast response.
Googling “STP” and “single point of failure” worked for me.

Do keep in mind that Rapid Spanning Tree Protocol exists because plain Spanning Tree Protocol is unbearably slow to “heal” from breakage: classic STP can take 30–50 seconds to reconverge after a link failure, while RSTP typically recovers in a few seconds or less.
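
If you want to see how long your network actually takes to heal, one rough approach is to log pings to a PLC during a deliberate link pull and measure the longest gap. A minimal sketch of the measurement step, with a hypothetical helper (not from any library), assuming you’ve already collected timestamped ping results:

```python
def longest_outage(samples):
    """Given (timestamp_seconds, reachable) ping samples in time order,
    return the duration in seconds of the longest unreachable stretch."""
    longest = 0.0
    outage_start = None
    for t, ok in samples:
        if not ok and outage_start is None:
            outage_start = t  # outage begins
        elif ok and outage_start is not None:
            longest = max(longest, t - outage_start)  # outage ends
            outage_start = None
    return longest

# Example: pings fail at t=2 and t=3, recover at t=4 (2 s to reconverge)
samples = [(0, True), (1, True), (2, False), (3, False), (4, True), (5, True)]
print(longest_outage(samples))  # prints 2.0
```

Run that against a 1-second ping loop while pulling a ring or redundant-path cable, and you get a concrete reconvergence number to show the skeptics instead of a datasheet claim.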
