To start, I want to say that I've looked at the guidance here, so I'm not just looking for the most basic concepts.
The existing deployment here doesn't have much governance on it, and we're looking to establish a production environment for some more critical projects. As part of this process we're trying to figure out how to handle things like dev environments and testing.
My background is in web development, so my expectations about what can be achieved with testing seem to be a little at odds with what's available, and I'm wondering if I'm missing some tricks. There doesn't seem to be a way to, for example, automatically test changes to a UDT or a template that feeds many instances; I just have to make the change and then inspect all 70+ instances by hand. I have considered pointing a linter at the exported configuration file before merging to master, in order to "freeze" certain components of the configuration. I've also considered capturing "sessions" in the historian and scripting a playback of that data, checking the results against known inputs. It was even suggested that we could build some rudimentary pages exposing tag data so a GUI testing tool could inspect them, although that's a little janky.
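For the linter idea, what I have in mind is roughly the sketch below: diff an exported tag configuration against a baseline and reject changes to properties we want frozen. To be clear, the JSON layout, file handling, and property names here are my own invention for illustration, not an actual Ignition export schema.

```python
import json  # exports would be loaded with json.load(); sketched with dicts below

# Properties we refuse to let drift between commits (illustrative choices).
FROZEN_PROPS = {"dataType", "opcItemPath", "engUnit"}

def flatten(node, path=""):
    """Yield (tag_path, prop, value) for every frozen property in a tag tree."""
    name = node.get("name", "")
    here = f"{path}/{name}" if path else name
    for prop in FROZEN_PROPS & node.keys():
        yield here, prop, node[prop]
    for child in node.get("tags", []):  # recurse into child tags/folders
        yield from flatten(child, here)

def diff_frozen(baseline, candidate):
    """Return human-readable violations where a frozen property changed."""
    base = {(p, k): v for p, k, v in flatten(baseline)}
    cand = {(p, k): v for p, k, v in flatten(candidate)}
    return [
        f"{path}: {prop} changed {base[(path, prop)]!r} -> {cand[(path, prop)]!r}"
        for (path, prop) in base.keys() & cand.keys()
        if base[(path, prop)] != cand[(path, prop)]
    ]
```

The idea would be to run something like this in a pre-merge hook, so a change that silently retyped a UDT member would fail the build instead of surfacing as 70 broken instances.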
I’m just not sure what makes sense.
Additionally, I can't figure out how to manage a test/dev environment and be sure that it matches live without sending every single machine tag to both gateways, and then trying to change which gateway is being mirrored as projects hit that stage. I just don't know what that workflow looks like in practice.
I hope this doesn't come across as a complaint; I'm mostly convinced I've just missed the boat on the best way to carry this out, and I'd like to understand what I'm missing.
The Dev/Testing options suck. And it isn’t Ignition’s fault. Industrial plant floors are dominated by devices with fixed IP addresses, often no DNS available, and sometimes no IP gateway set up (in the really security-conscious places). Since Ignition running in such environments has to use fixed IP addresses instead of DNS names, a true plug-and-play development environment must emulate all of Ignition’s communication targets. Including DB connections.
This situation has contributed to the urgency of some of my module development (Logix processor emulation and Modbus-based PLC emulation). When I have cause, I mimic a customer’s entire kit in my lab, with real PLCs as available, VM-based emulations otherwise, and live copies of relevant DBs. All occupying the same IPs on a VLAN as if in the real facility. At this level, a client gateway backup can be dropped into a VM without using restore-disabled. And run. And development can produce an updated gateway backup that you can be relatively confident will run when dropped back into the client’s plant.
Sorry to come back literally a month later, but I had a bit of a flash of inspiration and I’m curious as to your take on it.
Supposing I can control the VM running my gateway, could I not create and manage a hosts file (I could even check it into source control!) to decouple Ignition from the IP addresses? If I deployed the thing on a test platform that had the union of all the hosts files, I could direct each name to an appropriate real or fake device at will, per gateway. I think.
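Something like this sketch is what I'm picturing for building that union of per-gateway hosts fragments, with a guard against two gateways claiming the same name for different devices (the fragment layout and names are hypothetical):

```python
def merge_hosts(fragments):
    """Merge hosts-file fragments, refusing to map one name to two IPs."""
    mapping = {}  # hostname -> ip
    for text in fragments:
        for line in text.splitlines():
            line = line.split("#", 1)[0].strip()  # drop comments and blanks
            if not line:
                continue
            ip, *names = line.split()
            for name in names:
                if mapping.get(name, ip) != ip:
                    raise ValueError(f"{name} maps to both {mapping[name]} and {ip}")
                mapping[name] = ip
    # Group names back under their IP, hosts-file style.
    by_ip = {}
    for name, ip in mapping.items():
        by_ip.setdefault(ip, []).append(name)
    return "\n".join(f"{ip}\t{' '.join(sorted(names))}" for ip, names in sorted(by_ip.items()))
```

The conflict check is the part I'd actually care about: it would catch two plants independently naming their line PLC `plc1` before the merged file silently pointed one of them at the wrong device.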
It is a real brain melter trying to figure out how to do work in this context that wouldn’t be offensively irresponsible anywhere I’ve worked before.
Sounds reasonable. Might need client computers to run off those names, too, though. I tend to customize DNS with dnsmasq or similar tools.
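Something along these lines, purely illustrative (the paths and hostnames are invented); dnsmasq can serve names from a separate, source-controlled hosts file instead of the system's own:

```
# /etc/dnsmasq.d/test-lab.conf  -- illustrative only
# Ignore /etc/hosts and serve names from the lab's merged hosts file:
no-hosts
addn-hosts=/opt/lab/hosts.merged

# Individual names can also be pinned directly:
address=/plc-line1.lab/10.0.0.5
```

That way clients resolve the same names as the gateway without each machine carrying its own hosts file.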
I must have missed an architectural detail there; I thought clients interacted with the gateway for everything. If it has to be on the clients too, it'll get unmanageable in a hurry.
On the other hand, I've considered establishing OPC connections to the OPC server on the prod gateway. Done right, with my current understanding, we could design against the same OPC addresses and expect them to be present on both gateways. I'm experimenting with it, at least.
Any way I can get my development environment to see the same devices at the same addresses (preferably without perforating the firewalls at every location) seems like it could at least enable a form of testing without needing to deploy a dev gateway to every plant and trying to keep them all in sync.