Hi all, I'm wondering what has been a successful Azure/Git structure for anyone working as a small team, mixed discipline
I have a small team of one, sometimes two engineers, and for most of my Ignition development I was using a test and a production repo. I'm getting our IT involved, so now we're moving to dev/test/production and doing development on our individual gateways.
My Git repos have been at the top projects level, but that's getting unwieldy due to the .resources file changes for, say, 20 projects obscuring the meaningful changes.
(I'm going to try this: Using Git with Ignition projects - #4 by Jay_Baker)
I'm considering changing to a repo per project to silo off project changes and make it cleaner, but then I would have to make ~60 repos for dev/test/prod.
Does that in the end work well for anyone? So if you had dev/test/production environments, would you then have 3 separate Azure 'projects' for Dev/Test/Prod with pipelines at that project level?
I feel like someone who has a more complicated setup than me has likely locked this in as a best practice, care to share what's worked?
The way we've done it is we've got each application in a separate project (with a git repo per project), each of which inherits the Core project (which has a whole bunch of reusable/shared code, views, database queries, etc.). Everything is developed on the developer's Dev environment (and pushed to/pulled from GitHub). Then, to go to test and prod, I export the project from my dev box and import it on test/prod.
No, I just keep version numbers in each project, which I update each time I'm going to do a release to test/prod (which I have as part of a data structure in a script, as shown in the image below).
I then create a git tag with that version number at the point of release, so I have a history of released versions. I also then, when I export the project on my dev box, name the file with the version (eg for this example, file would be called Dashboards_v107.zip). So I keep all the exports of the versions, just for completeness, but I can also pull each version out of git if needed.
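The release-metadata structure described above isn't shown in text form here, but a minimal sketch of the idea (all names hypothetical, not the poster's actual script) might look like this - a version number kept in a project script, used both for the git tag and for naming the export:

```python
# Hypothetical sketch of keeping a release version in a project script.
# The same number drives the git tag and the export filename.

RELEASE = {
    "project": "Dashboards",  # project name (example from the post)
    "version": 107,           # bumped before each release to test/prod
}

def tag_name(release):
    """Git tag recorded at the point of release, e.g. 'v107'."""
    return "v%d" % release["version"]

def export_filename(release):
    """Filename used for the project export, e.g. 'Dashboards_v107.zip'."""
    return "%s_v%d.zip" % (release["project"], release["version"])

print(tag_name(RELEASE))         # v107
print(export_filename(RELEASE))  # Dashboards_v107.zip
```

Keeping the tag and the export filename derived from one value means the git history and the folder of export zips can't drift apart.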
Awesome - I had my release number internally, but was also tracking my 'pushes' to production with a repo on my production projects folder, which really is extra.
I have one main project, and then projects per application, although my 'main' project has become one huge perspective site which I need to butcher into some independent applications, which would further increase my project count.
For a repo with each project, eliminating my test repos and my production repos would make this manageable.
We don't have test/prod repos, as there should be nothing changed in the code between dev/test/prod - the way we do our setup, all configuration is held in databases. But if you're manually configuring tags etc. in projects so that the project actually differs on the different environments, I can see you might want separate test/prod repos to back up that config information.
Yeah, I think I can still work around that, or keep my one repo per environment but use individual repos per project for development. So instead of N*3 repos, I'll have N + 1 (test repo) + 1 (prod repo).
Could you not use a single repo for each project and three primary branches (dev/test/prod) in that one repo?
It seems to be a universal truth that each dev has their own GW. The issue I see with that is what if there are many gateways with different configs running the same core project? In that case, a container getting spun up with the project’s targeted gateway and the checked out branch would be best/most efficient.
DevOps pipelines would be configured to sync from dev to test, then test to prod.
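A promotion flow like that could be sketched in Azure Pipelines YAML roughly as follows. This is an illustrative fragment, not a known-working pipeline from the thread - the stage names are hypothetical, and the actual deploy step depends entirely on how project files get moved onto each gateway:

```yaml
# Hypothetical azure-pipelines.yml sketch: deploy on merges to test/prod.
trigger:
  branches:
    include:
      - test
      - prod

stages:
  - stage: DeployTest
    condition: eq(variables['Build.SourceBranchName'], 'test')
    jobs:
      - job: Deploy
        steps:
          - script: echo "Sync project files to the test gateway here"

  - stage: DeployProd
    condition: eq(variables['Build.SourceBranchName'], 'prod')
    jobs:
      - job: Deploy
        steps:
          - script: echo "Sync project files to the prod gateway here"
```

One pipeline per project repo, keyed off the branch name, keeps the dev → test → prod promotion explicit without needing separate Azure projects per environment.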
I don't see a reason why I couldn't other than having to configure more pipelines.
Is that working for you?
I recall starting that way, as it's how I would set up a 'conventional' software project, but I think I was thrown off by files like .resources that kept changing, and by gateway tags etc. - I felt like I needed to capture the state of my production gateway in a repo.
That's why I started this thread: I feel like I'm missing an obvious setup architecture that plays nicely with Ignition's architecture in a 'standard' setup.
I wish we had DevOps setup. We have no version control or CI and it’s just a bear to keep multiple gateways “in-sync”.
If I were to set it up myself, I would have gone with a single repo per project with dev/prod/test branches. Then only create branches off the dev branch, and merge to test and then to prod.
“There are many ways to skin a cat, but few produce fine fur.”
I see people sharing all sorts of methods and ways to accomplish VCS and CI, but there have to be just a few shining examples that everyone should try to work towards. In my opinion, it’s often shrouded in a veil of “it depends”. Maybe it does depend on the architecture in place, or some other variable. Still, I think there should be one shining example that works for single gateway/single dev applications and many gateway/many dev applications.
We are too deep into the woods now to set it up with 8.3 on the near horizon.
single repo per project with main (prod), develop, occasional “feature” branches
dev branch commits deployed to test environment, main branch commits deployed to QA/QC environment first, then prod after sign-off
“parent” repos (for gateway configs) with submodules pointing at all the relevant project repos (this will be so much more useful in 8.3)
and, critical for me: a .gitattributes file in every repo root with * -text to avoid CR/LF headaches and resource.json "modified by: external" churn in multi-platform environments
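For reference, the .gitattributes mentioned in that last point is a one-line file at the repo root (comment added here for illustration):

```
# Disable all CR/LF line-ending conversion so checkouts are byte-identical
# across Windows/Linux and resource signatures don't churn.
* -text
```

With `* -text`, git never rewrites line endings on checkout or commit, so the same resource files hash identically regardless of which OS the designer runs on.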
This is exactly how I felt as well. I thought I could wait until 8.3 but I have a new project I'm starting now and it's just going to be a mess without organizing my repos.
I tried out single repo per project with dev/test/prod on a test project and it feels good.
In the same vein, I feel the Azure DevOps & Ignition structure is likely pretty ubiquitous, so there must be a pipeline setup that's pretty well standard too.
And maybe that's just what's covered in here?: https://www.youtube.com/watch?v=jT0FIr9L47E&t=334s
If I may ask, how do you guys go about keeping your core projects up to date across gateways, while allowing end-users to make alterations or add content?
For instance, we deploy a core parent project to every plant and then create a child project from that core project. We instruct the end-users to do all their custom views or alterations in the child project. This way, we can update the core project without overwriting the work they have done. Sometimes, they need to alter the core code or views for niche cases. Those are usually done by overriding the parent resource.
This is the only logical method we could come up with that allows us to keep the core project up to date, and allow end-users to make their own views.
If their changes are vetted, then they're exported into my 'Dev' main project and added as a feature, so they can change their resource without it overriding the parent, and we can see if it still works.
I'm in a different boat where I'm my own end user, but a couple other people have worked on projects in some way (usually just graphics design)
at the moment, the main purpose is a templatized “development” base gateway config + some of the individual projects (as submodules). This allows a developer to clone the “develop” parent repo to their local machine and be mostly ready to go (all the important/inheritable projects come with it - no need to clone them individually). More user-friendly dev onboarding experience, I suppose. NOTE: there are plenty of caveats to this method (e.g. excluding any gateway config or projects that could interfere with prod if accidentally spun up on a prod-connected dev machine). Most of these caveats can be avoided with considered infrastructure/project architecture. I don’t advise this method for monolithic projects. This is also intended for “traditional” (Microsoft Server) environments. Containerized environments are better served by derived images with env variables and/or automated gwbk restoration.
To answer your question directly though, “parent” repos could technically have anything that’s not a project - modules, user-lib, config.idb, etc. Judicious use of .gitignore is necessary, and even then, IIRC, it’s pretty much impossible to avoid a “dirty” parent work tree after starting the cloned gateway on a dev box. The current solution is to just deal with it, and avoid ever committing/pushing the parent repos back to VCS (unless it’s a purposeful base config change - and then only after approvals - push permissions should generally be strictly access-controlled on parent repos). It’s not really ideal, but will presumably become more useful with 8.3 (tags in json instead of config.idb, environment-specific configs, etc.)
I’m fairly certain resource.json is a non-issue if all development work is done in the Designer (and CR/LF is properly handled in multi-platform environments). Only when resources are modified externally does this problem crop up, and thus far it’s seemed acceptable (to me) to reference the git commit author/date in cases of “modified by: external”. One could of course add resource.json to .gitignore and avoid that altogether, I suppose - perhaps a better case for a global git exclude. Good to hear it may be otherwise handled in future!
To close the loop on this thread: what I'm rolling with now feels good, and is pretty much what the best practices guide and the suggestions from here (thank you!) add up to. Here's a simplistic breakdown for posterity.
Three development environments:
Dev - (Individual dev computers)
Test - (VM copy of Prod, with copies of databases populated at time of copy. Tags are imported based on the project and level of testing exposure. E.g.: we're building a machine right now, and I have the live tags in the Test environment pointing directly at that machine)
Prod - Live
I have a repo with Dev/Test/Prod branches for each development environment, with each branch active on that environment. They're storing:
Exports of tags (I export if I have done any significant work on a project that may have touched tags)
Gateway Backups for config (.gateway_backups)
Images
Each project then has a repo with Dev/Test/Prod branches.
I think one aspect where I got off the beaten track is that I had independent repos for my dev/test/prod environments, all running off a main branch, for... some reason. Early on with Ignition I was getting my head around what was gateway-scoped vs. local, the script project, tag change scripts, etc. Being concerned that I might have critical functionality anywhere, I was trying to capture it in the most direct, clear "hey, here's a single possible thing that changed" way at the gateway level. With one project that was fine.
That project got huge, and was joined by many others - spring cleaning. Thanks for getting me back working and not thinking about my environment.
In your setup, did you setup remote gateway connections between dev/test/prod at all, or are they each their own without knowledge of the other?
When importing your tags for dev, do you change them all to memory or read-only so that if writes occur, they don’t affect the PLC?
Does your dev connect to same db as test?
To sync prod to dev pc, is it a gateway backup that you import?