Same Model -> Same Generated Runtime: Declarative Ignition Provisioning

Hello everyone,

I have been exploring a simple question:
Can an Ignition project be provisioned more like a repeatable software system and less like a manually assembled SCADA project?

In short, I am trying to automate the build process itself, not just build another SCADA project (platform as runtime).

After following recent discussions around large-scale tag provisioning, version control limitations, and DevOps workflows in Ignition,
I tried a different approach: treating the system as declarative state driven by a single definition.

Over the last few months, I built an experimental (controlled deterministic lab) provisioning layer for Ignition called PHX IIoT.
This is a single-developer concept validation, and the goal has been to treat Ignition as a runtime platform for declarative orchestration.

I wanted to push that maturity logic one step further:
"It works -> It can be governed" -> "It can be rebuilt from definition"

From a single ISA-95-aligned CSV model, the current PoC generates:

  • Tags and UDT instances
  • Historian mapping and database schema
  • Runtime UI structure in Perspective

Core idea:
The system is not the truth. The definition is.

Currently validating practical limits under real-world conditions.
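To make that pipeline concrete, here is a minimal, self-contained sketch of the first stage, assuming a hypothetical CSV layout (`site,area,line,tag,udt`); the real model is richer, and in Ignition the resulting structure would ultimately be handed to `system.tag.configure()` rather than kept as a plain dict:

```python
import csv
import io

def rows_to_tag_config(csv_text):
    """Group flat CSV rows into a nested Site/Area/Line structure.

    The dict shape loosely mirrors what Ignition's system.tag.configure()
    accepts, but this sketch is plain Python and makes no gateway calls.
    """
    root = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        node = root
        # ISA-95-ish equipment path: Site / Area / Line
        for level in (row["site"], row["area"], row["line"]):
            node = node.setdefault(level, {})
        # Each leaf becomes a UDT instance reference
        node[row["tag"]] = {"typeId": row["udt"]}
    return root

model = ("site,area,line,tag,udt\n"
         "Plant1,Packing,Line1,Filler01,Motor\n"
         "Plant1,Packing,Line1,Capper01,Motor\n")
config = rows_to_tag_config(model)
```

The same nested structure then drives historian mapping and view generation, which is what keeps the three outputs consistent with each other.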

In the current PoC, I can provision a live runtime from zero with roughly 30,000 tags in about 30 seconds in my test environment.

What matters most to me is not only bulk generation, but determinism and repeatability:

same input -> same generated system -> every time
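One way to make that guarantee checkable (a sketch, not PHX IIoT's actual mechanism) is to hash a canonical serialization of the generated configuration, so two independent builds from the same model can be compared byte for byte:

```python
import hashlib
import json

def structural_checksum(config):
    """Hash a canonical serialization (sorted keys, fixed separators),
    so logically identical models always produce the same digest
    regardless of build order or dict insertion order."""
    canonical = json.dumps(config, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Two configurations built independently, in different key orders
a = {"Filler01": {"typeId": "Motor"}, "Capper01": {"typeId": "Motor"}}
b = {"Capper01": {"typeId": "Motor"}, "Filler01": {"typeId": "Motor"}}
assert structural_checksum(a) == structural_checksum(b)
```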

I have also been stress-testing the approach with controlled teardown / rebuild cycles and larger tag bursts to understand where the platform holds up and where instability begins.

Demo video:

Current environment:

  • Ignition 8.3.2
  • Apple M1 Pro
  • 2 GB JVM heap

Observed so far:

Deterministic rebuilds are working reliably in the current PoC.

Observed performance:

  • Peak throughput: ~2,700 tags/sec
  • ~1k tags -> 18.6 sec.
  • ~10k tags -> 24.5 sec.
  • ~30k+ tags -> 28.7 sec. (stability limits become visible after a few hours of runtime)
  • ~100k+ tags -> ... (stability limits become visible during deployment)

At roughly 35k tags, memory pressure and GC become visible and stability starts degrading.

My current takeaway is that this kind of approach is most useful for template-heavy rollouts, standardized greenfield builds, and controlled brownfield rebuild scenarios.

This is not a finished SCADA product.
It is a provisioning/runtime foundation experiment.

I would really value feedback from experienced Ignition users on a few points:

  1. Does a declarative, repeatable provisioning model make practical sense for real Ignition projects? If you have seen similar model-driven rollout patterns, where do you think the biggest risks are?

  2. During teardown / rebuild cycles, I sometimes see leftover tables or historian artifacts after a database wipe. Is there a reliable way to know when it is actually safe to begin a new deployment? I am also researching historian continuity parameters such as UUIDs, especially for disaster recovery scenarios.
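One pragmatic pre-flight check, sketched below under the assumption that the historian uses Ignition's usual SQL table-name patterns (`sqlth_*` metadata tables and `sqlt_data_*` partitions — verify against your version), is to scan the schema for leftovers before allowing a new deployment:

```python
import re

# Table-name patterns typically used by Ignition's SQL tag historian:
# metadata tables like sqlth_te / sqlth_partitions, and monthly data
# partitions like sqlt_data_1_2024_01. Confirm for your installation.
HISTORIAN_PATTERNS = [re.compile(p) for p in (r"^sqlth_", r"^sqlt_data_")]

def leftover_historian_tables(table_names):
    """Return historian tables still present. A new deployment is only
    safe to start when this list is empty (or intentionally kept)."""
    return sorted(t for t in table_names
                  if any(p.match(t) for p in HISTORIAN_PATTERNS))

# table_names would come from information_schema in the real check
existing = ["alarm_events", "sqlth_te", "sqlt_data_1_2024_01"]
leftovers = leftover_historian_tables(existing)
```

Gating the deploy on an empty leftover list turns "wait a bit and hope" into an explicit precondition.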

Notes on context:
I pivoted into industrial software only a few months ago, so while I am pushing this hard from a software and systems perspective, I may still be missing some field-specific blind spots. That is part of why I am posting here.

The goal is not control, write-back, or critical alarming, but safe onboarding of live field data into the model, followed by a limited reporting window to test whether the architecture remains useful outside a digital setup.

My next steps, once the core is finalized (including hot-patching, a fully decoupled architecture, and safe historian continuity), are:

  • Validate the core in a real read-only shadow pilot
  • Improve communication and mapping layers
  • Build a limited public playground for teardown/rebuild testing

I am still exploring the best path forward, so any feedback on ordering the roadmap would be incredibly helpful too.

I am intentionally pushing beyond typical usage patterns to understand the platform, not to claim completeness or readiness.

My goal is to make this layer safe and robust. So, brutal engineering feedback is very welcome.

If you see where this breaks, I would genuinely like to know. If you break it, we are in this together.

P.S. My name literally translates to 'Fire'. So I guess, ending up with Ignition was inevitable for me.

In my opinion (as a fellow software-first person with ~zero experience deploying Ignition in real systems), this suffers from a common category error:
The actual fundamental problem is that real life drifts from computer systems moments after deployment.

An immutable "deployed from base configuration" Kubernetes-like SCADA-as-cattle-not-pets layer sounds great, and (if everything in it works perfectly) eliminates one possible source of drift. It does nothing to address the fundamental tension - computer systems are not real life. As long as that's true (aka, forever), any digital system is always going to be working upstream.

Today you write out a definition of your greenfield plant floor/facility/datacenter/whatever. You automagically create a SCADA system for it, and everything works perfectly.
Tomorrow, a bird nests in a housing/a rat chews through a wire/a forklift drives through a wall/etc - some physical thing happens. Joe or Jane on the floor fixes it and moves on with their day.

All that aside,

Just don't use LLMs, and you get determinism and repeatability for free. It's harder to make computers give you randomly wrong answers than it is to get them to give you the same answer every time.


Thanks for your comment, Paul. I also appreciate the perspective, especially coming from a similar background.

I think that concern is fair, but I may not have framed the scope clearly enough.

I am not trying to solve physical-world drift. A bird nesting in a housing, a damaged cable, a technician moving something in the field, or an undocumented bypass will always exist outside the digital model unless someone captures that change back into the definition. I do not think a provisioning layer removes that reality.

The narrower problem I am working on is configuration and runtime-foundation drift on the Ignition side: can tags, historian mappings, database structure, and generated runtime views be created, compared, torn down, and rebuilt from a single definition in a repeatable way?

So for me, determinism here is not just “the computer gives the same answer twice”. In a stateful platform with async subsystems and side effects, the harder question is whether the observable deployed state is reproducible and reconcilable:

same model -> same generated runtime -> same structural checksum / same deploy outcome

That is the part I am testing.

If the plant changes physically, the model must also change. I am not arguing against that. If anything, my assumption is the opposite: since reality will drift, the digital foundation should at least be explicit, rebuildable, and structurally comparable instead of partly living in manual gateway state.
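As an illustration of "structurally comparable", here is a minimal flat-path diff between a declared model and a runtime snapshot; the path names and value shapes are invented for the example:

```python
def diff_state(definition, runtime):
    """Compare the declared model against a snapshot of the deployed
    runtime. Drift becomes an explicit, reviewable list instead of
    hidden gateway state."""
    drift = {"missing": [], "unexpected": [], "changed": []}
    for path, spec in definition.items():
        if path not in runtime:
            drift["missing"].append(path)      # defined but not deployed
        elif runtime[path] != spec:
            drift["changed"].append(path)      # deployed but diverged
    # present in the runtime but never defined
    drift["unexpected"] = [p for p in runtime if p not in definition]
    return drift

defined = {"Line1/Filler01": {"typeId": "Motor"}}
deployed = {"Line1/Filler01": {"typeId": "Motor", "enabled": False},
            "Line1/TempFix": {"typeId": "Valve"}}
drift = diff_state(defined, deployed)
```

Everything downstream (reporting, reconciliation, alerting on drift) can then operate on that one structure.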

So I think your criticism is useful, but I would separate two layers:

  • Physical/operational drift in the plant

  • Configuration/runtime drift in the SCADA foundation (up-down scale or topology change)

Nice point on determinism and LLMs. To borrow the contrast, what I am building is not a system that improvises. It is closer to a compiler plus a reconciler: the definition is the source of truth, the runtime is the materialized state, and reconciliation brings that runtime back in line when scale or topology changes.

Put differently: the field will always be messy, but the map still needs to be clean. If the map is not clean, we cannot even observe or reason about drift in the field clearly.

I think this is the flaw. IMNSHO, reconciliation needs to update the definition to the found state of the runtime.

The entire Ignition configuration and designer experience, and therefore the tools available for troubleshooting and issue resolution, are live. Your idea of reconciliation destroys the answers to solved problems.

Ideally, reconciliation would report the differences for evaluation, to see if better solutions can be offered in place of either simple definition change or runtime reversion to definition.


Thanks, this is a very useful criticism.

I think you are right that reconciliation gets dangerous if it means blindly forcing runtime back to the definition. In a live Ignition system, some runtime differences are noise, but some are the result of troubleshooting or hard-won operational fixes. In that sense, Joe and Jane (that Paul mentioned) are inside the system now: if they solve something live, the platform should detect it and force an explicit decision instead of silently deleting it.

What I am converging toward now, especially for brownfields, is governed diff resolution. The definition remains the primary truth for the generated foundation, but not every runtime delta should be silently overwritten. The flow is closer to "plan -> review -> resolve -> apply". At that point, reproducibility is no longer just about the base CSV/model by itself. It becomes:

"same base model + same resolution manifest + same policy set -> same reconciled runtime".

The shape I am imagining now is something like this:

Governed State Reconciliation

  1. A CSV/model arrives
  2. The foundation reads the current runtime state
  3. A diff is produced
  4. The diff is classified into:
    • safe auto-apply
    • review required
    • dangerous/blocking
  5. Only ambiguous conflicts are shown to the user
  6. The user chooses one of a small set of typed outcomes:
    • revert to definition
    • promote runtime change back into the definition
    • keep as an explicit exception
    • defer
  7. Those decisions are written into a resolution / override / exception manifest and journaled
  8. Then the apply step runs
  9. The final state is checksummed / verified against the governed state
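The classification and manifest steps above could look roughly like this; the operation names, outcome strings, and the toy policy are placeholders for illustration, not the real rule set:

```python
SAFE, REVIEW, BLOCKING = "safe-auto-apply", "review-required", "blocking"

def classify(change):
    """Toy policy for the sketch: additions auto-apply, deletions block,
    everything else needs human review."""
    if change["op"] == "add":
        return SAFE
    if change["op"] == "delete":
        return BLOCKING
    return REVIEW

def plan(changes, manifest):
    """Resolve changes against prior typed decisions in the resolution
    manifest; return only the paths still needing a human decision."""
    surfaced = []
    for c in changes:
        decision = manifest.get(c["path"])
        if decision in ("revert", "promote", "exception", "defer"):
            c["resolution"] = decision   # journaled in an earlier run
        else:
            c["resolution"] = classify(c)
            if c["resolution"] == REVIEW:
                surfaced.append(c["path"])
    return surfaced

changes = [{"path": "Line1/New", "op": "add"},
           {"path": "Line1/Filler01", "op": "modify"}]
# First pass: the modify is ambiguous and gets surfaced for review
assert plan(changes, {}) == ["Line1/Filler01"]
# After the decision is journaled, the same input replays cleanly
assert plan(changes, {"Line1/Filler01": "exception"}) == []
```

The second assertion is the determinism claim in miniature: the same changes plus the same manifest replay to the same plan with nothing left to ask.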

In short: classify first, surface only the ambiguous cases, constrain outcomes to a small typed set, and journal every decision into the resolution/exception manifest before apply. Otherwise determinism is lost.

Thanks, this discussion has been genuinely useful in sharpening the model for me. I think the more mature form of reconciliation in Ignition is not that definition wins no matter what, but a governed reconciliation process where the definition remains the primary truth, and approved exceptions are made explicit truth.

This does sound interesting. Curious to see how it works out.
