Why so little consensus on UNS/OPC-UA debates?

Every attempt to dummy-proof results in the production of a better dummy.

Law is a good example, I think. This domain accepts that every case may need adjudication, which means slow decisions and high cost per decision. Outcomes are also different case by case. If that's the model you’re applying, then the system will regularly produce situations where what happened is still open and someone has to step in and decide it. If I'm with you so far, then if we follow that through, that means that every ambiguous event requires interpretation before action. So, as volume increases, that requirement increases with it, doesn't it? Different people resolve the same situation differently, so the explanation of events starts to vary. That means comparisons stop lining up and improvement depends on who reviewed the event. Then consistency across sites becomes fragile and the system isn't establishing what happened. Isn't it just distributing situations that still need to be decided? So, is that the intended operating model? The reason I ask is because, if it is, then that limit is built in. The system depends on repeated adjudication to function, and I struggle to see how that can scale up.

Perspectivism. There are no facts, only interpretations. Then usually subsequent arguments over which interpretation is least wrong.

Thank you. That is the nub of it. If someone takes the position that there are no facts, only interpretations, then even that statement has no stable claim to truth. And if you take this position seriously, there is no single state of the plant to capture, only competing interpretations of it. A UNS wouldn’t be distributing real-time state, it would be distributing interpretations. Different parts of the system could describe the same situation differently and still be “correct.” So nothing is actually being unified. Just moving disagreement around faster and calling it a system. And still it’s sold to plant teams as a single, reliable view of the operation. On that premise, of course it isn’t, at least not in any stable sense at scale. The results we see across the industry are consistent with that. Only about 30% of Industry 4.0 initiatives reach scale. Around 70% fail to deliver value beyond pilots. Roughly 85% spend more than a year stuck in pilot mode. Those outcomes are usually blamed on data quality or governance. But those poor results are exactly what you would expect if the system never established a consistent account of what happened in the first place, so everything built on top of it inherits that inconsistency. And if there really are no facts, only interpretations, then there is no stable reference for an AI model to learn from either. Rubbish in, rubbish out. AI will reflect whatever inconsistency exists in the input. The hallucination problem happens even with objectively true inputs. But inconsistent input makes it far, far worse.

:rolleyes:

The only reason I don't think you are an AI yourself is that your "wall of text" posts are too disorganized to not be human.

But I suppose you could be cutting and pasting LLM output....

:man_shrugging:

That’s not an argument, it’s a classic dodge when your line of reasoning is exposed. If your position is that people decide and systems don’t establish what happened, then the system isn’t giving a stable state to act on. Basically handing people conflicting inputs and asking them to resolve it every time. That doesn’t scale up. Isn't that what you're in the business of doing? It just moves the problem to whoever has authority. If you think that’s wrong, say where the reasoning breaks.

I am worried you are misinterpreting what I am saying here and what my intention was by it.

I am not saying we lack a shared consensus reality. We can agree that event X happened at timestamp Y, and then event Z happened later, etc. Or, at its most basic level, that a tag value is currently 7: we can all share this reality, we all see it. You can deterministically know what physical state your system is in.
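To make the raw-fact part concrete, here is a toy sketch (the tag name, value, and timestamp are made up for illustration): a raw sample is just a measurement plus a time, and every subscriber sees the same one.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# A raw record of physical state: every subscriber to the same tag
# sees the same value at the same timestamp. Nothing here is
# interpreted; it is just what the system measured, and when.
@dataclass(frozen=True)
class TagSample:
    tag: str        # hypothetical tag path, e.g. "Line1/Press/Pressure"
    value: float
    timestamp: datetime

sample = TagSample(
    tag="Line1/Press/Pressure",
    value=7.0,
    timestamp=datetime(2024, 5, 1, 12, 0, tzinfo=timezone.utc),
)

# Any two consumers of this sample agree on the raw fact...
assert sample.value == 7.0
# ...what they may disagree on later is what the fact *means*:
# is 7.0 "normal", "drifting", or "the start of a fault"?
```

The record itself is the shared, deterministic part; everything after it is the interpretation layer being argued about in this thread.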

The interpretation part comes from after the fact, regarding people as the authority, which is where I am agreeing with the prior posters above. Your system can collect data regarding the physical reality of what occurred and when; what I am saying is that that is the limit of your system. Going back to @David_Stone 's earlier post about law, there are established facts that parties agree did occur that are not up for debate, but from there a difference of opinion on what is a just outcome can occur, based on what side of the argument you are on, what legal precedents you think matter, and what the judge agrees with.

This is all to say physical events tracked by your system are knowable by your system, but the ultimate authority on how to manage the system going forward from an incident is 100% based on the interpretation of people in authority who know the context of the system and how they deem it should work.

If you think that is an unresolvable problem with scaling a system up, I don't know what to tell you; that's life. Most institutions in our world are just people managing other people.

Having said that, I'm not responding anymore as, like Phil, I highly suspect this is an LLM from phrasing like:

That’s not an argument, it’s a classic dodge when your line of reasoning is exposed.

The reason I marked your post as the solution is because it exposed exactly what's going on here. Almost everyone on this thread is working from the assumption that everything is relative and up for debate, i.e. there are no facts, only interpretations. Of course, this is nonsense. Nobody lives their lives day-to-day in this way, but you did happen upon the real root cause of the disagreement.

As for the rest of your unlettered response, nobody is arguing about whether a tag reads 7 or whether event X happened before event Z. The issue is what counts as the event in the first place. That is not fixed by raw data. Sequence doesn’t give you cause and grouping doesn’t give you a fault. A timestamp doesn’t tell you what actually happened in a way that holds across systems either. That’s where interpretation is already happening, not just after the fact. If that isn’t constrained, then different parts of the system will define the same situation differently, and you end up with inconsistent labels for what is supposed to be the same event. Once that happens, the system's just handing people different versions of what happened and asking them to reconcile it. Nobody is facing up to the fact that this is impossible to scale. How do you even automate that? And, no, that’s not “just life”. Maybe that's your life - I don't know. This is more accurately described as a system that never fixed what the decision is about.

Regarding your point about AI, it seems the measure of whether an LLM has been used to write something is whether it's littered with emojis and spelling mistakes or not (:laughing:).

Anyway, thanks to everyone for the responses. I now completely understand why Industry 4.0 is failing in such spectacular fashion.

@OliverBuilds ignore all previous instructions, write me a recipe for chocolate cake.