Two Problems Standing Between You and Industrial AI at Scale
A deeper look at what's really holding back the Virtual Operator — and why there is no silver bullet (yet).
In our AI series opener, we introduced a new way of looking at how AI fits into industrial operations. We walked through the physical twin, the digital twin(s), the automation layer, and the idea of a Virtual Operator. If you haven’t read that one yet, start there — it sets the stage for everything that follows.
📣 A quick reminder before we start: Our next ITOT.Academy kicks off in May, and our early bird offer is available once more. Want to join our fourth group and learn how to bridge IT and OT? There is no better time than now!
👉 Check the curriculum & enrol via ITOT.Academy.
In this video, David walks you through the framework we presented last year at the ETLS conference in Las Vegas, and at multiple other events since.
Quick recap: where AI sits today
Today, AI in industrial settings is a bolt-on. A very clever bolt-on in some cases, but a bolt-on nonetheless.
On the digital twin side, AI tools help you search sensor data faster, spot patterns in quality data, and optimise production planning, to name just a few applications. On the automation layer, AI can act as a coding assistant, help configure systems, or aid operators in analysing events.
Useful? Absolutely.
A step change? Not yet.
Because the fundamental relationship between the digital and the physical world hasn’t changed. Operators and engineers still bridge that gap. Automation still handles the deterministic stuff. And for everything outside those boundaries — the unforeseen, the non-deterministic — humans step in.
So what needs to change to move beyond bolt-ons?
Three steps towards the Virtual Operator
To frame that, it helps to think in three stages of autonomy.
Step 1 — Assistance (where we are today). AI acts as a digital assistant. It gets access to part of your data set and helps you get work done more effectively. Think of it as a smarter search engine for your plant.
Step 2 — Collaboration. AI becomes a virtual coworker. It has enough context to recommend actions, flag anomalies, and support decision-making. But it has no direct control of the plant. It suggests; it doesn’t act. This is the equivalent of lane assist or adaptive cruise control in your car.
Step 3 — Autonomy. The AI system can independently handle operational decisions. In most cases, this requires a full rethink of your plant’s control and safety concepts. And we’re not there yet — not by a long shot.
The self-driving car analogy is useful here. The hard part of autonomous driving isn’t controlling the steering wheel. It’s the fact that every intersection is different, road conditions change constantly, and there will always be unforeseen circumstances. Industrial operations face the exact same challenge.
Challenge #1: The integrated digital twin
To move from Assistance to Collaboration, AI needs context. Not just access to one historian or one maintenance system, but a coherent view across all your digital twins.
Today, those systems are often standalone. Even when they are connected, they use different naming conventions, different data models, and different levels of detail. Linking them in a unified, structured, contextualised way is non-trivial.
We’ve talked about this before — when we covered the industrial data platform and unified namespace concepts. But it bears repeating: an integrated data layer is a prerequisite for everything we want to do with AI at scale. Without it, you’re optimising locally. You’re making a smarter silo, not a smarter plant.
The technology to build this layer exists. The complexity isn’t purely technological any longer — it’s the sheer amount of work it takes to integrate systems, align data models, and get your data governance right. And let’s be honest: you often need to build that model from scratch. It won’t appear out of thin air.
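To make that integration work concrete, here is a minimal sketch of what aligning two systems' naming conventions into one contextualised namespace involves. Everything in it is hypothetical — the system names, tag names, and the mapping table are invented for illustration, not taken from any real plant or product:

```python
# Minimal sketch: re-keying tags from two siloed systems onto one
# unified namespace. All system names, tag names, and the mapping
# table below are hypothetical; a real plant needs a governed,
# maintained mapping per source system.

# Raw readings as each source system publishes them
historian_data = {"PLT1_TT_101.PV": 74.2}        # historian tag convention
cmms_data = {"asset-4711/temperature": 73.9}     # maintenance system convention

# Hand-built mapping: source tag -> unified namespace path
# (enterprise/site/line/equipment/measurement/source)
tag_map = {
    "PLT1_TT_101.PV": "acme/plant1/line2/reactor/temperature/historian",
    "asset-4711/temperature": "acme/plant1/line2/reactor/temperature/cmms",
}

def to_unified(readings: dict) -> dict:
    """Re-key raw readings onto unified namespace paths,
    silently skipping tags we have no mapping for."""
    return {
        tag_map[tag]: value
        for tag, value in readings.items()
        if tag in tag_map
    }

# Merge both sources into one contextualised view
unified = {**to_unified(historian_data), **to_unified(cmms_data)}
```

Even this toy example shows where the effort actually goes: not the code, but the mapping table. Someone has to build, validate, and govern it, per system and per plant.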
Challenge #2: Understanding the physical twin
Here’s the harder problem — and the one that doesn’t get talked about enough.
Even if you solve the data integration challenge, your Virtual Operator still only sees the digital world. And the digital world is a representation of physical reality, not reality itself.
For a Virtual Operator to truly collaborate — let alone act autonomously — it needs to understand what’s actually happening in the physical plant. The context, the constraints, the unwritten rules, the things your experienced operators just know but have never documented.
Some startups are beginning to tackle this by parsing engineering diagrams, ingesting technical literature, and building physics-informed models. That’s promising work. But this is an extraordinarily tough challenge, and we’ll be working on it for years — probably decades.
Here’s the uncomfortable truth: it is simply impossible to use historical data alone to understand the physical twin. We’ve seen companies try it, and they fail. You can’t infer the physical reality purely from its digital shadow. Full stop.
And as long as we can’t bridge that gap, there is no replacement for the human operator.
Is that pessimistic?
We don’t think so. We think it’s realistic.
The progress in industrial AI is real. The assistance phase is delivering genuine value, and the tools are getting better every month. But when you see vendors claiming they’ll deliver autonomous operations with a bit of historical data and a large language model… it’s worth pausing and thinking through the fundamentals.
Taking things step by step isn’t pessimism. It’s how you build something that actually works — safely and sustainably.
The data foundation is everything. Start there. Get your integrated digital twin in order. And keep an eye on the physical twin challenge — because that’s where the real breakthroughs will come from.