Manufacturers are investing heavily in AI, yet many initiatives stall after early pilots. Models generate insights, dashboards light up, but teams hesitate to act. On the shop floor and in control rooms, the outputs often feel disconnected from how operations actually run.
At Operations Calling, leaders from ZS and AWS came together to discuss how manufacturers are preparing data for AI at scale. Across perspectives spanning manufacturing strategy, cloud architecture, and frontline operations, a consistent pattern emerged: AI doesn't fall short because models are weak; it breaks down when the underlying data lacks shared operational context across IT and OT systems.
This blog unpacks why context is the missing prerequisite for manufacturing AI. We’ll outline a practical data readiness playbook that shows how organizations can move from raw signals and siloed systems toward AI-ready architectures that support scalable, human-in-the-loop decision support.
What “context” really means for AI in manufacturing
In manufacturing, data already exists across machines, systems, and applications. The challenge is extracting meaning from it. Context is what explains what the data represents and why it matters for a specific decision.
"AI [is] only as good as its data, right? And it's not just the data. It goes beyond just the data. It goes into data context, right? Context is king."— Suraj Pai, Principal, ZS
Knowing a machine is running isn’t the same as knowing what process it should be running, for which product, and under what conditions. Without that connection to assets, processes, and products, AI systems can surface signals but struggle to support action.
Context is also role- and question-dependent. What matters to an operator, a quality lead, or a supply chain planner is different, even when they're looking at the same underlying data. That's why context can't be assumed or hard-coded; it has to be designed.
Finally, context isn’t only machine-generated. Experts emphasized that human inputs still fill critical gaps where systems fall short. For AI to work in real operations, it needs to account for both system data and human judgment.
Understanding manufacturing data readiness
Manufacturing data readiness means operational data is defined, structured, and connected so it can be reliably used across IT and OT systems.
In practical terms, data is ready when:
Assets, processes, and products are clearly defined.
The relationships between them are modeled.
Naming conventions are standardized.
Context stays attached to the data as it moves across systems.
This ensures that the same data represents the same thing everywhere it appears: in operations, quality, engineering, or enterprise systems.
Standards like ISA-95 and architectural patterns like the Unified Namespace provide a structural starting point. When this foundation exists, new analytics and AI use cases can build on shared models instead of recreating context for each project. It also means data doesn't need to be manually explained every time someone new uses it. The context is already built in.
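As a rough sketch of what this looks like in practice, here is a hypothetical reading published to a Unified Namespace topic whose path follows an ISA-95-style hierarchy. Every name and field below is illustrative, not a prescribed schema:

```python
# Hypothetical example: one reading published to a Unified Namespace.
# The topic path encodes an ISA-95-style hierarchy
# (enterprise/site/area/line/cell/signal), and the payload carries
# its own context instead of relying on tribal knowledge.
import json

topic = "acme/plant-austin/packaging/line-2/filler-01/temperature"

payload = {
    "value": 72.4,
    "unit": "degC",
    "timestamp": "2024-06-01T14:32:05Z",
    # Context travels with the data, so downstream systems
    # don't have to reconstruct what this reading means.
    "asset": "filler-01",
    "process_step": "fill",
    "product": "SKU-4411",
    "batch": "B-20240601-07",
}

print(topic)
print(json.dumps(payload, indent=2))
```

Because the hierarchy and field names are standardized, a quality system and a planning system reading this message see the same thing.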
When data is ready, new dashboards, analytics, or AI tools can use it immediately. When it’s not, teams spend most of their time figuring out what the data actually means.
Stages of context maturity in manufacturing
Context doesn’t appear all at once. It develops in layers, and most manufacturers operate across several of them at the same time. The key is understanding where context exists today and what’s required to move forward without over-engineering.
“These agents don't work well nor will [they] give you the right information or make better decisions … without the proper data.”— Venkat Gumatam, Partner Solutions Architect, Amazon Web Services
Stage 1: Asset-level context
At this stage, data is tied to individual machines, lines, or sensors.
You can see equipment status (running, stopped, or faulted), as well as basic metrics like cycle time, temperature, or output count. The data tells you what the asset is doing at a point in time.
What it does not tell you is whether the machine is running the correct product, following the right process parameters, or meeting performance expectations for that specific job. The data is isolated to the asset itself.
This is where most IT/OT integrations begin: connecting machines and collecting signals. It provides visibility, but not full operational meaning.
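For illustration, a hypothetical asset-level event might look like the sketch below. It captures what the machine is doing, but nothing about the job it is doing it for:

```python
# Hypothetical Stage 1 payload: the signal is tied only to the asset.
asset_event = {
    "asset": "press-07",
    "status": "running",          # running / stopped / faulted
    "cycle_time_s": 14.2,
    "temperature_c": 68.9,
    "output_count": 1032,
    "timestamp": "2024-06-01T14:32:05Z",
}

# Nothing here says which product is being made, which process step
# is underway, or what "good" looks like for this particular job.
print(asset_event)
```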
Stage 2: Process context
At this stage, data is tied to the process step the machine is performing.
You can see which operation is running, the workflow stage, and the defined setpoints or limits for that step. Performance data is no longer just a machine signal; it's evaluated against how the process is supposed to run.
You now know whether the operation stayed within its expected parameters.
What you still don’t fully know is how this performance affects a specific product, order, or batch. The data reflects process behavior, but not complete product impact.
This is where questions shift from “What happened?” to “Did it run as expected?”
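Building on the earlier sketch, a hypothetical process-context check might evaluate the same reading against the setpoints defined for the current step (all names and limits are illustrative):

```python
# Hypothetical Stage 2: the reading is tied to a process step with
# defined limits, so it can be judged against how the process is
# supposed to run.
process_step = {
    "operation": "cure",
    "workflow_stage": 3,
    "temperature_limits_c": (65.0, 75.0),  # setpoints for this step
}

reading_c = 68.9
low, high = process_step["temperature_limits_c"]
within_spec = low <= reading_c <= high

# The question shifts from "what happened?" to "did it run as expected?"
print(f"{process_step['operation']}: {reading_c} degC, within spec: {within_spec}")
```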
Stage 3: Product and batch context
At this stage, data is linked to specific products, recipes, or batches. Performance, deviations, and quality outcomes can be traced back to what was being produced at that moment.
This makes it possible to understand how process behavior affects a particular order or batch, not just the machine. It’s especially important in regulated and high-mix environments, where the same asset may run different products with different requirements.
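Extending the sketch one layer further, a hypothetical deviation record at this stage carries the batch, recipe, and product alongside the machine data, which turns traceability into a lookup rather than an investigation:

```python
# Hypothetical Stage 3: a deviation linked to what was being produced,
# so quality outcomes trace back to a specific batch and recipe.
deviation = {
    "asset": "press-07",
    "operation": "cure",
    "parameter": "temperature_c",
    "observed": 77.1,
    "limit_high": 75.0,
    "batch": "B-20240601-07",
    "recipe": "RCP-4411-v3",  # the same asset may run other recipes with other limits
    "product": "SKU-4411",
}

# With the batch ID attached, every unit in B-20240601-07 can be flagged for review.
print(f"Deviation on {deviation['asset']} affects batch {deviation['batch']}")
```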
Stage 4: Cross-functional context
At this stage, shop floor data connects to quality, engineering, and supply chain systems. A problem on a machine can be linked to its impact, such as scrap, delays, rework, or missed shipments.
Data is no longer just about how a line is performing. It shows how operational issues affect the business.
This is where decisions move beyond fixing a machine. Teams can understand the broader impact and act across departments, not just within one area.
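Continuing the same illustrative example, joining that deviation to hypothetical quality and order records is what turns a machine-level alert into a business-level signal:

```python
# Hypothetical Stage 4: the deviation joined to quality and supply
# chain records, making its business impact visible.
deviation = {"batch": "B-20240601-07", "parameter": "temperature_c"}
quality_hold = {"batch": "B-20240601-07", "disposition": "pending review"}
order = {
    "order_id": "SO-88213",
    "batch": "B-20240601-07",
    "promised_ship_date": "2024-06-03",
}

if quality_hold["batch"] == deviation["batch"] == order["batch"]:
    # A machine-level issue now reads as a potential missed shipment,
    # which is what planners and quality leads actually need to know.
    print(f"Order {order['order_id']} at risk: batch {order['batch']} on quality hold")
```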
Stage 5: Human context
Even in advanced environments, people are still part of the process. Operators record notes, explain issues during shift changes, handle exceptions, and make judgment calls when something doesn’t go as planned.
Systems don’t capture everything. Human input often explains why something happened, not just what happened. Strong data architectures treat this input as valuable operational data, not something to ignore.
“Not every shop floor is going to have every single aspect of it digitized. You still have humans in the loop that are capturing data and capturing elements of the manufacturing process.”— Suraj Pai, Principal, ZS
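In data terms, this can be as simple as storing the operator's note as structured data alongside the machine event. The sketch below is hypothetical, but it shows how the "why" can travel with the "what":

```python
# Hypothetical Stage 5: an operator's note captured as structured data
# alongside the machine event, preserving the "why" behind the "what".
event = {
    "asset": "press-07",
    "parameter": "temperature_c",
    "observed": 77.1,
    "operator_note": {
        "author": "J. Rivera",
        "shift": "B",
        "text": "Chiller briefly tripped during changeover; reset at 14:30.",
    },
}

# Downstream analysis can now distinguish a chiller trip from a drifting sensor.
print(event["operator_note"]["text"])
```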
AI readiness depends on how well data can move through these layers without losing meaning. Problems arise when organizations try to deploy AI before the necessary context is in place.
Designing context without over-engineering
Preventing AI breakdown starts with deliberate data design. The goal is not to create a perfect model; it's to create a usable one.
Start with shared definitions. Clearly define assets, process steps, products, and key entities so they are consistent across systems. Adopt common naming conventions so the same term means the same thing in operations, quality, and enterprise tools.
Next, model the relationships that matter. Link machine events to process steps. Connect process steps to products or batches. Tie operational events to quality and supply chain systems. Focus only on the relationships your priority use cases require, not every possible scenario.
Use standards like ISA-95 and ISA-88 as reference points, not blueprints. Borrow the structural clarity they provide, then adapt them to match how your operations actually run.
Finally, design for evolution. Ontologies allow you to define entities and relationships in a way that can expand as new products, workflows, or sites are added. This keeps data structured without locking it into rigid hierarchies.
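To make that concrete, here is a minimal sketch of what such an ontology might look like in code, covering shared definitions, explicit relationships, and room to grow. The entity kinds and relationship names are assumptions for illustration, not a reference implementation:

```python
# A minimal, hypothetical ontology: entities and typed relationships are
# defined once, so new products, workflows, or sites extend the model
# instead of forcing a redesign.
from dataclasses import dataclass, field

@dataclass
class Entity:
    kind: str   # e.g. "asset", "process_step", "product", "batch"
    name: str

@dataclass
class Ontology:
    entities: dict = field(default_factory=dict)
    relations: list = field(default_factory=list)

    def add(self, kind: str, name: str) -> None:
        self.entities[name] = Entity(kind, name)

    def relate(self, subject: str, predicate: str, obj: str) -> None:
        # Model only the relationships your priority use cases need.
        self.relations.append((subject, predicate, obj))

model = Ontology()
model.add("asset", "press-07")
model.add("process_step", "cure")
model.add("batch", "B-20240601-07")
model.relate("press-07", "performs", "cure")
model.relate("cure", "produces", "B-20240601-07")
print(model.relations)
```

Adding a new site or product line means adding entities and relations, not redesigning the schema.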
When manufacturers take this approach, AI systems don’t have to reinterpret data each time it crosses a boundary. New use cases build on shared foundations instead of creating parallel models. That’s how context scales and how AI continues to work as complexity grows.
Preparing for agentic AI with humans in the loop
Much of the first wave of AI in manufacturing focused on insight generation through dashboards, predictions, and recommendations. Agentic AI changes the expectation. These systems don’t just analyze data; they can take action, trigger workflows, and coordinate decisions across systems.
“We are entering into [an] iron man type of world going forward, where we are talking to the systems and the systems will be talking to us.”— Venkat Gumatam, Partner Solutions Architect, Amazon Web Services
That shift raises the bar for data readiness. When AI moves from suggesting to doing, gaps in context become risks. An agent needs to understand not just what happened, but where it happened, why it matters, and what constraints apply. Without that grounding, automation becomes brittle instead of reliable.
This is why human-in-the-loop design is essential. Agentic systems work best when humans validate decisions, handle exceptions, and provide judgment where data is incomplete. Rather than replacing operators or engineers, these systems are designed to augment them by surfacing context, accelerating decisions, and keeping accountability with people.
“At the end of the day, there’s got to be a deliberate intentful design with human-in-the-loop.”— Suraj Pai, Principal, ZS
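A hypothetical approval gate makes the pattern concrete: the agent drafts an action, but nothing executes until a person signs off. In a real deployment the approval would route to an operator's work queue rather than a console prompt:

```python
# Hypothetical human-in-the-loop gate for an agentic workflow.
def propose_action(deviation: dict) -> dict:
    # Agent reasoning would go here; this stub just drafts a response.
    return {
        "action": "hold_batch",
        "batch": deviation["batch"],
        "reason": f"{deviation['parameter']} exceeded its limit",
    }

def human_approves(proposal: dict) -> bool:
    # Stand-in for an operator approval step; not a production interface.
    answer = input(f"Approve {proposal['action']} on {proposal['batch']}? [y/n] ")
    return answer.strip().lower() == "y"

proposal = propose_action({"batch": "B-20240601-07", "parameter": "temperature_c"})
if human_approves(proposal):
    print("Action executed with operator sign-off.")
else:
    print("Proposal escalated for engineering review.")
```

The design choice matters more than the mechanism: accountability stays with people, and the agent's job is to surface context and draft the next step.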
For IT and OT leaders, preparing for agentic AI isn’t about deploying agents first. It’s about designing operating models and data foundations that assume collaboration between systems and humans from the start. When context is in place, agentic AI becomes a practical extension of operations. Without it, even the most advanced models struggle to earn trust.
How Tulip enables data context at scale
Tulip supports manufacturing data readiness by capturing context where work actually happens. As a Frontline Operations Platform, Tulip brings together human inputs, machine data, and process workflows in a single environment, allowing operational meaning to be embedded as data is created rather than reconstructed later.
By digitizing frontline processes, Tulip helps standardize how assets, operations, and products are represented while remaining flexible enough to reflect site-specific realities. Engineers can define workflows, data collection, and naming conventions that align with broader IT and OT architectures without forcing rigid, top-down models.
This contextualized data can then flow into enterprise systems, analytics platforms, and cloud services, supporting AI use cases that require consistent structure across lines and sites. The result is a scalable foundation for human-in-the-loop AI, grounded in real operations, adaptable over time, and designed to support both local execution and enterprise decision-making.