Artificial intelligence is dominating conversations about the future of industry, but turning experimentation into real operational deployment takes time. With data showing that up to 95% of AI pilots never reach production, it’s clear most organizations are still in the early stages of their AI journey.
During Operations Calling, Natan Linder, Co-Founder and CEO of Tulip; Tom Bianculi, Chief Technology Officer at Zebra Technologies; and Alexandra Francois-Saint-Cyr, BD Executive, Industrials at AWS had a conversation focused on what AI adoption actually looks like inside operational environments. While experimentation is accelerating, the organizations seeing results are focusing on practical applications by embedding AI into workflows, capturing operational knowledge, and improving decision-making on the shop floor.
In this article, we explore where AI is already delivering value in operations, how teams are adopting AI on the shop floor in manufacturing, and what the next phase of operational AI may look like.
Edge Computing and Intelligent Operations
As organizations begin exploring AI in operational environments, two concepts often appear together: edge computing and intelligent operations.
Edge computing refers to processing data close to where it is generated, like on machines, cameras, sensors, or industrial devices on the shop floor. Instead of sending all operational data to centralized systems, edge infrastructure allows certain analysis and responses to happen locally, reducing latency and enabling faster reactions during production.
At the same time, cloud platforms provide the scale needed for advanced analytics, machine learning models, and long-term data storage. When edge systems and cloud systems work together, organizations can combine real-time operational data with large-scale analysis.
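The edge-plus-cloud split described above can be made concrete with a minimal sketch. This is an illustrative example, not a reference architecture: the `EdgeNode` class, its window size, and its anomaly threshold are all assumptions made for the demonstration. The idea is simply that raw readings are evaluated locally for fast reaction, while only compact aggregates travel upstream.

```python
from collections import deque
from statistics import mean, stdev

# Hypothetical sketch: an edge node keeps a rolling window of sensor
# readings, reacts locally when a reading deviates sharply, and forwards
# only compact summaries upstream instead of every raw sample.
class EdgeNode:
    def __init__(self, window_size=20, threshold=3.0):
        self.window = deque(maxlen=window_size)
        self.threshold = threshold
        self.uploads = []  # stands in for a cloud ingestion queue

    def ingest(self, reading):
        """Process one reading locally; return True if it looks anomalous."""
        if len(self.window) >= 2:
            mu, sigma = mean(self.window), stdev(self.window)
            anomalous = sigma > 0 and abs(reading - mu) > self.threshold * sigma
        else:
            anomalous = False
        self.window.append(reading)
        return anomalous

    def flush_summary(self):
        """Send only an aggregate upstream, keeping cloud traffic small."""
        if self.window:
            self.uploads.append({"count": len(self.window),
                                 "mean": mean(self.window)})

node = EdgeNode()
flags = [node.ingest(r) for r in [10.0, 10.2] * 10 + [100.0]]
node.flush_summary()
```

In a real deployment the local check would run on the device itself, and `flush_summary` would publish to a cloud endpoint where large-scale analysis happens, which is the division of labor the hybrid model is after.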
This combination enables what many organizations describe as intelligent operations, an operational model where data from machines, systems, and frontline workers is continuously captured, contextualized, and used to improve operational decisions.
Why AI Adoption in Operations Can Be Challenging
Many companies are experimenting with AI, but turning those experiments into systems used in daily operations is still difficult. Operational environments are complex, and several factors can slow adoption.
Disconnected operational data
Operational data is often spread across machines, legacy systems, spreadsheets, and documentation. When this information is not connected, AI tools struggle to understand a complete picture of what is happening on the shop floor.
Loss of operational knowledge
As experienced workers retire, organizations risk losing valuable knowledge that exists only in people’s experience rather than in structured systems or documentation.
Security, compliance, and reliability concerns
In industrial environments, new technologies must meet strict requirements for security, governance, and regulatory compliance. This can make organizations cautious about introducing AI into production systems.
Organizational hesitation
Leaders and employees may also be uncertain about how AI will affect jobs and workflows. Without clear guidance and trust in the technology, adoption can slow.
Where AI Is Delivering Value
Organizations that are successfully adopting AI tend to start with specific operational use cases rather than broad transformation initiatives. Instead of pursuing sweeping change from the outset, they target high-impact, well-scoped problems where AI can deliver measurable results. This pragmatic approach reduces risk, accelerates time to value, and lays the groundwork for broader transformation over time.
Workforce Knowledge and Troubleshooting
Many industries are facing workforce shortages as experienced operators retire. AI tools can help capture operational knowledge from experienced workers and make it accessible to new employees.
For example, organizations are using AI-powered assistants to make troubleshooting documentation and operational insights searchable, allowing operators to quickly access relevant information during production. Increasingly, they’re also using AI to translate existing SOPs and work instructions into structured, interactive workflows, embedding that knowledge directly into day-to-day operations so new employees can learn in context rather than relying on static documentation alone.
Enterprise Search and Knowledge Retrieval
Many organizations already possess large amounts of operational data collected before the rise of AI systems.
AI-powered retrieval systems can unlock this data by enabling enterprise search across documentation, historical data, and operational records. Instead of generating answers from scratch, AI copilots retrieve relevant operational knowledge and surface it at the moment it is needed.
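A toy sketch can show the retrieval idea in miniature. Production systems typically use vector embeddings and a dedicated search index; here, plain word overlap (Jaccard similarity) stands in for semantic similarity so the mechanism stays visible. The documents and query are invented for illustration.

```python
# Hypothetical sketch of retrieval-based lookup: rather than generating
# an answer from scratch, the assistant scores stored operational
# documents against the operator's question and surfaces the best match.
def tokenize(text):
    return set(text.lower().split())

def retrieve(query, documents, top_k=1):
    """Rank documents by word overlap with the query (Jaccard similarity)."""
    q = tokenize(query)
    scored = []
    for doc in documents:
        d = tokenize(doc)
        score = len(q & d) / len(q | d) if q | d else 0.0
        scored.append((score, doc))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:top_k]]

docs = [
    "press 12 alarm e04 check hydraulic pressure sensor before restart",
    "weekly maintenance schedule for conveyor line 3",
    "safety checklist for forklift operation",
]
best = retrieve("alarm e04 on press 12", docs)
```

Swapping the overlap score for embedding similarity changes the ranking function but not the overall shape: existing records stay the source of truth, and the AI layer's job is to surface the right one at the right moment.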
Machine Vision and Operational Automation
AI is also being applied in areas such as:
machine vision for quality inspection
automation of repetitive workflows
real-time validation of operational processes
improved inventory and asset visibility
These use cases embed AI directly into operational workflows rather than treating it as a separate analytics capability.
Approaches That Help Organizations Adopt AI
Successful AI adoption in operations typically follows several practical principles.
“It really starts with understanding the problem you’re trying to solve before thinking about the technology.” — Alexandra Francois-Saint-Cyr, BD Executive, Industrials, AWS
Start With One Use Case
Organizations that see success often begin with a single operational use case and expand from there. Demonstrating value in one workflow helps build confidence and enables teams to scale AI across additional processes.
Build Solutions With Frontline Workers
AI tools are most effective when they are developed with the people who will use them. Designing systems around existing workflows improves usability, builds trust, and increases adoption.
Choose the Right Technology Partners
Operational AI often requires expertise across infrastructure, industrial systems, and data platforms, making partner selection a critical factor in success. Rather than trying to build capabilities across all of these domains internally, many organizations are finding that working with partners who bring both technical depth and domain-specific experience can significantly reduce risk and accelerate progress. In practice, this often means prioritizing partners who have already solved similar operational challenges and can apply that experience directly to new use cases.
“Picking partners that have that specific use case in their domain and you're twice as likely to be successful than if you're kind of trying to roll your own.” — Tom Bianculi, Chief Technology Officer, Zebra Technologies
Leadership Drives Adoption Culture
Leadership behavior also plays a role in adoption. When leaders actively use AI tools and encourage experimentation, it signals that these technologies are part of the organization’s future operating model.
“If you're a leader and sitting in this room, you want your organization to use AI. If you're not doing it yourself, no one's going to follow you. Full stop.” — Natan Linder, Co-Founder and CEO, Tulip
Managing the Cost of AI Applications
AI adoption also raises important questions around cost and infrastructure.
Organizations must monitor factors such as compute usage, token consumption, and model selection. Without governance, AI usage can grow rapidly and become expensive.
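The governance point above can be sketched with a simple usage tracker. Everything here is illustrative: the model names, the per-token prices, and the budget are made-up placeholders, since actual pricing varies by provider and changes often. What matters is the pattern of recording consumption per model and flagging spend against a budget.

```python
# Hypothetical sketch of basic AI cost governance: track token usage per
# model against an illustrative price table and flag when spend crosses
# a budget. Model names and prices are invented for this example.
PRICE_PER_1K_TOKENS = {"small-model": 0.0005, "large-model": 0.01}

class UsageTracker:
    def __init__(self, monthly_budget):
        self.monthly_budget = monthly_budget
        self.spend = 0.0

    def record(self, model, tokens):
        """Add the cost of one call and return it."""
        cost = tokens / 1000 * PRICE_PER_1K_TOKENS[model]
        self.spend += cost
        return cost

    def over_budget(self):
        return self.spend > self.monthly_budget

tracker = UsageTracker(monthly_budget=5.0)
tracker.record("small-model", 200_000)  # routine queries on a cheap model
tracker.record("large-model", 600_000)  # heavier analysis on a larger model
```

Even this crude accounting makes the trade-off visible: routing routine work to smaller models and reserving larger ones for genuinely hard questions is often where the biggest savings come from.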
Many companies are addressing this challenge through hybrid architectures, combining edge computing with cloud infrastructure. Processing data locally while leveraging cloud systems for large-scale analysis helps balance performance and cost.
Infrastructure Impact: Energy and Water
As AI adoption grows, the infrastructure supporting these technologies is also expanding.
Large-scale AI models require significant computing resources, and modern data centers consume substantial amounts of electricity and water for cooling and operation. Advances in hardware and infrastructure design are helping improve efficiency, but these considerations are becoming part of the broader discussion around AI deployment.
The Future: Domain-Specific AI Models
One of the most promising developments in operational AI is the emergence of domain-specific models.
“This idea of domain specific LLMs… that's the next big thing because the foundational world models aren't really going to know what good looks like in one of these kind of operational environments.” — Tom Bianculi, Chief Technology Officer, Zebra Technologies
General-purpose AI systems trained on internet-scale data may not fully understand complex industrial environments. Domain-specific models, trained on operational data and industrial workflows, can develop a deeper understanding of how these systems operate.
Organizations are also exploring techniques such as synthetic data generation to improve model training when real-world datasets are limited.
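As a minimal sketch of that idea: when real fault examples are rare, synthetic readings can be generated around known operating ranges so a model sees both normal and fault conditions during training. The sensor, the temperature ranges, and the 90/10 split below are assumptions chosen for illustration, not guidance from any particular dataset.

```python
import random

# Hypothetical sketch of synthetic data generation: sample plausible
# readings around known operating ranges to balance a training set that
# has too few real fault examples.
def synthesize_readings(n_normal, n_fault, seed=0):
    rng = random.Random(seed)  # fixed seed for reproducibility
    samples = []
    for _ in range(n_normal):
        # Normal operation: temperature clustered around 60 °C
        samples.append({"temp_c": rng.gauss(60, 2), "label": "normal"})
    for _ in range(n_fault):
        # Injected fault mode: overheating events rarely seen in real logs
        samples.append({"temp_c": rng.gauss(95, 5), "label": "fault"})
    rng.shuffle(samples)
    return samples

data = synthesize_readings(n_normal=90, n_fault=10)
```

Real synthetic-data pipelines are far more sophisticated, often using simulation or generative models, but the goal is the same: give the model enough examples of conditions that are too rare, costly, or dangerous to collect at scale in production.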
Over time, these developments may enable AI systems that understand operational context and support more intelligent decision-making across production environments.
How Tulip Enables AI Adoption in Operations
Tulip helps organizations operationalize AI by connecting shop-floor data, workflows, and enterprise systems into a unified operational environment. By capturing contextual data from machines, devices, and frontline workers, Tulip creates the foundation needed to build AI-enabled applications that support real operational decisions.
With this contextual layer in place, teams can introduce AI capabilities such as copilots, analytics, and automations directly into operational workflows while maintaining governance and human oversight. Rather than running isolated experiments, organizations can build solutions that integrate with existing processes and scale across production environments.
This approach helps organizations move beyond AI pilots and begin applying AI where it matters most: everyday operational work.
Put the power of AI in your team's hands
Assist your workforce with AI tools to help answer questions, explore data, and develop tools to streamline workflows.