Most MES adoption failures get blamed on people. The rollout team blames training; the line lead blames operator buy-in. Neither perspective is necessarily wrong, but both are incomplete.
When system adoption stalls across multiple sites and departments, the variable that usually explains the gap is architectural. The system can't adapt at the speed and granularity the work requires, so the workforce adapts around it instead.
McKinsey reports that at least 70% of manufacturers are stuck in "pilot purgatory," unable to scale digital initiatives beyond their initial implementation. Despite improvements in how companies view and address change management, that number has remained relatively consistent.
In this post, we’ll walk through the most common areas where we see MES adoption break down across multi-site deployments, why the underlying cause is almost always architectural (not behavioral), and what good adoption looks like when the system is built to meet the work where it already is.
By the end, you'll have the vocabulary to explain to your steering committee why the next rollout could stall in the same place as the last one, and a checklist of architectural questions to put on your next vendor RFP.
How MES adoption failure usually gets diagnosed
If you’ve been a part of an MES rollout before, you probably know what the standard adoption approach looks like. Most vendors and integrators offer a similar playbook, and the change management guidance focuses on the same recommendations: define clear objectives, phase the rollout, invest in role-specific training, communicate the change, secure executive sponsorship, and bring relevant teams and stakeholders into the design conversation early.
Across industries, projects with strong change management programs are roughly five times more likely to land on schedule than those without, and clear communication alone has been associated with measurable lifts in success rates. Training, communication, sponsorship, and cross-functional involvement all help.
But what the playbook leaves out is the upstream half. It treats adoption as a downstream effect of how the rollout is managed, when adoption is also a product of how the system is built.
Run the same change management program against two different MES solutions and the outcomes can be wildly different. One company scales the rollout to ten plants while the other struggles to move beyond its first.
What usually explains the gap is whether the architecture can flex to the kind of variation that real production environments see day-to-day. When we work with operations leaders looking back at a stalled deployment, the challenges they describe rarely map to problems that a better change management plan would have fixed. They map onto places where the system couldn't keep up with the work.
Where MES adoption breaks down
We’ve found that stalled MES rollouts tend to fail in similar ways. We've worked with multi-site manufacturers across a variety of complex industries, and the four problems below come up in almost every post-mortem we see. They all trace back to the same architectural issue: the platform can't change at the rate the work changes, so the team on the floor is forced to build workarounds.
From upstream, at the level of the project plan, those workarounds look like training failures or culture problems. But up close, they're the rational response to a system that doesn't fit the team's needs.
Dated interfaces and the new workforce
Most legacy MES solutions were designed in an era when manufacturing workers stayed in their roles for years and built menu-diving expertise. The interface was clunky, but operators learned it, and there wasn't necessarily a better alternative.
Deloitte and the Manufacturing Institute project the U.S. manufacturing industry could need 3.8 million net new workers between 2024 and 2033, with up to 1.9 million positions potentially unfilled. The new generation arriving on the floor expects an experience more similar to consumer-facing apps: clean, touch-first, immediately learnable. Legacy MES gives them menu-driven enterprise screens that were literally designed a generation ago.
The mismatch shows up as longer onboarding curves at every site. Time-to-proficiency on legacy MES screens runs weeks instead of days, and plants cover the gap through extended overtime on experienced operators or productivity drag during ramp-up. As turnover increases, both costs compound.
The Workaround Economy and the cost of change
Workflows on the floor change over time. Take a Quality team that runs into a new non-conformance pattern the existing disposition form doesn't capture. In a legacy MES, the form change is an IT ticket, a vendor engagement, and a release cycle. After a few rounds of "we'll get to it next quarter," the Quality team stops asking and starts working around the system.
Six months after go-live, you can usually walk into a plant and see what we call the Workaround Economy: a parallel set of spreadsheets and tribal-knowledge processes that were developed around the MES because the official system couldn't keep up.
The Quality team is running a non-conformance log in Excel, the maintenance shop has its own work-order tool, and other departments have their own version of the same. The MES is technically deployed, but the actual work is happening in the parallel systems that each team built.
As long as the cost of changing the official system is higher than the cost of the workaround, teams will pick the workaround every time. More discipline about using the official system won’t change that.
Multi-site rigidity
Multi-site rollouts run into a different problem. Each plant operates differently from the others: different products, different equipment, different regulatory exposure, different operator cultures, different levels of digital maturity. A single MES template usually can't fit that range of variation at once, but a lot of legacy templates are designed under the assumption that it can.
The working rule of thumb on multi-site rollouts is that a global template typically covers 70-80% of the deployment, with the remaining 20-30% reserved for site-specific factors. Push the standardization past that ceiling and the site has to either accept a workflow that doesn't match how the plant operates, or fork the template into a one-off configuration the central team has to maintain separately.
Composable architectures resolve the trade-off by letting each site adapt the solutions to its operating reality while sharing the data model, governance, and compliance posture that have to be standard across the company.
Implementation timeline is another compounding factor. Legacy MES deployments tend to run 18 to 36 months to first-site go-live, with multi-site rollouts extending years beyond that. By the time the rollout reaches the third or fourth site, the original deployment is already misaligned with current operations at the first.
Why real-time visibility doesn't reach the floor
Legacy MES is built to be a system of record. The data the operator enters at each step gets recorded, packaged into reports, and pushed up the chain to the supervisor, the plant manager, and the executive dashboards. Compliance gets what it needs, key metrics are calculated, and the audit trail is created.
What doesn't always happen is the reverse trip. Many MES solutions aren't designed to surface real-time, contextual information back to the floor. An operator who wants to know whether their line is on pace right now may find themselves reading a whiteboard the floor lead updates once a shift.
The same gap shows up at the supervisor level, where decisions get made on information that's hours late, and at the process engineering bench, where root-cause work happens against batch exports rather than live data.
The architectural choice underneath is the system-of-record model. Legacy MES is built to record what happened, not to help the next decision get made. A system of engagement closes the gap by surfacing the same data the system is collecting back to the people who can use it on the floor. Real-time visibility stops being a phrase corporate uses about reporting and becomes something the shop floor experiences.
Why the architecture is the underlying cause
All four problems come back to architecture. Legacy MES was designed for a world where the workforce stayed in role, workflows didn't change much, operations looked similar from plant to plant, and data only had to flow one direction up the chain. None of those assumptions hold for most manufacturers today, and these older systems weren’t built to flex to the way today's plants run.
Most traditional MES platforms are built as monoliths, with every function hard-wired to every other function inside a single system. Adding one data field to a quality check ripples through the codebase, which is why even small adjustments can require weeks of development, validation, and a coordinated rollout. The same coupling that produces internal consistency and governance is also what makes the system structurally expensive to change.
A composable MES is built differently. It's assembled from an ecosystem of apps and services instead of as a single block, which means workflows can be added, modified, or removed independently. A change to one quality workflow doesn't require revalidating an unrelated maintenance workflow. Operations teams can adjust the apps closest to their work without forcing the whole stack through a coordinated release cycle.
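To make the coupling difference concrete, here's a minimal sketch in Python. It isn't Tulip's data model or any vendor's actual code; the class, names, and fields are hypothetical. The point is the blast radius of a change: in a composable system, editing one workflow bumps that workflow's own version and validation status and touches nothing else.

```python
from dataclasses import dataclass, field

@dataclass
class Workflow:
    # A hypothetical, independently versioned workflow definition.
    name: str
    version: int
    fields: dict = field(default_factory=dict)
    validated: bool = True

    def add_field(self, key, default=None):
        # Editing this workflow bumps ITS version and flags IT for
        # revalidation; no other workflow is touched.
        self.fields[key] = default
        self.version += 1
        self.validated = False

quality = Workflow("nonconformance_disposition", version=3,
                   fields={"defect_code": None, "disposition": None})
maintenance = Workflow("pm_work_order", version=7,
                       fields={"asset_id": None, "parts_used": []})

quality.add_field("containment_action")

# The unrelated maintenance workflow is untouched by the quality change:
assert maintenance.version == 7 and maintenance.validated
```

In a monolithic system, the equivalent edit lands in a shared schema, which is what drags unrelated workflows into the same validation and release cycle.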
The architectural difference shows up in the cost of change. Legacy MES typically uses proprietary coding languages and complex underlying data schemas, which forces manufacturers to rely on third-party system integrators for even minor adjustments. Most teams know it as the integrator tax, and most underestimate it when they're calculating total cost of ownership. Every workflow change has a real cash cost and a real calendar cost, which is why manufacturers tend to stop changing the system at all and let workarounds carry the load.
A composable system architecture doesn't guarantee adoption on its own. Bad implementations of composable platforms still fail, usually when the team treats the platform as a blank slate and hopes adoption emerges.
Composable architecture does make adoption easier, though. Whether it works out depends on the operating model the team builds on top of the platform.
What good adoption looks like at scale
We find that the plants most successful at scaling digital operations across departments and sites tend to have made three architectural and operating-model choices that the legacy playbook usually overlooks. Each choice directly addresses the challenges diagnosed above, and together they describe what we mean when we talk about a system designed for adoption from the ground up.
Standardization with local flexibility
What fails most rollouts is standardizing the wrong things. In the rollouts we've watched scale, the answer is what we often refer to as global standardization with local flexibility: tight standardization on data models, security protocols, audit trails, and platform-level governance, paired with workflow-level configurability that lets each site shape the apps to its products, equipment, legacy systems, and operator culture.
This usually requires a federated governance model: a Center of Excellence (COE) owns the things that have to be the same across plants, and site teams own the things that have to be different. This split lets the global team protect what audit and security require without forcing every plant to run the same workflow regardless of the operational reality on the floor.
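One way to picture the split is as an ownership map. The sketch below is illustrative, not a prescribed configuration; the specific entries are assumptions about what typically lands on each side of the line.

```python
# Hypothetical ownership map for a federated governance model.
# The split is the point; the specific entries are illustrative.

GLOBAL_CORE = {                 # owned by the Center of Excellence
    "data_model":     "one shared schema across all plants",
    "security":       "SSO and role-based permissions",
    "audit_trail":    "platform-enforced, immutable",
    "release_policy": "approval required before production",
}

SITE_OWNED = {                  # owned by each plant's team
    "workflow_steps": "shaped to local products and equipment",
    "station_layout": "matched to the physical line",
    "integrations":   "local machines and legacy systems",
}

def site_can_edit(setting: str) -> bool:
    # A site may change anything that isn't part of the global core.
    return setting not in GLOBAL_CORE

assert site_can_edit("workflow_steps") and not site_can_edit("audit_trail")
```

The useful property is that "can the site change this?" gets a mechanical answer instead of a negotiated one.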
Governed citizen development
The ‘agility vs. control’ trade-off becomes most evident when governance is enforced at the project level. Every workflow change has to go through review, approval, validation, and rollout, which means the rate of change is bounded by the rate the central team can process change requests. The trade-off disappears when governance moves into the platform itself.
Governed citizen development is an operating model where domain experts closest to the work (engineers and operators) build and modify production-ready manufacturing applications directly, with platform-level governance, permissions controls, approval workflows, and audit trails enforcing the boundaries that IT and compliance have set. The change happens at the speed of the work because the person doing the work is also the one building and iterating on the solutions, and the boundaries hold because they're enforced as platform features instead of negotiated project by project.
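As a rough illustration of governance living in the platform rather than the project plan, consider the sketch below. It's a toy model, not Tulip's actual permission API; the role names and the publish function are invented. What it shows is the shape of the guarantee: a change either passes the platform's gates and lands in the audit trail, or it's held, regardless of how busy the central team is.

```python
from datetime import datetime, timezone

AUDIT_LOG = []  # stands in for the platform's immutable audit trail

def publish(change: str, author: dict, approver: dict | None = None) -> str:
    # Platform-level gates: build rights, separate approval, audit entry.
    if "builder" not in author["roles"]:
        raise PermissionError(f"{author['name']} has no build rights")
    if approver is None or "approver" not in approver["roles"]:
        return "pending_approval"  # held by the platform, not deployed
    AUDIT_LOG.append({
        "change": change,
        "author": author["name"],
        "approved_by": approver["name"],
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return "published"

engineer = {"name": "R. Alvarez", "roles": ["builder"]}
quality_lead = {"name": "M. Chen", "roles": ["approver"]}

assert publish("add containment step", engineer) == "pending_approval"
assert publish("add containment step", engineer, quality_lead) == "published"
```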
IT teams raise a fair objection to this model that's worth noting. Citizen development can quickly turn into shadow IT when the platform doesn't have real governance built in. Generic low-code tools with no manufacturing context, no approval workflows, no audit trail, and no permissions discipline do produce that outcome, and we've seen IT teams burned by it in the past.
Tulip addresses these concerns directly. Governance lives in the platform's permission model, version control, change-approval workflow, and audit trail, which means an engineer building a workflow on the line is operating inside a sanctioned envelope by default.
The model works for adoption because the people who understand the work are the ones building the workflow. In our experience, rollouts that stall tend to be the ones designed by someone two steps removed from the operator, and the ones that scale incorporate feedback directly from the team members doing the work.
A system of engagement, not just a system of record
The data layer is the third piece. In a system of engagement, data flows both ways. The operator generates data through the workflow, and the operator also gets data back from the system to make the next decision. Corporate still gets access to the data and the audit trail upstream, but the operator and the supervisor and the process engineer get the visibility they need to make real-time decisions on the line.
Data is captured passively as a side effect of doing the work, through sensor readings, machine signals, and operator confirmations baked into the step the operator was running anyway. There's no separate form to fill out and no second step to remember.
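Here is a sketch of what that looks like at a single station, with hypothetical names and a made-up takt time: one operator action both records the completion (the confirmation is the data capture) and returns live pace to the screen.

```python
import time

TAKT_SECONDS = 90   # hypothetical target cycle time for this station
completions = []    # captured as a side effect of confirming the step

def complete_step(machine_reading: float) -> str:
    # One action: the confirmation IS the data capture, and the same
    # call sends live pace back to the operator's screen.
    completions.append({"t": time.time(), "reading": machine_reading})
    if len(completions) < 2:
        return "on pace"
    cycle = completions[-1]["t"] - completions[-2]["t"]
    return ("on pace" if cycle <= TAKT_SECONDS
            else f"behind by {cycle - TAKT_SECONDS:.0f}s this cycle")
```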
A practical playbook for driving MES adoption
Most rollouts kick off with the architecture and operating model already decided. What's left to decide is tactical, and these five areas are where we focus early in all of the implementations we work on. The answers compound through the rest of the deployment.
Start with one operator, one station, one task
The most reliable first deployment in any rollout we work on is a single composable app built for a single operator at a single station. Pick a specific task the operator runs every shift, ideally one with friction the team has been working around with paper. Build a workflow that helps them run that step better, deploy it, and let the operator and supervisor judge whether it's worth using before scaling to more apps, more stations, or more sites.
The bottom-up approach is helpful because adoption gets decided one operator at a time. Once a single app earns its place at one station, the team has the credibility to deploy the next one. Once a few apps earn their place across a department, the team has the foundation needed to deploy across a whole site.
Pick a pilot site that represents the rollout
The most agreeable plant tends to be the worst pilot. The friction the rollout will eventually face is the friction the pilot is meant to surface, and a pilot at the easiest plant produces results that won't generalize.
A more useful pilot site is one that exposes the variation the rollout will run into later: a plant with a legacy MES still on the floor, or a regulated production environment with its own audit cycle. The pilot's job is to find the breakage points before plant three or plant four shows them at higher cost.
Give departments ownership of their workflows
Departments often don't share workflows. Quality's non-conformance disposition logic doesn't translate to production scheduling, and Maintenance's PM cycle has its own labor-and-parts logic that Quality doesn't need. Trying to standardize the floor on a single form structure usually fits one department reasonably well and the others poorly.
Composable apps help solve for this. Each department can adapt the app it runs within the platform-level governance discussed earlier. There's no customization ticket for every workflow change because the change happens inside the boundaries the platform already enforces. The team gets a system that does what their work requires, and the central team still has the audit trail and security posture they need.
Treat training time as an architectural KPI
How long it takes a new operator to reach proficiency tracks how well the system matches the work. If new operators need three weeks at a station before they can run it without supervision, the system is leaving work to the operator's memory that the interface could be doing for them. What shortens that curve is guided digital workflows that embed the right next step into the operator's screen. More classroom training upstream rarely closes the gap on its own.
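Measuring the curve doesn't require anything sophisticated. A sketch, with invented records: track the days between an operator's hire date and their first unsupervised shift, then watch the median per station across releases.

```python
from datetime import date
from statistics import median

# Invented training records: hire date and first unsupervised shift.
records = [
    {"operator": "A", "hired": date(2024, 3, 4),  "solo": date(2024, 3, 22)},
    {"operator": "B", "hired": date(2024, 3, 11), "solo": date(2024, 3, 20)},
    {"operator": "C", "hired": date(2024, 4, 1),  "solo": date(2024, 4, 25)},
]

days = [(r["solo"] - r["hired"]).days for r in records]
print(f"median time-to-proficiency: {median(days)} days")
# Track the median per station and per app release: if a workflow change
# moves it, the interface (not the training program) was the variable.
```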
Take Dentsply's implants division. On a Tulip-based system designed to support over a billion kitting combinations, the team cut new-operator training time by 75 percent compared to their previous workflow. The improvement comes from the system doing more of the work the operator's memory used to carry. You can hear the full story about Dentsply's adoption of Tulip here.
Plan multi-site rollouts for variation from day one
The most common multi-site rollout failure is the second-site clone. Site one goes well, the team treats the site-one deployment as the template, and site two inherits a workflow that doesn't fit. The rollout team spends the next year retrofitting site one back to handle what site two surfaced.
In practice, planning for variation means keeping the global core small and the site-level configuration layer expressive. The fewer things that have to be identical across plants, the easier each new site is to bring online, and the less the rollout team has to retrofit when site two reveals something site one was implicitly doing.
Stanley Black & Decker is a great example of this. They've been on Tulip since 2018, and the Stanley Production System (SPX) has expanded from a single-site initiative into a global framework connecting more than 50 plants and over 1,000 applications.
As the Stanley team scaled and iterated on their Tulip deployment over the years, they saw a $2B+ inventory reduction, a 15-point service-level increase, and sustained quality and safety improvements. The key to driving adoption across sites has been keeping the global core small, while enabling flexibility at local sites to support the various environments, products, and processes that make up Stanley’s portfolio.
Rolling out an MES that scales
The MES rollouts that succeed tend to share an architectural shape: composable apps the team can change without an IT ticket, a small global core each site can configure on top of without forking the template, and a data layer that gives the floor the same real-time visibility it sends to supervisors.
Most of the adoption friction that appears as a solution scales traces back to architectural choices made on day one. The more useful question to take into the next vendor conversation is what the architecture will make easy six months after go-live, and what it will make hard.
If you're working through how to evaluate or roll out an MES that operators, departments, and sites will use, reach out to a member of the Tulip team. We work with multi-site manufacturers on exactly this kind of decision.