During his keynote, Jensen Huang described a shift from isolated models to full-stack systems, where intelligence is becoming core industrial infrastructure. Every company will build it. Every industry will run on it.

That shift is no longer theoretical. It was on full display at NVIDIA GTC 2026. AI has moved beyond training environments and into physical systems, from robots operating in dynamic environments to vision systems interpreting real-world processes. The focus is no longer just on building models, but on running them continuously, at scale, and inside real operations.

As one life sciences executive at the show said, “We’re not trying to prove AI works anymore. We’re trying to understand how it actually works in a regulated, real-world environment.”

And nowhere is that shift more consequential, or more difficult, than in manufacturing.

https://tulip.widen.net/content/mybdz7rhei/
Jensen Huang, NVIDIA CEO, alongside Disney robot during keynote

Physical AI Is No Longer a Concept

Jensen Huang described this moment as the “ChatGPT moment” for Physical AI: the point at which AI begins to interact directly with the physical world.

What stood out was not just the technology itself, but how quickly it is moving into production environments. AI is no longer sitting in dashboards or offline reports. It is entering the flow of work, observing processes as they happen, and increasingly influencing outcomes in real time.

This shift is powerful. But it also exposes a fundamental limitation. Seeing is not the same as understanding.

The Industry Is Simulating the Future, But Struggling to Explain the Present

Much of the manufacturing conversation centered on simulation and digital twins. The industry is building highly sophisticated tools to design and optimize factories before anything is built in the physical world. These tools let teams test changes before making them, reducing risk and accelerating improvement.

But they are inherently forward-looking.

What was far less represented was the other half of the problem: understanding what actually happened.

When something goes wrong on the shop floor – a defect, a deviation, a stoppage – most systems can tell you what happened. A test failed. A machine triggered an alert. A process exceeded a threshold. But they struggle to explain why.

In practice, engineers are left stitching together fragmented data across systems, trying to reconstruct events after the fact. Even when video is available, it is rarely connected to operational data in a meaningful way. The physical reality of what occurred remains disconnected from the digital record, creating a slow, manual, and often inconclusive process.

The industry is investing heavily in simulating the future, but still lacks a clear and verifiable understanding of the present.

This gap has driven Tulip to a different kind of approach, one focused not on prediction but on reconstructing reality. In his GTC session, “Augmenting Industrial Operations with Factory Playback Intelligence,” Rony Kubat highlighted the importance of connecting operational data with the physical environment it represents.

Tulip’s approach, Factory Playback, is built on this principle: it combines the operational context captured by Tulip (who did what, when, in which app, and with which machine) with synchronized video to create a complete, navigable history of production. This is the layer that turns video from observation into understanding, giving AI the grounded context it needs to explain not just what happened, but why.
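As a rough illustration of the underlying idea (a sketch under assumptions, not Tulip’s actual API; every name below is hypothetical), the snippet pairs timestamped operational events with frames from a synchronized video stream to build a navigable playback index:

```python
# Illustrative sketch only (not Tulip's actual API; all names are
# hypothetical): pairing timestamped operational events with frames from a
# synchronized video stream to build a navigable playback index.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class OperationalEvent:
    timestamp: datetime   # when the event occurred
    operator: str         # who did it
    app: str              # which app recorded it
    machine: str          # which machine was involved
    description: str      # what happened

@dataclass
class VideoStream:
    start: datetime       # wall-clock time of the first frame
    fps: float            # frames per second

    def frame_at(self, t: datetime) -> int:
        """Map a wall-clock timestamp to the nearest frame index."""
        return round((t - self.start).total_seconds() * self.fps)

def build_playback_index(events: list[OperationalEvent],
                         video: VideoStream) -> list[tuple[int, OperationalEvent]]:
    """Pair each operational event with its video frame, sorted by time."""
    return sorted(((video.frame_at(e.timestamp), e) for e in events),
                  key=lambda pair: pair[0])

# Usage: jump from an operational record straight to the matching footage.
video = VideoStream(start=datetime(2026, 3, 17, 8, 0, 0), fps=30.0)
events = [
    OperationalEvent(datetime(2026, 3, 17, 8, 0, 12), "operator_7",
                     "Final Assembly", "press_02", "torque check failed"),
    OperationalEvent(datetime(2026, 3, 17, 8, 0, 4), "operator_3",
                     "Final Assembly", "press_02", "cycle started"),
]
for frame, event in build_playback_index(events, video):
    print(f"frame {frame}: {event.operator} on {event.machine}: {event.description}")
```

The key design choice in a system like this is the shared time axis: once operational events and video frames are mapped to the same wall clock, any record can be resolved to footage, and any moment of footage back to its operational context.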

https://tulip.widen.net/content/5vypq7x6wo/
Factory Playback, a new capability on the Tulip platform that lets manufacturers reconstruct and replay factory operations as they actually happened

Why This Matters for the Future of AI

As Physical AI continues to evolve, the need for grounded, contextual understanding becomes more urgent.

AI systems in manufacturing must do more than detect anomalies. They need to understand sequences of events, interpret interactions between people and machines, and identify cause and effect.

This requires a reliable source of operational truth.

When operational context is combined with video, manufacturers gain the ability to move faster and with more confidence. Root cause analysis becomes more precise. AI models can be trained on real operational scenarios. Process deviations can be identified and addressed in real time. Over time, this creates the foundation for closed-loop improvement and more autonomous operations.
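To make the real-time piece concrete, here is a hedged sketch (again hypothetical, not a Tulip feature) of one simple way a process deviation could be flagged as it happens, by comparing each cycle time against a rolling statistical baseline:

```python
# Hedged sketch (hypothetical, not a Tulip feature): flagging a cycle-time
# deviation in real time by comparing each completed cycle against a
# rolling baseline, so engineers can jump straight to the matching video.
from collections import deque
from statistics import mean, stdev

class CycleTimeMonitor:
    """Flag cycle durations that drift beyond k standard deviations."""

    def __init__(self, window: int = 50, k: float = 3.0):
        self.history: deque[float] = deque(maxlen=window)
        self.k = k

    def observe(self, duration_s: float) -> bool:
        """Record a cycle; return True if it deviates from the baseline."""
        deviates = False
        if len(self.history) >= 10:  # wait for a minimal baseline first
            mu, sigma = mean(self.history), stdev(self.history)
            deviates = abs(duration_s - mu) > self.k * sigma
        self.history.append(duration_s)
        return deviates

monitor = CycleTimeMonitor()
for cycle in [42.0, 41.5, 43.2, 41.8, 42.4, 41.9, 42.1, 43.0, 42.6, 41.7, 58.3]:
    if monitor.observe(cycle):
        print(f"deviation: {cycle:.1f}s cycle; review the synchronized video")
```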

In other words, AI can move from observing operations to truly understanding them.

https://tulip.widen.net/content/ar1brd2x0n
Tulip experts discussing Factory Playback with GTC attendees

The Next Chapter Is Just Beginning

GTC 2026 marked a turning point.

Physical AI is no longer theoretical. It’s here, and it’s rapidly advancing.

But as the industry pushes forward with robotics, simulation, and vision models, the companies that win won’t just be the ones that can simulate the future. They’ll be the ones that can fully understand what’s happening on their factory floors today.

That requires more than video.

It requires context.

It requires a system that connects digital intent with physical reality.

This next chapter of manufacturing AI is just getting started, and the momentum coming out of GTC suggests it’s going to move fast.

Join the next chapter of AI-enabled manufacturing

See how manufacturers are using AI with Tulip to turn real-time data into actionable decisions, boost visibility, and improve operational outcomes.
