AI has officially entered the quality conversation in life sciences, and teams are being asked to make sense of it faster than ever.

On one hand, technology leaders are eager to explore AI tools, from transforming video into work instructions to guiding deviation investigations and supporting app reviews with AI copilots. On the other, quality and regulatory teams are (rightfully) asking: “How do I defend compliance using these tools?”

AI is evolving faster than regulatory frameworks can keep up. At Ops Calling 2025, a panel of leaders from medtech, biotech, and compliance discussed some of the challenges: teams often stall not because of technical limitations, but because of cultural hesitation, especially around how to explain and defend AI use in audits or inspections. Many teams are unsure how to define scope, establish appropriate control frameworks, or align on intended use.

“Lots of what we're seeing within my organization as we implement AI tools… we get stuck at the POC, right? Why? It's cultural. There's a culture there that is holding us back.” - Khaled Moussally, Executive Vice President, Quality — Compliance Group Inc.

In this blog, we’ll unpack how senior leaders from some of the most regulated corners of the life sciences industry, including Smith+Nephew, Vericel, Getinge, and Compliance Group Inc., are navigating AI adoption. Drawing on insights from executives in quality, IT, and regulatory roles, we’ll explore how to identify the right use cases, manage risk, and prepare for evolving audit expectations without slowing innovation.

AI Implementation Should Begin With a Clear Problem, Not a Technology Push

Start with the Problem and the Right Intent

Before asking “Where can we use AI?”, ask:

  • What’s the problem we’re solving?

  • What outcome(s) matter most?

Look for high-friction areas like:

  • Manual work slowing down throughput

  • Tribal knowledge locked in outdated SOPs

  • Repeated deviations without clear root causes

  • Large volumes of unused or disconnected data

These are signs of use cases where AI can add value. Once you’ve identified the opportunity, ground your approach in purpose, not just compliance. Ask yourself:

  • Does this improve patient safety or product quality?

  • Will it support better or faster decisions or reduce human error?

When you lead with the right intent, compliance becomes easier to defend.

“The focus is always on patient safety, product quality, and data integrity.” - Khaled Moussally, Executive Vice President, Quality — Compliance Group Inc.

Regulators are still catching up. You don’t need all the answers - you only need clear intent, transparency, and a defensible control strategy.

Understanding Regulatory Language

What Is GxP and How to Tell if Your AI Use Case Is GxP or Non-GxP

GxP stands for “Good [x] Practice,” a family of regulatory standards that protect product quality, patient safety, and data integrity within specific scopes, such as GMP (Good Manufacturing Practice) or GCP (Good Clinical Practice).

In the AI context, one of the first questions regulators will ask, explicitly or not, is:
Does this AI system directly impact product quality, patient safety, or related data integrity?

If yes, it is a GxP use case and requires validation and Quality oversight.
If not, it may fall outside formal GxP scope, giving you more flexibility, but it still requires clarity of intended use and appropriate controls. Some systems may support compliance (like documentation tools) without directly affecting product quality, but they can still carry regulatory importance depending on how they’re used.

Even low-impact tools like translation aids or document search can become GxP-relevant if they influence regulated decisions. That’s why everything starts with intended use. Ask:

  • What does the tool do?

  • What data does it touch?

  • Is it making or informing GxP decisions?

Watch for mixed-use systems. Many platforms (like CRMs or MES) have both GxP and non-GxP functionality. The key is to classify by specific intended use, not by system.
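To make “classify by specific intended use” concrete, here is a minimal, illustrative sketch in Python. The field names and the classification rule are our own simplification of the questions above, not a regulatory definition; a real assessment would follow your quality unit’s procedure.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    """One intended use of a tool -- classify each use, not the whole system."""
    name: str
    touches_regulated_data: bool      # e.g., batch records, complaint data
    informs_gxp_decision: bool        # does the output feed a release or quality decision?
    impacts_product_or_patient: bool  # direct effect on product quality or patient safety

def is_gxp(use_case: UseCase) -> bool:
    """Flag a use case as GxP-relevant if it can affect product quality,
    patient safety, or the integrity of regulated data (hypothetical rule)."""
    return (
        use_case.impacts_product_or_patient
        or use_case.informs_gxp_decision
        or use_case.touches_regulated_data
    )

# The same mixed-use platform can host both kinds of use cases.
uses = [
    UseCase("Translate internal training notes", False, False, False),
    UseCase("Summarize deviations for a CAPA decision", True, True, True),
]

for u in uses:
    scope = "GxP: validate, keep Quality oversight" if is_gxp(u) else "non-GxP: lighter controls"
    print(f"{u.name} -> {scope}")
```

The point of the sketch is that the same platform yields different answers per use case; the classification lives with the use, not the system.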

Use CSA and GAMP to Right-Size Your Controls

The FDA’s Computer Software Assurance (CSA) guidance and ISPE’s GAMP (Good Automated Manufacturing Practice) framework promote scalable, risk-based approaches. They encourage teams to:

  • Focus on critical thinking, not just paperwork

  • Test based on impact, not every feature

  • Prioritize controls for high-risk functionality

Applying this approach to how AI is used gives you the confidence to adopt AI tools faster, without compromising compliance.
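As a rough illustration of what “test based on impact” can mean in practice, the sketch below maps a feature’s risk profile to a level of testing rigor. The tiers and criteria are hypothetical, not drawn from the CSA draft guidance or GAMP 5; they simply show assurance effort scaling with risk.

```python
def assurance_level(impacts_quality_or_safety: bool, human_reviews_output: bool) -> str:
    """Map a feature's risk profile to a testing approach.
    Tiers are illustrative; your quality unit defines the real ones."""
    if impacts_quality_or_safety and not human_reviews_output:
        return "scripted testing with documented evidence"
    if impacts_quality_or_safety:
        return "unscripted, risk-focused testing with a summary record"
    return "ad-hoc exploratory testing"

# Hypothetical AI features: (impacts quality/safety, human reviews the output)
features = {
    "AI-suggested root cause (reviewed by QA)": (True, True),
    "AI batch-release recommendation (no review)": (True, False),
    "AI search across training manuals": (False, True),
}

for name, (impact, reviewed) in features.items():
    print(f"{name}: {assurance_level(impact, reviewed)}")
```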

Audit Readiness: What to Say When Regulators Ask About Your AI Tools

As part of your AI implementation strategy, plan for audit readiness from the start. When regulators ask about your AI tools, your ability to clearly explain their purpose, risk level, and risk controls becomes critical.

Adopt this mindset:
You’re the expert in your process. You understand where risk lives, what decisions AI supports, and how to apply the right controls. Many regulators are still learning how to inspect AI; this is your chance to lead with confidence, clarity, and a well-defined risk control strategy.

Here’s how to walk them through your approach:

1. Start with Your Responsible AI Policy
What are your company’s top-down principles around AI use? Be ready to show that your AI adoption is intentional and aligned with safety, quality, and compliance values.

2. Explain How Tools Were Selected
Share how you evaluated and chose AI tools or suppliers, especially for use in regulated environments. This reinforces that your decisions were based on business needs and not hype.

3. Show Why Specific Use Cases Were Chosen
Describe the problem AI is solving, the intended use, and why it’s meaningful. Frame AI as a tool in the toolbox, not a standalone system. Critical GMP decisions are executed by a qualified human.

4. Highlight the Safeguards You’ve Put in Place
Explain your control strategy: human-in-the-loop reviews, documented procedures, access controls, or other relevant checks. These are the guardrails that ensure responsible use.

5. Demonstrate Impact with Real Outcomes
Auditors respond well to measurable benefits. Show how your AI use reduces errors, accelerates decisions, or improves consistency. If you can connect it to improved patient safety, product quality, or data integrity, that’s even better.
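One practical way to keep those five answers inspection-ready is to maintain a simple record per AI use case. The structure below is a hypothetical sketch (the field names are ours, not a regulatory template); it just gives each talking point a documented home.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCaseRecord:
    """Hypothetical inventory entry mirroring the five talking points above."""
    policy_reference: str      # 1. which Responsible AI principle it falls under
    selection_rationale: str   # 2. why this tool or supplier was chosen
    intended_use: str          # 3. the problem it solves and its scope
    safeguards: list[str] = field(default_factory=list)         # 4. human review, access controls, ...
    measured_outcomes: list[str] = field(default_factory=list)  # 5. evidence of benefit

record = AIUseCaseRecord(
    policy_reference="Responsible AI Policy, human-oversight principle",
    selection_rationale="Reduces deviation-summary turnaround; vendor assessed per supplier SOP",
    intended_use="Summarize prior deviations for investigators; final disposition stays human",
    safeguards=["human-in-the-loop review", "role-based access", "documented procedure"],
    measured_outcomes=["investigation cycle time reduced", "fewer transcription errors"],
)
print(record.intended_use)
```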

What Global Regulators Expect

If you operate globally, your AI strategy must account for varying regulatory expectations. What satisfies the FDA may not meet EMA or other Notified Body standards.

In Europe, Annex 11 remains the foundation for computerized systems. The proposed Annex 22 aims to address AI and emerging technologies, and while still under review, it has received mixed feedback from the industry.

That said, regulators across regions are aligning around a shared set of principles:

  • Defined intended use

  • Risk-based justification

  • Human oversight

  • Traceable, explainable decisions

No matter the framework, the expectation is the same: be in control, understand your risk, and justify your decisions.

Where to Go From Here: A Foundation for AI and Audit Readiness

AI is changing life sciences, but core regulatory expectations haven’t changed. You don’t need a perfect AI program; you need clear intent, risk awareness, and human oversight. The teams that thrive will be the ones who act now: defining intended use, right-sizing validation, and focusing on improvement over fear. When you can explain your choices, you’re already audit-ready.

How Tulip Helps Manufacturers Use AI Without Losing Control

The companies featured in this blog aren’t just experimenting with AI; they’re applying it on the ground with Tulip at the center.

Tulip helps Quality, IT, and Operations teams embed AI into daily workflows while giving management visibility into performance through real-time analytics, all without compromising compliance. Instead of isolated pilots or paper-heavy processes, teams are building real, auditable apps for the shop floor.

Here’s how:

  • Dynamic SOPs with AI support
    Teams use Tulip to guide operators through tasks with interactive Digital SOPs. AI summarizes past deviations, links to relevant records in real time, and supports translation for multinational teams, while final decisions stay with qualified personnel.

  • Automatic documentation for auditability
    Every action - whether building, training, inspecting, or performing root cause analysis - is tracked: Tulip logs who executed an activity and when it was performed. Tulip’s AI Agents enable users to summarize past activity or retrieve key insights on demand, directly in the flow of work, drawing from real SOPs, manuals, and production data.

  • Safe, staged implementation
    Teams start with low-risk or non-GxP workflows (like onboarding or document search), then scale to more critical areas within Tulip’s validated environment. AI Translations enable multinational teams to standardize SOPs and app interfaces across languages without manual rework.

Tulip helps teams move from “How do we try AI?” to “Here’s how it’s improving our process”, all while staying audit-ready.