The FDA’s First AI Warning Letter Highlights the Importance of Human Oversight
Co-authored by Jeni Alexander and Michael McCarthy
Artificial Intelligence has changed the way we work. However, some organizations are learning the hard way that they cannot hand over the reins to AI completely.
On April 2nd, the FDA issued its first warning letter regarding the inappropriate use of AI.
The letter went to a drug manufacturer that used AI agents to manage its compliance with FDA regulations but did not verify the AI's outputs.
The FDA stated in the warning letter:
“If you use AI as an aid in document creation, you must review the AI generated documents to ensure they were accurate and actually compliant with CGMP. Your failure to do so is a violation of 21 CFR 211.22(c). Overreliance on artificial intelligence for your drug manufacturing operations was also documented during the inspection. For example, the FDA investigators found that you had not conducted process validation prior to distribution of your drug products, as required under 21 CFR 211.100, and informed you as such.”
When informed of the violation, the manufacturer replied that it wasn't aware of the requirement because its AI agent never told it one was needed.
What the FDA Is Really Saying
The FDA isn’t discouraging AI adoption. Rather, it’s highlighting the significance of proper guardrails and ensuring that AI is implemented within a controlled, risk-based framework.
In fact, the warning letter specifies that “any output or recommendations from an AI agent must be reviewed and cleared by an authorized human representative of your firm’s Quality Unit.”
The message here is clear. AI can be a powerful tool in quality and compliance, but only when combined with the proper oversight and human expertise.
AI Without Governance Is a Known Risk
This warning didn’t come out of nowhere. Many leaders in the industry foresaw these risks with the AI boom.
In a Forbes article last year, Dot Compliance’s Founder and CEO, Doron Sitbon, wrote:
“Even in fiction, the challenges of intelligent systems aren’t new. Isaac Asimov’s Three Laws of Robotics imagines a world where machines are hardwired to protect humans, follow orders and avoid harm by offering a framework for trust and control. In the real world, AI systems aren’t born with those safeguards. They must be designed, monitored and governed. Without oversight, AI can drift. Algorithms trained on skewed data can make biased decisions. Models can lose accuracy. Systems that rely on third-party or embedded AI can create visibility gaps. These aren’t theoretical problems but active threats to compliance, safety and trust.”
Built-In Guardrails: The Role of AI Agent Personas
One approach to controlled AI implementation is through AI agent personas. These are predefined roles that guide AI behavior based on specific expertise and regulatory contexts.
For example, an FDA Regulatory Reviewer persona can review documents through the lens of inspection readiness, while an ISO 13485 Auditor persona assesses medical device QMS data against regulatory requirements.
These personas don’t work autonomously. They provide recommendations that human quality professionals review, validate, and approve before any changes enter the quality system. This creates the exact framework the FDA is requiring: AI as a tool, humans as the decision-makers.
Humans are the Heroes of AI in Quality
There is no doubt that AI has changed the way we work, but what is now clearer than ever is that humans remain a critical part of the process.
Responsible AI systems include:
- Human-in-the-loop oversight: Qualified professionals validate AI outputs before they become part of the quality system
- Audit trails: Complete visibility into what AI recommended and what humans approved
- Feedback loops: Mechanisms to improve AI performance when responses are incorrect or incomplete
- Risk-based frameworks: Controls that align with regulatory expectations
When implemented correctly, AI doesn’t replace quality professionals. It makes them more effective, more engaged, and better informed in their work.
The FDA’s warning letter isn’t a call to abandon AI. It’s a call to implement it responsibly, with humans firmly in control.
Quality management systems should support this human-AI collaboration. Does yours?
If you have questions about how to implement AI responsibly, we’re here to help. Get in touch.