
What is happening with AI in GMP & GDP that you should be aware of

by Migle Cousins - Swift GxP

April 2026

I took a proper break over Easter (six days without emails), and it gave me something I do not get enough of during a normal week: uninterrupted time to sit and think. I used some of that time to write this post for Tech4Good Recruitment, as a practical run-through of where things stand with AI in pharmaceuticals right now. There is a lot happening in this space, and a clear summary of what to look out for felt useful for those of us working in GMP and GDP day to day.

 

The EMA and FDA guidance

In January 2026, the EMA and FDA jointly published their Guiding Principles of Good AI Practice in Drug Development: ten high-level principles, developed collaboratively, covering AI use across the full medicines lifecycle, including manufacturing, clinical development, and post-market surveillance. It is worth noting that the principles are guidance rather than a binding regulatory requirement. They show a shared direction between two major regulators and will underpin the formal guidance that follows in each region. This matters because one of the recurring challenges for multisite organisations has been navigating differences between the FDA and EMA, and this is certainly a step toward alignment.

The ten principles focus on a few themes: human oversight, risk-based validation proportionate to the context of use, lifecycle monitoring throughout deployment, robust documentation of data sources and model limitations, and clear accountability structures. The phrase that keeps appearing across all regulatory communications right now is "human-centric." AI should support expert judgement rather than replace it. You will hear that same framing from EMA, FDA, and MHRA, and it is certainly not going away.

 

EU GMP Annex 22 is in draft

The EMA is developing a new GMP Annex 22 specifically for AI in pharmaceutical manufacturing. The concept paper that preceded it attracted approximately 1,350 comments from 79 individual contributors during consultation. The draft is now out for review, with publication targeted for 2026.

The direction from the draft guidance is fairly clear. Static, deterministic AI models that consistently produce the same output can be used in critical processes, provided they are properly validated. Dynamic or continuously learning models, and generative AI tools, are restricted to non-critical applications where human oversight is explicitly required and must be documented at each decision point.

This is not a surprise to anyone who has been watching this space closely, but it is helpful to see it being formalised. It also reinforces something worth saying plainly: generative AI tools, including the large language models that most people have been experimenting with, are not going to be appropriate for critical GMP decisions any time soon. The regulatory expectation is deterministic, auditable outputs, and generative models do not meet that bar in their current form.
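As a purely illustrative sketch of what "deterministic and auditable" can mean in practice (none of this comes from the draft Annex, and the function and file names are my own assumptions), one way to evidence that a deployed model is static is to pin the model artefact by checksum and confirm that fixed reference inputs still produce the expected outputs at periodic review:

```python
import hashlib

def file_sha256(path: str) -> str:
    """Checksum of the frozen model artefact, recorded at validation."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_static_model(model_path, expected_sha256, predict,
                        reference_inputs, expected_outputs) -> bool:
    """Two checks a periodic review might record: the artefact is unchanged,
    and the agreed reference inputs still give the outputs seen at validation."""
    if file_sha256(model_path) != expected_sha256:
        return False
    return all(predict(x) == y
               for x, y in zip(reference_inputs, expected_outputs))

# Hypothetical usage: 'predict' wraps whatever inference call the frozen model exposes.
# ok = verify_static_model("model_v1_3.bin", recorded_checksum, predict,
#                          reference_inputs, expected_outputs)
```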

 

EU GMP Annex 11

While Annex 22 attracts most of the attention in conversations about AI, the revision of Annex 11 on Computerised Systems is arguably just as consequential for the day-to-day reality of GMP. The revised Annex 11 is expected to be finalised in 2026 and represents the most substantial update to that guidance since the current version was introduced.

The scope now explicitly includes AI, machine learning, cloud-based services, SaaS platforms, and agile project management approaches. Specific areas being addressed include electronic signatures with multi-factor authentication requirements, audit trails with time-zone stamping, and explicit provisions for hybrid records where wet ink and digital signatures sit alongside each other. Cloud and open systems, which the current version of Annex 11 covers only lightly, are receiving much more detailed treatment.
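As an illustration of what time-zone-aware stamping can look like at the record level (a minimal sketch under my own assumptions about field names, not wording from the draft), the point is simply that every audit trail entry carries a timestamp with an explicit offset, so that records from different sites can be read together without ambiguity:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditTrailEntry:
    """Illustrative audit trail record with an explicit, time-zone-aware timestamp."""
    user_id: str
    action: str        # e.g. "result approved", "record amended"
    record_ref: str    # identifier of the GxP record affected
    reason: str
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)  # captured in UTC, offset retained
    )

    def as_display(self) -> str:
        # ISO 8601 with offset, e.g. 2026-04-21T09:15:03+00:00
        return f"{self.timestamp.isoformat()} | {self.user_id} | {self.action} | {self.record_ref}"

# entry = AuditTrailEntry("jsmith", "result approved", "BATCH-2026-0142", "second-person review")
# print(entry.as_display())
```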

If your organisation uses cloud-based software for its QMS, LIMS, or any other GxP-relevant system, this revision is worth following closely. The expectations around validation, data integrity, and change control for those systems are going to become considerably more specific.

 

The EU pharmaceutical legislation reform

In December 2025, EU co-legislators reached a provisional agreement to reform the EU pharmaceutical framework for the first time in over twenty years. The reform covers a broad range of areas, but one element particularly relevant to AI is the concept of regulatory sandboxes within the new legislation. These would allow innovative technologies, including AI-driven quality control release, continuous manufacturing, and decentralised manufacturing, to operate under controlled, closely supervised conditions while regulatory expectations are developed in real time alongside the technology.

That is a pragmatic approach, and one that acknowledges something regulators do not always say out loud: nobody has all the answers yet. The sandbox model creates space to generate evidence without requiring organisations to wait for fully formed guidance before they can start learning. The agreement still needs formal adoption by the Council and the European Parliament before it is published in the Official Journal, and exact implementation timelines are not yet confirmed.

 

What the MHRA is doing

In December 2025, the MHRA launched a National Commission into the Regulation of AI in Healthcare, with a call for evidence gathering views from clinicians, patients, industry professionals, and the public. The call closed in February 2026, and the findings will inform recommendations to MHRA later this year.

In parallel, the MHRA has been running its AI Airlock programme, a sandbox for testing regulatory challenges related to AI as a medical device. A second cohort is running through to April 2026, and the outputs will feed into updated guidance.

MHRA's stated position has been consistent throughout all of this work: AI should augment expert judgement rather than replace it, data governance and validation are non-negotiable, and models must be assessed against their specific context of use with clear accountability at each stage. Anyone who has navigated MHRA expectations around software or computerised systems will recognise this framing immediately. It is an extension of existing principles rather than something entirely new.

 

The FDA has been putting its own house in order

In June 2025, the FDA launched an internal generative AI tool called Elsa, built within a secure GovCloud environment. The tool is being used internally for tasks including expediting clinical protocol reviews, comparing product labels, and identifying high-priority inspection targets. Importantly, the models do not train on data submitted by regulated industry, so there is no route by which confidential submissions could feed back into the system. There is something worth noting in that: the agency has been asking industry to think carefully about AI governance while simultaneously building its own AI capability. That context is useful when thinking about the pace of change regulators are expecting from themselves as well as from the companies they oversee.

 

What does it all mean?

For those working as RPs, QPs, or quality professionals, the consistent message across all of these developments is not especially complicated, even if the technical detail around it is. Human oversight matters; risk-based approaches apply; documentation needs to cover the full lifecycle of an AI system, including its training data and validation evidence; and change control applies to AI models just as it does to any other validated system.

What is still being worked through is the detail. What does adequate validation look like for an AI model used in a batch release decision? How do you handle performance drift in a system that is running continuously? What level of evidence do you need before an AI-assisted process can be considered validated in a GMP context? Annex 22 and the Annex 11 revision will address those questions, but we are not at final answers yet.
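On the drift question specifically, and purely as an illustration of the underlying idea rather than anything the guidance prescribes, ongoing monitoring usually comes down to comparing what the system sees and produces in routine use against the baseline established at validation. The threshold and function below are assumptions made for the sketch; real monitoring would use predefined statistical methods and acceptance criteria:

```python
from statistics import mean, stdev

def drift_flag(baseline: list, recent: list, z_threshold: float = 3.0) -> bool:
    """Illustrative check: flag if the recent mean of a monitored quantity
    (an input feature or a model score) has shifted well outside the
    variation observed during validation."""
    base_mean, base_sd = mean(baseline), stdev(baseline)
    if base_sd == 0:
        return mean(recent) != base_mean
    return abs(mean(recent) - base_mean) / base_sd > z_threshold

# Hypothetical usage: baseline captured at validation, recent from routine runs.
# if drift_flag(validation_scores, last_week_scores):
#     ...  # raise a deviation and trigger review per the monitoring procedure
```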

In the meantime, the most useful frame for any organisation trying to get ahead of this is to treat AI systems the same way you treat any other computerised system: define the intended use, validate against that intended use, manage change formally, and ensure your audit trail covers the decisions the system is contributing to. That is already the right approach. The new guidance will add specificity on top.
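To make that frame a little more concrete, here is a minimal sketch of the kind of register entry an organisation might keep per model; the field names are my own illustrative assumptions rather than anything taken from the guidance:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelChange:
    """One formally managed change to the model, mirroring ordinary change control."""
    change_ref: str              # link to the change control record
    description: str
    revalidation_evidence: str   # reference to the evidence generated for the change

@dataclass
class AIModelRecord:
    """Illustrative register entry treating an AI model like any other validated system."""
    model_name: str
    version: str
    intended_use: str        # the specific GxP decision the model supports
    validation_report: str   # reference to evidence against that intended use
    human_oversight: str     # who reviews the output, and at which decision point
    changes: List[ModelChange] = field(default_factory=list)

# record = AIModelRecord(
#     model_name="visual-inspection-classifier",
#     version="1.3",
#     intended_use="Flag suspect vials for human re-inspection (non-release decision)",
#     validation_report="VAL-2026-017",
#     human_oversight="Trained inspector reviews every flagged unit",
# )
```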

To make all of this a little more useful, I have put together a resource list for you! Thanks for sticking with it till the end.

 

Resources worth bookmarking

EMA-FDA Guiding Principles of Good AI Practice in Drug Development (January 2026)

https://www.ema.europa.eu/en/news/ema-fda-set-common-principles-ai-medicine-development-0

EMA Reflection Paper on the use of AI in the medicinal product lifecycle (2024)

https://www.ema.europa.eu/en/documents/scientific-guideline/reflection-paper-use-artificial-intelligence-ai-medicinal-product-lifecycle_en.pdf

EU GMP Annex 22 draft on AI in pharmaceutical manufacturing (2025, under consultation)

https://health.ec.europa.eu/medicinal-products/eudralex/eudralex-volume-4_en

EU GMP Annex 11 revision (draft, expected finalisation 2026)

https://health.ec.europa.eu/medicinal-products/eudralex/eudralex-volume-4_en

FDA Draft Guidance: Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products (January 2025)

https://www.fda.gov/regulatory-information/search-fda-guidance-documents

FDA Discussion Paper: Artificial Intelligence in Drug Manufacturing (FRAME initiative, 2023)

https://www.fda.gov/media/165743/download

MHRA AI Airlock programme

https://www.gov.uk/government/organisations/medicines-and-healthcare-products-regulatory-agency

EU AI Act (in force August 2024, phased enforcement through to 2027-2028)

https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689

ICH Q9(R1) Quality Risk Management, which provides the risk-based framework underpinning most of the AI governance expectations described above

https://www.ema.europa.eu/en/ich-q9-quality-risk-management-scientific-guideline