Interview: Is global AI regulation really revolutionizing pharma and clinical trials?

By Liza Laws


© Getty Images


OSP spoke to Berkeley Research Group healthcare managing director Wendy Cheng for a discussion about developments in AI regulation for pharmaceuticals, clinical trials, and drug development at a global scale.

Wendy specializes in applying advanced methodologies to complex healthcare research challenges. She has used machine learning and other predictive modeling techniques, including latent class analysis to identify cost clusters in disease patient populations and group-based trajectory analysis to identify disease prognosis patterns.

Can you provide insights into the current global regulatory landscape governing AI in pharmaceuticals, particularly across major markets like the US, EU, Canada, UK, and the APAC region?

There is no AI-specific regulation for drug development at the moment across different global markets. That said, there is general guidance related to model-informed drug development and biostatistics that also applies to the field of AI. There have also been substantial local and global collaborative efforts to identify key considerations for proper AI use in medical product development, including the adoption and adaptation of technical standards and best practices for general computational models and for AI use in non-healthcare sectors, though such efforts have largely focused on medical devices rather than drugs. For example, in October 2021, the US FDA, Health Canada, and the UK's Medicines and Healthcare products Regulatory Agency (MHRA) jointly identified 10 guiding principles to inform the development of Good Machine Learning Practice (GMLP) for medical devices that use AI. In May 2023, the US FDA issued a discussion paper on AI in drug development and outlined key areas to consider for the development of a framework or guidance. Similarly, in July 2023, the European Medicines Agency (EMA) issued a reflection paper on the use of AI in the medicinal product lifecycle. This document provides initial considerations for the use of AI in various phases of the drug lifecycle, touching on topics related to documentation, regulatory interactions, technical aspects, governance, data protection, and more.

Why is it crucial for pharmaceutical companies to understand the global context of AI regulation, similar to drug regulation?

Most pharmaceutical companies take a global view of their products. Submissions for medical products are rarely restricted to the US or any one local market. As long as AI applications are included in any part of the development of a medical product, there is a need to disclose such applications in regulatory submissions across different global markets. As we move toward a future in which guidance and regulations are formulated, pharmaceutical companies will need to be familiar with them in the global context and customize their regulatory responses accordingly.

Could you discuss BRG's analysis of the FDA's recent paper on the use of AI/ML in drug development, and what implications it may have for the industry?

As highlighted in the discussion paper on AI in drug development, the FDA, like other regulatory agencies, is still in the early stages of tracking trends and emerging issues surrounding AI use in order to identify areas where frameworks and standards should be structured. The agency recognizes that collaboration with other regulatory agencies, industry sponsors, academia, and healthcare providers is essential to arrive at a set of best practices, frameworks, and guidance documents that are sound, acceptable, and practical. The US FDA has demonstrated that it is very active in engaging all stakeholders and moving to the next phase, as exemplified in a paper issued in March that lays out areas of focus for its medical product Centers and Offices (i.e., CBER, CDER, CDRH, and OCP).

How does the utilization of AI in drug development and clinical trials impact processes and legal/regulatory obligations for pharmaceutical companies?

While there is no AI-specific guidance at the moment, general regulatory guidance related to drug development and clinical trials still applies, such as the ICH E6 guideline for good clinical practice (GCP) in clinical trials.

What level of transparency is required when submitting documents to regulators for drug authorization in the context of AI utilization?

In the absence of guidance for AI use in drug development, there is no checklist for what needs to be included in submissions. However, regulatory agencies are always in favor of transparency because it increases credibility and trust in the use of AI, as it maximizes the opportunity for AI applications to be evaluated for reliability and, ultimately, validity. As such, it is important to pre-specify and document the purpose of the AI application, the context of use, the potential associated risks, and the development and validation of the AI technique/model, where applicable. As the US FDA stated in the discussion paper on AI in drug development, in general the use of a risk-based approach may guide the level of evidence and record-keeping needed for the verification and validation of AI/ML models for a specific context of use. Engagement with the FDA early in the process can also help inform and address these considerations.

The EMA holds a similar position in that any applicable guidance related to clinical trials applies to AI/ML as well. And when no clearly applicable guidance is available, pharma companies are recommended to seek interactions with the regulatory agency.

How does the incorporation of diverse data into AI's learning process influence drug development and clinical trials?

This certainly increases the generalizability of the AI application, such that it can be applied to a broader patient population and its results deemed more credible. As with any scientific endeavor, validity is essential. If it is a predictive model, the AI application needs to perform the way it is supposed to, meaning it accurately predicts what it is designed to predict. So, when the AI application is built and tested on diverse databases and populations, its validity is more likely to be maximized.

In your opinion, what guidance do industry professionals need regarding the utilization of AI in drug development?

The pharma industry would benefit from clarity on the specific elements and processes required to ensure the safe, responsible, and ethical use of AI, which is a core priority of regulatory agencies. These may include considerations for risk management; methodological standards for the development, validation, and documentation of AI models; and best practices and frameworks for real-world monitoring and quality assurance of AI applications.

What methodological standards should be followed for AI model development and validation in the pharmaceutical industry? 

There is a wide range of AI applications, so it is hard to provide a one-size-fits-all recommendation for how models should be built. That said, the key is to ensure the ultimate application is valid, the data sources are fit for use, and, to the extent possible, the results are generalizable to the intended populations.
