
FDA Draft Guidance on Leveraging AI for Drug and Biologic Regulatory Decisions

January 14, 2025

The U.S. Food and Drug Administration (FDA) has introduced draft guidance titled “Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products: Guidance for Industry and Other Interested Parties.” The guidance outlines a structured approach for integrating artificial intelligence (AI) into regulatory decision-making for drug and biologic development. The FDA emphasizes transparency, thorough validation, and a risk-based strategy to ensure AI models are credible and dependable.

The Focus of the Guidance

This draft guidance is dedicated to the application of AI models that generate data or insights used in regulatory decisions related to drug safety, effectiveness, and quality. It does not extend to AI applications in drug discovery or operational processes that do not directly affect patient safety, product quality, or the reliability of study outcomes. Its recommendations apply across all stages of the drug lifecycle, including preclinical research, clinical trials, postmarket surveillance, and manufacturing.

A Seven-Step Risk-Based Framework for AI Credibility

To establish the credibility of AI models in regulatory contexts, the FDA proposes a seven-step risk-based framework:

1. Identify the Regulatory Question

Clearly define the regulatory question that the AI model is designed to address, incorporating evidence from multiple sources such as clinical studies and laboratory data.

2. Specify the Context of Use (COU)

Describe the model’s role and how its results will influence decision-making. Clarify whether it will function independently or alongside other evidence.

3. Evaluate AI Model Risk

Assess the model’s potential risk based on:

  • Model Influence: The degree to which the AI model’s output contributes to regulatory decisions.
  • Decision Consequence: The potential impact of incorrect decisions based on the model’s results.
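The two factors above can be pictured as a simple risk matrix. The guidance does not prescribe a scoring formula, so the sketch below is purely illustrative: it assumes three levels per factor and conservatively lets the higher of the two drive the overall tier.

```python
# Hypothetical sketch of combining the guidance's two risk factors
# (model influence and decision consequence) into an overall risk tier.
# The three-level scale and the max() rule are illustrative assumptions,
# not terms defined by the FDA.

LEVELS = ["low", "medium", "high"]

def model_risk(influence: str, consequence: str) -> str:
    """Map model influence and decision consequence to a risk tier."""
    if influence not in LEVELS or consequence not in LEVELS:
        raise ValueError(f"levels must be one of {LEVELS}")
    # Conservative choice: the higher of the two factors sets the tier.
    score = max(LEVELS.index(influence), LEVELS.index(consequence))
    return LEVELS[score]

# A model whose output is the primary evidence (high influence) for a
# decision with serious patient-safety impact (high consequence):
print(model_risk("high", "high"))   # high
print(model_risk("low", "medium"))  # medium
```

In practice, the risk tier would then scale the depth of the credibility assessment in the steps that follow: higher-risk models warrant more extensive validation evidence.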

4. Formulate a Credibility Assessment Plan

Develop a detailed plan to validate the model’s credibility, including its architecture, data sources, and performance evaluation metrics.
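A plan like this can be thought of as a structured record tying the context of use to the model's architecture, data provenance, and evaluation results. The sketch below is a minimal illustration; the field names and example values are assumptions, not FDA-specified terminology.

```python
# Hypothetical sketch of the elements a credibility assessment plan
# might capture. Field names and example values are illustrative only.
from dataclasses import dataclass, field

@dataclass
class CredibilityAssessmentPlan:
    context_of_use: str                  # the COU defined in step 2
    model_architecture: str              # e.g. gradient-boosted trees
    training_data_sources: list[str]     # provenance of training data
    evaluation_metrics: dict[str, float] = field(default_factory=dict)

    def record_metric(self, name: str, value: float) -> None:
        """Log a performance result gathered while executing the plan."""
        self.evaluation_metrics[name] = value

plan = CredibilityAssessmentPlan(
    context_of_use="Flag hepatotoxicity risk to prioritize lab monitoring",
    model_architecture="gradient-boosted decision trees",
    training_data_sources=["phase 2 trial labs", "preclinical assay panel"],
)
plan.record_metric("AUROC", 0.87)
```

Keeping the plan and its recorded results in one structure also supports the later steps: executing the plan (step 5) fills in the metrics, and reporting (step 6) documents them alongside any deviations.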

5. Implement the Plan

Execute the assessment plan. The FDA recommends proactive engagement during this stage to align expectations and address any challenges early.

6. Record and Report Results

Document the findings of the credibility assessment, noting any deviations from the original plan and providing comprehensive evidence of the model’s reliability.

7. Assess Model Suitability

Determine if the AI model is fit for its intended purpose. If deficiencies are identified, sponsors can:

  • Reduce the model’s weight in decision-making by combining it with additional data.
  • Enhance validation procedures.
  • Introduce risk mitigation strategies.
  • Refine the model or adjust methodologies.
  • Reassess the model until it meets the required standards.

Ongoing Monitoring and Maintenance of AI Models

Because AI models can evolve, the guidance stresses the need for continuous monitoring, regular evaluations, and timely updates. A risk-based approach should guide ongoing lifecycle management to maintain consistent and accurate performance.

Encouragement for Early FDA Collaboration

The FDA strongly advises sponsors to initiate early discussions during the development of AI models intended for regulatory use. Early collaboration helps establish clear validation expectations and proactively addresses potential obstacles. The FDA offers various engagement opportunities, including formal meetings and consultative programs.

Potential Industry Impact

This guidance is expected to significantly influence pharmaceutical and biotech companies by fostering responsible innovation and introducing structured pathways for AI integration. Companies that adapt early to these standards can gain a competitive advantage in streamlining regulatory processes while ensuring compliance.

Challenges and Considerations for Implementation

Implementing this framework may present challenges, including the complexity of validating AI models, ensuring data privacy, and integrating AI systems into existing regulatory and operational workflows. Organizations must be prepared to invest in robust validation and continuous monitoring processes to meet regulatory expectations.

Global Regulatory Context

The FDA’s approach to AI regulation aligns with growing global efforts to manage AI use in healthcare. Comparing this guidance to international standards, such as those from the European Medicines Agency (EMA) and the UK’s Medicines and Healthcare products Regulatory Agency (MHRA), can offer a broader understanding of global regulatory trends in AI applications for drug development.

Role of Service Providers

Organizations specializing in clinical research support, such as mdgroup, can play a pivotal role in helping sponsors navigate these new regulatory requirements. From AI model validation to compliance strategy, these partners can provide critical expertise to ensure alignment with FDA guidance and streamline regulatory submissions.

Conclusion

The FDA’s draft guidance offers a comprehensive framework for integrating AI into regulatory decision-making in drug and biologic development. By emphasizing a risk-based approach, transparency, and lifecycle management, the guidance ensures that AI is applied responsibly, balancing innovation with the highest standards of safety and efficacy throughout the drug development process.
