The FDA’s Aggressive AI Rollout: 10 Critical Questions To Ask Today
The FDA’s recent announcement of its agency-wide deployment of generative AI tools by June 30, 2025, marks a transformative moment in regulatory science. With the completion of an AI-assisted scientific review pilot, the FDA has signaled its commitment to leveraging AI to streamline processes and accelerate drug reviews. Leadership for this initiative includes Jeremy Walsh, the FDA’s newly appointed Chief AI Officer, and Sridhar Mantha, who brings expertise from CDER. While formal partnerships with organizations like OpenAI remain speculative, discussions around the potential role of advanced generative AI tools, such as the reported cderGPT project, highlight the FDA’s ambition to modernize its regulatory processes.
For regulatory affairs professionals in the pharmaceutical industry, this shift presents opportunities but also raises essential questions that demand attention. In light of this announcement, we want to check in: have you asked these 10 questions?
Let’s dive in and explore why each of these considerations is so critical for regulatory affairs experts navigating this rollout.
What Training Data Was Used to Create This Platform?
Generative AI’s effectiveness depends on the quality and diversity of its training data. Regulatory affairs professionals must inquire about the specific datasets used to train the FDA’s platform and how they were curated. Were these datasets representative of the broad spectrum of therapeutic areas and patient populations? Addressing these questions is critical to ensure the platform’s insights are reliable and not skewed by inherent biases or data gaps.
What Safeguards Are in Place to Mitigate Risks?
The FDA emphasizes the security of its generative AI system, but inherent risks like data breaches, algorithmic biases, and system errors remain significant concerns. Regulatory professionals must seek clarity on the measures implemented to prevent and address these risks. How robust are the safeguards against cybersecurity threats, and what processes are in place to detect and rectify potential biases in the AI’s outputs?
What Constitutes a "Successful" Rollout?
The FDA’s announcement highlights excitement about the transformative potential of AI, but what exactly defines success for this initiative? Is it purely about accelerating review timelines, or does it encompass broader objectives like improving accuracy and enhancing stakeholder confidence? Understanding these benchmarks will help regulatory professionals align their strategies with the agency’s goals and anticipate how these changes might impact their workflows.
How Will the FDA Maintain Transparency in AI Decision-Making?
Transparency is a cornerstone of trust in regulatory processes. Regulatory affairs professionals must understand how generative AI influences reviews and decisions. Will the FDA provide clear documentation of the AI’s role in specific determinations? Ensuring transparency will help build trust among stakeholders and allow companies to better align their submissions with FDA expectations.
How Will This Affect Submission Timelines and Interactions with Sponsors?
Dr. Martin Makary, FDA Commissioner, highlighted the promise of significantly accelerated review times, with tasks that once took days now reduced to minutes. While this acceleration is promising, it raises questions about how submission timelines and interactions with sponsors will be adjusted. Will sponsors need to adapt their submission formats and timelines, and if so, how might this impact the overall quality and efficiency of reviews?
How Will the AI-Assisted Processes Be Standardized Across Centers?
The FDA’s unified AI platform aims to integrate capabilities across all its centers, including the Center for Drug Evaluation and Research (CDER) and the Center for Biologics Evaluation and Research (CBER). However, the operational needs of each center vary. How will the FDA ensure consistency in AI application across these centers, and what steps will be taken to prevent discrepancies that could confuse sponsors or delay approvals?
What Role Will User Feedback Play in Refining AI Tools?
The FDA plans to gather user feedback and refine features after the rollout. Regulatory professionals should engage actively in this feedback loop. What mechanisms will be in place for users to report issues or suggest improvements? Involving industry stakeholders in refining these tools ensures the AI platform remains practical and effective for real-world applications.
What Training Will Be Provided to FDA Scientists and External Stakeholders?
Dr. Makary’s vision of reducing non-productive busywork depends on proper training for FDA staff. External stakeholders, including sponsors, must also adapt to these changes. Will the FDA offer workshops or guidance materials to help sponsors understand the new processes? Ensuring comprehensive training for all users is vital to maintaining submission quality and optimizing the platform’s utility.
How Will Future Enhancements Be Prioritized?
The FDA has indicated plans to expand use cases and improve usability for its AI tools. Regulatory professionals should seek clarity on how these enhancements will be prioritized. Will updates focus on addressing immediate challenges, or will they aim to expand the platform’s capabilities in new directions? Regular updates from the FDA will help sponsors stay informed and aligned with evolving processes.
What Are the Implications for Global Regulatory Harmonization?
As a leader in regulatory science, the FDA’s AI initiative has the potential to set benchmarks for global agencies. How will these advancements influence international regulatory standards, and what opportunities exist for harmonizing AI-driven processes across borders? For multinational companies, understanding these dynamics will be essential for streamlining submissions and reducing duplicative efforts.
Conclusion
The FDA’s generative AI rollout represents a bold leap into the future of regulatory science, but it also marks a critical moment for introspection and vigilance. The announcement was made with limited public information, leaving important details about training data, success metrics, and safeguards unanswered.
This rollout demands heightened scrutiny to assess the system’s performance and ensure it truly delivers on its promise of reducing inefficiencies without introducing new biases or uncertainties. Regulatory affairs teams must champion a culture of accountability and rigor, holding both the FDA and themselves to the highest standards of transparency and ethical AI use.
The future of regulatory science will be shaped by those willing to engage critically and collaboratively in this transformation. By staying informed, asking tough questions, and fostering open dialogue, regulatory affairs professionals can ensure that this bold step forward is also a step in the right direction.
Connect with Us: For more insights on regulatory policies surrounding AI, follow KAMI Think Tank and subscribe to our newsletter.