FDA Issues Artificial Intelligence/Machine Learning Action Plan

By Vernessa T. Pollard and Anisa Mohanty on January 20, 2021
Posted In Big Data

On January 12, 2021, the US Food and Drug Administration (FDA) released its Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan. The Action Plan outlines five actions that FDA intends to take to further its oversight of AI/ML-based SaMD:

  1. Further develop the proposed regulatory framework, including through draft guidance on a predetermined change control plan for “learning” ML algorithms
    • FDA intends to publish the draft guidance on the predetermined change control plan in 2021 to clarify expectations for SaMD Pre-Specifications (SPS), which explain what “aspects the manufacturer changes through learning,” and the Algorithm Change Protocol (ACP), which explains how the “algorithm will learn and change while remaining safe and effective.” The draft guidance will focus on what should be included in an SPS and ACP to ensure the safety and effectiveness of AI/ML SaMD algorithms. Other areas of focus include identification of modifications appropriate under the framework and the submission and review process.
  2. Support development of good machine learning practices (GMLP) to evaluate and improve ML algorithms
    • GMLPs are critical in guiding product development and oversight of AI/ML products. FDA has developed relationships with several communities, including the Institute of Electrical and Electronics Engineers (IEEE) P2801 Artificial Intelligence Medical Device Working Group; the International Organization for Standardization/International Electrotechnical Commission Joint Technical Committee 1, Subcommittee 42 (ISO/IEC JTC 1/SC 42) – Artificial Intelligence; and the Association for the Advancement of Medical Instrumentation/British Standards Institution Initiative on AI in medical technology. FDA is focused on working with these communities to come to a consensus on GMLP requirements.
  3. Foster a patient-centered approach, including transparency
    • FDA would like to increase patient education to ensure that users have important information about the benefits, risks and limitations of AI/ML products. To that end, FDA held a Patient Engagement Advisory meeting in October 2020, and the agency will use input gathered during the meeting to help identify types of information that it will recommend manufacturers include in AI/ML labeling to foster education and promote transparency.
  4. Develop methods to evaluate and improve ML algorithms
    • To address potential racial, ethnic or socio-economic bias that may be inadvertently introduced into AI/ML systems that are trained using data from historical datasets, FDA intends to collaborate with researchers to improve methodologies for the identification and elimination of bias, and to improve the algorithms’ robustness to adapt to varying clinical inputs and conditions.
  5. Advance real-world performance monitoring pilots
    • FDA states that gathering real-world performance data on the use of the SaMD is an important risk-mitigation tool, as it may allow manufacturers to understand how their products are being used, how they can be improved, and what safety or usability concerns manufacturers need to address. To provide clarity and direction related to real-world performance data, FDA supports the piloting of real-world performance monitoring. FDA will develop a framework for gathering, validating and evaluating relevant real-world performance parameters and metrics.

As discussed in detail here, in April 2019, FDA issued a white paper, “Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device,” announcing steps to consider a new regulatory framework to promote the development of safe and effective medical devices that use advanced AI algorithms. The Action Plan comes in response to stakeholder feedback on the white paper and FDA’s February 2020 Public Workshop on the Evolving Role of Artificial Intelligence in Radiological Imaging.

The Action Plan is a helpful step in developing a concrete regulatory strategy to address the development, safety and effectiveness, and post-market monitoring of AI/ML tools. FDA has identified key areas of assessment and risk in broad strokes, but input from stakeholders in the ecosystem is critical to the implementation of strategies that address the practical realities of bringing these tools to market. FDA encourages public engagement with the agency, and the Action Plan is open for public comment here.

Vernessa T. Pollard
Vernessa T. Pollard advises companies on regulatory, compliance, enforcement and policy matters involving pharmaceuticals, medical devices, health information technology (HIT) and digital health solutions, services and software. She advises companies and investors on regulatory and compliance issues arising from mergers, acquisitions and other transactions involving Food and Drug Administration (FDA)-regulated products. She also counsels manufacturers, distributors and retailers on regulatory and compliance issues related to food and cosmetic marketing and safety. Read Vernessa Pollard's full bio.


Anisa Mohanty
Anisa Mohanty advises life sciences companies on regulatory, compliance, enforcement, policy, and legislative matters arising under the Federal Food, Drug, and Cosmetic Act (FDCA). She counsels pharmaceutical, medical device, and consumer product companies on premarket pathways, advertising and promotion, and current Good Manufacturing Practice (cGMP) and Quality System requirements. Read Anisa Mohanty's full bio.
