FDA

McDermott Partners Recognized As Digital Health Power Players

Washington (August 11, 2021) – McDermott Will & Emery partners Jennifer Geetter and Vernessa Pollard have been recognized in an Insider profile on the “9 behind-the-scenes players who can make or break your digital-health startup.” The pair discussed advising young companies on the regulatory hurdles they must clear before tackling the healthcare market.

As part of the Firm’s industry-leading health practice, Jennifer advises digital health companies on the development, delivery and implementation of innovative healthcare solutions. Vernessa leads medical device and technology companies through US Food and Drug Administration (FDA) regulations to bring products to market.

Vernessa discussed with Insider how more tech companies are moving into healthcare. These companies need help navigating FDA’s newest regulations, including machine learning oversight and health-data privacy rules. Whether a business understands its regulatory requirements can make or break its investments.

“We have a number of what we’d consider to be nontraditionally FDA regulated entities, such as the large tech companies and even hospitals and healthcare providers, that are entering this space because they’re developing new tools or technology that may trigger FDA requirements,” Vernessa noted.

Jennifer explained that it’s not always clear whether decades-old FDA laws apply, so she advises her clients to prioritize building trust with patients through an emphasis on privacy and cybersecurity protection.

“There’s something about the intimacy of the standard doctor-patient relationship when you’re sitting across the room from your doctor that breeds trust,” Jennifer said. “When you’re in a digital healthcare system with distance, you don’t necessarily have that.”

McDermott Will & Emery is the nation’s leading health law firm. The Health Industry Advisory group is the only health practice to receive top national rankings from U.S. News – Best Lawyers “Best Law Firms,” Chambers USA, The Legal 500 US and Law360. The practice was also recognized by Chambers as “Health Team of the Year” in 2010, 2013, 2017 and 2019. McDermott has also held the top spot in PitchBook’s League Tables as the most active firm for healthcare private equity since 2017.





FDA Issues Artificial Intelligence/Machine Learning Action Plan

On January 12, 2021, the US Food and Drug Administration (FDA) released its Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan. The Action Plan outlines five actions that FDA intends to take to further its oversight of AI/ML-based SaMD:

  1. Further develop the proposed regulatory framework, including through draft guidance on a predetermined change control plan for “learning” ML algorithms
    • FDA intends to publish the draft guidance on the predetermined change control plan in 2021 in order to clarify expectations for SaMD Pre-Specifications (SPS), which explain what “aspects the manufacturer changes through learning,” and Algorithm Change Protocol (ACP), which explains how the “algorithm will learn and change while remaining safe and effective.” The draft guidance will focus on what should be included in an SPS and ACP in order to ensure safety and effectiveness of the AI/ML SaMD algorithms. Other areas of focus include identification of modifications appropriate under the framework and the submission and review process.
  2. Support development of good machine learning practices (GMLP) to evaluate and improve ML algorithms
    • GMLPs are critical in guiding product development and oversight of AI/ML products. FDA has developed relationships with several communities, including the Institute of Electrical and Electronics Engineers P2801 Artificial Intelligence Medical Device Working Group, the International Organization for Standardization/Joint Technical Committee 1/Subcommittee 42 (ISO/IEC JTC 1/SC 42) – Artificial Intelligence, and the Association for the Advancement of Medical Instrumentation/British Standards Institution Initiative on AI in medical technology. FDA is focused on working with these communities to come to a consensus on GMLP requirements.
  3. Foster a patient-centered approach, including transparency
    • FDA would like to increase patient education to ensure that users have important information about the benefits, risks and limitations of AI/ML products. To that end, FDA held a Patient Engagement Advisory meeting in October 2020, and the agency will use input gathered during the meeting to help identify types of information that it will recommend manufacturers include in AI/ML labeling to foster education and promote transparency.
  4. Develop methods to evaluate and improve ML algorithms
    • To address potential racial, ethnic or socio-economic bias that may be inadvertently introduced into AI/ML systems trained on data from historical datasets, FDA intends to collaborate with researchers to improve methodologies for identifying and eliminating bias, and to improve the algorithms’ robustness to varying clinical inputs and conditions.
  5. Advance real-world performance monitoring pilots
    • FDA states that gathering real-world performance data on the use of SaMD is an important risk-mitigation tool, as it may allow manufacturers to understand how their products are being used, how they can be improved, and what safety or usability concerns they need to address. To provide clarity and direction related to real-world performance data, FDA supports the piloting of real-world performance monitoring. FDA will develop a framework for gathering, validating and evaluating relevant real-world performance parameters [...]


The Toughest Problem Set: Navigating Regulatory and Operational Challenges on University Campuses

When the academic year ended in the spring of 2020, many US university students assumed that a return to campus would be straightforward this fall. However, it is now clear—at least in the near term—that a return to the old “normal” will not be possible. Some universities have concluded that their best course of action is to offer only distance learning for the time being. Other universities, however, are welcoming students back onto campus, and into residence and dining halls, classrooms, labs and libraries. Each of those universities is developing its own approach to retain the benefits of on-campus student life while reducing risk to the greatest extent possible; nevertheless, some have had to adjust their plans to pivot to remote learning when faced with clusters of positive cases on campus. One thing is clear: The fall semester will be a real-time, national learning laboratory.

Because widespread, rapid testing remains unavailable in many locations, universities have had to find innovative ways to implement testing, tracing and isolation protocols to reduce the risk of transmission among students, faculty and staff. There is no one perfect protocol—all universities are in uncharted waters. But there are a few key components university administrators may want to consider and address:

  • Apps: Symptom checkers, contact tracing and other apps can be useful in identifying and focusing attention on the onset of symptoms, fostering accountability and identifying high-risk exposure. In considering whether to incorporate apps and related technologies into their back-to-campus plans, universities must anticipate and address considerations related to privacy, security and reporting of results, and will need to consider how such apps are hosted (for example, through Apple’s App Store) and whether any third parties will have access to the personal data collected.
  • Contact Tracing: In addition to the issues noted above, contact tracing efforts also present other challenges, including managing reliability, over/under inclusiveness and liability (for both false positives and false negatives). In addition, the effectiveness of contact tracing is closely tied to its speed and comprehensiveness; to implement a successful contact tracing program, universities will need to balance effectiveness with privacy and autonomy.
  • CLIA: The Clinical Laboratory Improvement Amendments (CLIA) will require that many of the tests be performed in CLIA-certified (and state-licensed, where required) space. Universities will need to consider how best to handle building out additional compliant space; creating additional “point of care” testing or specimen collection sites, if needed, to test students, faculty and staff where they are; and validating the test(s) being offered. Tests that are not yet validated likely cannot be used to return patient-specific results that inform student and staff care or be used to prompt “official” testing.
  • FDA/Emergency Use Authorizations (EUA): In general, the Food and Drug Administration (FDA) expects developers of molecular, antigen and (in the case of test kit manufacturers) antibody tests to obtain an EUA. However, under FDA enforcement policies during the pandemic, many of these same tests—if validated and offered with appropriate agency-mandated disclaimers—can be offered before [...]


Available Now – 2019 Digital Health Year in Review

Throughout the past year, the healthcare and life science industries experienced a proliferation of digital health innovation that challenged traditional notions of healthcare delivery and payment, as well as product research, development and commercialization, for long-standing and new stakeholders alike. Lawmakers and regulators made meaningful progress towards modernizing the existing legal framework to both protect patients and consumers and encourage continued innovation, but these efforts still lag behind the pace of digital health innovation. As a result, some obstacles, misalignment and ambiguity remain, and 2020 will likely be another year of significant legal and regulatory change.

Click here to read our review of key developments that shaped digital health in 2019 and set the groundwork for trends in 2020.

 





US Office of Management and Budget Calls for Federal Agencies to Reduce Barriers to Artificial Intelligence

On January 7, 2020, the Director of the US Office of Management and Budget (OMB) issued a Draft Memorandum (the Memorandum) to all federal “implementing agencies” regarding the development of regulatory and non-regulatory approaches to reducing barriers to the development and adoption of artificial intelligence (AI) technologies. Implementing agencies are agencies that conduct foundational research, develop and deploy AI technologies, provide educational grants, and regulate and provide guidance for applications of AI technologies, as determined by the co-chairs of the National Science and Technology Council (NSTC) Select Committee. To our knowledge, the NSTC has not yet determined which agencies are “implementing agencies” for purposes of the Memorandum.

Submission of Agency Plan to OMB

The “implementing agencies” have 180 days to submit to OMB their plans for addressing the Memorandum.

An agency’s plan must: (1) identify any statutory authorities specifically governing the agency’s regulation of AI applications, as well as collections of AI-related information from regulated entities; and (2) report on the outcomes of stakeholder engagements that identify existing regulatory barriers to AI applications and high-priority AI applications within the agency’s regulatory authorities. OMB also requests, but does not require, that agencies list and describe any planned or considered regulatory actions on AI.

Principles for the Stewardship of AI Applications

The Memorandum outlines the following as principles and considerations that agencies should address in determining regulatory or non-regulatory approaches to AI:

  1. Public trust in AI. Regulatory and non-regulatory approaches to AI need to be reliable, robust and trustworthy.
  2. Public participation. The public should have the opportunity to take part in the rule-making process.
  3. Scientific integrity and information quality. The government should use scientific and technical information and processes when developing a stance on AI.
  4. Risk assessment and management. A risk assessment should be conducted before determining regulatory and non-regulatory approaches.
  5. Benefits and costs. Agencies need to consider the societal costs and benefits related to developing and using AI applications.
  6. Flexibility. Agency approaches to AI should be flexible and performance-based.
  7. Fairness and nondiscrimination. Fairness and nondiscrimination in outcomes needs to be considered in both regulatory and non-regulatory approaches.
  8. Disclosure and transparency. Agencies should be transparent. Transparency can serve to improve public trust in AI.
  9. Safety and security. Agencies should guarantee the confidentiality, integrity and availability of data used by AI by ensuring that the proper controls are in place.
  10. Interagency coordination. Agencies need to work together to ensure consistency and predictability of AI-related policies.

(more…)





Is Your Software a Medical Device? FDA Issues Six Digital Health Guidance Documents

The 21st Century Cures Act, enacted in December 2016, amended the definition of “medical device” in section 201(h) of the Federal Food, Drug, and Cosmetic Act (FDCA) to exclude five distinct categories of software or digital health products. In response, the US Food and Drug Administration (FDA) issued new digital health guidance and revised several pre-existing medical device guidance documents. FDA also stated that it would continue to assess how to update and revise these guidance documents as its thinking evolved.

Late last week, FDA issued five final guidance documents and re-issued a draft guidance document to better reflect FDA’s current thinking on software as a medical device (SaMD) and other digital health products.

Most of the guidance documents reflect modest changes to prior draft guidance documents that describe categories of low-risk health and wellness devices that FDA does not intend to regulate. FDA’s new draft Clinical Decision Support (CDS) Software guidance, however, provides a new and more detailed analysis of risk factors that FDA will apply to determine whether a CDS tool is a medical device. FDA updated its previously issued draft CDS guidance without finalizing it. Although the new guidance does not explain why FDA is reissuing the CDS guidance in draft, the new draft guidance seems to reflect the agency’s attempt to better align its definition of non-device software with the often misunderstood and misinterpreted statutory definition of CDS in section 520(o)(1)(E) of the FDCA, as added by the Cures Act. The chart below summarizes the key provisions and changes to these guidance documents.

Digital health products can present a particular challenge for developers and regulators in assessing the appropriate regulatory pathways for a new product. The updated guidance documents reflect the need for a more flexible, risk-based approach to regulation that accommodates a rapidly evolving technological landscape. These documents also reflect what appears to be the new normal for digital health regulation—the need for iterative thinking and ongoing revisions to interpretive guidance documents to keep pace with a constantly changing marketplace.

Click here to read the full client alert on this issue. 





To Market, To Market: FDA’s Digital Health Precertification Program

In response to the rapid pace of innovation in the health and life sciences arena, the US Food and Drug Administration (FDA) is taking a proactive, risk-based approach to regulating digital health products. Software applications and other transformative technologies, such as artificial intelligence and 3D printing, are reshaping how medical devices are developed, and FDA is seeking to align its mission and regulatory obligations with those changes.

FDA’s digital health software precertification program is a prime example of this approach. Once fully implemented, this voluntary program should expedite the path to market for software as a medical device (SaMD), and promote greater transparency between FDA and regulated entities.

Under the program, FDA will conduct a holistic review of the company producing the SaMD, taking into account aspects such as management culture, quality systems and cybersecurity protocols, to ascertain whether the company has developed sufficient infrastructure to ensure that its products will comply with FDA requirements and function safely as intended. Companies that fulfill the requirements of the excellence appraisal and related reviews will receive precertification that may provide for faster premarket reviews and more flexible approaches to data submissions at the outset.

(more…)





Reviewing Key Principles from FDA’s Artificial Intelligence White Paper

In April 2019, the US Food and Drug Administration (FDA) issued a white paper, “Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device,” announcing steps to consider a new regulatory framework to promote the development of safe and effective medical devices that use advanced AI algorithms. AI, and specifically ML, encompasses “techniques used to design and train software algorithms to learn from and act on data.” FDA’s proposed approach would allow algorithm modifications driven by real-world learning and adaptation, accommodating the iterative nature of AI products while ensuring that FDA’s standards for safety and effectiveness are maintained.

Under the existing framework, a premarket submission (i.e., a 510(k)) would be required if the AI/ML software modification significantly affects device performance or the device’s safety and effectiveness; the modification is to the device’s intended use; or the modification introduces a major change to the software as a medical device (SaMD) algorithm. In the case of a PMA-approved SaMD, a PMA supplement would be required for changes that affect safety or effectiveness. FDA noted that adaptive AI/ML technologies require a new total product lifecycle (TPLC) regulatory approach, and the white paper focuses on three types of modifications to AI/ML-based SaMD:

(more…)





FDA’s Breakthrough Device Program: Opportunities and Challenges for Device Developers

As part of the 21st Century Cures Act, Congress gave the US Food and Drug Administration (FDA) the authority to establish a Breakthrough Devices Program intended to expedite the development and prioritize the review of certain medical devices that provide for more effective treatment or diagnosis of life-threatening or irreversibly debilitating disease or conditions. In December 2018, FDA issued a guidance document describing policies FDA intends to use to implement the Program.

There are two criteria for inclusion in the Breakthrough Devices Program:

  1. The device must provide for a more effective treatment or diagnosis of a life-threatening or irreversibly debilitating human disease or condition; and
  2. The device must (i) represent breakthrough technology, (ii) have no approved or cleared alternatives, (iii) offer significant advantages over existing approved or cleared alternatives, or (iv) demonstrate that its availability is in the best interest of patients.

(more…)





Three Tips for Tackling Risk in Digital Health

Digital health companies face a complicated regulatory landscape. While the opportunities for innovation and dynamic partnerships are abundant, so are the potential compliance pitfalls. In 2018 and in 2019, several digital health companies faced intense scrutiny—not only from regulatory agencies, but in some cases from their own investors. While the regulatory framework for digital technology in health care and life sciences will continue to evolve, digital health enterprises can take key steps now to mitigate risk, ensure compliance and position themselves for success.

  1. Be accurate about quality.

Ensuring that you have a high-quality product or service is only the first step; you should also be exactingly accurate in the way that you speak about your product’s quality or efficacy. Even if a product or service does not require US Food and Drug Administration clearance for making claims, you still may face substantial regulatory risk and liability if the product does not perform at the level described. As demonstrated in several recent public cases, an inaccurate statement of quality or efficacy can draw state and federal regulatory scrutiny, and carries consequences for selling your product in the marketplace and securing reimbursement.

Tech companies and non-traditional health industry players should take careful stock of the health sector’s unique requirements and liabilities in this area, as the risk is much higher in this arena than in other industries.

(more…)



