Throughout the past year, the health care and life sciences industries experienced a proliferation of digital health innovation that challenged traditional notions of health care delivery and payment, as well as product research, development and commercialization, for long-standing and new stakeholders alike. Lawmakers and regulators made meaningful progress toward modernizing the existing legal framework to both protect…

On January 7, 2020, the Director of the US Office of Management and Budget (OMB) issued a Draft Memorandum (the Memorandum) to all federal “implementing agencies” regarding the development of regulatory and non-regulatory approaches to reducing barriers to the development and adoption of artificial intelligence (AI) technologies. Implementing agencies are agencies that conduct foundational research, develop and deploy AI technologies, provide educational grants, and regulate and provide guidance for applications of AI technologies, as determined by the co-chairs of the National Science and Technology Council (NSTC) Select Committee. To our knowledge, the NSTC has not yet determined which agencies are “implementing agencies” for purposes of the Memorandum.

Submission of Agency Plan to OMB

The “implementing agencies” have 180 days to submit to OMB their plans for addressing the Memorandum.

An agency’s plan must: (1) identify any statutory authorities specifically governing the agency’s regulation of AI applications, as well as collections of AI-related information from regulated entities; and (2) report on the outcomes of stakeholder engagements that identify existing regulatory barriers to AI applications and high-priority AI applications that are within the agency’s regulatory authorities. OMB also requests, but does not require, that agencies list and describe any planned or considered regulatory actions on AI.

Principles for the Stewardship of AI Applications

The Memorandum outlines the following principles and considerations that agencies should address when determining regulatory or non-regulatory approaches to AI:

  1. Public trust in AI. Regulatory and non-regulatory approaches to AI should promote reliable, robust and trustworthy AI applications.
  2. Public participation. The public should have the opportunity to take part in the rule-making process.
  3. Scientific integrity and information quality. The government should use scientific and technical information and processes when developing a stance on AI.
  4. Risk assessment and management. A risk assessment should be conducted before determining regulatory and non-regulatory approaches.
  5. Benefits and costs. Agencies need to consider the societal costs and benefits related to developing and using AI applications.
  6. Flexibility. Agency approaches to AI should be flexible and performance-based.
  7. Fairness and nondiscrimination. Fairness and nondiscrimination in outcomes need to be considered in both regulatory and non-regulatory approaches.
  8. Disclosure and transparency. Agencies should be transparent. Transparency can serve to improve public trust in AI.
  9. Safety and security. Agencies should safeguard the confidentiality, integrity and availability of data used by AI by ensuring that proper controls are in place.
  10. Interagency coordination. Agencies need to work together to ensure consistency and predictability of AI-related policies.



The 21st Century Cures Act, enacted in December 2016, amended the definition of “medical device” in section 201(h) of the Federal Food, Drug, and Cosmetic Act (FDCA) to exclude five distinct categories of software or digital health products. In response, the US Food and Drug Administration (FDA) issued new digital health guidance and revised several…

In response to the rapid pace of innovation in the health and life sciences arena, the US Food and Drug Administration (FDA) is taking a proactive, risk-based approach to regulating digital health products. Software applications and other transformative technologies, such as artificial intelligence and 3D printing, are reshaping how medical devices are developed, and FDA is seeking to align its mission and regulatory obligations with those changes.

FDA’s digital health software precertification program is a prime example of this approach. Once fully implemented, this voluntary program should expedite the path to market for software as a medical device (SaMD) and promote greater transparency between FDA and regulated entities.

Under the program, FDA will conduct a holistic review of the company producing the SaMD, taking into account aspects such as management culture, quality systems and cybersecurity protocols, to ascertain whether the company has developed sufficient infrastructure to ensure that its products will comply with FDA requirements and function safely as intended. Companies that fulfill the requirements of the excellence appraisal and related reviews will receive precertification that may provide for faster premarket reviews and more flexible approaches to data submissions at the outset.



In April 2019, the US Food and Drug Administration (FDA) issued a white paper, “Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device,” announcing steps to consider a new regulatory framework to promote the development of safe and effective medical devices that use advanced AI algorithms. AI, and specifically ML, encompasses “techniques used to design and train software algorithms to learn from and act on data.” FDA’s proposed approach would allow algorithm modifications driven by real-world learning and adaptation, accommodating the iterative nature of AI products while ensuring that FDA’s standards for safety and effectiveness are maintained.

Under the existing framework, a premarket submission (i.e., a 510(k)) would be required if an AI/ML software modification significantly affects device performance or the device’s safety and effectiveness; changes the device’s intended use; or introduces a major change to the software as a medical device (SaMD) algorithm. In the case of a PMA-approved SaMD, a PMA supplement would be required for changes that affect safety or effectiveness. FDA noted that adaptive AI/ML technologies require a new total product lifecycle (TPLC) regulatory approach, and the white paper focuses on three types of modifications to AI/ML-based SaMD…



As part of the 21st Century Cures Act, Congress gave the US Food and Drug Administration (FDA) the authority to establish a Breakthrough Devices Program intended to expedite the development and prioritize the review of certain medical devices that provide for more effective treatment or diagnosis of life-threatening or irreversibly debilitating diseases or conditions. In December 2018, FDA issued a guidance document describing the policies it intends to use to implement the Program.

There are two criteria for inclusion in the Breakthrough Devices Program:

  1. The device must provide for a more effective treatment or diagnosis of a life-threatening or irreversibly debilitating human disease or condition; and
  2. The device must (i) represent breakthrough technology, (ii) have no approved or cleared alternatives, (iii) offer significant advantages over existing approved or cleared alternatives, or (iv) demonstrate that its availability is in the best interest of patients.



Digital health companies face a complicated regulatory landscape. While the opportunities for innovation and dynamic partnerships are abundant, so are the potential compliance pitfalls. In 2018 and 2019, several digital health companies faced intense scrutiny, not only from regulatory agencies but in some cases from their own investors. While the regulatory framework for digital technology in health care and life sciences will continue to evolve, digital health enterprises can take key steps now to mitigate risk, ensure compliance and position themselves for success.

  1. Be accurate about quality.

Ensuring that you have a high-quality product or service is only the first step; you should also be scrupulously accurate in the way you speak about your product’s quality or efficacy. Even if a product or service does not require US Food and Drug Administration clearance for the claims it makes, you may still face substantial regulatory risk and liability if the product does not perform at the level described. As several recent public cases demonstrate, an inaccurate statement of quality or efficacy can draw state and federal regulatory scrutiny and carry consequences for selling your product in the marketplace and securing reimbursement.

Tech companies and non-traditional health industry players should take careful stock of the health sector’s unique requirements and liabilities, as the risk is much higher in this arena than in other industries.



Join us on November 8, 2018, for the third installment of McDermott’s live webinar series on digital health. In this installment, partners Bernadette M. Broccolo, Jiayan Chen and Vernessa T. Pollard will explore opportunities for accelerating biomedical research, development and commercialization through digital health tools and solutions, such as end-user license agreements (EULAs), wearables…

The digitization of health care and the proliferation of electronic medical records are happening rapidly, generating large quantities of data with the potential to provide valuable insights into disease and wellness and to help solve challenging public health problems.

There is tremendous enthusiasm over the possibilities of leveraging this data for secondary use, i.e., a use…

Throughout 2017, the health care and life sciences industries experienced a widespread proliferation of digital health innovation that presented challenges to traditional notions of health care delivery and payment, as well as product research, development and commercialization, for both long-standing and new stakeholders. At the same time, lawmakers and regulators made meaningful progress toward modernizing…