On January 7, 2020, the Director of the US Office of Management and Budget (OMB) issued a Draft Memorandum (the Memorandum) to all federal “implementing agencies” regarding the development of regulatory and non-regulatory approaches to reducing barriers to the development and adoption of artificial intelligence (AI) technologies. Implementing agencies are agencies that conduct foundational research, develop and deploy AI technologies, provide educational grants, and regulate and provide guidance for applications of AI technologies, as determined by the co-chairs of the National Science and Technology Council (NSTC) Select Committee. To our knowledge, the NSTC Select Committee co-chairs have not yet determined which agencies are “implementing agencies” for purposes of the Memorandum.

Submission of Agency Plan to OMB

The “implementing agencies” have 180 days to submit to OMB their plans for addressing the Memorandum.

An agency’s plan must: (1) identify any statutory authorities specifically governing the agency’s regulation of AI applications as well as collections of AI-related information from regulated entities; and (2) report on the outcomes of stakeholder engagements that identify existing regulatory barriers to AI applications and high-priority AI applications that are within the agency’s regulatory authorities. OMB also requests but does not require agencies to list and describe any planned or considered regulatory actions on AI.

Principles for the Stewardship of AI Applications

The Memorandum outlines the following as principles and considerations that agencies should address in determining regulatory or non-regulatory approaches to AI:

  1. Public trust in AI. Regulatory and non-regulatory approaches to AI should promote applications that are reliable, robust and trustworthy.
  2. Public participation. The public should have the opportunity to take part in the rule-making process.
  3. Scientific integrity and information quality. The government should use scientific and technical information and processes when developing a stance on AI.
  4. Risk assessment and management. A risk assessment should be conducted before determining regulatory and non-regulatory approaches.
  5. Benefits and costs. Agencies need to consider the societal costs and benefits related to developing and using AI applications.
  6. Flexibility. Agency approaches to AI should be flexible and performance-based.
  7. Fairness and nondiscrimination. Fairness and nondiscrimination in outcomes need to be considered in both regulatory and non-regulatory approaches.
  8. Disclosure and transparency. Agencies should be transparent. Transparency can serve to improve public trust in AI.
  9. Safety and security. Agencies should guarantee the confidentiality, integrity and availability of data used by AI by ensuring that the proper controls are in place.
  10. Interagency coordination. Agencies need to work together to ensure consistency and predictability of AI-related policies.



In April 2019, the US Food and Drug Administration (FDA) issued a white paper, “Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device,” announcing steps to consider a new regulatory framework to promote the development of safe and effective medical devices that use advanced AI algorithms. The white paper describes AI, and ML in particular, as “techniques used to design and train software algorithms to learn from and act on data.” FDA’s proposed approach would allow algorithm modifications based on real-world learning and adaptation, accommodating the iterative nature of AI products while ensuring that FDA’s standards for safety and effectiveness are maintained.

Under the existing framework, a premarket submission (i.e., a 510(k)) would be required if the AI/ML software modification significantly affects device performance or the device’s safety and effectiveness; the modification changes the device’s intended use; or the modification introduces a major change to the software as a medical device (SaMD) algorithm. In the case of a PMA-approved SaMD, a PMA supplement would be required for changes that affect safety or effectiveness. FDA noted that adaptive AI/ML technologies require a new total product lifecycle (TPLC) regulatory approach, and the white paper focuses on three types of modifications to AI/ML-based SaMD.



As digital health innovation continues to move at light speed, both new and incumbent stakeholders find themselves on a new frontier, one that challenges traditional health care delivery and payment frameworks and changes the landscape for product research, development and commercialization. Modernization of the existing legal framework has not kept pace with the rate of digital health innovation, leaving no shortage of obstacles, misalignment and ambiguity for those in its wake.

What did we learn in 2017 and what’s to come on the digital health frontier in the year ahead? From advances and investments in artificial intelligence (AI) and machine learning (ML) to the increasingly complex convergence of health care innovation and policy, McDermott’s Digital Health Year in Review details the key developments that shaped digital health in 2017, along with planning considerations and predictions for the health care and life science industries in 2018.

Although the incorporation of technology into human endeavors, whether commercial, political or personal, is a normal component of technological innovation, the advent of artificial intelligence technology is producing significant challenges that we have not felt or understood with earlier innovations. For many years, for example, there has been speculation, research and public debate about the impact of the internet,