AI

Introducing McDermott’s AI in Healthcare Resource Center

Stay up-to-date and in-the-know on the surge of developments impacting AI in healthcare.

Our global, cross-practice team has curated links to key AI-related resources from legislative and executive bodies, government agencies and industry stakeholders in one easy-access location to help you stay current and engage in the AI conversation. We will continuously update this page as new developments emerge.
You will find:

  • Opportunities for public comment from lawmakers and government agencies to help you shape AI policy
  • Healthcare-specific policy and regulatory initiatives and legislative activity to watch
  • Frameworks, playbooks, whitepapers, blogs and more from key stakeholders
  • Insights and analysis from McDermott’s global healthcare, privacy and health policy leaders

Access now and subscribe for updates. 





Potential Applications of AI in Health Care

Artificial intelligence (AI) offers powerful new modalities for improving care delivery and access, harnessing previously untapped data, and reducing error and waste. As AI applications proliferate, health industry stakeholders are increasingly exploring how they might integrate these solutions to benefit their providers and patients. This article includes just a small sample of potential applications of AI to address a broad range of needs in healthcare and life sciences.

To view the full article, “Potential Applications of AI in Healthcare,” click here.

For a deeper dive into the role of AI in healthcare and the board’s governance responsibility, read our June 2021 Health Law Connections article.





Fiduciary Engagement in Artificial Intelligence Innovation: A Governance Imperative

For most healthcare and life sciences companies, investment in and deployment of AI technology is expected to be a critical strategic component for the foreseeable future. Effective, ongoing governance oversight of AI will be a critical organizational concern for companies, and the governance framework itself must reflect and be able to accommodate the highly dynamic nature of AI. Establishing a framework for board decision making and oversight at the earliest possible stage of an organization’s development and implementation of its AI strategy will contribute significantly to the board’s ability to fulfill its fiduciary responsibilities and thereby enhance the AI initiatives’ trustworthiness and prospects for success.

Click here to read the full article.

Originally published in the June 2021 issue of Health Law Connections, produced by the American Health Law Association.





FDA Issues Artificial Intelligence/Machine Learning Action Plan

On January 12, 2021, the US Food and Drug Administration (FDA) released its Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan. The Action Plan outlines five actions that FDA intends to take to further its oversight of AI/ML-based SaMD:

  1. Further develop the proposed regulatory framework, including through draft guidance on a predetermined change control plan for “learning” ML algorithms
    • FDA intends to publish the draft guidance on the predetermined change control plan in 2021 in order to clarify expectations for SaMD Pre-Specifications (SPS), which explain what “aspects the manufacturer changes through learning,” and Algorithm Change Protocol (ACP), which explains how the “algorithm will learn and change while remaining safe and effective.” The draft guidance will focus on what should be included in an SPS and ACP in order to ensure safety and effectiveness of the AI/ML SaMD algorithms. Other areas of focus include identification of modifications appropriate under the framework and the submission and review process.
  2. Support development of good machine learning practices (GMLP) to evaluate and improve ML algorithms
    • GMLPs are critical in guiding product development and oversight of AI/ML products. FDA has developed relationships with several communities, including the Institute of Electrical and Electronics Engineers P2801 Artificial Intelligence Medical Device Working Group, the International Organization for Standardization/Joint Technical Committee 1/Subcommittee 42 (ISO/IEC JTC 1/SC 42) – Artificial Intelligence, and the Association for the Advancement of Medical Instrumentation/British Standards Institution Initiative on AI in medical technology. FDA is focused on working with these communities to come to a consensus on GMLP requirements.
  3. Foster a patient-centered approach, including transparency
    • FDA would like to increase patient education to ensure that users have important information about the benefits, risks and limitations of AI/ML products. To that end, FDA held a Patient Engagement Advisory meeting in October 2020, and the agency will use input gathered during the meeting to help identify types of information that it will recommend manufacturers include in AI/ML labeling to foster education and promote transparency.
  4. Develop methods to evaluate and improve ML algorithms
    • To address potential racial, ethnic or socio-economic bias that may be inadvertently introduced into AI/ML systems that are trained using data from historical datasets, FDA intends to collaborate with researchers to improve methodologies for the identification and elimination of bias, and to improve the algorithms’ robustness to adapt to varying clinical inputs and conditions.
  5. Advance real world performance monitoring pilots
    • FDA states that gathering real world performance data on the use of the SaMD is an important risk-mitigation tool, as it may allow manufacturers to understand how their products are being used, how they can be improved, and what safety or usability concerns manufacturers need to address. To provide clarity and direction related to real world performance data, FDA supports the piloting of real world performance monitoring. FDA will develop a framework for gathering, validating and evaluating relevant real world performance parameters [...]





New Podcast | Protecting Your Tech: IP Considerations in Digital Health

Digital health companies are producing innovative products at a rapidly accelerating pace and are experiencing a boom in investment and demand as the regulatory environment becomes more supportive of using digital health services to both improve patient care and stay profitable. Protecting intellectual property (IP) and building a feasible data strategy to support the research and development endeavor are essential steps for companies in their drive toward commercialization and return on their investment. On this episode of the Of Digital Interest podcast, McDermott partners Bernadette Broccolo (Health) and Ahsan Shaikh (IP) explore key issues for digital health companies, their collaboration partners, investors and start-ups to consider, including:

  • What is currently patent eligible in the digital health space?
  • What patent-eligible trends and opportunities are we seeing?
  • How do laws governing data sharing among digital health collaborators impact the research and development effort and associated IP rights?
  • What challenges and opportunities do artificial intelligence, blockchain and machine learning present for digital health innovators?

Listen now





US Office of Management and Budget Calls for Federal Agencies to Reduce Barriers to Artificial Intelligence

On January 7, 2020, the Director of the US Office of Management and Budget (OMB) issued a Draft Memorandum (the Memorandum) to all federal “implementing agencies” regarding the development of regulatory and non-regulatory approaches to reducing barriers to the development and adoption of artificial intelligence (AI) technologies. Implementing agencies are agencies that conduct foundational research, develop and deploy AI technologies, provide educational grants, and regulate and provide guidance for applications of AI technologies, as determined by the co-chairs of the National Science and Technology Council (NSTC) Select Committee. To our knowledge, the NSTC has not yet determined which agencies are “implementing agencies” for purposes of the Memorandum.

Submission of Agency Plan to OMB

The “implementing agencies” have 180 days to submit to OMB their plans for addressing the Memorandum.

An agency’s plan must: (1) identify any statutory authorities specifically governing the agency’s regulation of AI applications as well as collections of AI-related information from regulated entities; and (2) report on the outcomes of stakeholder engagements that identify existing regulatory barriers to AI applications and high-priority AI applications that are within the agency’s regulatory authorities. OMB also requests but does not require agencies to list and describe any planned or considered regulatory actions on AI.

Principles for the Stewardship of AI Applications

The Memorandum outlines the following as principles and considerations that agencies should address in determining regulatory or non-regulatory approaches to AI:

  1. Public trust in AI. Regulatory and non-regulatory approaches to AI need to be reliable, robust and trustworthy.
  2. Public participation. The public should have the opportunity to take part in the rule-making process.
  3. Scientific integrity and information quality. The government should use scientific and technical information and processes when developing a stance on AI.
  4. Risk assessment and management. A risk assessment should be conducted before determining regulatory and non-regulatory approaches.
  5. Benefits and costs. Agencies need to consider the societal costs and benefits related to developing and using AI applications.
  6. Flexibility. Agency approaches to AI should be flexible and performance-based.
  7. Fairness and nondiscrimination. Fairness and nondiscrimination in outcomes need to be considered in both regulatory and non-regulatory approaches.
  8. Disclosure and transparency. Agencies should be transparent. Transparency can serve to improve public trust in AI.
  9. Safety and security. Agencies should guarantee the confidentiality, integrity and availability of data used by AI by ensuring that the proper controls are in place.
  10. Interagency coordination. Agencies need to work together to ensure consistency and predictability of AI-related policies.

(more…)





Reviewing Key Principles from FDA’s Artificial Intelligence White Paper

In April 2019, the US Food and Drug Administration (FDA) issued a white paper, “Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device,” announcing steps to consider a new regulatory framework to promote the development of safe and effective medical devices that use advanced AI algorithms. AI, and specifically ML, are “techniques used to design and train software algorithms to learn from and act on data.” FDA’s proposed approach would allow modifications to algorithms to be made from real-world learning and adaptation that accommodates the iterative nature of AI products while ensuring FDA’s standards for safety and effectiveness are maintained.

Under the existing framework, a premarket submission (i.e., a 510(k)) would be required if the AI/ML software modification significantly affects device performance or the device’s safety and effectiveness; the modification is to the device’s intended use; or the modification introduces a major change to the software as a medical device (SaMD) algorithm. In the case of a PMA-approved SaMD, a PMA supplement would be required for changes that affect safety or effectiveness. FDA noted that adaptive AI/ML technologies require a new total product lifecycle (TPLC) regulatory approach, and the proposed framework focuses on three types of modifications to AI/ML-based SaMD:

(more…)





FDA’s Breakthrough Device Program: Opportunities and Challenges for Device Developers

As part of the 21st Century Cures Act, Congress gave the US Food and Drug Administration (FDA) the authority to establish a Breakthrough Devices Program intended to expedite the development and prioritize the review of certain medical devices that provide for more effective treatment or diagnosis of life-threatening or irreversibly debilitating diseases or conditions. In December 2018, FDA issued a guidance document describing policies FDA intends to use to implement the Program.

There are two criteria for inclusion in the Breakthrough Devices Program:

  1. The device must provide for a more effective treatment or diagnosis of a life-threatening or irreversibly debilitating human disease or condition; and
  2. The device must (i) represent breakthrough technology, (ii) have no approved or cleared alternatives, (iii) offer significant advantages over existing approved or cleared alternatives, or (iv) demonstrate that its availability is in the best interest of patients.

(more…)





Digital Health in the UK: The New Regulatory Environment Under the Medical Device Regulation

Investment in artificial intelligence (AI) and digital health technologies has increased exponentially over the last few years. In the United Kingdom, the excitement and interest in this space has been supported by NHS policies, including proposals in the NHS Long Term Plan, which set out ambitious aims for the acceleration and adoption of digital health and AI, particularly in primary care, outpatient settings and wearable devices.

Although these developments are encouraging to developers, there is still no clear framework for reimbursement or tariffs for digital health tools and AI.

At the same time, the plethora of new technologies has led to increased calls for regulation and oversight, particularly around data quality and evaluation. Many of these concerns may be addressed by the new Medical Device Regulation (MDR) and other regulatory developments. In fact, there is some risk that while the regulatory landscape is moving quickly, the pricing environment is still a way behind.

In May 2020, the new MDR will change the law and the certification process for medical software. The new law includes significant changes for digital health technologies that are medical devices. In March 2019, the National Institute for Health and Care Excellence (NICE) also published a new evidence standards framework for digital health technologies. The Care Quality Commission (CQC) already regulates online provision of health care, and there are calls for wider and greater regulation. The government has also published a code on the use of data in AI.

Digital Health Technologies and the MDR

The new MDR will mean a significant change to the regulatory framework for medical devices in the European Union.

As with the previous law, the MDR regulates devices through a classification system.

The new regime introduces new rules for medical software that falls within the definition of device. This will mean significant changes for companies that develop or offer medical software solutions, especially if their current certification has been “up-classed” under the MDR.

Key Takeaways for Investors in Digital Health Tools

Companies and investors in digital health should:
(more…)





Live Webinar: Developing and Procuring Digital Health AI Solutions: Advice for Developers, Purchasers and Vendors

Join McDermott next Wednesday for a live webinar on the unique considerations in developing and procuring AI solutions for digital health applications from the perspective of various stakeholders. We will discuss the legal issues and strategies surrounding:

  • Research and data mapping essential to the development and validation of AI technologies
  • Protecting and maintaining intellectual property rights in AI solutions
  • Technology development
  • Risk management and mitigation for various contractual arrangements, including contracts with customers, vendors and users

We will also focus on the trends in US law for AI solutions in the digital health space, and present actionable advice that will help you develop an effective strategy for developing and procuring AI solutions for digital health applications.

Developing and Procuring Digital Health AI Solutions: Advice for Developers, Purchasers and Vendors
Wednesday, June 13, 2018 | 11:00 am CT | 12:00 pm ET
Register Here

 




