artificial intelligence

Potential Applications of AI in Health Care

Artificial intelligence (AI) offers powerful new modalities for improving care delivery and access, harnessing previously untapped data, and reducing error and waste. As AI applications proliferate, health industry stakeholders are increasingly exploring how they might integrate these solutions to benefit their providers and patients. This article includes just a small sample of potential applications of AI that address a broad range of needs in health care and life sciences.

To view the full article, “Potential Applications of AI in Healthcare,” click here.

For a deeper dive into the role of AI in healthcare and the board’s governance responsibility, read our June 2021 Health Law Connections article.




Fiduciary Engagement in Artificial Intelligence Innovation: A Governance Imperative

For most healthcare and life sciences companies, investment in and deployment of AI technology is expected to be a critical strategic component for the foreseeable future. Effective, ongoing governance oversight of AI will be a central organizational concern, and the governance framework itself must reflect and be able to accommodate the highly dynamic nature of AI. Establishing a framework for board decision making and oversight at the earliest possible stage of an organization’s development and implementation of its AI strategy will contribute significantly to the board’s ability to fulfill its fiduciary responsibilities, and thereby enhance the AI initiatives’ trustworthiness and prospects for success.

Click here to read the full article.

Originally published in the June 2021 issue of Health Law Connections, produced by the American Health Law Association.




FDA Issues Artificial Intelligence/Machine Learning Action Plan

On January 12, 2021, the US Food and Drug Administration (FDA) released its Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan. The Action Plan outlines five actions that FDA intends to take to further its oversight of AI/ML-based SaMD:

  1. Further develop the proposed regulatory framework, including through draft guidance on a predetermined change control plan for “learning” ML algorithms
    • FDA intends to publish the draft guidance on the predetermined change control plan in 2021 to clarify expectations for SaMD Pre-Specifications (SPS), which explain what “aspects the manufacturer changes through learning,” and the Algorithm Change Protocol (ACP), which explains how the “algorithm will learn and change while remaining safe and effective.” The draft guidance will focus on what should be included in an SPS and ACP to ensure the safety and effectiveness of AI/ML SaMD algorithms. Other areas of focus include identification of modifications appropriate under the framework and the submission and review process.
  2. Support development of good machine learning practices (GMLP) to evaluate and improve ML algorithms
    • GMLPs are critical in guiding product development and oversight of AI/ML products. FDA has developed relationships with several communities, including the Institute of Electrical and Electronics Engineers P2801 Artificial Intelligence Medical Device Working Group, the International Organization for Standardization/Joint Technical Committee 1/Subcommittee 42 (ISO/IEC JTC 1/SC 42) – Artificial Intelligence, and the Association for the Advancement of Medical Instrumentation/British Standards Institution Initiative on AI in medical technology. FDA is focused on working with these communities to reach consensus on GMLP requirements.
  3. Foster a patient-centered approach, including transparency
    • FDA would like to increase patient education to ensure that users have important information about the benefits, risks and limitations of AI/ML products. To that end, FDA held a Patient Engagement Advisory meeting in October 2020, and the agency will use input gathered during the meeting to help identify types of information that it will recommend manufacturers include in AI/ML labeling to foster education and promote transparency.
  4. Develop methods to evaluate and improve ML algorithms
    • To address potential racial, ethnic or socio-economic bias that may be inadvertently introduced into AI/ML systems trained on historical datasets, FDA intends to collaborate with researchers to improve methodologies for the identification and elimination of bias, and to improve the algorithms’ robustness to adapt to varying clinical inputs and conditions.
  5. Advance real world performance monitoring pilots
    • FDA states that gathering real world performance data on the use of the SaMD is an important risk-mitigation tool, as it may allow manufacturers to understand how their products are being used, how they can be improved, and what safety or usability concerns manufacturers need to address. To provide clarity and direction related to real world performance data, FDA supports the piloting of real world performance monitoring. FDA will develop a framework for gathering, validating and evaluating relevant real world performance parameters [...]

      Continue Reading



US Office of Management and Budget Calls for Federal Agencies to Reduce Barriers to Artificial Intelligence

On January 7, 2020, the Director of the US Office of Management and Budget (OMB) issued a Draft Memorandum (the Memorandum) to all federal “implementing agencies” regarding the development of regulatory and non-regulatory approaches to reducing barriers to the development and adoption of artificial intelligence (AI) technologies. Implementing agencies are agencies that conduct foundational research, develop and deploy AI technologies, provide educational grants, and regulate and provide guidance for applications of AI technologies, as determined by the co-chairs of the National Science and Technology Council (NSTC) Select Committee. To our knowledge, the NSTC has not yet determined which agencies are “implementing agencies” for purposes of the Memorandum.

Submission of Agency Plan to OMB

The “implementing agencies” have 180 days to submit to OMB their plans for addressing the Memorandum.

An agency’s plan must: (1) identify any statutory authorities specifically governing the agency’s regulation of AI applications as well as collections of AI-related information from regulated entities; and (2) report on the outcomes of stakeholder engagements that identify existing regulatory barriers to AI applications and high-priority AI applications that are within the agency’s regulatory authorities. OMB also requests but does not require agencies to list and describe any planned or considered regulatory actions on AI.

Principles for the Stewardship of AI Applications

The Memorandum outlines the following as principles and considerations that agencies should address in determining regulatory or non-regulatory approaches to AI:

  1. Public trust in AI. Regulatory and non-regulatory approaches to AI should promote reliable, robust and trustworthy AI applications.
  2. Public participation. The public should have the opportunity to take part in the rule-making process.
  3. Scientific integrity and information quality. The government should use scientific and technical information and processes when developing a stance on AI.
  4. Risk assessment and management. A risk assessment should be conducted before determining regulatory and non-regulatory approaches.
  5. Benefits and costs. Agencies need to consider the societal costs and benefits related to developing and using AI applications.
  6. Flexibility. Agency approaches to AI should be flexible and performance-based.
  7. Fairness and nondiscrimination. Fairness and nondiscrimination in outcomes needs to be considered in both regulatory and non-regulatory approaches.
  8. Disclosure and transparency. Agencies should be transparent. Transparency can serve to improve public trust in AI.
  9. Safety and security. Agencies should safeguard the confidentiality, integrity and availability of data used by AI by ensuring that the proper controls are in place.
  10. Interagency coordination. Agencies need to work together to ensure consistency and predictability of AI-related policies.

(more…)




Reviewing Key Principles from FDA’s Artificial Intelligence White Paper

In April 2019, the US Food and Drug Administration (FDA) issued a white paper, “Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device,” announcing steps to consider a new regulatory framework to promote the development of safe and effective medical devices that use advanced AI algorithms. AI and ML are “techniques used to design and train software algorithms to learn from and act on data.” FDA’s proposed approach would allow algorithm modifications driven by real-world learning and adaptation, accommodating the iterative nature of AI products while ensuring that FDA’s standards for safety and effectiveness are maintained.

Under the existing framework, a premarket submission (i.e., a 510(k)) would be required if an AI/ML software modification significantly affects device performance or the device’s safety and effectiveness; the modification changes the device’s intended use; or the modification introduces a major change to the software as a medical device (SaMD) algorithm. In the case of a PMA-approved SaMD, a PMA supplement would be required for changes that affect safety or effectiveness. FDA noted that adaptive AI/ML technologies require a new total product lifecycle (TPLC) regulatory approach, and the white paper focuses on three types of modifications to AI/ML-based SaMD:

(more…)




FDA’s Breakthrough Device Program: Opportunities and Challenges for Device Developers

As part of the 21st Century Cures Act, Congress gave the US Food and Drug Administration (FDA) the authority to establish a Breakthrough Devices Program intended to expedite the development and prioritize the review of certain medical devices that provide for more effective treatment or diagnosis of life-threatening or irreversibly debilitating diseases or conditions. In December 2018, FDA issued a guidance document describing the policies FDA intends to use to implement the Program.

There are two criteria for inclusion in the Breakthrough Devices Program:

  1. The device must provide for a more effective treatment or diagnosis of a life-threatening or irreversibly debilitating human disease or condition; and
  2. The device must (i) represent breakthrough technology, (ii) have no approved or cleared alternatives, (iii) offer significant advantages over existing approved or cleared alternatives, or (iv) demonstrate that its availability is in the best interest of patients.

(more…)




Digital Health in the UK: The New Regulatory Environment Under the Medical Device Regulation

Investment in artificial intelligence (AI) and digital health technologies has increased exponentially over the last few years. In the United Kingdom, the excitement and interest in this space has been supported by NHS policies, including proposals in the NHS Long Term Plan, which set out ambitious aims for the acceleration and adoption of digital health and AI, particularly in primary care, outpatients and wearable devices.

Although these developments are encouraging to developers, there is still no clear framework for reimbursement or tariffs for digital health tools and AI.

At the same time, the plethora of new technologies has led to increased calls for regulation and oversight, particularly around data quality and evaluation. Many of these concerns may be addressed by the new Medical Device Regulation (MDR) and other regulatory developments. In fact, there is some risk that while the regulatory landscape is moving quickly, the pricing environment remains some way behind.

In May 2020, the new MDR will change the law and process of certification for medical software. The new law includes significant changes for digital health technologies that are medical devices. In March 2019, the National Institute for Health and Care Excellence (NICE) also published a new evidence standards framework for digital health technologies. The Care Quality Commission (CQC) already regulates online provision of health care, and there are calls for wider and greater regulation. The government has also published a code on the use of data in AI.

Digital Health Technologies and the MDR

The new MDR will mean a significant change to the regulatory framework for medical devices in the European Union.

As with the previous law, the MDR regulates devices through a classification system.

The new regime introduces new rules for medical software that falls within the definition of a device. This will mean significant changes for companies that develop or offer medical software solutions, especially if their software has been “up-classed” under the MDR.

Key Takeaways for Investors in Digital Health Tools

Companies and investors in digital health should:
(more…)




Live Webinar: Developing and Procuring Digital Health AI Solutions: Advice for Developers, Purchasers and Vendors

Join McDermott next Wednesday for a live webinar on the unique considerations in developing and procuring AI solutions for digital health applications from the perspective of various stakeholders. We will discuss the legal issues and strategies surrounding:

  • Research and data mapping essential to the development and validation of AI technologies
  • Protecting and maintaining intellectual property rights in AI solutions
  • Technology development
  • Risk management and mitigation for various contractual arrangements, including contracts with customers, vendors and users

We will also focus on the trends in US law for AI solutions in the digital health space, and present actionable advice that will help you develop an effective strategy for developing and procuring AI solutions for digital health applications.

Developing and Procuring Digital Health AI Solutions: Advice for Developers, Purchasers and Vendors
Wednesday, June 13, 2018 | 11:00 am CT | 12:00 pm ET
Register Here

 




Surfing “Tech’s Next Big Wave”: Navigating the Legal Challenges in Digital Health

Fortune’s April 2018 cover story, “Tech’s Next Big Wave: Big Data Meets Biology,” conveys loudly and clearly that technological innovation is transforming the health care continuum—changing the way care is delivered, as well as how patients manage their ongoing health—and as patient demand for health innovation increases, more companies seem eager to hop on the digital health bandwagon. The article provides a thoughtful, realistic (and somewhat sobering) perspective on digital health innovation’s successes and other results to date. It also quite effectively uses real world stories to convey the human dimension of digital health. One is the story of a mother who manually sampled and recorded her son’s glucose levels 20 times a day before an automated monitoring system connected to a mobile app allowed them both to live their lives without constant interruption by this critical care management function. Another describes use of an artificial intelligence “command center” to expedite access to life-saving surgery by a man with an aortic dissection. These real-world examples drive home the fact that digital health is already making a profound difference in our lives by removing barriers to care that are critical to saving lives and managing chronic diseases.

What the article does not touch on, however, are the myriad complex legal challenges that must be addressed at the earliest stages of the planning process, and the intensifying interest of government oversight and enforcement bodies, such as the Federal Trade Commission, the Food and Drug Administration, the Office for Civil Rights of the Department of Health and Human Services, and the Securities and Exchange Commission, all of which are interested in protecting the safety and privacy of patients and consumers. Just last month, we saw the SEC charge Theranos CEO Elizabeth Holmes with fraud for allegedly misleading investors about the company’s ability to detect health conditions from a small sample of blood. Earlier this year, another “unicorn” start-up, Outcome Health, settled with the federal government after The Wall Street Journal reported that it allegedly misled advertisers with manipulated information. The United States has also brought claims against the private equity investor in a compounding pharmacy that allegedly paid illegal kickbacks to marketing firms to induce prescriptions written by telemedicine providers for costly compounded drugs reimbursed by TRICARE.

Opportunities and Challenges of the Patient Data “Gold Rush”

Eric Topol, MD, director at the Scripps Research Institute, told Fortune that “the quest to retrieve, analyze and leverage” data “has become the new gold rush. And a vanguard of tech titans—not to mention a bevy of hot startups—are on the hunt for it.” There is no doubt that harnessing and analyzing big data provide virtually limitless fuel for digital health innovation of the type patients and consumers are demanding and that tech companies are eager to develop and commercialize. While optimism about the quest for big data is certainly justified, it must be tempered by caution and careful consideration of complex, multi-dimensional legal [...]

Continue Reading




Order now: The Law of Digital Health Book

We are pleased to present The Law of Digital Health, a new book edited and authored by McDermott’s team of distinguished digital health lawyers and published by AHLA. It is designed to provide business leaders and their key advisors with the knowledge and insight they need to grow and sustain successful digital health initiatives.

Visit www.mwe.com/lawofdigitalhealth to order this comprehensive legal and regulatory analysis, coupled with practical planning and implementation strategies. You can also download the Executive Summary and hear more about how digital health is quickly and dynamically changing the health care landscape.

Explore more!



