machine-learning

Inside the ONC’s Plans to Regulate Health AI: Proposal on Predictive Decision Support Interventions

The Department of Health and Human Services Office of the National Coordinator for Health Information Technology (ONC) published a notice of proposed rulemaking on April 18, 2023, titled “Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing,” or HTI-1.

HTI-1 includes proposals that would create new transparency and risk management expectations for artificial intelligence (AI) and machine learning (ML) technology that aids decision-making in healthcare. Importantly, ONC’s proposal includes requirements for a broad set of technologies based on algorithms or models that could be used by many types of users, for any purpose, irrespective of the technology’s risk profile and regardless of who developed the algorithm or model. If finalized, ONC’s HTI-1 proposal for the AI/ML technologies it defines as “Predictive Decision Support Interventions” will significantly impact the development, deployment and use of AI/ML tools in healthcare. As of June 14, 2023, comments on the HTI-1 proposed rule are due by June 20, 2023.


FDA Issues Artificial Intelligence/Machine Learning Action Plan

On January 12, 2021, the US Food and Drug Administration (FDA) released its Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan. The Action Plan outlines five actions that FDA intends to take to further its oversight of AI/ML-based SaMD:

  1. Further develop the proposed regulatory framework, including through draft guidance on a predetermined change control plan for “learning” ML algorithms
    • FDA intends to publish the draft guidance on the predetermined change control plan in 2021 in order to clarify expectations for SaMD Pre-Specifications (SPS), which explain what “aspects the manufacturer changes through learning,” and Algorithm Change Protocol (ACP), which explains how the “algorithm will learn and change while remaining safe and effective.” The draft guidance will focus on what should be included in an SPS and ACP in order to ensure safety and effectiveness of the AI/ML SaMD algorithms. Other areas of focus include identification of modifications appropriate under the framework and the submission and review process.
  2. Support development of good machine learning practices (GMLP) to evaluate and improve ML algorithms
    • GMLPs are critical in guiding product development and oversight of AI/ML products. FDA has developed relationships with several communities, including the Institute of Electrical and Electronics Engineers P2801 Artificial Intelligence Medical Device Working Group, the International Organization for Standardization/Joint Technical Committee 1/Subcommittee 42 (ISO/IEC JTC 1/SC 42) – Artificial Intelligence, and the Association for the Advancement of Medical Instrumentation/British Standards Institution Initiative on AI in medical technology. FDA is focused on working with these communities to come to a consensus on GMLP requirements.
  3. Foster a patient-centered approach, including transparency
    • FDA would like to increase patient education to ensure that users have important information about the benefits, risks and limitations of AI/ML products. To that end, FDA held a Patient Engagement Advisory meeting in October 2020, and the agency will use input gathered during the meeting to help identify types of information that it will recommend manufacturers include in AI/ML labeling to foster education and promote transparency.
  4. Develop methods to evaluate and improve ML algorithms
    • To address potential racial, ethnic or socio-economic bias that may be inadvertently introduced into AI/ML systems that are trained using data from historical datasets, FDA intends to collaborate with researchers to improve methodologies for the identification and elimination of bias, and to improve the algorithms’ robustness to adapt to varying clinical inputs and conditions.
  5. Advance real world performance monitoring pilots
    • FDA states that gathering real world performance data on the use of SaMD is an important risk-mitigation tool, as it may allow manufacturers to understand how their products are being used, how they can be improved, and what safety or usability concerns they need to address. To provide clarity and direction related to real world performance data, FDA supports the piloting of real world performance monitoring. FDA will develop a framework for gathering, validating and evaluating relevant real world performance parameters [...]


New Podcast | Protecting Your Tech: IP Considerations in Digital Health

Digital health companies are producing innovative products at a rapidly accelerating pace and are experiencing a boom in investment and demand as the regulatory environment becomes more supportive of using digital health services both to improve patient care and to remain profitable. Protecting intellectual property (IP) and building a feasible data strategy to support the research and development endeavor are essential steps for companies in their drive toward commercialization and return on their investment. On this episode of the Of Digital Interest podcast, McDermott partners Bernadette Broccolo (Health) and Ahsan Shaikh (IP) explore key issues for digital health companies, their collaboration partners and investors, and start-ups to consider, including:

  • What is currently patent eligible in the digital health space?
  • What patent-eligibility trends and opportunities are we seeing?
  • How do laws governing data sharing among digital health collaborators impact the research and development effort and associated IP rights?
  • What challenges and opportunities do artificial intelligence, blockchain and machine learning present for digital health innovators?


US Office of Management and Budget Calls for Federal Agencies to Reduce Barriers to Artificial Intelligence

On January 7, 2020, the Director of the US Office of Management and Budget (OMB) issued a Draft Memorandum (the Memorandum) to all federal “implementing agencies” regarding the development of regulatory and non-regulatory approaches to reducing barriers to the development and adoption of artificial intelligence (AI) technologies. Implementing agencies are agencies that conduct foundational research, develop and deploy AI technologies, provide educational grants, and regulate and provide guidance for applications of AI technologies, as determined by the co-chairs of the National Science and Technology Council (NSTC) Select Committee. To our knowledge, the NSTC has not yet determined which agencies are “implementing agencies” for purposes of the Memorandum.

Submission of Agency Plan to OMB

The “implementing agencies” have 180 days to submit to OMB their plans for addressing the Memorandum.

An agency’s plan must: (1) identify any statutory authorities specifically governing the agency’s regulation of AI applications as well as collections of AI-related information from regulated entities; and (2) report on the outcomes of stakeholder engagements that identify existing regulatory barriers to AI applications and high-priority AI applications that are within the agency’s regulatory authorities. OMB also requests, but does not require, that agencies list and describe any planned or considered regulatory actions on AI.

Principles for the Stewardship of AI Applications

The Memorandum outlines the following as principles and considerations that agencies should address in determining regulatory or non-regulatory approaches to AI:

  1. Public trust in AI. Regulatory and non-regulatory approaches to AI need to be reliable, robust and trustworthy.
  2. Public participation. The public should have the opportunity to take part in the rule-making process.
  3. Scientific integrity and information quality. The government should use scientific and technical information and processes when developing a stance on AI.
  4. Risk assessment and management. A risk assessment should be conducted before determining regulatory and non-regulatory approaches.
  5. Benefits and costs. Agencies need to consider the societal costs and benefits related to developing and using AI applications.
  6. Flexibility. Agency approaches to AI should be flexible and performance-based.
  7. Fairness and nondiscrimination. Fairness and nondiscrimination in outcomes need to be considered in both regulatory and non-regulatory approaches.
  8. Disclosure and transparency. Agencies should be transparent. Transparency can serve to improve public trust in AI.
  9. Safety and security. Agencies should guarantee the confidentiality, integrity and availability of data used by AI by ensuring that the proper controls are in place.
  10. Interagency coordination. Agencies need to work together to ensure consistency and predictability of AI-related policies.


Reviewing Key Principles from FDA’s Artificial Intelligence White Paper

In April 2019, the US Food and Drug Administration (FDA) issued a white paper, “Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device,” announcing steps to consider a new regulatory framework to promote the development of safe and effective medical devices that use advanced AI algorithms. AI, and specifically ML, are “techniques used to design and train software algorithms to learn from and act on data.” FDA’s proposed approach would allow modifications to algorithms to be made from real-world learning and adaptation that accommodates the iterative nature of AI products while ensuring FDA’s standards for safety and effectiveness are maintained.

Under the existing framework, a premarket submission (i.e., a 510(k)) would be required if the AI/ML software modification significantly affects device performance or the device’s safety and effectiveness; the modification is to the device’s intended use; or the modification introduces a major change to the software as a medical device (SaMD) algorithm. In the case of a PMA-approved SaMD, a PMA supplement would be required for changes that affect safety or effectiveness. FDA noted that adaptive AI/ML technologies require a new total product lifecycle (TPLC) regulatory approach, and the white paper focuses on three types of modifications to AI/ML-based SaMD.


On the Digital Health Frontier: Developments Driving Industry Change in 2018

As digital health innovation continues to move at light speed, both new and incumbent stakeholders find themselves on a new frontier—one that challenges traditional health care delivery and payment frameworks, in addition to changing the landscape for product research, development and commercialization. Modernization of the existing legal framework has not kept pace with the rate of digital health innovation, leaving no shortage of obstacles, misalignment and ambiguity for those in its wake.

What did we learn in 2017, and what’s to come on the digital health frontier in the year ahead? From advances and investments in artificial intelligence (AI) and machine learning (ML) to the increasingly complex convergence of health care innovation and policy, McDermott’s Digital Health Year in Review details the key developments that shaped digital health in 2017, along with planning considerations and predictions for the health care and life science industries in 2018.





Artificial Intelligence in Health Care: Framework Needed

Although the incorporation of technology into human endeavours—commercial, political and personal—is a normal component of technological innovation, the advent of artificial intelligence technology is producing significant challenges we have not felt or understood with earlier innovations. For many years, for example, there has been speculation, research and public debate about the impact of the internet, the functioning of search engines, and online advertising techniques on commercial and political decisions.

The alleged “hacking” of the 2016 US presidential election, and the concerns about such activities in the 2017 European elections, will only heighten the intertwined discussions on free speech, national sovereignty, cybersecurity and the nature of privacy.

The use of artificial intelligence and machine-learning technologies has only added to the list of issues and areas of concern. The consequences of automobile accidents involving “self-driving” technologies, the “flash crashes” on securities markets due to algorithmic trading, and bias in systems designed to determine benefit eligibility are requiring us to consider what happens when we defer judgment to machines, and are highlighting the importance of quality in data sets and sensors.

