Big Data

Fiduciary Engagement in Artificial Intelligence Innovation: A Governance Imperative

For most healthcare and life sciences companies, investment in and deployment of AI technology is expected to be a critical strategic component for the foreseeable future. Effective, ongoing governance oversight of AI will therefore be a central organizational concern, and the governance framework itself must reflect and accommodate the highly dynamic nature of AI. Establishing a framework for board decision making and oversight at the earliest possible stage of an organization’s development and implementation of its AI strategy will contribute significantly to the board’s ability to fulfill its fiduciary responsibilities, and thereby enhance the AI initiatives’ trustworthiness and prospects for success.

Click here to read the full article.

Originally published in the June 2021 issue of Health Law Connections, produced by the American Health Law Association.




FDA Issues Artificial Intelligence/Machine Learning Action Plan

On January 12, 2021, the US Food and Drug Administration (FDA) released its Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan. The Action Plan outlines five actions that FDA intends to take to further its oversight of AI/ML-based SaMD:

  1. Further develop the proposed regulatory framework, including through draft guidance on a predetermined change control plan for “learning” ML algorithms
    • FDA intends to publish the draft guidance on the predetermined change control plan in 2021 in order to clarify expectations for SaMD Pre-Specifications (SPS), which explain what “aspects the manufacturer changes through learning,” and Algorithm Change Protocol (ACP), which explains how the “algorithm will learn and change while remaining safe and effective.” The draft guidance will focus on what should be included in an SPS and ACP in order to ensure safety and effectiveness of the AI/ML SaMD algorithms. Other areas of focus include identification of modifications appropriate under the framework and the submission and review process.
  2. Support development of good machine learning practices (GMLP) to evaluate and improve ML algorithms
    • GMLPs are critical in guiding product development and oversight of AI/ML products. FDA has developed relationships with several communities, including the Institute of Electrical and Electronics Engineers P2801 Artificial Intelligence Medical Device Working Group; the International Organization for Standardization/International Electrotechnical Commission Joint Technical Committee 1, Subcommittee 42 on Artificial Intelligence (ISO/IEC JTC 1/SC 42); and the Association for the Advancement of Medical Instrumentation/British Standards Institution Initiative on AI in medical technology. FDA is focused on working with these communities to reach consensus on GMLP requirements.
  3. Foster a patient-centered approach, including transparency
    • FDA would like to increase patient education to ensure that users have important information about the benefits, risks and limitations of AI/ML products. To that end, FDA held a Patient Engagement Advisory meeting in October 2020, and the agency will use input gathered during the meeting to help identify types of information that it will recommend manufacturers include in AI/ML labeling to foster education and promote transparency.
  4. Develop methods to evaluate and improve ML algorithms
    • To address potential racial, ethnic or socio-economic bias that may be inadvertently introduced into AI/ML systems trained on historical datasets, FDA intends to collaborate with researchers to improve methodologies for identifying and eliminating bias, and to improve the algorithms’ robustness in adapting to varying clinical inputs and conditions.
  5. Advance real world performance monitoring pilots
    • FDA states that gathering real world performance data on the use of the SaMD is an important risk-mitigation tool, as it may allow manufacturers to understand how their products are being used, how they can be improved, and what safety or usability concerns manufacturers need to address. To provide clarity and direction related to real world performance data, FDA supports the piloting of real world performance monitoring. FDA will develop a framework for gathering, validating and evaluating relevant real world performance parameters [...]

      Continue Reading



OFAC Advisory Warns of Civil Penalties for Ransomware Payments

On October 1, 2020, the US Department of the Treasury’s Office of Foreign Assets Control (OFAC) issued an advisory warning entities that have been, or may become, victims of ransomware attacks that facilitating ransom payments to sanctioned persons may expose them to civil penalties. As a result, the crucial decision of whether to pay a ransom now carries the additional risk of legal scrutiny by a powerful federal agency and the possibility of steep fines.

Access the article.




Uber Criminal Complaint Raises the Stakes for Breach Response

On August 20, 2020, a criminal complaint was filed charging Joseph Sullivan, Uber’s former chief security officer, with obstruction of justice and misprision of a felony in connection with an alleged attempted cover-up of a 2016 data breach. These are serious charges for which Mr. Sullivan has the presumption of innocence.

At the time of the 2016 data breach, Uber was being investigated by the US Federal Trade Commission (FTC) in connection with a prior data breach that occurred in 2014. According to the complaint, the hackers behind the 2016 breach stole a database containing the personal information of about 57 million Uber users and drivers. The hackers contacted Uber to inform the company of the attack and demanded payment in return for their silence. According to the complaint, Uber’s response was to attempt to recast the breach as a legitimate event under Uber’s “bug bounty” program and pay a bounty. An affidavit submitted with the complaint portrays a detailed story of deliberate steps undertaken by Mr. Sullivan to allegedly conceal the 2016 breach from the FTC, law enforcement and the public.

Contemporaneous with the filing of the complaint, the Department of Justice (DOJ) issued a press release quoting US Attorney for the Northern District of California David L. Anderson:

“We expect good corporate citizenship. We expect prompt reporting of criminal conduct. We expect cooperation with our investigations. We will not tolerate corporate cover-ups. We will not tolerate illegal hush money payments.”

The press release also quoted Federal Bureau of Investigation (FBI) Deputy Special Agent in Charge Craig Fair:

“Concealing information about a felony from law enforcement is a crime. While this case is an extreme example of a prolonged attempt to subvert law enforcement, we hope companies stand up and take notice. Do not help criminal hackers cover their tracks. Do not make the problem worse for your customers, and do not cover up criminal attempts to steal people’s personal data.”

Collectively, the case and statements from the DOJ likely represent a unicorn: if the facts as alleged are true, a deliberate cover-up of a data breach in the course of an active FTC investigation. However, many of the statements from the DOJ and the specific allegations in the complaint appear to have potentially far-reaching implications (for companies, their executives and cybersecurity professionals) that breach response counsel must seriously consider in future incidents.

A common question when responding to a ransomware or other cyberattack is whether and when to inform law enforcement. The criminal complaint has the potential to make this an even more difficult decision for future cyberattack victims. Further, while the alleged conduct at issue may seem particularly egregious, the DOJ’s statements could cause a blurring of the lines between what the government may contend is illegal concealment of a security incident and activities generally thought to be legitimate security incident risk and exposure mitigation. We explore these and other key takeaways from the criminal complaint in more detail below.

[...]

Continue Reading



Preparing Your Data for a Post-COVID-19 World

The US healthcare system’s data infrastructure needs an overhaul to prepare for future health crises, streamline patient care, improve data sharing and accessibility among patients, providers and government entities, and move toward the delivery of coordinated care. With insights from leaders from Arcadia, Validic and McDermott, we recently discussed key analyses and updates on the interoperability and application programming interfaces (API) criteria from the 21st Century Cures Act, stakeholder benefits of healthcare data exchange and data submission facilitation for public health purposes. Click here to listen to the webinar recording, and read on for highlights from the program.

To learn more about the “Around the Corner” webinar series and attend an upcoming program, click here.

PROGRAM INSIGHTS

  • COVID-19 is reshaping healthcare through technology. Hospitals, clinicians and payors need to use digital health tools to address the challenges of the coronavirus (COVID-19) pandemic. How COVID-19 data and health information are captured, and then move through electronic systems, will form the foundation by which digital health tools can become effective in identifying cases, treating them and ensuring favorable outcomes.
  • API certification requirements under the 21st Century Cures Act are designed to enhance the accessibility of electronic health information. The 21st Century Cures Act’s purpose is to advance interoperability, address information blocking, support seamless exchange of electronic health information and promote patient access. Putting data from electronic health records (EHRs) into patients’ hands through consumer-facing apps will empower them to understand and take control of their health.
  • EHR vendors will be required to offer APIs that comply with the Fast Healthcare Interoperability Resources (FHIR) standard by May 1, 2022. The 21st Century Cures Act Final Rule will require EHR vendors to offer FHIR-based APIs that make electronic health information more readily available to third-party applications (apps) of patients’ and providers’ choosing. API standardization will make it easier for third-party developers to build these apps, and for patients and providers to use third-party apps to leverage their electronic health information for various purposes, including health information exchange and population health management.


  • Interoperability refers to the standards that make it possible for different EHR systems to exchange patient medical records and information between providers. Increased interoperability between EHR systems using harmonized standards allows for a more seamless transfer of patient data between providers. The interoperability requirements in the 21st Century Cures Act have the potential to advance patient access to their data and the use of information among physicians.
  • Both providers and patients can drive data exchange. One challenge impacting data exchange between patients and providers is that providers cannot always access or integrate data that patients have created with third-party tools (e.g., fitness trackers). However, there is emerging technology designed to aggregate and standardize consumer-generated health information, enabling the [...]

    Continue Reading
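The interoperability described above rests on FHIR’s standardized resource formats. As an illustration only (a small subset of fields adapted from the FHIR specification’s example patient, not a complete resource), the sketch below parses a minimal Patient resource using Python’s standard library:

```python
import json

# A minimal FHIR R4 Patient resource (illustrative fields only; real
# resources carry many more elements defined by the FHIR specification).
patient_json = """
{
  "resourceType": "Patient",
  "id": "example",
  "name": [{"family": "Chalmers", "given": ["Peter"]}],
  "birthDate": "1974-12-25"
}
"""

patient = json.loads(patient_json)

# Every conformant FHIR resource declares its type, so an app can
# validate what it received before reading further.
assert patient["resourceType"] == "Patient"

display_name = f'{patient["name"][0]["given"][0]} {patient["name"][0]["family"]}'
print(display_name)  # Peter Chalmers
```

Because every conformant API returns resources in this same shape, a third-party app written once against the standard can read data from any certified EHR vendor.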



Public Backlash Calls Use of Facial Recognition Systems into Question

In recent weeks and months, legal and technical issues related to use of facial recognition systems in the United States have received national attention, including concerns that the technology lacks accuracy in identifying non-white individuals and that its widespread use by police departments may play a role in racially discriminatory policing. Privacy considerations will play a key role in the ongoing debate over the future of facial recognition technology.

Facial recognition systems (FRS) are automated or semi-automated technologies that analyze an individual’s features by extracting facial patterns from video or still images. FRS use attributes or features of an individual’s face to create data that can be used for the unique personal identification of a specific individual. FRS use has grown exponentially in recent years. In addition to widespread adoption by law enforcement agencies, FRS are also frequently used in the retail, banking and security sectors, including airport screening.
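The one-to-one verification step at the core of FRS can be sketched in a simplified form. The embeddings and the 0.99 threshold below are illustrative toy values, not those of any real system; production FRS derive embeddings from face images using trained neural networks:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_match(probe, reference, threshold=0.99):
    """One-to-one verification: declare a match only when similarity
    meets the threshold. A higher threshold lowers the false positive
    rate at the cost of more false negatives."""
    return cosine_similarity(probe, reference) >= threshold

# Toy embeddings (real systems derive these from face images).
enrolled = [0.12, 0.85, 0.51]
same_person = [0.13, 0.84, 0.50]
different_person = [0.90, 0.10, 0.40]

print(is_match(same_person, enrolled))       # high similarity
print(is_match(different_person, enrolled))  # low similarity
```

The choice of threshold is exactly the policy lever discussed below: raising it reduces misidentifications, while lowering it returns more (and less reliable) candidate matches.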

In response to the global coronavirus (COVID-19) pandemic, public health agencies and private sector companies have considered ways that FRS might be used in conjunction with proximity and geolocation tracking data to control the disease’s spread. Some foreign governments have implemented extensive biometric and behavioral monitoring to track and contain the spread of the virus, and have used FRS to identify persons who have been in contact with COVID-19-positive individuals and to enforce quarantine or stay-at-home orders. By contrast, use of FRS in the United States already faced opposition because of pre-COVID-19 data privacy concerns, and has encountered increased backlash after the civil rights protests of the past month due to concerns over the technology’s accuracy and accompanying questions regarding its use by law enforcement agencies.

Accuracy Concerns

There are currently no industry standards for the development of FRS, and as a result, FRS algorithms differ significantly in accuracy. A December 2019 National Institute of Standards and Technology (NIST) study, the third in a series conducted through its Face Recognition Vendor Test program, evaluated the effects of factors such as race and sex on facial recognition software. The study analyzed 189 facial recognition algorithms from 99 developers, using collections of photographs with approximately 18 million images of eight million people pulled from databases provided by the US Department of State, the Department of Homeland Security and the Federal Bureau of Investigation. The study found disproportionately higher false positive rates for African American, Asian and Native American faces for one-to-one matching, and higher rates of false positives for African American females for one-to-many matching. This high false positive rate puts African American females at the greatest risk of misidentification. While law enforcement is encouraged to adopt a high threshold recognition percentage—often 99%—for the use of FRS, in reality police departments exercise [...]

Continue Reading




Future Forward: Data Arrangements During and After COVID-19

The need for speedy and more complete access to data is instrumental for healthcare providers, researchers, pharmaceutical, biotech and device companies and public health authorities as they work to quickly identify infection rates, disease trends and outcomes (including antibody prevalence), and opportunities for treatments and vaccines for COVID-19.

A variety of data sharing and collaborations have emerged in the wake of this crisis, such as:

  • Requests and mandates by public health authorities, either directly or via providers’ business associates, requesting real-time information on infections and bed and equipment availability
  • Data sharing collaborations among providers for planning, anticipating and tracking COVID-19 caseloads
  • Data sharing among providers, professional societies and pharmaceutical, biotech and medical device companies in search of testing options, treatment and vaccine solutions, and evaluation of co-morbidities

CLICK HERE TO VIEW THE FULL INFOGRAPHIC.




Consumer Demand in Digital Health Data and Innovation

Digital health companies are producing increasingly innovative products at a rapidly accelerating pace, fueled in large part by the expansive healthcare data ecosystem and the data strategies for harnessing the power of that ecosystem. The essential role these data strategies play makes it imperative to address data-related legal and regulatory considerations at the outset of an innovation initiative and throughout the development and deployment lifecycle, so as to protect your investment in the short and long term.

The Evolution of Digital Health

Digital health today consists of four key components: electronic health records, data analytics, telehealth, and patient and consumer engagement tools. Electronic health records came first, followed very closely by data analytics. Telehealth deployment then increased rapidly in response to demand from patients and providers, the improved care delivery and access it offers and, more recently, expanded reimbursement for telehealth solutions. Each component of digital health was developed somewhat independently, but they have now converged and are interrelated, integral parts of the overall digital health ecosystem.

The patient and consumer engagement dimension of digital health has exploded over the last five years. This is due, in large part, to consumer and patient demand for greater engagement in the management of their healthcare, as well as the entry of disruptors, such as technology service providers, e-commerce companies, consumer products companies and entrepreneurs. At this point in the evolution of the digital health landscape, the patient and consumer engagement tool dimension pulls in all other key components and no digital health consumer engagement tool is complete without the full package.

Data Strategies and Collaborations as Key Innovation Ingredients

No digital health initiative can be developed, pursued or commercialized without data. But the world of data aggregation and analytics has also changed significantly and become immensely complex in recent years. Digital health innovation is no longer working exclusively within the friendly confines of the electronic health record and the carefully regulated, controlled and structured data it holds. Today, digital health innovation relies on massive amounts of data in a variety of types, in various forms, from a wide variety of sources, and through a wide variety of tools, including patient and consumer wearables and mobile devices.

(more…)




Maximizing Your IP Protections in Digital Health

Digital health is experiencing a boom in investment as the regulatory environment becomes more supportive of digital health services. But as companies seek to make the most of their funding and protect the innovations that drive their product, it is imperative that they protect their intellectual property from being copied or duplicated by others in the market.

What exactly is IP?

Intellectual property (IP) is generally non-tangible property. You can hold your laptop in your hands or you can stand on a piece of land — those are both examples of tangible property. Intellectual property cannot be physically held or touched. Protections available for intellectual property generally fall into one of four areas: patents, trade secrets, trademarks and copyrights.

Patent protection offers an additional layer of protection for digital health solutions beyond copyright. For example, a company may be eligible for a patent if it has developed a new approach to identifying data, a new approach to storing data more efficiently, or a new approach to the data structure itself — these are all ways in which innovations could be patentable and help extend protection around data.

How does IP apply to data?

If, in a digital health patent application, a company focuses on innovation for a computer-specific problem—such as keeping data private, keeping data secure, de-identifying data—that is usually a homerun argument to the patent office for crossing the first threshold of eligibility for patenting.

This is one of the few areas where the patent office has made it clear that these ideas and invention types are considered patent eligible. Thereafter, of course, remains the traditional challenge of getting a patent, which is to prove that no one before you has invented what you’ve invented. But lately, in the digital health space, that challenge seems to be less difficult to overcome compared to the eligibility challenge.
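As an illustration of one such computer-specific problem (de-identifying data), the sketch below replaces a direct identifier with a keyed hash. This is a generic pseudonymization technique under assumed names and values, not a method endorsed by the patent office and not, by itself, a HIPAA de-identification standard:

```python
import hashlib
import hmac

# Hypothetical secret key; in practice this would be generated and
# stored securely, never hard-coded.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).
    The same input always maps to the same token, so records can
    still be linked without exposing the identifier itself."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

record = {"patient_id": "MRN-001234", "diagnosis": "J45.909"}
deidentified = {
    "patient_token": pseudonymize(record["patient_id"]),
    "diagnosis": record["diagnosis"],
}
print(deidentified["patient_token"][:12])  # stable, non-reversible token
```

Keying the hash matters: without the secret, an attacker who guesses a plausible identifier cannot simply hash it and confirm a match.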

How to protect IP

(more…)




US Office of Management and Budget Calls for Federal Agencies to Reduce Barriers to Artificial Intelligence

On January 7, 2020, the Director of the US Office of Management and Budget (OMB) issued a Draft Memorandum (the Memorandum) to all federal “implementing agencies” regarding the development of regulatory and non-regulatory approaches to reducing barriers to the development and adoption of artificial intelligence (AI) technologies. Implementing agencies are agencies that conduct foundational research, develop and deploy AI technologies, provide educational grants, and regulate and provide guidance for applications of AI technologies, as determined by the co-chairs of the National Science and Technology Council (NSTC) Select Committee. To our knowledge, the NSTC has not yet determined which agencies are “implementing agencies” for purposes of the Memorandum.

Submission of Agency Plan to OMB

The “implementing agencies” have 180 days to submit to OMB their plans for addressing the Memorandum.

An agency’s plan must: (1) identify any statutory authorities specifically governing the agency’s regulation of AI applications as well as collections of AI-related information from regulated entities; and (2) report on the outcomes of stakeholder engagements that identify existing regulatory barriers to AI applications and high-priority AI applications that are within the agency’s regulatory authorities. OMB also requests but does not require agencies to list and describe any planned or considered regulatory actions on AI.

Principles for the Stewardship of AI Applications

The Memorandum outlines the following as principles and considerations that agencies should address in determining regulatory or non-regulatory approaches to AI:

  1. Public trust in AI. Regulatory and non-regulatory approaches to AI need to be reliable, robust and trustworthy.
  2. Public participation. The public should have the opportunity to take part in the rule-making process.
  3. Scientific integrity and information quality. The government should use scientific and technical information and processes when developing a stance on AI.
  4. Risk assessment and management. A risk assessment should be conducted before determining regulatory and non-regulatory approaches.
  5. Benefits and costs. Agencies need to consider the societal costs and benefits related to developing and using AI applications.
  6. Flexibility. Agency approaches to AI should be flexible and performance-based.
  7. Fairness and nondiscrimination. Fairness and nondiscrimination in outcomes needs to be considered in both regulatory and non-regulatory approaches.
  8. Disclosure and transparency. Agencies should be transparent. Transparency can serve to improve public trust in AI.
  9. Safety and security. Agencies should guarantee the confidentiality, integrity and availability of data used by AI by ensuring that the proper controls are in place.
  10. Interagency coordination. Agencies need to work together to ensure consistency and predictability of AI-related policies.

(more…)



