Although the Illinois Biometric Information Privacy Act has been on the books for almost 10 years, a recent surge in lawsuits has likely been brought on by developments in biometric scanning technology and its increased use in the workplace. At least 32 class action lawsuits have been filed in recent months by Illinois residents in state court challenging the collection, use and storage of biometric data by companies in the state. The surge may prompt companies to reevaluate their strategies and develop new defenses as they deploy advancing biometric technology.

Read “To Scan or Not to Scan: Surge in Lawsuits under Illinois Biometrics Law.”

In September, the Office of the National Coordinator for Health Information Technology (ONC) announced that it is scaling back requirements for third-party certification of criteria related to certified electronic health record (EHR) technology (CEHRT). Going forward, ONC will allow health IT developers to self-declare their products’ conformance with 30 of the 55 certification criteria.

ONC will also exercise discretion and not enforce the requirement that certification bodies conduct randomized surveillance of two percent of the health IT certifications they issue.

Read “ONC’s De-Regulatory Announcement Aims at Enticing Industry to Adopt 2015 Edition Criteria.”

Copyright 2017, American Health Lawyers Association, Washington, DC. Reprint permission granted.

The Electronic Health Records (EHR) Incentive Program run by the Centers for Medicare and Medicaid Services (CMS) garnered attention again last week following the release of a report by the Office of Inspector General of the US Department of Health and Human Services (OIG) describing inappropriate payments to physicians under the program. The report follows on the heels of a high-profile settlement under the False Claims Act between the US Department of Justice and an EHR vendor related to the certified electronic health record technology (CEHRT) used in the EHR Incentive Program (which we’ve previously discussed in depth).

The OIG reviewed payments to 100 eligible professionals (EPs) who received EHR incentive payments between May 2011 and June 2014 and identified 14 inappropriate payments. OIG extrapolated the results of the review to the 250,470 total EPs who received incentive payments during that time period and estimated that CMS made approximately $729 million in inappropriate EHR incentive payments out of a total of just over $6 billion in such payments during the review period.

Continue reading “OIG Reports More Than $731 Million in Inappropriate Medicare Meaningful Use Payments.”
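The arithmetic behind a sample-based extrapolation like OIG’s can be sketched in a few lines. The inputs below are the figures reported above, but the naive proportional calculation is illustrative only; OIG’s actual estimate was produced with a formal statistical sample design, which is why its roughly $729 million figure does not equal the simple rate-times-total result.

```python
# Back-of-the-envelope illustration of sample-based extrapolation
# (not OIG's actual statistical methodology).

sample_size = 100              # EPs in OIG's review
inappropriate_in_sample = 14   # inappropriate payments identified
population = 250_470           # total EPs paid May 2011 - June 2014
total_paid = 6.0e9             # just over $6 billion in incentive payments

error_rate = inappropriate_in_sample / sample_size    # 14%
estimated_eps_affected = error_rate * population      # ~35,000 EPs
naive_dollar_estimate = error_rate * total_paid       # ~$0.84 billion

print(f"Sample error rate: {error_rate:.0%}")
print(f"Estimated EPs with inappropriate payments: {estimated_eps_affected:,.0f}")
print(f"Naive dollar estimate: ${naive_dollar_estimate / 1e9:.2f} billion "
      f"(OIG's weighted estimate: ~$0.73 billion)")
```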

On March 23, 2017, the New York Attorney General’s office announced that it had settled with the developers of three mobile health (mHealth) applications (apps) over, among other things, allegedly misleading commercial claims. The settlement highlights for mHealth app developers the importance of systematically gathering sufficient evidence to support their commercial claims.

Read the full article.

After three government agencies collectively created an online tool to help developers navigate federal regulations impacting mobile health apps, McDermott partner Jennifer Geetter was interviewed by FierceMobileHealthcare on the need for mobile health development tools.

Read the full article from FierceMobileHealthcare.

At a recent public workshop, Dr. Janet Woodcock, director of the U.S. Food and Drug Administration’s (FDA) Center for Drug Evaluation and Research (CDER), announced plans to expand the agency’s use of the Sentinel infrastructure to conduct post-market effectiveness studies.

Sentinel is an electronic surveillance system that aggregates data from the electronic medical records, claims and registries of voluntarily participating organizations, allowing the agency to track the safety of marketed drugs, biologics and medical devices. As of August 2015, the Sentinel database included information on 193 million individuals, 4.8 billion instances of prescription dispensing, 5.5 billion unique encounters and 51 million acute inpatient stays.

The FDA currently uses the system to assess post-market safety issues. However, in a February 3, 2016, workshop, Dr. Woodcock announced that the FDA is in the early stages of adapting the Sentinel infrastructure to develop the “Guardian” system, which the agency intends to use to “actively gather information about the performance of regulated medical products” used in health care. At the same workshop, Dr. Steven Anderson of the FDA’s Center for Biologics Evaluation and Research (CBER) described the Guardian system as a parallel system to Sentinel that will rely on the Sentinel infrastructure to assess product effectiveness. According to Dr. Anderson, the FDA is currently assessing the feasibility of using Sentinel to perform effectiveness studies, and over the next five years, intends to develop the system to support a range of clinical trial designs.

The FDA envisions that the Guardian system will help the agency and external researchers answer questions about the performance of medical products quickly and less expensively than the costly, time-consuming clinical investigations that would otherwise be required. The FDA did not specifically address how the agency intends to use the effectiveness data developed using the Guardian system.

The proposed Guardian system represents the FDA’s latest attempt to harness the power of “big data” and to participate in the changes precipitated by digital health strategies and tools to address FDA priorities. In 2014, the FDA launched its openFDA initiative, which gives the general public access to several of the agency’s public data sets (e.g., adverse event reports). In December 2015, the FDA launched a beta version of precisionFDA, an online, cloud-based platform intended to allow scientists from the public and private sectors to test, pilot and validate existing and new bioinformatics approaches for processing the large amounts of data collected using next-generation sequencing (NGS) technology.

The FDA’s efforts to launch the Guardian system mirror “big data” initiatives by other private and public stakeholders seeking to leverage data capture and data mining to pursue important public health, quality improvement, research and cost-containment efforts.

After intense negotiations, and after the official deadline had passed on Sunday, 31 January 2016, the United States and the European Union finally agreed on a new set of rules, the “EU-U.S. Privacy Shield,” for data transfers across the Atlantic. The Privacy Shield replaces the old Safe Harbor agreement, which was struck down by the European Court of Justice (ECJ) in October 2015. Critics already predict that the Privacy Shield will share Safe Harbor’s fate and will be declared invalid by the ECJ; nevertheless, until such a decision exists, the Privacy Shield should give companies legal certainty when transferring data to the United States.

While the text of the new agreement has not yet been published, European Commissioner Věra Jourová stated that the Privacy Shield should be in place in the next few weeks. According to a press release from the European Commission, the new arrangement

…will provide stronger obligations on companies in the U.S. to protect the personal data of Europeans and stronger monitoring and enforcement by the U.S. Department of Commerce and Federal Trade Commission (FTC), including through increased cooperation with European Data Protection Authorities. The new arrangement includes commitments by the U.S. that possibilities under U.S. law for public authorities to access personal data transferred under the new arrangement will be subject to clear conditions, limitations and oversight, preventing generalized access. Europeans will have the possibility to raise any enquiry or complaint in this context with a dedicated new Ombudsperson.

One of the best-known critics of U.S. data processing practices and the initiator of the ECJ Safe Harbor case, the Austrian Max Schrems, has already reacted to the news. Schrems stated on social media that the ECJ Safe Harbor decision explicitly says that “generalized access to content of communications” by intelligence agencies violates the fundamental right to respect for privacy. Commissioner Jourová, referring to the Privacy Shield, stated that “generalized access … may happen in very rare cases,” a statement that could be viewed as contradicting the ECJ decision. Critics also argue that an informal commitment made by the United States during negotiations with the European Union is not something on which European citizens could base lawsuits in the United States if their data is transferred or used illegally.

The European Commission will now prepare a draft text for the Privacy Shield, which still must be ratified by the Member States. The European Parliament will also review the draft text. In the meantime, the United States will make the necessary preparations to put in place the new framework, monitoring mechanisms and new Ombudsperson.

On January 6, the Federal Trade Commission (FTC) released a report that it hopes will educate organizations on the important laws and research that are relevant to big data analytics. The report, Big Data: A Tool for Inclusion or Exclusion? Understanding the Issues, looks specifically at how big data is used after it is collected and analyzed and provides suggestions aimed at maximizing the benefits and minimizing the risks of using big data.

Risks and Rewards

The report argues that big data analytics can provide numerous opportunities for improvements in society. In addition to more effectively matching products and services to consumers, big data can create opportunities for low-income and underserved communities. The report highlights a number of innovative uses of big data that benefit underserved populations, such as increased educational attainment, access to credit through nontraditional methods, specialized health care for underserved communities and better access to employment.

At the same time, the report shows that potential inaccuracies and biases might lead to detrimental effects for low-income and underserved populations. For example, organizations could use big data in ways that inadvertently exclude low-income and underserved communities from credit and employment opportunities, which may reinforce existing disparities or weaken the effectiveness of consumer choice.

Considerations for Using Big Data

The report outlines some of the consumer protection laws (in particular, the Fair Credit Reporting Act and the FTC Act) and equal opportunity laws that apply to the use of big data, especially with regard to possible issues of discrimination or exclusion. It also recommends that an organization consider the following questions to help ensure that its use of big data analytics does not lead to unlawful exclusion or discrimination:

How representative is your data set? 

If the data set is missing information from particular populations, take appropriate steps to address this problem.

Does your data model account for biases? 

Review data sets and algorithms to ensure that hidden biases do not have an unintended impact on certain populations (a brief screening sketch follows this list).

How accurate are your predictions based on big data? 

Balance the risks of using correlative results, especially where the business’s policies could negatively affect certain populations.

Does your reliance on big data cause ethical or fairness concerns?

Consider whether fairness and ethical considerations advise against using big data in certain circumstances and whether the business can use big data in ways that advance opportunities for previously underrepresented populations.
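To make the first two questions concrete, the sketch below shows one way an organization might screen a data set for representativeness and disparate outcomes. It is a minimal illustration under stated assumptions: the column names, benchmark shares and the four-fifths threshold mentioned in the comments are inventions for the example, not requirements drawn from the FTC report.

```python
# Minimal screening sketch, assuming a pandas DataFrame with a demographic
# column and a binary outcome column. Column names, benchmark shares and
# the four-fifths threshold are illustrative assumptions only.
import pandas as pd


def representation_report(df: pd.DataFrame, group_col: str,
                          benchmark: dict) -> pd.DataFrame:
    """Compare each group's share of the data set to a population benchmark."""
    shares = df[group_col].value_counts(normalize=True)
    report = pd.DataFrame({"dataset_share": shares,
                           "population_share": pd.Series(benchmark)})
    # Ratios well below 1.0 suggest the group is underrepresented in the data.
    report["ratio"] = report["dataset_share"] / report["population_share"]
    return report


def adverse_impact_ratios(df: pd.DataFrame, group_col: str,
                          outcome_col: str) -> pd.Series:
    """Favorable-outcome rate of each group relative to the best-off group.

    Ratios below roughly 0.8 (the EEOC's "four-fifths" rule of thumb from
    the employment context) are a common red flag worth investigating.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()


# Toy example: 100 credit decisions across two hypothetical regions.
df = pd.DataFrame({"zip_region": ["urban"] * 70 + ["rural"] * 30,
                   "approved":   [1] * 60 + [0] * 10 + [1] * 15 + [0] * 15})
print(representation_report(df, "zip_region", {"urban": 0.55, "rural": 0.45}))
print(adverse_impact_ratios(df, "zip_region", "approved"))
```

In the toy data, the rural group is both underrepresented relative to the benchmark and approved at roughly 58 percent of the urban rate, below the illustrative 0.8 threshold, which is exactly the kind of pattern the FTC’s questions are designed to surface.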

Monitoring and Enforcement Ahead

The FTC stated that its collective challenge is to make sure that big data analytics continue to provide benefits and opportunities to consumers while adhering to core consumer protection values and principles. It has committed to continue monitoring areas where big data practices could violate existing laws and to bring enforcement actions where appropriate. With that in mind, organizations that already use big data, and those that have been persuaded by its reported benefits, should heed the FTC’s advice. The FTC is highlighting its interest in the consumer protection and equal opportunity ramifications of big data use. This report serves as a warning and a statement of intent: the FTC will be evaluating data practices in light of these concerns. It is clear that organizations must identify and mitigate the risks of using big data, not only those relating to privacy and data protection but also those presenting consumer protection and equal opportunity issues. Thinking critically about the considerations listed above, taking corrective action in line with them, and creating a record that such steps have been taken may help organizations using big data avoid FTC regulatory scrutiny.

Earlier today, the Court of Justice of the European Union (CJEU) announced its determination that the U.S.-EU Safe Harbor program is no longer a “safe” (i.e., legally valid) means for transferring personal data of EU residents from the European Union to the United States.

The CJEU determined that the European Commission’s 2000 decision (Safe Harbor Decision) validating the Safe Harbor program did not and “cannot eliminate or even reduce the powers” available to the data protection authority (DPA) of each EU member country. Specifically, the CJEU opinion states that a DPA can determine for itself whether the Safe Harbor program provides an “adequate” level of personal data protection (i.e., “a level of protection of fundamental rights and freedoms that is essentially equivalent to that guaranteed within the European Union” as required by the EU Data Protection Directive (95/46/EC)).

The CJEU based its decision invalidating the Safe Harbor Decision in part on the determination that the U.S. government conducts “indiscriminate surveillance and interception carried out … on a large scale.”

The plaintiff in the case that gave rise to the CJEU opinion, Maximilian Schrems (see background below), issued his first public statement praising the CJEU for a decision that “clarifies that mass surveillance violates our fundamental rights.”

Schrems also made reference to the need for “reasonable legal redress,” referring to the U.S. Congress’s Judicial Redress Act of 2015. The Judicial Redress Act, which has bipartisan support, would allow EU residents to bring civil actions in U.S. courts to address “unlawful disclosures of records maintained by an agency” of the U.S. government.

Edward Snowden also hit the Twittersphere with “Congratulations, @MaxSchrems. You’ve changed the world for the better.”

Background

Today’s CJEU opinion invalidating the Safe Harbor program follows on the September 23, 2015, opinion from the advocate general (AG) to the CJEU in connection with Maximilian Schrems vs. Data Protection Commissioner.

In June 2013, Maximilian Schrems, an Austrian student, filed a complaint with the Irish DPA. Schrems’ complaint related to the transfer of his personal data collected through his use of Facebook. Schrems’ Facebook data was transferred by Facebook Ireland to Facebook USA under the Safe Harbor program. The core claim in Schrems’ complaint is that the Safe Harbor program did not adequately protect his personal data, because Facebook USA is subject to U.S. government surveillance under the PRISM program.

The Irish DPA rejected Schrems’ complaint because Facebook was certified under the Safe Harbor program. Schrems appealed to the High Court of Ireland, arguing that the Irish (or any other country’s) DPA has a duty to protect EU citizens against privacy violations, such as access to their personal data as part of U.S. government surveillance. Because Schrems’ appeal turns on EU law (not solely Irish law), the Irish High Court referred it to the CJEU.

What This Means for U.S. Business

The invalidation of the Safe Harbor program, which is effective immediately, means that a business that currently relies on the program will need to consider another legally valid means of transferring personal data from the EU to the United States, such as EU-approved model contractual clauses or binding corporate rules.

We believe, however, that this is not the final chapter in the Safe Harbor saga. Please check back soon for more details and analysis.

Remember KITT? KITT (the Knight Industries Two Thousand) was the self-directed, self-driving, supercomputer hero of the popular 1980s television show Knight Rider. Knight Rider was a science fiction fantasy profiling the “car of the future.” The self-directed car is science fiction no more. The future is now and, in fact, we’ve seen a lot of press this year about self-driving or driverless cars.

Driverless cars, equipped with a wide variety of connected systems including cameras, radar, sonar and LiDAR (light detection and ranging), are expected on the road within the next few years. They can sense road conditions, identify hazards and negotiate traffic, all from a remote command center. As with most connected devices in the age of the Internet of Things (IoT), these ultra-connected vehicles promise to improve efficiency and performance and to enhance safety.

Though not quite driverless yet, connected vehicles are already on the market and on the road. Like many IoT “things,” ultra-connected vehicle systems may be vulnerable to hacker attacks.

Christopher Valasek and Charlie Miller, two computer security industry leaders, have presented on this topic at various events, including the 2014 Black Hat USA security conference. They analyzed the information security vulnerabilities of various car makes and models, rating the vehicles on three specific criteria: (1) the size of their wireless “attack surface” (i.e., how many data-incorporating features the vehicle has, such as Bluetooth, Wi-Fi, keyless entry systems and automated tire monitoring systems); (2) access to the vehicle’s network through those data points; and (3) the vehicle’s “cyberphysical” features (i.e., connected features such as parking assist, automated braking and other technological driving aids). This last category of features, combined with access through the data points outlined in items (1) and (2), yielded a composite risk profile of each vehicle’s hackability. Their conclusions were startling: radios, brakes and steering systems were all found to be accessible.
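Miller and Valasek published their ratings qualitatively, but the structure of the analysis can be illustrated with a simple, hypothetical scoring model. Everything in the sketch below (the 0–10 scales, the multiplicative weighting and the example values) is invented purely to show how the three criteria might combine into a composite risk profile; it is not the researchers’ actual methodology.

```python
# Hypothetical illustration of combining the three rating criteria.
# Scales, weighting and example values are invented for illustration only.
from dataclasses import dataclass


@dataclass
class VehicleProfile:
    attack_surface: int  # 0-10: extent of wireless entry points (Bluetooth, Wi-Fi, ...)
    network_access: int  # 0-10: how directly those entry points reach internal networks
    cyberphysical: int   # 0-10: extent of computer-controlled driving features

    def composite_risk(self) -> float:
        # Risk compounds multiplicatively: a wide attack surface matters only
        # if it reaches the internal network, and network access matters most
        # when cyberphysical features can act on what an attacker sends.
        return (self.attack_surface * self.network_access
                * self.cyberphysical) / 100.0


# Example: a heavily connected sedan with extensive driver-assist features.
sedan = VehicleProfile(attack_surface=8, network_access=7, cyberphysical=9)
print(f"Composite hackability: {sedan.composite_risk():.1f} / 10")
```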

Miller and Valasek claim that their intent was to encourage car manufacturers to consider security in vehicle system connectivity and cyberphysical attributes. They approached vehicle manufacturers and shared their report with the Department of Transportation and the Society of Automotive Engineers. Some manufacturers promised to investigate their vehicle systems and correct the deficiencies; some seemingly ignored the report altogether. The findings did, however, catch the attention of Senators Ed Markey (D-MA) and Richard Blumenthal (D-CT). On July 21, 2015, Senators Markey and Blumenthal introduced legislation that would direct the National Highway Traffic Safety Administration (NHTSA) and the Federal Trade Commission (FTC) to establish federal standards to secure vehicles and protect drivers’ privacy. The Security and Privacy in Your Car Act, aptly dubbed “the SPY Car Act,” would also require manufacturers to establish a “cyber dashboard” that rates vehicle security, informing consumers as to the security performance of their vehicles.

As proposed, the SPY Car Act would require that all motor vehicles manufactured in the U.S. be “equipped with reasonable measures to protect against hacking attacks.” All “entry points” are to be protected through “reasonable” measures against hacking. Internal networks are to be isolated to prevent hacking of the software managing critical vehicle controls, such as braking or steering. Vehicles would undergo a vulnerability analysis including penetration testing based on industry “best security practices.” Furthermore, “Any motor vehicle that presents an entry point shall be equipped with capabilities to immediately detect, report and stop attempts to intercept driving data or control the vehicle.”

The legislation, as well as the continued research efforts of experts such as Valasek and Miller, supports the notion that today’s automobiles are not only transportation devices but also sophisticated computer systems, and, like any other computer system, they are vulnerable to attacks on the data they process. The more “connected” the system, the more entry points are potentially exposed and arguably vulnerable. The “car of the future” is here, and experts and legislators seem to be pushing to keep consumer safety in the driver’s seat.