Although the Illinois Biometric Information Privacy Act has been on the books for almost 10 years, a recent surge in lawsuits has likely been brought on by developments in biometric scanning technology and its increased use in the workplace. At least 32 class action lawsuits have been filed in recent months by Illinois residents in state court challenging the collection, use and storage of biometric data by companies in the state. These suits may prompt companies to reevaluate their strategies and develop new defenses as they deploy advancing biometric technology.
In September, the Office of the National Coordinator for Health Information Technology (ONC) announced that it is scaling back requirements for third-party certification of criteria related to certified electronic health record (EHR) technology (CEHRT). Going forward, ONC will allow health IT developers to self-declare their products’ conformance with 30 of the 55 certification criteria.
ONC will also exercise discretion and not enforce the requirement that certification bodies conduct randomized surveillance of two percent of the health IT certifications they issue.
Copyright 2017, American Health Lawyers Association, Washington, DC. Reprint permission granted.
On September 29, the Federal Trade Commission (FTC) formally announced a December 12 workshop on informational injury—the injury a consumer suffers when information about them is misused. The workshop will address questions such as how to characterize and measure such injury, and what factors businesses and consumers should consider when weighing the benefits and risks of collecting, using and providing personal information, so as to inform how the FTC should apply its legal framework for privacy and security enforcement under 15 USC § 45 (Section 5). In her September 19 remarks to the Federal Communications Bar Association, Commissioner Maureen Ohlhausen, the Acting Chairman of the FTC, metaphorically characterized the workshop’s purpose as providing the next brushstrokes on the unfinished enforcement landscape the FTC is painting on its legal framework canvas. The full list of specific questions to be addressed may be accessed here.
Background. The FTC views itself as the primary US enforcer of data privacy and security, a role it has developed through enforcement over the past 15 years. While the FTC’s enforcement against practices causing informational injury through administrative proceedings goes back as far as 2002, its ability to pursue corporate liability for data security and privacy practices under its Section 5 “unfair or deceptive trade practices” jurisdiction was only ratified in 2015 by the US Court of Appeals for the Third Circuit in FTC v. Wyndham Worldwide Corporation. The FTC has actively invoked its enforcement authority but, in doing so, has been selective in determining which consumer informational injuries to pursue: it questions the strength of the evidence connecting problematic practices with the injury, examines the magnitude of the injury and inquires whether the injury is imminent or has been realized.
Following the first enforcement actions by local authorities in Shantou and Chongqing for violations of the new Network Security Law that came into effect this year, authorities in China have shown a clear initial focus: several new cases target provisions of the law that require monitoring of platform content. As of early October 2017, nearly 70 percent of all enforcement actions under the new data protection rules have targeted platform content violations.
The validity of Model Clauses for EU personal data transfer to the United States is now in real doubt as a result of a new Irish High Court judgment stating that there are “well founded grounds” to find the Model Clauses invalid. The issue of Model Clauses as a legitimate data transfer mechanism will now be adjudicated by the European Court of Justice (ECJ), the same court that previously overturned the Safe Harbor arrangement. EU and US companies will need to consider various strategies in anticipation of this decision.
The US Department of Transportation’s National Highway Traffic Safety Administration recently released A Vision for Safety 2.0, an update to its prior guidance on automated driving systems. The new guidance adopts a voluntary, flexible approach to regulation of automated driving systems and clarifies that the federal government, and not the states, is responsible for regulating the safety design and performance aspects of such systems.
New cybersecurity regulations issued by the NYDFS define the nonpublic information they regulate in exceptionally broad terms. This expanded definition of Nonpublic Information will create major challenges for regulated companies and their third-party service providers, challenges that will likely ripple through other ancillary industries.
Although the incorporation of technology into human endeavours—commercial, political and personal—is a normal component of technological innovation, the advent of artificial intelligence technology is producing significant challenges that earlier innovations did not present. For many years, for example, there has been speculation, research and public debate about the impact of the internet, the functioning of search engines, and online advertising techniques on commercial and political decisions.
The alleged “hacking” of the 2016 US presidential election, and the concerns about such activities in the 2017 European elections, will only heighten the interweaving discussions on free speech, national sovereignty, cyber security and the nature of privacy.
The use of artificial intelligence and machine-learning technologies has only added to the list of issues and areas of concern. The consequences of automobile accidents involving “self-driving” technologies, the “flash crashes” on securities markets due to algorithmic trading, and bias in systems designed to determine benefit eligibility are requiring us to consider what happens when we defer judgment to machines, and highlighting the importance of quality in data sets and sensors.
The government is continuing to ask for more help from the private sector to defend against cyber attacks. The National Infrastructure Advisory Council (NIAC) recently published a report discussing current cyber threats and urging private companies and executives to join forces with the government to better address those threats. The report proposes “public-private and company-to-company information sharing of cyber threats at network speed,” among other things discussed here.
On 6 August 2017, the UK government released ‘The Key Principles of Vehicle Cyber Security for Connected and Automated Vehicles’, guidance aimed at ensuring minimum cybersecurity protections for consumers in the manufacture and operation of connected and automated vehicles.
Connected and automated vehicles fall into the category of so-called ‘smart cars’. Connected vehicles have gained, and will continue to gain, adoption in the market and, indeed, are expected to make up more than half of new vehicles by 2020. Such cars have the ability, through the use of various technologies, to communicate with the driver, other cars, application providers, traffic infrastructure and the Cloud. Automated vehicles, also known as autonomous vehicles, include self-driving features that allow the vehicle to control key functions that traditionally have been performed by a human driver, such as observing the vehicle’s environment, steering, accelerating, parking and changing lanes. Consumers in some markets have been able to purchase vehicles with autonomous driving features for the past few years, and vehicle manufacturers have announced plans to enable vehicles to be fully self-driving, under certain conditions, in the near future.