The Energy & Commerce Committee of the U.S. House of Representatives held a hearing on October 21st titled “Examining Ways to Improve Vehicle and Roadway Safety” to consider (among other matters) Vehicle Data Privacy legislation for internet-connected cars.

The proposed legislation includes requirements that auto manufacturers:

  • “Develop and implement” a privacy policy incorporating key elements on the collection, use and sharing of data collected through technology in vehicles. By providing the policy to the National Highway Traffic Safety Administration, a manufacturer earns certain protection against enforcement action under Section 5 of the Federal Trade Commission Act.
  • Retain data no longer than is determined necessary for “legitimate business purposes.”
  • Implement “reasonable measures” to ensure that the data is protected against theft/unauthorized access or use (hacking).

Manufacturers that fail to comply face a maximum penalty of $1 million per manufacturer. The penalty for failure to protect against hacking is up to $100,000 per “unauthorized” access.

Maneesha Mithal, Associate Director, Division of Privacy and Identity Protection, of the Federal Trade Commission (FTC), testified that the proposed legislation “could substantially weaken the security and privacy protections that consumers have today.”

The FTC’s criticism focuses on the proposed safe harbor against FTC enforcement for manufacturers. The FTC testified that a manufacturer should not earn immunity under the FTC Act if the privacy policy offers little or no privacy protection, or is not followed or enforced. The FTC expressed disapproval of provisions allowing retroactive application of a privacy policy to data previously collected. The FTC also advised against applying the proposed safe harbor to data outside of the vehicle, such as data collected from a website or mobile app.

Although the FTC applauded the goal of deterring criminal hacking of automotive systems, the FTC testified that the legislation, as drafted, may disincentivize manufacturers from improving safety and privacy. The testimony echoed that of other industry critics who believe the definition of “authorized” access is too vague and may prevent manufacturers from allowing others to access vehicle data systems, such as for repair or for research on a vehicle’s critical systems.

Finally, the FTC criticized the provisions creating a council to develop cybersecurity best practices.  Since the council could operate by a simple majority, it could act without any government or consumer advocacy input, diluting consumer protections.

The hearing agenda, as well as the text of the draft legislation, is available here.

The FTC’s prepared statement, as well as the text of the testimony, is available here.

Remember KITT? KITT (the Knight Industries Two Thousand) was the self-directed, self-driving, supercomputer hero of the popular 1980s television show Knight Rider. Knight Rider was a science fiction fantasy profiling the “car of the future.” The self-directed car is science fiction no more. The future is now and, in fact, we’ve seen a lot of press this year about self-driving or driverless cars.

Driverless cars, equipped with a wide variety of connected systems including cameras, radar, sonar and LiDAR (light detection and ranging), are expected on the road within the next few years. They can sense road conditions, identify hazards and negotiate traffic, all from a remote command center. Just as with most connected devices in the age of the Internet of Things (IoT), these ultra-connected devices promise to improve efficiency and performance and to enhance safety.

Though not quite driverless yet, connected vehicles are already on the market and on the road. Like many IoT “things,” ultra-connected vehicle systems may be vulnerable to hacker attacks.

Christopher Valasek and Charlie Miller, two computer security industry leaders, have presented on this topic at various events, including the 2014 Black Hat USA security conference. They analyzed the information security vulnerabilities of various car makes and models, rating the vehicles on three specific criteria: (1) the size of the wireless “attack surface” (i.e., how many data-incorporating features, such as Bluetooth, Wi-Fi, keyless entry systems and automated tire monitoring systems, the vehicle includes); (2) access to the vehicle’s network through those data points; and (3) the vehicle’s “cyberphysical” features (i.e., connected features such as parking assist, automated braking and other technological driving aids). This last category of features, combined with access through the data points outlined in items (1) and (2), presented a composite risk profile of each vehicle make’s hackability. Their conclusions were startling: radios, brakes and steering systems were all found to be accessible.
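To make the three criteria concrete, below is a minimal, purely illustrative sketch of how such a rating might be modeled. The field names, weights and example scores are hypothetical assumptions for illustration only, not Miller and Valasek’s actual methodology.

```python
from dataclasses import dataclass

@dataclass
class VehicleProfile:
    """Hypothetical model of the three criteria described above."""
    attack_surface: int   # (1) count of wireless/data features (Bluetooth, Wi-Fi, keyless entry, tire monitoring)
    network_access: int   # (2) 0-5 rating of how directly those features reach the internal vehicle network
    cyberphysical: int    # (3) count of connected driving aids (parking assist, automated braking, etc.)

def composite_risk(v: VehicleProfile) -> float:
    # Illustrative weighting only: more entry points, deeper network access and
    # more computer-controlled driving functions all raise the composite score.
    return 1.0 * v.attack_surface + 2.0 * v.network_access + 1.5 * v.cyberphysical

# Example: a heavily connected vehicle scores far higher than a more isolated one.
print(composite_risk(VehicleProfile(attack_surface=8, network_access=4, cyberphysical=6)))  # 25.0
print(composite_risk(VehicleProfile(attack_surface=3, network_access=1, cyberphysical=2)))  # 8.0
```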

Miller and Valasek claim that their intent was to encourage car manufacturers to consider security in vehicle system connectivity and cyberphysical attributes. They approached vehicle manufacturers and shared their report with the Department of Transportation and the Society of Automotive Engineers. Some manufacturers promised to investigate their vehicle systems and correct the deficiencies. Others seemingly ignored the report altogether. The findings did, however, catch the attention of Senators Ed Markey (D-MA) and Richard Blumenthal (D-CT). On July 21, 2015, Senators Markey and Blumenthal introduced legislation that would direct the National Highway Traffic Safety Administration (NHTSA) and the Federal Trade Commission (FTC) to establish federal standards to secure vehicles and protect drivers’ privacy. The Security and Privacy in Your Car Act, aptly coined “the SPY Car Act”, would also require manufacturers to establish a “cyber dashboard” that rates vehicle security, informing consumers about the security performance of their vehicles.

As proposed, the SPY Car Act would require that all motor vehicles manufactured in the U.S. be “equipped with reasonable measures to protect against hacking attacks.” All “entry points” are to be protected through “reasonable” measures against hacking. Internal networks are to be isolated to prevent hacking of the software managing critical vehicle controls, such as braking or steering. Vehicles would undergo a vulnerability analysis including penetration testing based on industry “best security practices.” Furthermore, “Any motor vehicle that presents an entry point shall be equipped with capabilities to immediately detect, report and stop attempts to intercept driving data or control the vehicle.”

The legislation, as well as the continued research efforts of experts such as Valasek and Miller, supports the notion that today’s automobiles are not only transportation devices but also sophisticated computer systems. And like any other computer system, the data processed through them is vulnerable to attack. The more “connected” the system, the more entry points that are potentially exposed and arguably vulnerable. The “car of the future” is here, and the experts and legislators seem to be pushing to keep consumer safety in the driver’s seat.

Last Friday, July 10, 2015, the Federal Communications Commission (FCC) released Declaratory Ruling and Order 15-72 (“Order 15-72”) to address more than 20 requests for clarity on FCC interpretations of the Telephone Consumer Protection Act (TCPA). The release of Order 15-72 follows a June 18th open meeting at which the FCC adopted the rulings now reflected in Order 15-72 that are intended to “close loopholes and strengthen consumer protections already on the books.”

Key rulings in Order 15-72 include:

  • Confirming that text messages are “calls” subject to the TCPA;
  • Clarifying that consumers may revoke their consent to receive robocalls (i.e., telemarketing calls or text messages from an automated system or with a prerecorded or artificial voice) “at any time and through any reasonable means”;
  • Making telemarketers liable for robocalls made to reassigned wireless telephone numbers without consent from the current account holder, subject to “a limited, one-call exception for cases in which the caller does not have actual or constructive knowledge of the reassignment”;
  • Requiring consent for internet-to-phone text messages;
  • Clarifying that “nothing … prohibits” implementation of technology that helps consumers block unwanted robocalls;
  • Allowing certain parties an 89-day window (beginning July 10, 2015) to update consumer consent to “prior express written consent,” as the result of an ambiguous provision in the 2012 FCC Order that established the “prior express written consent” requirement; and
  • Exempting from the consent requirement certain free “pro-consumer financial- and healthcare-related messages”.

We are reviewing the more than 135 pages of Order 15-72, as well as the separate statements of FCC Commissioners Wheeler, Clyburn, Rosenworcel (dissenting in part), Pai (dissenting) and O’Rielly (dissenting in part). Please check back soon for more information and analysis.

In a case that could shape the future of data privacy litigation, the Supreme Court recently agreed to review the decision by the U.S. Court of Appeals for the Ninth Circuit under the Fair Credit Reporting Act (FCRA) in Robins v. Spokeo, Inc.  At issue is the extent to which Congress may create statutory rights that, when violated, are actionable in court, even if the plaintiff has not otherwise suffered a legally redressable injury.

Spokeo is a data broker that provides online “people search capabilities” and “business information search” (i.e., business contacts, emails, titles, etc.).   Thomas Robins (Robins) sued Spokeo in federal district court for publishing data about Robins that incorrectly represented him as married and having a graduate degree and more professional experience and money than he actually had.  Robins alleged that Spokeo’s inaccurate data caused him actual harm by (among other alleged harms) damaging his employment prospects.

After some initial indecision, the district court dismissed the case in 2011 on the grounds that Robins had not sufficiently alleged any actual or imminent harm traceable to Spokeo’s data.  Without evidence of actual or imminent harm, Robins did not have standing to bring suit under Article III of the U.S. Constitution.  Robins appealed.

On February 4, 2014, the Court of Appeals for the Ninth Circuit announced its decision to reverse the district court, holding that the FCRA allowed Robins to sue for a statutory violation: “When, as here, the statutory cause of action does not require proof of actual damages, a plaintiff can suffer a violation of the statutory right without suffering actual damages.” The Court of Appeals acknowledged limits on Congress’ ability to create redressable statutory causes of action but held that Congress did not exceed those limits in this case.  The court held that “the interests protected” by the FCRA were “sufficiently concrete and particularized” such that Congress could create a statutory cause of action, even for individuals who could not show actual damages.

Why Spokeo Matters

If the Supreme Court reverses the Ninth Circuit’s decision, the ruling could dramatically redraw the landscape of data privacy litigation in favor of businesses by requiring plaintiffs to allege and eventually prove actual damages.  Such a ruling could severely limit lawsuits brought under several privacy-related statutes, in which plaintiffs typically seek statutory damages on behalf of a class without needing to show actual damages suffered by the class members.  Litigation under the FCRA, the Telephone Consumer Protection Act and the Video Privacy Protection Act (among other statutes) all could be affected.

On 11 May 2015, the UK Information Commissioner’s Office (ICO), the French data protection authority (CNIL) and the Office of the Privacy Commissioner of Canada (OPCC) announced their participation in a new Global Privacy Enforcement Network (GPEN) privacy sweep to examine the data privacy practices of websites and apps aimed at or popular among children. This closely follows the results of GPEN’s latest sweep on mobile applications (apps), which suggested a high proportion of apps collected significant amounts of personal information but did not sufficiently explain how consumers’ personal information would be collected and used. We originally reported on the mobile app sweep back in September 2014.

According to the CNIL and the ICO, the purpose of this sweep is to build a global picture of the privacy practices of websites and apps aimed at or frequently used by children. The sweep seeks to instigate recommendations or formal sanctions where non-compliance is identified and, more broadly, to provide valuable privacy education to the public and parents, as well as to promote best privacy practices in the online space.

Background

GPEN was established in 2010 on the recommendation of the Organisation for Economic Co-operation and Development. GPEN aims to create cooperation between data protection regulators and authorities throughout the world in order to globally strengthen personal privacy. GPEN is currently made up of 51 data protection authorities across some 39 jurisdictions.

According to the ICO, GPEN has identified a growing global trend of websites and apps targeted at (or used by) children, an area that requires special attention and protection. From 12 to 15 May 2015, GPEN’s “sweepers”—made up of 28 volunteering data protection authorities across the globe, including the ICO, the CNIL and the OPCC—will each review 50 websites and apps popular among children (such as online gaming sites, social networks, and sites offering educational services or tutoring). In particular, the sweepers will seek to determine, inter alia:

  • The types of information being collected from children;
  • The ways in which privacy information is explained, including whether it is adapted to a younger audience (e.g., through the use of easy to understand language, large print, audio and animations, etc.);
  • Whether protective controls are implemented to limit the collection of children’s personal information, such as requiring parental permission prior to use of the relevant services or collection of personal information; and
  • The ease with which one can request that personal information submitted by children be deleted.

Comment

We will have to wait some time for in-depth analysis of the sweep, as the results are not expected to be published until Q3 of this year. As with previous sweeps, following publication of the results we can expect data protection authorities to issue new guidance, to write to those organisations identified as needing to improve and, where appropriate, to take more formal action.

As part of its four-part Digital Health webinar series, on April 14, 2015, McDermott Will & Emery presented “Telehealth: Implementation Challenges in an Evolving Dynamic.”

Telehealth (also known as telemedicine) generally refers to the use of technology to support the remote delivery of health care.  For example:

  • A health care provider in one place is connected to a patient in another place by video conference
  • A patient uses a mobile device or wearable that enables a doctor to monitor his or her vital signs and symptoms
  • A specialist is able to rapidly share information with a geographically remote provider treating a patient

While the benefits of telehealth are clear (for example, making health care available to those in underserved areas and to patients who cannot regularly visit their providers but need ongoing monitoring), implementing telehealth requires providers and patients, as well as payers, to adapt to a dynamic new framework for health care delivery, data sharing and reimbursement.  The webinar explored these areas and more.

We are pleased to offer our readers access to the archived webinar and the slide presentation.  If you have questions or would like to learn more, please contact Dale Van Demark.

Two significant decisions under the Video Privacy Protection Act (VPPA) in recent weeks have provided new defenses to companies alleged to have run afoul of the statute.  Bringing the long-running litigation against Hulu to a close, at least pending appeal, the court in In re: Hulu Privacy Litigation granted summary judgment in favor of Hulu, holding that the plaintiffs could not prove that Hulu knowingly violated the VPPA.  A week later in a more recently filed case, Austin-Spearman v. AMC Network Entertainment, LLC, the court dismissed the complaint on the basis that the plaintiff was not a “consumer” protected by the VPPA.  Both rulings provide comfort to online content providers, while also raising new questions as to the scope of liability under the VPPA.

In re: Hulu Privacy Litigation

In a decision with wide-ranging implications, the Hulu court granted summary judgment against the plaintiffs, holding that they had not shown that Hulu knowingly shared their video selections in violation of the VPPA.  The plaintiffs’ allegations were based on Hulu’s integration of a Facebook “Like” button into its website.  Specifically, the plaintiffs alleged that when the “Like” button loaded on a user’s browser, Hulu would automatically send Facebook a “c_user” cookie containing the user’s Facebook user ID.  At the same time, Hulu would also send Facebook a “watch page” that identified the video requested by the user.  The plaintiffs argued that Hulu’s transmission of the c_user cookie and the watch page allowed Facebook to identify both the Hulu user and that user’s video selection and therefore violated the VPPA.

The plaintiffs’ case foundered, however, on their inability to demonstrate that Hulu knew that Facebook’s linking of those two pieces of information was a possibility.  According to the court, “there is no evidence that Hulu knew that Facebook might combine a Facebook user’s identity (contained in the c_user cookie) with the watch page address to yield ‘personally identifiable information’ under the VPPA.”  Without showing that Hulu had knowingly disclosed a connection between the user’s identity and the user’s video selection, there could be no VPPA liability.
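As a purely illustrative sketch (not Hulu’s or Facebook’s actual code, and with hypothetical values), the data flow the plaintiffs described reduces to a third party receiving two separately unremarkable values, a user identifier carried in its own cookie and the address of the page being viewed, which become revealing only when joined:

```python
# Illustrative sketch of the alleged data flow only; the cookie value and URL
# below are hypothetical, and this is not actual Hulu or Facebook code.
# When a third-party "Like" button loads, the browser's request to that third
# party carries (1) the third party's own cookies and (2) the watch-page address.

third_party_request = {
    "cookies": {"c_user": "1234567890"},                          # hypothetical Facebook user ID
    "referer": "https://www.hulu.com/watch/example-video-title",  # hypothetical watch-page URL
}

def join_identity_and_viewing(request: dict) -> tuple:
    """Neither value alone reveals who watched what; combined, they do."""
    user_id = request["cookies"]["c_user"]
    video = request["referer"].rsplit("/", 1)[-1]
    return user_id, video

print(join_identity_and_viewing(third_party_request))
# ('1234567890', 'example-video-title')
```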

The court’s decision, if upheld on appeal, is likely to provide a significant defense to online content providers sued under the VPPA.  Under the decision, plaintiffs must now be able to show not only that the defendant company knew that the identity and video selections of the user were disclosed to a third party, but also that the company knew that that information was disclosed in a manner that would allow the third party to combine those pieces of information to determine which user had watched which content.  While Hulu prevailed only at the summary judgment stage after four years of litigation, other companies could likely make use of this same rationale at the pleadings stage, insisting that plaintiffs set out a plausible case in their complaint that the defendant had the requisite level of knowledge.

Austin-Spearman v. AMC Network Entertainment, LLC

The AMC decision turned on the VPPA’s definition of the term “consumer” and illustrated how that seemingly innocuous definition could limit the scope of the statute.  The VPPA provides liability only for disclosing the personally identifiable information of a “consumer,” which the statute defines as “any renter, purchaser, or subscriber of goods or services from a video tape service provider.”  The plaintiff, who had not paid AMC for any of the content she had viewed (and thus presumably did not qualify as a “renter” or “purchaser”), argued that she qualified as a “subscriber” and thus fell within the VPPA’s protections.  The court disagreed, holding that the term “subscriber” requires “a deliberate and durable affiliation with the provider.”  The plaintiff, who had not signed up, registered an account, established a user ID, downloaded an app or “taken any action to associate herself with AMC,” failed to qualify according to the court.

In so holding, the court rejected the plaintiff’s argument that her use of AMC’s streaming service, thereby providing AMC with “access to the cookies installed on her computer,” made her a “subscriber” and therefore a “consumer” under the VPPA.  The court reasoned that such a broad meaning of “subscriber” would render meaningless the statute’s limitation that only the disclosure of information about “consumers” is restricted.

If followed by other courts, the AMC court’s reasoning has the potential to narrow the scope of VPPA liability for online content providers.  Viewers who have not registered for the provider’s service, downloaded the provider’s app or taken similar actions would be excluded from the ranks of potential VPPA plaintiffs.  As noted by the court in allowing the plaintiff to amend her complaint, however, the definition of what it means to be a subscriber is by no means firmly established: the plaintiff was given leave to amend on the basis that she had subscribed to an AMC newsletter that was completely unrelated to her online video viewing.  Given this uncertainty, online content providers should continue to treat every user as a “consumer” and a potential VPPA plaintiff.

Thursday, April 30, 2015, marks the last day a business can request a retroactive waiver for failing to comply with certain fax advertising requirements promulgated by the Federal Communications Commission (FCC). The scope of these requirements was clarified on October 30, 2014, when the FCC issued an Order (2014 Order) under the Junk Fax Prevention Act of 2005 (Junk Fax Act). The 2014 Order confirms that senders of all advertising faxes must include information that allows recipients to opt out of receiving future faxes from that sender.

The 2014 Order clarifies certain aspects of the FCC’s 2006 Order under the Junk Fax Act (the Junk Fax Order). Among other requirements, the Junk Fax Order established the requirement that the sender of an advertising fax provide notice and contact information that allows a recipient to “opt out” of any future fax advertising transmissions.

Following the FCC’s publication of the Junk Fax Order, some businesses interpreted the opt-out requirements as not applying to advertising faxes sent with the recipient’s prior express permission (based on footnote 154 in the Junk Fax Order). The 2014 Order provided a six-month period for senders to comply with the opt-out requirements of the Junk Fax Order for faxes sent with the recipient’s prior express permission and to request retroactive relief for failing to comply. The six-month period ends on April 30, 2015. Without a waiver, the FCC noted that “any past or future failure to comply could subject entities to enforcement sanctions, including potential fines and forfeitures, and to private litigation.”

For more information about the Junk Fax Act in general, or the waiver request process in particular, please contact Julia Jacobson or Matt Turnell.

2014 was a busy year for the Federal Trade Commission (FTC) with the Children’s Online Privacy Protection Act (COPPA).  The FTC announced something new under COPPA nearly every month, including:

  • In January, the FTC issued an updated version of the free consumer guide, “Net Cetera: Chatting with Kids About Being Online.”  Updates to the guide include advice on mobile apps, on using public Wi-Fi securely and on recognizing text message spam, as well as details about recent changes to COPPA.
  • In February, the FTC approved the kidSAFE Safe Harbor Program.  The kidSAFE certification and seal-of-approval program helps child-friendly digital services comply with COPPA.  To qualify for a kidSAFE seal, digital operators must build safety protections and controls into any interactive community features; post rules and educational information about online safety; have procedures for handling safety issues and complaints; give parents basic safety controls over their child’s activities; and ensure all content, advertising and marketing is age-appropriate.
  • In March, the FTC filed an amicus brief in the 9th U.S. Circuit Court of Appeals, arguing that the ruling of the U.S. District Court for the Northern District of California in Batman v. Facebook, which held that COPPA preempts state-law protections for the online activities of teenagers who fall outside of COPPA’s coverage, is “patently wrong.”
  • In April, the FTC updated its “Complying with COPPA: Frequently Asked Questions” (aka the COPPA FAQs) to address how COPPA applies in the school setting.  In FAQ M.2, the FTC discussed whether a school can provide the COPPA-required consent on behalf of parents, stating that “Where a school has contracted with an operator to collect personal information from students for the use and benefit of the school, and for no other commercial purpose, the operator is not required to obtain consent directly from parents, and can presume that the school’s authorization for the collection of students’ personal information is based upon the school having obtained the parents’ consent.”  The FTC also recommends as a “best practice” that schools provide parents with information about the operators to which they have consented on behalf of the parents.  The FTC requires that the school investigate the collection, use, sharing, retention, security and disposal practices with respect to personal information collected from its students.
  • In July, COPPA FAQ H.5, FAQ H.10, and FAQ H.16 about parental consent verification also were updated.  In FAQ H.5, the FTC indicates that “collecting a 16-digit credit or debit card number alone” is not sufficient as a parental consent mechanism, but that, in some circumstances, “collection of the card number – in conjunction with implementing other safeguards – would suffice.”  Revised FAQ H.10 indicates that a developer of a child-directed app may use a third party for parental verification “as long as [developers] ensure that COPPA requirements are being met,” including the requirement to “provide parents with a direct notice outlining [the developer’s] information collection practices before the parent provides his or her consent.”  In revised FAQ H.16, the FTC addresses whether an app store operator that offers a verifiable parental consent mechanism is exposed to liability under COPPA.  Since an app store operator does not qualify as an “operator” under COPPA, the app store is not liable under COPPA “for failing to investigate the privacy practices of the operators for whom [they] obtain consent,” but could be liable under the FTC Act for false or deceptive practices.
  • In August, the FTC approved the Internet Keep Safe Coalition (iKeepSafe) program as a safe harbor oversight program. The FTC also called for public comments on AgeCheq, Inc.’s proposed parental verification method, which sought to verify parental identity via a financial transaction or a hand-signed declaration.  The FTC subsequently rejected the proposed method in November, reasoning that those methods are already recognized as valid means of obtaining verifiable parental consent under COPPA, and emphasized that companies are free to implement such common consent mechanisms without Commission approval.
  • In September, Yelp was fined $450,000 for failing to comply with COPPA.  (See our blog post here).  Also in September, TinyCo (the developer of Tiny Pets, Tiny Zoo, Tiny Village, Tiny Monsters and Mermaid Resort) was fined $300,000 for collecting children’s email addresses, in exchange for in-game bonuses, without parental consent in violation of COPPA.
  • In November, AgeCheq, Inc. proposed a second parental consent verification method to ensure COPPA compliance.  The second proposed method consisted of a device-signed parental consent form with a multi-step method requiring entry of a code sent by text message to a mobile device. The Center for Digital Democracy urged the FTC to reject AgeCheq’s method in comments filed on December 29, 2014.  On January 29, 2015, the FTC announced its rejection of AgeCheq’s second proposed parental verification method.
  • In December, the FTC warned BabyBus, a China-based children’s app developer, that its apparent collection of user geolocation information may violate COPPA if (i) user geolocation information is indeed being collected and (ii) the company does not get parents’ consent before collecting the information from children under age 13.  The FTC noted that “COPPA and its related rules apply to foreign-based Web sites and online services that are involved in commerce in the United States.”

Given California’s new student privacy law, the Student Online Personal Information Protection Act (effective January 1, 2016), and the recent increased focus on student privacy resulting from President Obama’s announcement of the proposed Student Digital Privacy Act, we expect that 2015 also will be an active year for children’s privacy.  Stay tuned!

The country awoke to what seems to be a common occurrence now: another corporation struck by a massive data breach.  This time it was Anthem, the country’s second-largest health insurer, in a breach initially estimated to involve eighty million individuals.  Both customers’ and employees’ personal information is at issue in the breach, which was instigated by hackers.

Early reports, however, indicated that this breach might be subtly different from those faced by other corporations in recent years.  The difference isn’t in the breach itself, but in the immediate, transparent and proactive actions that the C-Suite took.

Unlike many breaches in recent history, this attack was discovered internally through corporate investigative and management processes already in place.  Further, the C-Suite took an immediate, proactive and transparent stance: just as the investigative process was launching in earnest within the corporation, the C-Suite took steps to fully advise its customers, its regulators and the public at large of the breach.

Anthem’s chief executive officer, Joseph Swedish, sent a personal, detailed e-mail to all customers. An identical message appeared in a widely broadcast press statement.  Swedish outlined the magnitude of the breach and explained that the Federal Bureau of Investigation and other investigative and regulatory bodies had already been advised and were working in earnest to stem the breach and its fallout.  He advised that each customer or employee with data at risk was being personally and individually notified.  In a humanizing touch, he admitted that the breach involved his own personal data.

What some data privacy and information security advocates noted was different: the proactive internal measures that discovered the breach before outsiders did; the early decision to cooperate with authorities and the press; and the involvement of the corporate C-Suite in notifying the individuals at risk and the public at large.

The rapid and detailed disclosure could indicate a changing attitude among the American corporate leadership.  Regulators have encouraged transparency and cooperation among Corporate America, the public and regulators as part of an effort to stem the tide of cyber-attacks.  As some regulators and information security experts reason, the criminals are cooperating, so we should as well – we are all in this together.

Will the proactive, transparent and cooperative stance make a difference in the aftermath of such a breach?  Only time will tell but we will be certain to watch with interest.