Social Media

California Continues to Lead with New Legislation Impacting Privacy and Security

At the end of September, California Governor Edmund G. Brown, Jr. approved six bills designed to enhance and expand California’s privacy laws. These new laws are scheduled to take effect in 2015 and 2016.  It will be important to be mindful of these new laws and their respective requirements when dealing with personal information and when responding to data breaches.

Expansion of Protection for California Residents’ Personal Information – AB 1710

Under current law, any business that owns or licenses certain personal information about a California resident must implement reasonable security measures to protect the information and, in the event of a data or system breach, must notify affected persons.  See Cal. Civil Code §§ 1798.81.5-1798.83.  Current law also prohibits individuals and entities from posting, displaying, or printing an individual’s social security number, or requiring individuals to use or transmit their social security number, unless certain requirements are met.  See Cal. Civil Code § 1798.85.

The bill makes three notable changes to these laws.  First, in addition to businesses that own or license personal information, businesses that maintain personal information must comply with the law’s security and notification requirements.  Second, in the event of a security breach, businesses now must not only notify affected persons, but also provide “appropriate identity theft prevention and mitigation services” to the affected persons at no cost for at least 12 months, if the breach exposed or may have exposed specified personal information.  Third, in addition to the current restrictions on the use of social security numbers, individuals and entities now also may not sell, advertise to sell, or offer to sell any individual’s social security number.

Expansion of Constructive Invasion of Privacy Liability – AB 2306

Under current law, a person can be liable for constructive invasion of privacy if the person uses a visual or auditory enhancing device and attempts to capture any type of visual image, sound recording, or other physical impression of another person engaged in a personal or familial activity under circumstances in which that person had a reasonable expectation of privacy.  See Cal. Civil Code § 1708.8.

The bill expands the reach of the current law by removing the limitation requiring the use of a “visual or auditory enhancing device” and imposing liability if the person uses any device to capture a visual image, sound recording, or other physical impression of another person engaged in a personal or familial activity under circumstances in which that person had a reasonable expectation of privacy.

The law will also continue to impose liability on those who acquire the image, sound recording, or physical impression of the other person, knowing that it was unlawfully obtained.  Those found liable under the law may be subject to treble damages, punitive damages, disgorgement of profits and civil fines.

Protection of Personal Images and Videos (“Revenge Porn” Liability) – AB 2643

Assembly Bill 2643 creates a private right of action against a person who intentionally distributes by any means, without consent, material that exposes a person’s intimate body parts or the [...]


GPEN Publishes Privacy Sweep Results

On 10 September 2014, the Global Privacy Enforcement Network (GPEN) published the results of its privacy enforcement survey or “sweep” carried out earlier in 2014 with respect to popular mobile apps.  The results of the sweep are likely to lead to future initiatives by data protection authorities to protect personal information submitted to mobile apps.

The sweep, which involved the participation of 26 data protection authorities across the globe, was designed to assess the transparency of the privacy practices of 1,211 popular mobile apps.  The results suggest that a high proportion of the apps downloaded did not sufficiently explain how consumers’ personal information would be collected and used.

Background

GPEN was established in 2010 on the recommendation of the Organisation for Economic Co-operation and Development.  GPEN aims to create cooperation between data protection regulators and authorities throughout the world in order to strengthen personal privacy globally.  GPEN is currently made up of 51 data protection authorities across some 39 jurisdictions.

Over the course of a week in May 2014, GPEN’s “sweepers” – made up of 26 data protection authorities across 19 jurisdictions, including the UK Information Commissioner’s Office (ICO) – participated in the survey by downloading and briefly interacting with the most popular apps released by developers in their respective jurisdictions, in an attempt to recreate a typical consumer’s experience.  In particular, GPEN intended the sweep to increase public and commercial awareness of data protection rights and responsibilities, as well as to identify specific high-level issues that may become the focus of future enforcement actions and initiatives.

Sweep Results

The key negative findings of the GPEN sweep include:

  • 85 percent of apps failed to clearly explain how personal information would be processed.
  • 59 percent of apps did not clearly indicate basic privacy information (with 11 percent failing to include any privacy information whatsoever).
  • 31 percent of apps were excessive in their permission requests to access personal information.
  • 43 percent of the apps had not sufficiently tailored their privacy communications for the mobile app platform – often instead relying on full version privacy policies found on websites.

However, the sweep results also highlighted a number of examples of best practices for app developers, including:

  • Many apps provided clear, easy-to-read and concise explanations about exactly what information would be collected, how and when it would be used and, in some instances, explained specifically and clearly what would not be done with the information collected.
  • Some apps provided links to the privacy policies of their advertising partners and opt-out choices with respect to analytics tools.
  • There were good examples of privacy policies specifically tailored to the app platform, successfully making use of just-in-time notifications (warning users when personal information was about to be collected or used), pop-ups and layered information, allowing consumers to obtain more detailed information if required (a minimal sketch of a just-in-time notice follows this list).
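
To make the just-in-time approach concrete, the following is a minimal sketch, in TypeScript, of what such a notice might look like in a browser-based app. The showNotice() helper and the message wording are illustrative assumptions (a real app would render its own dialog); only navigator.geolocation is a standard browser API.

```typescript
// Just-in-time notice: explain the collection at the moment of use,
// before any personal information (here, location) is actually gathered.

async function showNotice(message: string): Promise<boolean> {
  // Placeholder dialog; a real app would render a styled, layered notice.
  // window.confirm keeps the sketch self-contained and runnable.
  return window.confirm(message);
}

async function getLocationWithNotice(): Promise<GeolocationPosition | null> {
  const agreed = await showNotice(
    "We use your location only to show nearby results. It is not stored or shared. Allow?"
  );
  if (!agreed) {
    return null; // respect the choice; the app proceeds without location data
  }
  return new Promise((resolve) =>
    navigator.geolocation.getCurrentPosition(
      (position) => resolve(position),
      () => resolve(null) // OS-level prompt denied or lookup failed
    )
  );
}
```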

Many of the GPEN members are expected to take further action following the sweep results.  For its part, the UK ICO has commented that in light [...]


Digital Marketing Minute: A Bad Review for Yelp

The Federal Trade Commission (FTC) announced last week that Yelp – the online service through which consumers can read and write reviews about local businesses – has agreed to pay $450,000 to settle the FTC’s charges that Yelp knowingly collected personal information from children under the age of 13 through its mobile app, without verifiable parental consent (VPC), in violation of the Children’s Online Privacy Protection Act (COPPA).

COPPA was enacted in 1998. The FTC, which is responsible for enforcing COPPA, implemented regulations in April 2000 that are known as the COPPA Rule. The FTC issued an amended COPPA Rule in December 2012, which became effective July 1, 2013. 

In general, COPPA and the COPPA Rule prohibit operators of websites, mobile applications or other digital services (collectively, “digital services”) from knowingly collecting personal information from children under age 13 unless and until the digital service operator has VPC. 

Under the amended COPPA Rule, COPPA has a broader scope than digital service operators may realize.  COPPA applies not only to digital services that are directed to children, but also to any general-audience digital service when the operator has “actual knowledge” that the digital service is collecting personal information from children under age 13 without VPC.

COPPA does not require operators of general-audience digital services to ask users for age or date-of-birth information.  Under the actual knowledge test, however, if the digital service collects information establishing that a user is under 13, the digital service must be COPPA compliant, which means (among other requirements) obtaining VPC before collecting personal information from that user.

The FTC concluded that Yelp had “actual knowledge” that it was collecting personal information from children under age 13 because the registration page on Yelp’s app asked users to enter their date of birth but did not block access to the app for users who were too young (i.e., under age 13).   

Key Takeaway: If your general-audience digital service asks a user for his or her birth date, make sure that a user who is under age 13 is blocked from using the digital service.  Also, to help prevent users who are too young from circumventing the block, consider one or all of the following techniques (a minimal sketch combining them appears after this list):

  1. Request the birth date in a neutral manner, i.e., without any prompt that reveals the age of eligibility, such as “You must be age 13 or older to register.”
  2. Present a neutral on-screen error message when a user is under age 13, such as “Sorry, you’re not eligible,” rather than “Sorry, you are under age 13.”
  3. Deploy a cookie or other functionality to prevent an under-age user whose access was blocked from using the back button (or similar technique) to re-enter an old-enough birth date.      
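
For illustration only, here is a minimal TypeScript sketch that combines the three techniques above in a browser-based registration flow. The minimum-age constant, cookie name, helper functions and message wording are assumptions made for this example rather than requirements drawn from the COPPA Rule, and a production age screen would also need to enforce the block server side.

```typescript
// Neutral age gate combining techniques 1-3 above (hypothetical example).
const MIN_AGE = 13;                       // eligibility threshold under COPPA
const BLOCK_COOKIE = "age_gate_blocked";  // illustrative cookie name

function ageInYears(birthDate: Date, now: Date): number {
  const diff = now.getFullYear() - birthDate.getFullYear();
  const hadBirthdayThisYear =
    now.getMonth() > birthDate.getMonth() ||
    (now.getMonth() === birthDate.getMonth() &&
      now.getDate() >= birthDate.getDate());
  return hadBirthdayThisYear ? diff : diff - 1;
}

function showNeutralError(): void {
  // Technique 2: neutral error that does not reveal the eligibility age.
  alert("Sorry, you're not eligible to register.");
}

// Technique 1 lives in the form itself: the birth-date field should carry a
// neutral label ("Date of birth"), not "You must be 13 or older to register."
function handleBirthDateSubmission(birthDate: Date): boolean {
  // Technique 3: once blocked, stay blocked for the session, even if the
  // user hits the back button and re-enters an old-enough birth date.
  if (document.cookie.includes(`${BLOCK_COOKIE}=1`)) {
    showNeutralError();
    return false;
  }
  if (ageInYears(birthDate, new Date()) < MIN_AGE) {
    // Session cookie (no expiry) so the block survives back-button retries.
    document.cookie = `${BLOCK_COOKIE}=1; path=/; SameSite=Lax`;
    showNeutralError();
    return false;
  }
  return true; // user is old enough; registration may proceed
}
```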




Wearable Technologies Are Here To Stay: Here’s How the Workplace Can Prepare

More than a decade ago, “dual use” devices (i.e., one device used for both work and personal reasons) began creeping into workplaces around the globe.  Some employees insisted on bringing fancy new smart phones from home to replace the company-issued clunker and, while many employers resisted at first, dual use devices quickly became so popular that allowing them became inevitable or necessary for employee recruitment and retention, not to mention the cost savings that could be achieved by having employees buy their own devices.  Because of that early resistance, however, many HR and IT professionals found themselves scrambling to react to the issues these devices can raise only after the devices were already prevalent in the workplace.  Today, most companies have robust policies and procedures to address the risks presented by dual use devices, setting clear rules for addressing privacy, security, protection of trade secrets, records retention and legal holds, as well as for preventing harassment, complying with the National Labor Relations Act (NLRA), protecting the company’s relationships and reputation, and more.

In 2014, there is a new trend developing in the workplace: wearable technologies.  The lesson to be learned from the dual use device experience of the past decade: Companies should consider taking proactive steps now to identify the risks presented by allowing wearables at work, and develop a strategy to integrate them into the workplace in a way that maximizes employee engagement while minimizing corporate risk.

An effective integration strategy will depend on the particular industry, business needs, geographic location and corporate culture, of course.  The basic rule of thumb from a legal standpoint, however, is that although wearables present a new technology frontier, the old rules still apply.  This means that companies will need to consider issues of privacy, security, protection of trade secrets, records retention, legal holds and workplace laws like the NLRA, the Fair Labor Standards Act, laws prohibiting harassment and discrimination, and more.

Employers evaluating use of these technologies should consider two angles.  First, some companies may want to introduce wearables into the workplace for their own legitimate business purposes, such as monitoring fatigue of workers in safety-sensitive positions, facilitating productivity or creating efficiencies that make business operations run more smoothly.  Second, some companies may want to consider allowing “dual use” or even just “personal use” wearables in the workplace.

In either case, companies should consider the following as part of an integration plan:

  • Identify a specific business-use case;
  • Consider the potential for any related privacy and security risks;
  • Identify how to mitigate those risks;
  • Consider incidental impacts and compliance issues – for instance, how the technologies impact the existing policies on records retention, anti-harassment, labor relations and more;
  • Build policies that clearly define the rules of the road;
  • Train employees on the policies;
  • Deploy the technology; and
  • Review the program after six or 12 months to confirm the original purpose is being served and whether any issues have emerged that should be addressed.

In other words, employers will need to run through [...]


Processing Personal Data in Russia? Consider These Changes to Russian Law and How They May Impact Your Business

Changes Impacting Businesses that Process Personal Data in Russia

On July 21, 2014, a new law, Federal Law № 242-FZ (the “Database Law”), was adopted in Russia, introducing amendments to the existing Federal Law “On personal data” and to the existing Federal Law “On information, information technologies and protection of information.”  The new Database Law requires companies to store and process personal data of Russian nationals in databases located in Russia.  At a minimum, the practical effect of this new Database Law is that companies operating in Russia that collect, receive, store or transmit (“process”) personal data of natural persons in Russia will be required to place servers in Russia if they plan to continue doing business in that market.  This would include, for example, retailers, restaurants, cloud service providers, social networks and companies operating in the transportation, banking and health care spheres.  Importantly, while the Database Law is not scheduled to come into force until September 1, 2016, a new bill was introduced on September 1, 2014 to move that date up to January 1, 2015.  The transition period is designed to give companies time to adjust to the new Database Law and decide whether to build up local infrastructure in Russia, find a partner with such infrastructure in Russia, or cease processing information of Russian nationals.  If the bill filed on September 1 becomes law, however, that transition period will be substantially shortened and businesses operating in Russia will need to act fast to comply by January 1.

Some mass media in Russia have interpreted provisions of the Database Law as banning the processing of Russian nationals’ personal data abroad.  However, this is not written explicitly into the law and until such opinion is confirmed by the competent Russian authorities, this will continue to be an open question.  There is hope that the lawmakers’ intent was to give a much needed boost to the Russian IT and telecom industry, rather than to prohibit the processing of personal data abroad.  If this hope is confirmed, then so long as companies operating in Russia ensure that they process personal data of Russian nationals in databases physically located in Russia, they also should be able to process this information abroad, subject to compliance with cross-border transfer requirements.  

The other novelty of this new Database Law is that it grants the Russian data protection authority (DPA) the power to block access to information resources that process information in breach of Russian laws.  Importantly, the Database Law provides that this blocking authority applies irrespective of where the offending company is located or whether it is registered in Russia.  However, the DPA can initiate the blocking procedure only on the basis of a court judgment.  Based on the court judgment, the DPA will then be able to require a hosting provider to undertake steps to eliminate the infringements.  For example, the hosting provider must inform the owner of the information resource that it must eliminate the infringement, or the hosting [...]


Digital Marketing Minute: No More Like Gates

We are pleased to present this inaugural post of the Digital Marketing Minute.  Each week we will provide a short post on some news in the digital marketing world.  This week’s post is about a change on Facebook’s platform that affects how marketers conduct promotions.

In an August 7 post on its Developers blog page, Facebook announced that, effective November 5, 2014, use of a “Like Gate,” which requires Facebook users to “Like” a page before participating in a brand’s promotional activity, will no longer be permitted.  In other words, marketers cannot require consumers to “Like” a brand page before entering a sweepstakes or a contest, participating in an offer or accessing certain content.

Facebook reasons that banning the Like Gate will help “ensure quality connections and help businesses reach the people who matter to them” and that consumers “’Like’ pages because they want to connect and hear from the business, not because of artificial incentives” (see https://developers.facebook.com/blog/post/2014/08/07/Graph-API-v2.1/).





More States Restrict Employers’ Access to Employees’ Social Media Accounts

As first discussed in McDermott Will & Emery’s Privacy and Data Protection 2013 Year In Review, state legislatures are enacting laws limiting employers’ ability to access the social media accounts of their employees.  Thus far in 2014, four more states – Louisiana, Oklahoma, Tennessee and Wisconsin – have enacted social media legislation, bringing the total number of states with such legislation to 16.

How State Social Media Laws Affect Employers

Generally, state social media laws bar employers from requiring or requesting that an employee or applicant provide log-in credentials for his/her personal social media account.  Some of these state social media laws also prohibit an employer from requiring an employee to add another employee or supervisor to the “friends” or contacts list of a personal social media account, or to access personal social media accounts in the employer’s presence.  Many of the state social media laws also prohibit employers from basing adverse employment action on an employee’s refusal to comply with an employer’s request for social media account access.

While these laws offer employees added protection with respect to their personal social media accounts, most of the laws feature important carve-outs.  Among other exceptions, most state social media laws allow employers to: access publicly-available social media about employees, restrict employees’ access to social media during work hours and conduct certain types of employment-related investigations that may involve an employee’s social media account(s).

Notably, all four of the recently-enacted laws allow employers to monitor the social media activity of employees when employees access their social media accounts through employer-provided IT systems.

Compliance Tips

Since the terms of state social media laws vary, employers should consider establishing and following basic guidelines to ensure compliance with the myriad laws.  Key steps are:

  • Updating employer policies to clarify state-specific restrictions related to employee access to personal social media accounts through employer-provided information systems; and
  • Providing training to managers, Human Resources and IT professionals about the conduct prohibited by the different state social media laws.




Disclosures Need Not Contain Customers’ Actual Names to Violate the Video Privacy Protection Act, Rules Hulu Court

In the latest of a string of victories for the plaintiffs in the Video Privacy Protection Act (VPPA) class action litigation against Hulu, LLC, the U.S. District Court for the Northern District of California ruled that Hulu’s sharing of certain customer information with Facebook, Inc. may have violated the VPPA, even though Hulu did not disclose the actual names of its customers.  The ruling leaves Hulu potentially liable for the disclosures under the VPPA and opens the door to similar claims against other providers of online content.

The decision by U.S. Magistrate Judge Laurel Beeler addressed Hulu’s argument on summary judgment that it could not have violated the VPPA because Hulu “disclosed only anonymous user IDs and never linked the user IDs to identifying data such as a person’s name or address.”  The court rejected Hulu’s argument, stating that “[Hulu’s] position paints too bright a line.”  Noting that the purpose of the VPPA was to prevent the disclosure of information “that identifies a specific person and ties that person to particular videos that the person watched,” the court held that liability turned on whether Hulu’s disclosures were “merely an anonymized ID” or “whether they are closer to linking identified persons to the videos they watched.”

Under this principle, the court held that Hulu’s disclosures to comScore, a metrics company that Hulu employed to analyze its viewership for programming and advertising purposes, did not violate the VPPA.  According to the court, Hulu’s disclosure to comScore included anonymized user IDs and other information that could theoretically have been used to identify particular individuals and their viewing choices.  But the plaintiffs had no evidence that comScore had actually used the information in that way.  As the evidence did not “suggest any linking of a specific, identified person and his video habits,” the court held that the disclosures to comScore did not support a claim under the VPPA.

But the court held that Hulu’s disclosure to Facebook had potentially violated the VPPA.  That disclosure included certain cookies that Hulu sent to Facebook in order to load a Facebook “Like” button on users’ web browsers.  The court held that these cookies “together reveal information about what the Hulu user watched and who the Hulu user is on Facebook.”  The court noted that this disclosure was “not merely the transmission of a unique, anonymous ID”; rather, it was “information that identifies the Hulu user’s actual identity on Facebook” as well as the video that the user was watching.  Thus, the court held, Hulu’s disclosures to Facebook potentially violated the VPPA.

The court’s ruling that disclosure of seemingly anonymous IDs can potentially lead to liability under the VPPA should cause companies that are potentially covered by the law to reexamine the ways in which they provide data to third parties.  Such companies should carefully consider not only what information is disclosed but also how the recipients of that data can reasonably be expected [...]


McDermott to Host Social Media Best Practices Panel on June 4, 2014

Planning to be in the Chicago area on June 4?  Register to attend this interactive panel discussion regarding best practices for social media policies.  The panelists’ multi-disciplinary perspectives will cover privacy, intellectual property and employment law issues related to the use of social media in the workplace.

For more information and to register, click here.





FTC Enforces Facebook Policies to Stop Jerk

The Federal Trade Commission (FTC) recently accused the operator of www.Jerk.com (Jerk) of misrepresenting to users the source of the personal content that Jerk used for its purported social networking website and the benefits derived from a user’s purchase of a Jerk membership.   According to the FTC, Jerk improperly accessed personal information about consumers from Facebook, used the information to create millions of unique profiles identifying subjects as either “Jerk” or “Not a Jerk” and falsely represented that a user could dispute the Jerk/Not a Jerk label and alter the information posted on the website by paying a $30 subscription fee.  The interesting issue in this case is not the name of the defendant or its unsavory business model; rather, what’s interesting is the FTC’s tacit enforcement of Facebook’s privacy policies governing the personal information of Facebook’s own users.

Misrepresenting the Source of Personal Information

Although Jerk represented that its profile information was created by its users and reflected those users’ views of the profiled individuals, Jerk in fact obtained the profile information from Facebook.  In its complaint, the FTC alleges that Jerk accessed Facebook’s data through Facebook’s application programming interfaces (APIs), which are tools developers can use to interact with Facebook, and downloaded the names and photographs of millions of Facebook users without consent.  The FTC used Facebook’s various policies as support for its allegation that Jerk improperly obtained the personal information of Facebook’s users and, in turn, misrepresented the source of the information.  The FTC noted that developers accessing the Facebook platform must agree to Facebook’s policies, which include (1) obtaining users’ explicit consent to share certain Facebook data; (2) deleting information obtained through Facebook once Facebook disables the developer’s Facebook access; (3) providing an easily accessible mechanism for consumers to request the deletion of their Facebook data; and (4) deleting information obtained from Facebook upon a consumer’s request.  Jerk used the data it collected from Facebook not to interact with Facebook but to create unique Jerk profiles for its own commercial advantage.  Facebook, not Jerk’s users, was thus the actual source of the data, contrary to Jerk’s representation that the data had been provided by its users.

Misrepresenting the Benefit of the Bargain

According to the FTC, Jerk represented that purchase of a $30 subscription would enable users to obtain “premium features,” including the ability to dispute the information posted on Jerk and to alter or delete their Jerk profiles.  Users who paid the subscription often received none of the promised benefits.  The FTC noted that contacting Jerk with complaints was difficult for consumers:  Jerk charged $25 for users to email the customer service department.

A hearing is scheduled for January 2015. Notably, the FTC’s proposed Order, among other prohibitions, enjoins Jerk from using in any way the personal information that Jerk obtained prior to the FTC’s action – meaning the personal information that was obtained illegally from Facebook.




