On 6 August 2017, the UK government released ‘The Key Principles of Vehicle Cyber Security for Connected and Automated Vehicles’, guidance aimed at ensuring minimum cybersecurity protections for consumers in the manufacture and operation of connected and automated vehicles.

Connected and automated vehicles fall into the category of so-called ‘smart cars’. Connected vehicles have gained, and will continue to gain, adoption in the market and, indeed, are expected to make up more than half of new vehicles by 2020. Such cars have the ability, through the use of various technologies, to communicate with the driver, other cars, application providers, traffic infrastructure and the Cloud. Automated vehicles, also known as autonomous vehicles, include self-driving features that allow the vehicle to control key functions, such as observing the vehicle’s environment, steering, acceleration, parking and lane changes, that traditionally have been performed by a human driver. Consumers in certain markets have been able to purchase vehicles with certain autonomous driving features for the past few years, and vehicle manufacturers have announced plans to enable vehicles to be fully self-driving, under certain conditions, in the near future.


The US Department of Health and Human Services (HHS) Office for Civil Rights (OCR) recently posted guidance (OCR guidance) clarifying that a business associate such as an information technology vendor generally may not block or terminate access by a covered entity customer to protected health information (PHI) maintained by the vendor on behalf of the customer. Such “information blocking” could occur, for example, during a contract dispute in which a vendor terminates customer access or activates a “kill switch” that renders an information system containing PHI inaccessible to the customer. Many information vendors have historically taken such an approach to commercial disputes.

Read full article here.

If you haven’t heard about the newest gaming craze yet, it’s based on what is called “augmented reality” (AR), and it could potentially impinge on your home life and workplace, as such games allow users to “photograph” imaginary items overlaid on objects existing in the real world. An augmented reality game differs from “virtual reality” in that it mixes elements of the real world with avatars, made-up creatures, fanciful landscapes and the like, rather than simply presenting a completely fictional scenario. Whether you play such games yourself or are merely nearby while others play, here are a few things to think about as an active participant, and some tips regarding the intellectual property and confidentiality issues that arise from others playing the game around you.

Augmented reality games are typically played on a smartphone app, and some of them allow the user to capture images of the player’s experience and post them on social media, text them to friends or keep them on the phone’s camera roll. However, special glasses or other vehicles could deliver the augmented reality experience in different contexts—not just gaming. For example, technology in this area is rapidly advancing and will allow users to link up and “experience” things together well beyond what exists in the real world, i.e., a “mixed world” experience, if you will. These joint holographic experiences are just one facet of the direction that augmented reality is taking.

As always, with new technological advancements, there are some caveats to using AR that you should be aware of.

Trademarks

If a company’s trademark is visible in the photo of your AR experience, you need to be mindful that you do not run afoul of trademark laws. For the same reasons that some trademarks are blurred out on TV shows, you should not publish such photos on social media accounts in any fashion that might draw negative attention from the trademark owner. Even if you are not selling competing goods, you could potentially be liable for trademark infringement. There is another, more important reason not to post such photos, discussed below, which can lead to a second cause of action against you arising from the same photo: the right of publicity, a personal right that is treated in vastly different ways from state to state.

Right of Publicity

The Right of Publicity (ROP) protects everyone from misappropriation of his or her name, likeness, voice, image or other recognizable element of personal identity. It is protected by state law, and states vary greatly in their treatment of ROP. For example, some states protect a person’s ROP post-mortem, whereas others offer no protection whatsoever. Due to the ease with which still or moving images can be reproduced and posted on the Internet, it is critical that you consider your postings from a ROP standpoint before you upload an image to a social media account. For instance, if your photo features your best friend in a shared AR experience, she may not object to you posting her photo to one of your social media accounts. However, if a brand-name clothing manufacturer reposts it and somehow uses the momentum of the AR craze to show how game players and/or the avatars and creatures within the game are attracted to its brand of clothing, the result could be not just an issue with the game developer, but also with your best friend, who may now be the unwitting spokesmodel for that brand of clothing. Essentially, the manufacturer would be receiving an unfair free endorsement without ever having to negotiate with your best friend. In many states, she would have a ROP cause of action against the clothing manufacturer for commercial use of her image without her permission. This is especially dangerous if the best friend is a minor and her parents have not consented to this use of her image. As you can see, the landscape is fraught with potential pitfalls unless you are a news reporting agency or the like and your actions clearly fall under the First Amendment/free speech exception.

Confidential Information

One very important aspect of an AR game is a player’s ability to capture a photograph of the scene being explored or the personal experience of the user in a real-world setting (e.g., it could show your desk at work, but in an outer space setting, or your car dashboard with the view from the driver’s perspective out the windshield showing a fairyland with mythical creatures in the distance). However, in taking these mixed virtual/real-world photos, it is essential to be mindful of your surroundings. Doctors, lawyers, mental health professionals, bankers and others with a heightened fiduciary duty to their clients must ensure that, if they are taking such photos, no confidential information that would breach such duties is captured in them. Whether taken in the app itself or in screenshot form, these photos could prove problematic if they are automatically uploaded to the cloud or captured in the app. For example, a judge recently tweeted that defense counsel had been playing an AR game in the courtroom while court was in session. Setting aside the appropriateness of such behavior, query whether such actions violate confidentiality rules.

For all such professionals, there are governing rules about the treatment of certain types of confidential information (the Gramm-Leach-Bliley (GLB) Act, the Health Insurance Portability and Accountability Act (HIPAA), etc.). If the game is set to capture images of the AR characters or scenes in the real world, then anything within the player’s view or in the surrounding area is captured in the photograph along with the character. To the extent that confidential personal information or trade secret information is being captured, this is a problem. The quick fix is to set the game to use a fully virtual background rather than an AR one, a feature that some AR games already offer. Although this is arguably less fun, it mitigates the danger of capturing sensitive data on your camera roll or in the cloud, or of accidentally posting it, any of which could have very serious consequences.

In summary, the new AR games are wildly popular and likely are here to stay. Given that, it’s best to be mindful of your surroundings and make sure that you, and those around you, are playing responsibly.

On December 28, 2015, the Ministry of Industry and Information Technology of China released the newly revised Classification Catalogue of Telecommunications Services, which is due to take effect as of March 1, 2016. This round of revision has been long awaited since the Catalogue’s last amendment in 2003, and is expected to reflect the advancement and emergence of new technologies and business models in the telecommunications field, as well as to bring new telecommunications business models within the scope of regulation.

 

Read the full China Law Alert.

On April 1, 2015, the Office of the National Coordinator for Health Information Technology (ONC), which assists with the coordination of federal policy on data sharing objectives and standards, issued its Shared Nationwide Interoperability Roadmap and requested comments.  The Roadmap seeks to lay out a framework for developing and implementing interoperable health information systems that will allow for the freer flow of health-related data by and among providers and patients.  The use of technology to capture and understand health-related information, and the strategic sharing of that information among health industry stakeholders, are widely recognized as critical to supporting patient engagement, improving quality outcomes and lowering health care costs.

On April 3, 2015, the Federal Trade Commission issued coordinated comments from its Office of Policy Planning, Bureau of Competition, Bureau of Consumer Protection and Bureau of Economics.  The FTC has a broad, dual mission to protect consumers and promote competition, in part, by preventing business practices that are anticompetitive or deceptive or unfair to consumers.  This includes business practices that relate to consumer privacy and data security.  Notably, the FTC’s comments on the Roadmap draw from both its pro-competitive experience and its privacy and security protection perspective, and therefore offer insights into the FTC’s assessment of interoperability from a variety of consumer protection vantage points.

The FTC agreed that ONC’s Roadmap has the potential to benefit both patients and providers by “facilitating innovation and fostering competition in health IT and health care services markets” – lowering health care costs, improving population health management and empowering consumers through easier access to their personal information.  The concepts advanced in the Roadmap, however, if not carefully implemented, can also have a negative effect on competition for health care technology services.  The FTC comments are intended to guide ONC’s implementation with respect to: (1) creating a business and regulatory environment that encourages interoperability, (2) shared governance mechanisms that enable interoperability, and (3) advancing technical standards.

Taking each of these aspects in turn, creating a business and regulatory environment that encourages interoperability is important because, if left unattended, the marketplace may be resistant to interoperability.  For example, health care providers may resist interoperability because it would make switching providers easier, and IT vendors may see interoperability as a threat to customer allegiance.  The FTC suggests that the federal government, as a major payer, work to align economic incentives to create greater demand among providers for interoperability.

With respect to shared governance mechanisms, the FTC notes that coordinated efforts among competitors may have the effect of suppressing competition.  The FTC identifies several examples of anticompetitive conduct in standard setting efforts for ONC’s consideration as it considers how to implement the Roadmap.

Finally, in advancing core technical standards, the FTC advised ONC to consider how standardization could affect competition by (1) limiting competition between technologies, (2) facilitating customer lock-in, (3) reducing competition between standards, and (4) impacting the method for selecting standards.

As part of its mission to protect consumers, the FTC focuses its privacy and security oversight of health-related information on companies and data sharing arrangements that sit outside the jurisdiction of the Health Insurance Portability and Accountability Act (HIPAA), which regulates the privacy and security practices of covered entity health care providers, health plans and health care clearinghouses, as well as the third parties that assist those covered entities, referred to as business associates.  Information regulated by HIPAA, called Protected Health Information (PHI), typically resides in the “traditional medical model” of providers and health plans.  Information regulated by the FTC, often called consumer-generated health information (CHI), tends to be generated outside of the traditional medical model, for example through the explosion of wearables and other digital, consumer-facing technologies.

As interoperability gathers steam, and as providers and plans increasingly look to mobile and digital health tools to maximize patient engagement and obtain additional “out of the exam room” data that they can leverage to improve patient outcomes and control costs, the divide between PHI and CHI collapses.  Not only will interoperability have to contend with PHI-centered systems effectively sharing information with one another, but it will also have to contend with the need for systems to move PHI and CHI, consistent with the different consumer expectations and regulatory frameworks for such information.  And then, of course, there is also state law.

The FTC’s comments highlight the central role that the FTC will play, alongside the Office for Civil Rights, which enforces HIPAA, in envisioning, deploying and overseeing the health information sharing systems beginning to emerge.

On the third anniversary of the EU Commission’s proposed new data protection regime, the UK ICO has published its thoughts on where the new regime stands. The message is mixed: progress in some areas but nothing definitive, and no real clarity as to when the new regime may come into force.

The legislative process involves the agreement of the European Commission, the European Parliament and the Council of the European Union (representing the governments of the member states). So far, the European Parliament has agreed its amendments to the Commission’s proposal, and we are still waiting for the Council to agree its amendments before all three come together and try to find a mutually agreeable position.

The Council is guided by the mantra “nothing is agreed until everything is agreed”. So even though the Council has made progress, reaching “partial general agreement” on international transfers, risk-based obligations on controllers and processors, and the provisions relating to specific data processing situations such as research, and agreeing an approach on the one-stop-shop principle (allowing those operating in multiple member states to appoint and deal with a single authority), this progress means nothing until there is final agreement on everything. At this stage, all informal agreements remain open to renegotiation.

It is noted that Latvia holds the presidency of the Council until June 2015. The Latvians have already noted that data protection reform remains a key priority, but progress has been slow and time may be against them. Where Latvia fails, Luxembourg will hopefully succeed as it takes up the presidency from June.

The ICO is urging all stakeholders to push on with the reform, although it sees the proposed timetable of completing the trilogue process by the end of 2015 as optimistic. Instead, a more realistic timetable may be final agreement by mid-2016, with the new regime up and running in 2018.

In 2014, regulators around the globe issued guidelines, legislation and penalties in an effort to enhance security and control within the ever-shifting field of privacy and data protection. The Federal Trade Commission confirmed its expanded reach in the United States, and Canada’s far-reaching anti-spam legislation takes full effect imminently. As European authorities grappled with the draft data protection regulation and the “right to be forgotten,” the African Union adopted the Convention on Cybersecurity and Personal Data, and China improved the security of individuals’ information in several key areas. Meanwhile, Latin America’s patchwork of data privacy laws continues to evolve as foreign business increases.

This report furnishes in-house counsel and others responsible for privacy and data protection with an overview of key action points based on these and other 2014 developments, along with advance notice of potential trends in 2015. McDermott will continue to report on future updates, so check back with us regularly.

Read the full report here.

On 10 September 2014, the Global Privacy Enforcement Network (GPEN) published the results of its privacy enforcement survey or “sweep” carried out earlier in 2014 with respect to popular mobile apps.  The results of the sweep are likely to lead to future initiatives by data protection authorities to protect personal information submitted to mobile apps.

The purpose of the sweep was to determine the transparency of the privacy practices of 1,211 mobile apps; it involved the participation of 26 data protection authorities across the globe.  The results of the sweep suggest that a high proportion of the apps downloaded did not sufficiently explain how consumers’ personal information would be collected and used.

Background

GPEN was established in 2010 on the recommendation of the Organisation for Economic Co-operation and Development.  GPEN aims to create cooperation between data protection regulators and authorities throughout the world in order to strengthen personal privacy globally.  GPEN is currently made up of 51 data protection authorities across some 39 jurisdictions.

Over the course of a week in May 2014, GPEN’s “sweepers” – made up of 26 data protection authorities across 19 jurisdictions, including the UK Information Commissioner’s Office (ICO) – participated in the survey by downloading and briefly interacting with the most popular apps released by developers in their respective jurisdictions, in an attempt to recreate a typical consumer’s experience.  In particular, GPEN intended the sweep to increase public and commercial awareness of data protection rights and responsibilities, as well as to identify specific high-level issues that may become the focus of future enforcement actions and initiatives.

Sweep Results

The key negative findings of the GPEN sweep include:

  • 85 percent of apps failed to clearly explain how personal information would be processed.
  • 59 percent of apps did not clearly indicate basic privacy information (with 11 percent failing to include any privacy information whatsoever).
  • 31 percent of apps were excessive in their permission requests to access personal information.
  • 43 percent of the apps had not sufficiently tailored their privacy communications for the mobile app platform – often instead relying on full version privacy policies found on websites.

However, the sweep results also highlighted a number of examples of best practices for app developers, including:

  • Many apps provided clear, easy-to-read and concise explanations about exactly what information would be collected, how and when it would be used and, in some instances, explained specifically and clearly what would not be done with the information collected.
  • Some apps provided links to the privacy policies of their advertising partners and opt-out elections in respect of analytic devices.
  • There were good examples of privacy policies specifically tailored to the app platform, successfully making use of just-in-time notifications (warning users when personal information was about to be collected or used), pop-ups and layered information, allowing for consumers to obtain more detailed information if required.

Many of the GPEN members are expected to take further action following the sweep results.  For its part, the UK ICO has commented that in light of the above results, it and other GPEN members intend to write to developers identified as deficient.  The Belgian Privacy Commission has, in addition, confirmed that cases of gross violations of data protection law identified in the sweep would be forwarded to and dealt with by the relevant authorities.

Changes Impacting Businesses that Process Personal Data in Russia

On July 21, 2014, a new law, Federal Law № 242-FZ (the Database Law), was adopted in Russia, introducing amendments to the existing Federal Law “On personal data” and to the existing Federal Law “On information, information technologies and protection of information.”  The new Database Law requires companies to store and process personal data of Russian nationals in databases located in Russia.  At a minimum, the practical effect of the new Database Law is that companies operating in Russia that collect, receive, store or transmit (“process”) personal data of natural persons in Russia will be required to place servers in Russia if they plan to continue doing business in that market.  This would include, for example, retailers, restaurants, cloud service providers, social networks and companies operating in the transportation, banking and health care spheres.  Importantly, while the Database Law is not scheduled to come into force until September 1, 2016, a new bill was introduced on September 1, 2014, to move that date up to January 1, 2015.  The transition period is designed to give companies time to adjust to the new Database Law and decide whether to build up local infrastructure in Russia, find a partner with such infrastructure in Russia, or cease processing the information of Russian nationals.  If the bill filed on September 1 becomes law, however, the transition period will be substantially shortened and businesses operating in Russia will need to act fast to comply by January 1.

Some mass media in Russia have interpreted provisions of the Database Law as banning the processing of Russian nationals’ personal data abroad.  However, this is not written explicitly into the law and until such opinion is confirmed by the competent Russian authorities, this will continue to be an open question.  There is hope that the lawmakers’ intent was to give a much needed boost to the Russian IT and telecom industry, rather than to prohibit the processing of personal data abroad.  If this hope is confirmed, then so long as companies operating in Russia ensure that they process personal data of Russian nationals in databases physically located in Russia, they also should be able to process this information abroad, subject to compliance with cross-border transfer requirements.  

The other novelty of the new Database Law is that it grants the Russian data protection authority (DPA) the power to block access to information resources that process information in breach of Russian law.  Importantly, the Database Law provides that this blocking authority applies irrespective of where the offending company is located or whether it is registered in Russia.  However, the DPA can initiate the blocking procedure only on the basis of a court judgment.  Based on the court judgment, the DPA will be able to require a hosting provider to take steps to eliminate the infringements.  For example, the hosting provider must inform the owner of the information resource that it must eliminate the infringement, or the hosting provider must restrict the owner’s access to the information that is processed in breach of the law.  In case of the owner’s refusal or inaction, the hosting provider is obliged to restrict access to the information resource altogether.  If the hosting provider does not perform the foregoing steps in due course, the DPA may request that the communication service provider restrict access to the information resource altogether, in particular to its web address, domain name and references to its web pages on the internet.

Changes Impacting Businesses that Process Internet Communications

In addition to the new Database Law, a new Federal Law № 97-FZ dated May 5, 2014 (the Moderator Law) amends the existing Federal Law “On information, information technologies and information protection” to create new obligations for organizers of information distribution on the internet (moderators). The term “moderator” is defined as anyone maintaining information systems or software designed or used to receive, transfer, deliver or process electronic messages on the internet.  The relevant Russian regulator has clarified unofficially that the Moderator Law is addressed only to instant messaging, blogging, social media and e-mail (see clarification at http://rkn.gov.ru/press/publications/news26545.htm). However, the broad and ambiguous definition makes it possible to apply the Moderator Law to every website that has a chat or comment feature, or that is capable of sending or receiving messages from users.  As written, the definition might also apply to e-commerce, cloud storage services and more.

The amendments impose several new obligations on moderators, some of which give sweeping new rights of access to the Russian Government: 

  1. All moderators must file a notification with the state authorities upon commencing moderating activity (that is, upon maintaining information systems or software designed or used to receive, transfer, deliver or process electronic messages on the internet).  The entity must file the notification upon the request of the competent state authority or at its own initiative. The entity then will be qualified as a moderator after its inclusion in the special Register of moderators. The notification procedure is specified in Governmental Regulation № 746 dated July 31, 2014, which became effective August 12, 2014.
  2. All moderators are obliged to store, in the territory of Russia and for not less than six months, information on the facts of reception, transfer, delivery and processing of users’ electronic messages, as well as data about those users.  The types of information to be stored are determined in the recently published Governmental Regulation № 759 dated July 31, 2014, which became effective August 14, 2014.  The Regulation also specifies the categories of users whose electronic messages and data must be stored.  Moderators also are obliged to transfer such information to the competent state authorities upon request.  The requested information must be provided by the moderator within a specified term, which as a general rule is 30 days.  However, urgent requests may require information to be provided within three days.
  3. Moderators are obliged to comply with requirements for technical equipment, software and hardware tools established by the state authorities responsible for ensuring security (for example, the Federal Security Service), as well as by those conducting criminal investigations, in order to enable them to perform their functions.  For example, if a state authority cannot decrypt requested information on the moderator’s information systems, the moderator must assist the authority by taking the steps required to grant access to the information it needs. The detailed procedure for moderators’ liaison with state authorities on technical requirements is specified in Governmental Regulation № 743 dated July 31, 2014, which became effective August 12, 2014.

Note, however, that the outlined obligations do not apply to operators of state (municipal) information systems, communications operators (i.e., legal entities rendering communications services under the respective license) or individuals acting as moderators for private (personal) purposes.

If a moderator fails to comply with the Moderator Law or its implementing Regulations, the competent state authority is entitled to restrict access to the moderator’s information resources by following the statutory procedure set forth in Governmental Regulation № 745 dated July 31, 2014, which became effective August 12, 2014.

The violation of the Moderator Law and its implementing Regulations exposes the company and its officers to the following potential fines:

  • Failure to file required notifications can result in a company fine ranging from 100,000 to 300,000 RUR ($2,695.84 to $8,087.52 USD) and a fine for company officers ranging from 10,000 to 30,000 RUR ($269.58 to $808.75 USD);
  • Failure to store information or ensure access by authorities can result in a company fine ranging from 300,000 to 500,000 RUR ($8,087.52 to $13,479.20 USD) and a fine for company officers ranging from 30,000 to 50,000 RUR ($808.75 to $1,347.92 USD); and
  • Failure to comply with technical requirements can result in a company fine ranging from 300,000 to 500,000 RUR ($8,087.52 to $13,479.20 USD) and a fine for company officers ranging from 30,000 to 50,000 RUR ($808.75 to $1,347.92 USD).

Guest author Maria Ostashenko is Of Counsel at ALRUD Law Firm, based in Moscow, Russia.  Ms. Ostashenko and the ALRUD firm are part of McDermott’s worldwide network of local privacy counsel, who enable us to deliver seamless advice to multinational clients with the speed, efficiency and quality that our clients have come to expect from our team.

In building a robust privacy and security compliance program that will stand up well to federal HIPAA audits, proactive health care organizations are generally rewarded when it comes to data breach avoidance and remediation. But an important piece of that equation is performing consistent risk analyses.

McDermott partner Edward Zacharias was interviewed by HealthITSecurity to discuss these topics and more.

Read the full interview.