We are pleased to present The Law of Digital Health, a new book edited and authored by McDermott’s team of distinguished digital health lawyers and published by AHLA. It is designed to provide business leaders and their key advisors with the knowledge and insight they need to grow and sustain successful digital health initiatives.

Visit www.mwe.com/lawofdigitalhealth to order this comprehensive legal and regulatory analysis, coupled with practical planning and implementation strategies. You can also download the Executive Summary and hear more about how Digital Health is quickly and dynamically changing the health care landscape.

Throughout 2017, the health care and life sciences industries experienced a widespread proliferation of digital health innovation that challenges traditional notions of health care delivery and payment, as well as product research, development and commercialization, for both long-standing and new stakeholders. At the same time, lawmakers and regulators made meaningful progress toward modernizing the existing legal framework in a way that both protects patients and consumers and encourages continued innovation, but their efforts have not kept pace with the rapid speed of innovation. As a result, obstacles, misalignment and ambiguity remain.

We are pleased to bring you this review of key developments that shaped digital health in 2017, along with planning considerations and predictions for the digital health frontier in the year ahead.

Read the full Special Report.

On 6 August 2017, the UK government released ‘The Key Principles of Vehicle Cyber Security for Connected and Automated Vehicles’, guidance aimed at ensuring minimum cybersecurity protections for consumers in the manufacture and operation of connected and automated vehicles.

Connected and automated vehicles fall into the category of so-called ‘smart cars’. Connected vehicles have gained, and will continue to gain, adoption in the market; indeed, they are expected to make up more than half of new vehicles by 2020. Through various technologies, such cars can communicate with the driver, other cars, application providers, traffic infrastructure and the cloud. Automated vehicles, also known as autonomous vehicles, include self-driving features that allow the vehicle to control key functions that traditionally have been performed by a human driver, such as observing the vehicle’s environment, steering, accelerating, parking and changing lanes. Consumers in certain markets have been able to purchase vehicles with certain autonomous driving features for the past few years, and vehicle manufacturers have announced plans to enable vehicles to be fully self-driving under certain conditions in the near future.

Continue Reading UK Government Issues Cybersecurity Guidance for Connected and Automated Vehicles

The US Department of Health and Human Services (HHS) Office for Civil Rights (OCR) recently posted guidance (OCR guidance) clarifying that a business associate such as an information technology vendor generally may not block or terminate access by a covered entity customer to protected health information (PHI) maintained by the vendor on behalf of the customer. Such “information blocking” could occur, for example, during a contract dispute in which a vendor terminates customer access or activates a “kill switch” that renders an information system containing PHI inaccessible to the customer. Many information technology vendors have historically taken such an approach to commercial disputes.

Read full article here.

If you haven’t heard about the newest gaming craze yet, it is based on what is called “augmented reality” (AR), and it could potentially impinge on your home life and workplace, as such games allow users to “photograph” imaginary items overlaid on objects existing in the real world. An augmented reality game differs from “virtual reality” in that it mixes elements of the real world with avatars, made-up creatures, fanciful landscapes and the like, rather than presenting a completely fictional scenario. Whether you play such games yourself or simply find yourself near those who do, here are a few things to think about as an active participant, along with some tips regarding the intellectual property and confidentiality issues that arise from others playing the game around you.

Augmented reality games are typically played through a smartphone app, and some of them allow the user to capture images of the player’s experience and post them on social media, text them to friends or keep them on the phone’s camera roll. However, special glasses or other vehicles could also deliver the augmented reality experience in contexts beyond gaming. For example, technology in this area is rapidly advancing toward allowing users to link up and “experience” things together well beyond what exists in the real world, in a “mixed world” experience, if you will. These joint holographic experiences are just one facet of the direction that augmented reality is taking.

As always, with new technological advancements, there are some caveats to using AR that you should be aware of.

Trademarks

If a company’s trademark is visible in the photo of your AR experience, you need to be mindful that you do not run afoul of trademark laws. For the same reasons that some trademarks are blurred out on TV shows, you should not publish such photos on social media accounts in any fashion that might draw negative attention from the trademark owner. Even if you are not selling competing goods, you could potentially be liable for trademark infringement. There is another, more important reason not to post such photos, discussed below, which can lead to a second cause of action against you arising from the same photo: the right of publicity, a personal right that is treated in vastly different ways from state to state.

Right of Publicity

The Right of Publicity (ROP) protects everyone from misappropriation of his or her name, likeness, voice, image or other recognizable element of personal identity. It is protected by state law, and states vary greatly in their treatment of ROP. For example, some states protect a person’s ROP post-mortem, whereas others offer no protection whatsoever. Given the ease with which still or moving images can be reproduced and posted on the internet, it is critical that you consider your postings from a ROP standpoint before you upload an image to a social media account. For instance, if your photo features your best friend in a shared AR experience, she may not object to your posting her photo to one of your social media accounts. However, if a brand-name clothing manufacturer reposts it and somehow uses the momentum of the AR craze to show how game players and/or the avatars and creatures within the game are attracted to its brand of clothing, it could result in an issue not just with the game developer, but also with your best friend, who may now be the unwitting spokesmodel for that brand of clothing. Essentially, the manufacturer would be receiving a free endorsement without ever having to negotiate with your best friend. In many states, she would have a ROP cause of action against the clothing manufacturer for commercial use of her image without her permission. The risk is even greater if the best friend is a minor and her parents have not consented to this use of her image. As you can see, the landscape is fraught with potential pitfalls unless you are a news reporting agency or the like and your actions clearly fall under the First Amendment/free speech exception.

Confidential Information

One very important aspect of an AR game is the player’s ability to capture a photograph of the scene being explored or of the user’s personal experience in a real-world setting (e.g., it could show your desk at work, but in an outer space setting, or your car dashboard with the view from the driver’s perspective out the windshield showing a fairyland with mythical creatures in the distance). However, in taking these mixed virtual/real-world photos, it is essential to be mindful of your surroundings. Doctors, lawyers, mental health professionals, bankers and others who owe heightened fiduciary duties to their clients must ensure that, if they are taking such photos, no confidential information that would breach those duties is captured in them. Whether taken in the app itself or in screenshot form, these photos could prove problematic if they are automatically uploaded to the cloud or captured in the app. For example, a judge recently tweeted that defense counsel had been playing an AR game in the courtroom while court was in session. Setting aside the appropriateness of such behavior, query whether such actions violate confidentiality rules.

For all such professionals there are governing rules about the treatment of certain types of confidential information (the Gramm-Leach-Bliley (GLB) Act, the Health Insurance Portability and Accountability Act (HIPAA), etc.). If the game is set to capture images of the AR characters or scenes in the real world, then anything within the player’s view or in the surrounding area is captured in the photograph with the character. To the extent that confidential personal information or trade secret information is being captured, this is a problem. The quick fix is to set the game to use a fully virtual background rather than an AR one, a feature that some AR games already offer. Although this is arguably less fun, it mitigates the danger of capturing sensitive data on your camera roll or in the cloud, or accidentally posting it, any of which could have very serious consequences.

In summary, the new AR games are wildly popular and likely are here to stay. Given that, it’s best to be mindful of your surroundings and make sure that you, and those around you, are playing responsibly.

On December 28, 2015, the Ministry of Industry and Information Technology of China released the newly revised Classification Catalogue of Telecommunications Services, which is due to take effect on March 1, 2016. This revision, the first since 2003, has been long awaited and is expected to reflect the advancement and emergence of new technologies and business models in the telecommunications field, as well as to help keep new telecommunications business models on the regulatory radar.

Read the full China Law Alert.

On April 1, 2015, the Office of the National Coordinator for Health Information Technology (ONC), which assists with the coordination of federal policy on data sharing objectives and standards, issued its Shared Nationwide Interoperability Roadmap and requested comments. The Roadmap seeks to lay out a framework for developing and implementing interoperable health information systems that will allow for the freer flow of health-related data by and among providers and patients. The use of technology to capture and understand health-related information, and the strategic sharing of that information among health industry stakeholders, is widely recognized as critical to support patient engagement, improve quality outcomes and lower health care costs.

On April 3, 2015, the Federal Trade Commission issued coordinated comments from its Office of Policy Planning, Bureau of Competition, Bureau of Consumer Protection and Bureau of Economics.  The FTC has a broad, dual mission to protect consumers and promote competition, in part, by preventing business practices that are anticompetitive or deceptive or unfair to consumers.  This includes business practices that relate to consumer privacy and data security.  Notably, the FTC’s comments on the Roadmap draw from both its pro-competitive experience and its privacy and security protection perspective, and therefore offer insights into the FTC’s assessment of interoperability from a variety of consumer protection vantage points.

The FTC agreed that ONC’s Roadmap has the potential to benefit both patients and providers by “facilitating innovation and fostering competition in health IT and health care services markets” – lowering health care costs, improving population health management and empowering consumers through easier access to their personal information.  The concepts advanced in the Roadmap, however, if not carefully implemented, can also have a negative effect on competition for health care technology services.  The FTC comments are intended to guide ONC’s implementation with respect to: (1) creating a business and regulatory environment that encourages interoperability, (2) shared governance mechanisms that enable interoperability, and (3) advancing technical standards.

Taking each of these aspects in turn, creating a business and regulatory environment that encourages interoperability is important because, if left unattended, the marketplace may resist interoperability. For example, health care providers may resist interoperability because it would make switching providers easier, and IT vendors may see interoperability as a threat to customer allegiance. The FTC suggests that the federal government, as a major payer, work to align economic incentives to create greater demand among providers for interoperability.

With respect to shared governance mechanisms, the FTC notes that coordinated efforts among competitors may have the effect of suppressing competition. The FTC identifies several examples of anticompetitive conduct in standard-setting efforts for ONC to consider as it determines how to implement the Roadmap.

Finally, in advancing core technical standards, the FTC advised ONC to consider how standardization could affect competition by (1) limiting competition between technologies, (2) facilitating customer lock-in, (3) reducing competition between standards, and (4) impacting the method for selecting standards.

As part of its mission to protect consumers, the FTC focuses its privacy and security oversight of health-related information on companies and data sharing arrangements that sit outside the jurisdiction of the Health Insurance Portability and Accountability Act (HIPAA), which regulates the privacy and security practices of covered entity health care providers, health plans and health care clearinghouses, as well as the third parties that assist those covered entities, referred to as business associates. Information regulated by HIPAA, called Protected Health Information (PHI), typically resides in the “traditional medical model” of providers and health plans. Information regulated by the FTC, often called consumer-generated health information (CHI), tends to be generated outside of the traditional medical model, for example through the explosion of wearables and other digital, consumer-facing technologies.

As interoperability gathers steam, and as providers and plans increasingly look to mobile and digital health tools to maximize patient engagement and obtain additional “out of the exam room” data that they can leverage to improve patient outcomes and control costs, the divide between PHI and CHI collapses.  Not only will interoperability have to contend with PHI-centered systems effectively sharing information with one another, but it will also have to contend with the need for systems to move PHI and CHI, consistent with the different consumer expectations and regulatory frameworks for such information.  And then, of course, there is also state law.

The FTC’s comments highlight the central role that the FTC will play, alongside the Office for Civil Rights, which enforces HIPAA, in envisioning, deploying and overseeing the health information sharing systems beginning to emerge.

On the third anniversary of the EU Commission’s proposed new data protection regime, the UK ICO has published its thoughts on where the new regime stands. The message is mixed: progress in some areas but nothing definitive, and no real clarity as to when the new regime may come into force.

The legislative process requires agreement among the European Commission, the European Parliament and the Council of the European Union (representing the governments of the member states). So far, the European Parliament has agreed its amendments to the Commission’s proposal, and we are still waiting for the Council to agree its amendments before all three come together and try to find a mutually agreeable position.

The Council is guided by the mantra “nothing is agreed until everything is agreed”. There has been progress: the Council has reached “partial general agreement” on international transfers, on risk-based obligations on controllers and processors, and on the provisions relating to specific data processing situations such as research, and it has agreed an approach on the one-stop shop principle (allowing those operating in multiple member states to appoint, and deal with, a single authority). However, this progress means nothing until there is final agreement on everything, and at this stage all informal agreements remain open to renegotiation.

Latvia holds the presidency of the Council until June 2015. The Latvians have already indicated that data protection reform remains a key priority, but progress has been slow and time may be against them. Where Latvia fails, Luxembourg will hopefully succeed as it takes up the presidency from June.

The ICO is urging all stakeholders to push on with the reform, although it sees the proposed timetable of completing the trilogue process by the end of 2015 as optimistic. A more realistic timetable may be final agreement by mid-2016, with the new regime up and running in 2018.

In 2014, regulators around the globe issued guidelines, legislation and penalties in an effort to enhance security and control within the ever-shifting field of privacy and data protection. The Federal Trade Commission confirmed its expanded reach in the United States, and Canada’s far-reaching anti-spam legislation takes full effect imminently. As European authorities grappled with the draft data protection regulation and the “right to be forgotten,” the African Union adopted the Convention on Cybersecurity and Personal Data, and China improved the security of individuals’ information in several key areas. Meanwhile, Latin America’s patchwork of data privacy laws continues to evolve as foreign business increases.

This report furnishes in-house counsel and others responsible for privacy and data protection with an overview of key action points based on these and other 2014 developments, along with advance notice of potential trends in 2015. McDermott will continue to report on future updates, so check back with us regularly.

Read the full report here.

On 10 September 2014, the Global Privacy Enforcement Network (GPEN) published the results of its privacy enforcement survey or “sweep” carried out earlier in 2014 with respect to popular mobile apps.  The results of the sweep are likely to lead to future initiatives by data protection authorities to protect personal information submitted to mobile apps.

The sweep, which involved 26 data protection authorities across the globe, examined the transparency of the privacy practices of some 1,211 mobile apps. The results suggest that a high proportion of the apps downloaded did not sufficiently explain how consumers’ personal information would be collected and used.

Background

GPEN was established in 2010 on the recommendation of the Organisation for Economic Co-operation and Development.  GPEN aims to create cooperation between data protection regulators and authorities throughout the world in order to strengthen personal privacy globally.  GPEN is currently made up of 51 data protection authorities across some 39 jurisdictions.

Over the course of a week in May 2014, GPEN’s “sweepers” – made up of 26 data protection authorities across 19 jurisdictions, including the UK Information Commissioner’s Office (ICO) – participated in the survey by downloading and briefly interacting with the most popular apps released by developers in their respective jurisdictions, in an attempt to recreate a typical consumer’s experience. In particular, GPEN intended the sweep to increase public and commercial awareness of data protection rights and responsibilities, as well as to identify specific high-level issues that may become the focus of future enforcement actions and initiatives.

Sweep Results

The key negative findings of the GPEN sweep include:

  • 85 percent of apps failed to clearly explain how personal information would be processed.
  • 59 percent of apps did not clearly indicate basic privacy information (with 11 percent failing to include any privacy information whatsoever).
  • 31 percent of apps were excessive in their permission requests to access personal information.
  • 43 percent of the apps had not sufficiently tailored their privacy communications for the mobile app platform, often relying instead on the full-length privacy policies found on their websites.

However, the sweep results also highlighted a number of examples of best practices for app developers, including:

  • Many apps provided clear, easy-to-read and concise explanations about exactly what information would be collected, how and when it would be used and, in some instances, explained specifically and clearly what would not be done with the information collected.
  • Some apps provided links to the privacy policies of their advertising partners and offered opt-out choices with respect to analytics tools.
  • There were good examples of privacy policies specifically tailored to the app platform, successfully making use of just-in-time notifications (warning users when personal information was about to be collected or used), pop-ups and layered information, allowing for consumers to obtain more detailed information if required.

Many of the GPEN members are expected to take further action following the sweep results.  For its part, the UK ICO has commented that in light of the above results, it and other GPEN members intend to write to developers identified as deficient.  The Belgian Privacy Commission has, in addition, confirmed that cases of gross violations of data protection law identified in the sweep would be forwarded to and dealt with by the relevant authorities.