In recent weeks and months, legal and technical issues related to the use of facial recognition systems in the United States have received national attention, including concerns that the technology is less accurate at identifying non-white individuals and that its widespread use by police departments may play a role in racially discriminatory policing. Privacy considerations will play a key role in the ongoing debate over the future of facial recognition technology.

Facial recognition systems (FRS) are automated or semi-automated technologies that analyze an individual's features by extracting facial patterns from video or still images. FRS convert the attributes or features of an individual's face into data that can be used to uniquely identify that person. FRS use has grown rapidly in recent years: in addition to widespread adoption by law enforcement agencies, FRS are frequently used in retail and banking, and in security applications such as airport screening.
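To make the matching step concrete, the following is a minimal sketch of how a one-to-one ("verification") comparison typically works: a feature extractor converts a face image into a fixed-length numeric template, and two templates are declared a match when their similarity exceeds a decision threshold. The embed() function here is a hypothetical placeholder (real systems use trained neural networks), and the details are illustrative rather than any vendor's actual pipeline.

```python
import numpy as np

def embed(face_image: np.ndarray) -> np.ndarray:
    """Placeholder feature extractor: maps a face image to a fixed-length,
    unit-norm template. Real FRS use trained neural networks here."""
    flat = face_image.ravel().astype(float)
    template = np.resize(flat, 128)  # illustrative 128-dimension template
    return template / (np.linalg.norm(template) + 1e-9)

def verify(template_a: np.ndarray, template_b: np.ndarray,
           threshold: float = 0.99) -> bool:
    """One-to-one matching: declare a match only when the cosine
    similarity of the two templates meets the decision threshold."""
    return float(np.dot(template_a, template_b)) >= threshold

# The same image matches itself; two unrelated images should not.
probe = np.random.rand(64, 64)
other = np.random.rand(64, 64)
print(verify(embed(probe), embed(probe)))  # True
print(verify(embed(probe), embed(other)))  # almost certainly False
```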

In response to the global coronavirus (COVID-19) pandemic, public health agencies and private sector companies have considered ways that FRS might be used in conjunction with proximity and geolocation tracking data to control the disease's spread. Some foreign governments have implemented extensive biometric and behavioral monitoring to track and contain the virus, using FRS to identify persons who have been in contact with COVID-19-positive individuals and to enforce quarantine or stay-at-home orders. In the United States, by contrast, FRS use already faced opposition over data privacy concerns before COVID-19, and it has encountered increased backlash following the civil rights protests of the past month amid concerns over the technology's accuracy and questions about its use by law enforcement agencies.

Accuracy Concerns

There are currently no industry standards for the development of FRS, and as a result, FRS algorithms differ significantly in accuracy. A December 2019 National Institute of Standards and Technology (NIST) study, the third in a series conducted through its Face Recognition Vendor Test program, evaluated the effects of factors such as race and sex on facial recognition software. The study analyzed 189 facial recognition algorithms from 99 developers, using approximately 18 million images of 8 million people drawn from databases provided by the US Department of State, the Department of Homeland Security and the Federal Bureau of Investigation. It found disproportionately higher false positive rates for African American, Asian and Native American faces in one-to-one matching, and higher false positive rates for African American females in one-to-many matching, putting that group at the greatest risk of misidentification. While law enforcement agencies are encouraged to adopt a high recognition confidence threshold (often 99%) for the use of FRS, in reality police departments exercise [...]
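The distinction between one-to-one and one-to-many matching matters because false positives compound with the size of the searched database. As a simplified illustration (assuming independent comparisons, which is back-of-the-envelope arithmetic rather than the NIST study's methodology), even a low per-comparison false positive rate yields a high chance of at least one misidentification when a probe image is searched against millions of records:

```python
def p_any_false_positive(per_comparison_fpr: float, gallery_size: int) -> float:
    """Chance that a one-to-many search against a gallery of non-matching
    identities returns at least one false positive, assuming each
    comparison errs independently at rate `per_comparison_fpr`."""
    return 1.0 - (1.0 - per_comparison_fpr) ** gallery_size

# A 1-in-10,000 per-comparison error rate compounds quickly:
for size in (1_000, 100_000, 1_000_000):
    print(f"gallery of {size:>9,}: {p_any_false_positive(1e-4, size):.4f}")
# gallery of     1,000: 0.0952
# gallery of   100,000: 1.0000 (to four decimal places)
# gallery of 1,000,000: 1.0000
```

Under this simplified model, demographic differences in per-comparison error rates of the kind NIST measured translate into correspondingly larger gaps at database scale.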
