It is rare for the average American not to notice a security camera at a grocery store, or to be unaware that many police departments require officers to wear body cameras on duty. Most of us have normalized surveillance as an everyday encounter and let it pass subconsciously, especially if we have never been accused of a crime. But there is no guarantee that the technology will be on your side: with facial recognition now a prominent tool for finding suspects, there is a real possibility of being falsely accused by algorithmic systems that compare your face against thousands of others.
In 2016, the Georgetown Center on Privacy and Technology reported that police in most US states had access to the technology and that photos of about half of US adults were already in a facial recognition database. The number of Americans in national databases could be higher today. Face recognition systems use algorithms that pick out distinguishing details of an individual's face, such as the shape of an eye or the contour of the head, and encode them as a mathematical representation. That representation is then compared against others in the same or a shared database, returning candidate matches that can identify whoever law enforcement or government agencies, such as the Department of Motor Vehicles, are looking for.
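The comparison step described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not any vendor's actual system: the tiny four-number "embeddings," the names, and the 0.8 threshold are all invented for the example, and real systems use much higher-dimensional vectors produced by neural networks.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two face-embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search_database(probe, database, threshold=0.8):
    """Return (identity, score) pairs whose similarity clears the threshold,
    best match first -- a toy stand-in for a law-enforcement database query."""
    scored = [(name, cosine_similarity(probe, emb)) for name, emb in database.items()]
    return sorted([(n, s) for n, s in scored if s >= threshold],
                  key=lambda pair: pair[1], reverse=True)

# Invented 4-dimensional "embeddings"; real systems use 128+ dimensions.
database = {
    "person_a": [0.9, 0.1, 0.3, 0.2],
    "person_b": [0.1, 0.8, 0.2, 0.7],
}
probe = [0.85, 0.15, 0.35, 0.25]  # embedding extracted from a new photo
print(search_database(probe, database))  # person_a scores highest
```

Note that the system always returns the *closest* records above a tunable cutoff, not a ground-truth answer, which is why the match quality degrades so sharply with poor inputs.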
These face recognition databases are complicated by poor lighting, low image resolution, and suboptimal viewing angles (such as a photograph taken from above, looking down on an unknown person). This is where it gets REALLY complicated: the inaccuracies of these systems have led many ordinary people (particularly people of color) with little or no criminal background to be charged with crimes they never committed. Worse still, those who are accused, and their legal counsel, are often never informed that the suspicion against them was based on facial recognition.
Read the story of Robert Julian-Borchak Williams, an African American man who was arrested after a facial recognition system falsely matched his photo with security footage of a shoplifter: Facial Recognition Leads To False Arrest Of Black Man In Detroit : NPR
Racial bias in these databases is not incidental: the history of anti-activist and racist surveillance has contributed to the disparities we see in our justice system. Face recognition can also target other marginalized populations, such as undocumented immigrants by ICE or Muslim citizens by the NYPD, according to Najibi. The software also has more difficulty recognizing people of color and women than their white male counterparts. And because mugshots are logged into police departments' databases (institutions with well-known racist practices), non-white individuals are more likely to be in these systems, where they can be constantly monitored and targeted with false suspicions.
Here is more technical information, provided by the Electronic Frontier Foundation, about how law enforcement identifies people in these databases despite so many unreliable factors:
- A “false negative” is when the face recognition system fails to match a person's face to an image that is, in fact, contained in a database. In other words, the system will erroneously return zero results in response to a query.
- A “false positive” is when the face recognition system does match a person's face to an image in a database, but that match is incorrect. This is when a police officer submits an image of “Joe,” but the system erroneously tells the officer that the photo is of “Jack.”
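The two error types above are really two sides of one dial: the match threshold. The toy sketch below is a hypothetical illustration with invented similarity scores (the names "Joe" and "Jack" echo the EFF's example); it is not how any real system is calibrated, but it shows how tightening the cutoff trades false positives for false negatives.

```python
def query(probe_scores, threshold):
    """Return the names whose (invented) similarity score clears the threshold."""
    return [name for name, score in probe_scores.items() if score >= threshold]

# Made-up scores a system might assign when the probe photo is actually of "Joe".
scores = {"Joe": 0.74, "Jack": 0.81, "Jill": 0.40}

# Strict threshold: even Joe's own record misses the cutoff -> a "false negative".
print(query(scores, threshold=0.9))   # [] -- system erroneously returns zero results

# Looser threshold: the wrong person clears it -> a "false positive".
print(query(scores, threshold=0.8))   # ['Jack'] -- system names the wrong man
```

For a shoplifting investigation, the false positive is the dangerous case: the system confidently surfaces "Jack" even though the photo shows "Joe."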
Racial profiling can be difficult to detect because courts and law enforcement are not required to disclose whether face recognition was involved in any investigative or judicial process. Human analysts, who also play a key role in confirming matches in these databases, are predominantly white and male, concentrating authority in a crucial role that has put many people through psychological harm and humiliation. This form of surveillance is hardly taught, and it is unfortunately left to attorneys to research on their own how local databases affect their clients.
“The longer things remain secret, the harder it is to challenge them, and the harder it is to challenge them, the longer police go without courts putting limits on what they can do,” says Nathan Wessler, who leads the Speech, Privacy, and Technology Project at the ACLU.
Education on racial literacy has been advocated by many organizations and has led lawmakers to pass statutes that improve face recognition procedures. For example, the Safe Face Pledge calls on organizations to address bias in their technologies and evaluate their applications. Such efforts have already made some progress: the Algorithmic Accountability Act of 2019 would empower the Federal Trade Commission to regulate companies, creating obligations to assess algorithmic training, accuracy, and data privacy. Furthermore, several Congressional hearings have specifically considered anti-Black discrimination in face recognition (Najibi, 2020).
Regardless, face recognition is invasive, and it remains a developing phenomenon in the legal world. There are many unknowns about what it does, and a huge gap in its transparency. Each individual should have to give consent before their likeness is captured and distributed into these databases. This is about more than the technicalities of the systems: the fact that state actors have not publicly informed us about the technology's evolution is intentional.
- Racial Discrimination in Face Recognition Technology - Science in the News (harvard.edu)
- Face Recognition | Electronic Frontier Foundation (eff.org)
- Face Off: Law Enforcement Use of Face Recognition Technology | Electronic Frontier Foundation (eff.org)
- Ban dangerous facial recognition technology that amplifies racist policing - Amnesty International
- Facial Recognition Leads To False Arrest Of Black Man In Detroit : NPR
- The Fight to Stop Face Recognition Technology | American Civil Liberties Union (aclu.org)
- The Microsoft Police State: Mass Surveillance, Facial Recognition, and the Azure Cloud - The Intercept