Facial Recognition Reading List

Gargi Sharma
Dec 18, 2020

I am a human rights advocate and a data justice researcher. This literature review / reading list keeps me accountable and creates a public log of writing on facial recognition technologies. I would love to add notes on each of these, but that might not always be possible. It is likely that this list will end up being so long that it is only useful to me.

I try to use as much open access literature as possible, but some links might be behind paywalls.

Photo by Kelly Sikkema on Unsplash

2020

January

Making Facial Recognition Easier Might Make Stalking Easier Too, by Rachel Charlene Lewis (31.01.2020, Bitch Media) — Clearview AI can and will make stalking easier and more dangerous. It will also affect those who want to fly under the radar, whether for safety reasons or simply by preference. The risk of abuse clearly outweighs any potential benefits, especially given the gender and race dimensions of stalking, and of surveillance more broadly.

The Secretive Company That Might End Privacy as We Know It, by Kashmir Hill (18.01.2020, New York Times) — Clearview AI has scraped more than 3 billion images from Facebook, YouTube, Venmo and millions of other websites; by comparison, the FBI database holds only 411 million images. More than 600 law enforcement agencies started using Clearview AI in 2019, but there is no public list of them. Photographs uploaded by the police are sent to and stored on Clearview’s servers, allowing other users (including non-law enforcement) to rely on them in the future. The Times found that the app’s underlying code would allow the software to be paired with augmented-reality glasses, making it possible to identify every person you see and learn where they live and work — a stranger at a cafe, a fellow train rider, activists at a protest, migrants crossing borders. The company has also licensed the app to private companies for security purposes. In the absence of public scrutiny, people won’t know whether their police departments, employers, housing societies, or local businesses are keeping a record of them. The company’s backers and advisers include people with ties to former NYC mayor Rudy Giuliani and to Facebook board member and Palantir co-founder Peter Thiel. Police departments in the US have been using facial recognition tools for almost two decades, but those relied on pictures provided to the government — mug shots, driver’s license photos. Clearview AI’s scraping gives them access to photographs from social media and other websites, raising the question: is publicly available information fair game for surveillance and profit-making? Social media sites such as Twitter ban the use of their data for facial recognition, but that didn’t stop Clearview AI. The company says its tool is accurate 75% of the time, but there is a lack of data on false matches. Arrests made solely on the basis of facial recognition are liable to be deemed improper. The company also has the ability to manipulate and hide results.

“Even if Clearview doesn’t make its app publicly available, a copycat company might, now that the taboo is broken. Searching someone by face could become as easy as Googling a name.”

Photo by Maxim Hopman on Unsplash

2019

November

Facebook built a facial recognition app for employees, by Queenie Wong (22.11.2019, CNET) — “Employees would point their phone camera at another person and it would display their name and Facebook profile picture after a few seconds.”

September

Facebook replaces setting that only suggested friends to tag in photos, by Queenie Wong (03.09.2019, CNET) — users who do not have facial recognition turned on will not appear as suggestions to be tagged in photos, following a lawsuit alleging that the company violated Illinois’s biometric privacy law.

August

Memo from Clearview AI’s lawyer on the Legal Implications of Clearview Technology, by Paul D. Clement, Esq. (14.07.2019, Kirkland & Ellis LLP) — according to the memo, not only does law enforcement’s use of Clearview AI’s technology not violate the US Constitution or state biometric and privacy laws, it promotes constitutional values in a manner superior to traditional and competing technologies. It compares Clearview AI to Google in that it acts as a search engine for publicly available images. It argues that a Clearview search is the beginning, not the end, of the identification process, and as such is not intended or designed to be used as evidence in court. This raises a question: if this technology is used to identify an individual or a pool of suspects, leading to an arrest, would the identification still be admissible? Also, once the technology is in private hands, even if the result fails to rise to the level of a legal harm that could be prosecuted, it still has the ability to cause material harm. The memo also states that state law regulates the use of facial recognition technologies for commercial purposes and therefore does not affect law enforcement’s use of Clearview’s technology. It repeatedly describes Clearview’s technology as ‘objective’ and ‘race-neutral’, but research has shown that no current facial recognition technology has the same identification rate across ethnicities. It also claims that identifying suspects with Clearview’s technology minimises police-citizen contact, such as neighbourhood canvasses or stopping and questioning potential witnesses.

Photo by Matthew T Rader on Unsplash

2018

Photo by Fredrik Bedsvaag on Unsplash

2017

Photo by Jon Tyson on Unsplash

2016

Photo by Etienne Girardet on Unsplash

2015

Photo by Noelle Otto from Pexels

2014

2013

2012

2011

June

Facebook faces fresh privacy palaver over face recognition for photo tagging, by Richard Trenholm (08.06.2011, CNET) — Facebook uses facial recognition to suggest tags for people in photos; the feature is turned on by default.

Facial Recognition: The One Technology Google Is Holding Back, by Bianca Bosker (01.06.2011, HuffPost) — Google’s Eric Schmidt described a scenario in which an evil dictator could use facial recognition to identify people in a crowd and turn the technology against them.

2010
