Facial Recognition and How It Works
By Susan Gonzales and Zachary Solomon
Facial recognition seems like the stuff of science fiction. Technology that’s capable of identifying a human face from a photograph or video and matching it against hundreds of millions of other faces stored in a database: if that doesn’t sound futuristic, we don’t know what does.
Here’s how it works: facial recognition systems use artificial intelligence algorithms (step-by-step instructions written by a programmer and followed by a computer to solve a problem) to identify specific details in a person’s face in an image, details such as the distance between the eyes or the distance between the ears. That image is then cropped to size, converted into a numerical format the matching system can read easily, and compared against all the other images in a database. Sometimes, if the algorithm can’t find a positive match, it will suggest several options: images of people who look similar enough to the person in the original image.
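To make that matching step concrete, here is a minimal sketch in Python. It assumes each face has already been reduced to a short list of measurements (a "faceprint"); the names, numbers, and the `match_face` function are hypothetical, invented for illustration. Real systems use far richer representations learned by neural networks rather than a handful of hand-picked distances.

```python
import math

# Toy "faceprints": a few measured facial details, as described above
# (e.g., distance between the eyes, distance between the ears), scaled
# to the 0..1 range. All names and numbers here are hypothetical.
database = {
    "person_a": [0.42, 0.77, 0.31],
    "person_b": [0.40, 0.75, 0.33],
    "person_c": [0.90, 0.12, 0.58],
}

def distance(face1, face2):
    """Euclidean distance between two faceprints; smaller means more alike."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(face1, face2)))

def match_face(query, database, threshold=0.05, suggestions=3):
    """Return a positive match if a stored faceprint is close enough;
    otherwise suggest the most similar candidates (the look-alikes)."""
    ranked = sorted(database.items(), key=lambda item: distance(query, item[1]))
    best_name, best_print = ranked[0]
    if distance(query, best_print) <= threshold:
        return {"match": best_name}
    # No confident match: fall back to several similar-looking options.
    return {"suggestions": [name for name, _ in ranked[:suggestions]]}

# Faceprint measured from a new photo (again, hypothetical numbers).
print(match_face([0.41, 0.76, 0.32], database))  # {'match': 'person_a'}
```

Running the sketch prints a positive match for the closest stored face; lower the threshold, or query a face unlike any stored one, and it falls back to suggesting the closest look-alikes instead, just as described above.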
For a real-life example of facial recognition, look to the January 6th attack on the U.S. Congress. The FBI, which itself has access to over 640 million photos of American citizens in its facial recognition database, has been working to identify the hundreds of people who terrorized the Capitol Building. And it has had some success. Using facial recognition technology, law enforcement has been matching faces from the images and videos taken during the insurrection, often by the insurrectionists themselves, against the faces in the various databases available to them.
In the hands of law enforcement, facial recognition technology has the potential to be a force for good in the world. Being able to identify criminals from a stray image or a blurry video using intelligent algorithms can make justice swifter and safer.
But unfortunately, it’s not all roses. Facial recognition tech has a lot of room for improvement, which is why the common belief is that it can be a tool for good only as long as it is heavily regulated. There have been numerous reports of inaccurate matches; people are routinely misidentified, especially people of color and women. A recent study showed that a popular facial recognition system was incorrect 34.7% of the time when identifying Black women. And last year, the city of San Francisco banned the use of facial recognition technology by city agencies, citing its potential for abuse.
There are countless examples of facial recognition gone wrong. Elected officials and community leaders are demanding changes based on the unintended harm facial recognition tech has caused people of color and women in particular. Improvements that may address these issues are expected in the coming months and years. Stay tuned.
For more detailed information about facial recognition technology and bias, check out the Algorithmic Justice League’s primer and the ACLU factsheet.