Face recognition is frequently in the news, and often not in a good way. Unfortunately, many of the negative claims come from people unfamiliar with how the technology works and how it is being used. While it might be tempting to reduce face recognition to an inevitable Orwellian nightmare, its benefits cannot be realized unless we educate ourselves about how the technology really works, separate fact from fiction, and pass common-sense regulation that sets guidelines for use. Here are five popular misconceptions about face recognition and privacy to help set the record straight on this powerful, emerging technology.


Misconception #1: Face recognition can track and surveil anyone.

False.

Contrary to popular assumptions about how this technology is used, facial recognition systems can’t identify people they are not looking for. If you have not opted into a facial recognition system or been added to a watchlist, you cannot be identified by that system.

One of the most common use cases of face recognition is monitoring for “persons of interest.” In accordance with biometric security regulations, individuals known to be threats can be added to the system to help surveillance teams safeguard people and spaces. Who might a security guard be on the lookout for? Perhaps an abusive parent who has lost custody showing up at a school? A known shoplifter? Or a disgruntled former employee who has made serious threats against their former place of business?

This “watchlist” approach can improve safety at airports, schools, and other locations by immediately recognizing individuals of concern for that location. In these scenarios there are very few individuals that the facial recognition system “sees” — everyone who is not a match is ignored and, by default, face data is automatically deleted within seconds if it doesn’t match a face in the watchlist. Using face recognition to spot known threats is much more efficient and accurate than relying solely on 24/7 manual monitoring from security guards at each entrance. Facial recognition systems are much better than humans at remembering what many persons of interest look like and spotting them as soon as they enter.
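To make that match-or-delete behavior concrete, below is a minimal sketch of a screening loop, assuming faces have already been converted into embedding vectors by some model. The function names and threshold are illustrative, not any vendor’s actual implementation.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.6  # hypothetical cutoff; real systems tune this per deployment

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face signatures (embedding vectors)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def screen_face(signature: np.ndarray, watchlist: dict):
    """Return a watchlist identity if the face matches, otherwise None.

    A non-matching signature is never stored: it simply goes out of scope
    and is discarded when this function returns, mirroring the
    delete-on-no-match retention policy described above.
    """
    best_id, best_score = None, -1.0
    for identity, enrolled in watchlist.items():
        score = cosine_similarity(signature, enrolled)
        if score > best_score:
            best_id, best_score = identity, score
    if best_score >= SIMILARITY_THRESHOLD:
        return best_id  # raise an alert for a human operator to review
    return None  # everyone else is ignored
```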

Another popular face recognition use case is access control for secure areas. In this scenario people “opt-in” so they can use their face to gain entrance to buildings or restricted areas. Biometrics can replace or augment other methods currently used to safeguard perimeters and control access to buildings. For example, someone might borrow or steal a vendor’s badge to get into a sports stadium via a service entrance. Anyone in possession of the badge can get in, but if face recognition is used, the door won’t open unless the correct badge owner is the one at the door.
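For the access-control case, the comparison is one-to-one rather than one-to-many: the live face is checked only against the signature enrolled for the presented badge. A minimal sketch, with hypothetical names and threshold:

```python
import numpy as np

VERIFY_THRESHOLD = 0.7  # hypothetical; 1:1 checks can afford stricter cutoffs

def door_should_open(badge_id: str, live_signature: np.ndarray,
                     enrolled: dict) -> bool:
    """1:1 verification: compare the live face only against the signature
    enrolled for this specific badge, never against the whole database."""
    expected = enrolled.get(badge_id)
    if expected is None:
        return False  # no face enrolled for this badge: deny by default
    similarity = float(np.dot(live_signature, expected) /
                       (np.linalg.norm(live_signature) * np.linalg.norm(expected)))
    return similarity >= VERIFY_THRESHOLD  # opens only for the badge's owner
```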

In both watchlist and secure access use cases, there are very few people actually registered within the facial recognition system. And everyone else is ignored.


Misconception #2: If your “face ID” data is stolen, hackers can track your every move.

False.

Face signature data is actually less hackable than other unique identifiers. Facial recognition systems translate digital images into a numerical representation based on the unique features of a face. This creates a unique face signature that can then be associated with an identity in a database and compared against faces appearing on camera to determine if there is a match. Each facial recognition system has its own proprietary way of generating and storing this data, so a signature stolen from one system generally cannot be used to identify someone in another.
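To illustrate what a face signature actually is, the sketch below uses the open-source face_recognition library, which encodes a face as a vector of 128 numbers. Commercial systems use their own proprietary encodings; the file names here are placeholders.

```python
import face_recognition

# Encode an enrolled photo into a face signature: a 128-number vector,
# not a photograph. Indexing [0] assumes one face was found in the image.
image = face_recognition.load_image_file("enrolled_person.jpg")
signature = face_recognition.face_encodings(image)[0]

# Encode a face seen on camera and compare the two signatures.
frame = face_recognition.load_image_file("camera_frame.jpg")
probe = face_recognition.face_encodings(frame)[0]
is_match = face_recognition.compare_faces([signature], probe)[0]

print(signature[:4])  # e.g. [-0.093  0.112  0.041 -0.027]: just numbers
print("match" if is_match else "no match")
```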

The bigger issue around face recognition and privacy is that everyone’s face is already out there — on public Facebook pages, LinkedIn profiles, and more. If an individual wants to stalk someone, they’ll have a much easier time using social media than trying to use a facial recognition system.

Technically, once a face image is available publicly, a facial recognition system could create a face signature from it. This is another reason that strong biometric privacy legislation — including strict regulations around how and when someone can be added to a facial recognition system — is needed to help prevent abuse of the technology.


Misconception #3: Face recognition should be banned because it exhibits racial bias.

False.

Providers and operators of any emerging technology must be held to high standards to ensure the technology is developed and deployed in a manner consistent with human and consumer rights. Face recognition is a powerful tool, but not a substitute for human surveillance operators making deliberate and actionable decisions based on corroborating evidence. Its job is to present data in real time that can be used to alert security staff to potential issues or to investigate incidents post-event.

Yes, some facial recognition algorithms do currently show unacceptable levels of racial bias; however, the technology is far too valuable to be banned outright. Why continue to advocate for face recognition when results aren’t always perfect? Three reasons:

  1. Some facial recognition systems exhibit lower levels of bias than others: A recent National Institute of Standards and Technology (NIST) study found that Asian and African-American faces had false-positive match rates 10 to 100 times higher than white faces across many tested algorithms. While these levels of bias are clearly unacceptable, the study also identified several algorithms that were “important exceptions.” These algorithms had fairly consistent results across racial groups, with accuracy variances as low as 0.19%. This shows that facial recognition systems don’t inherently have high rates of bias and can be improved. Rather than ban this technology outright, facial recognition providers should be held accountable for reducing bias in their algorithms. A labeling system — akin to nutrition labels on foods — based on results from an independent evaluator such as NIST could provide transparency. Systems that don’t meet minimum consistency requirements across race, age, and gender should not be considered by purchasing committees. (A minimal sketch of this kind of per-group evaluation follows this list.)
  2. Using face recognition can reduce bias in real-world situations: The core function of face recognition — attempting to identify a face based on a knowledge bank of known faces — is something humans do all the time. Take an eyewitness to a crime, a police officer looking for a suspect based on a surveillance image, or a store clerk watching for shoplifters. Each has inherent bias informed by past interactions and the media, and amplified by the cross-race effect. All people have inherent bias — the bias found in facial recognition algorithms stems from bias in the humans developing them. There is no way to completely eradicate bias in humans or algorithms, but facial recognition technology is already as good as, or better than, humans at finding correct matches when comparing images of faces. It can also do so exponentially faster.
  3. Facial recognition algorithms are getting better: It’s much easier to train an AI model to reduce bias and eliminate the cross-race effect than it is to eradicate bias in every security guard, law enforcement officer, and witness to a crime. Face recognition has come a long way since it was first developed, and the technology continues to improve with regard to accuracy across skin tone and gender. Bias thresholds could evolve over time as the technology improves to ensure false-match rates continue to drop for all users as the technology becomes more widespread. A complete ban on face recognition would prevent this continual improvement of a technology that could be an equalizer. Accurate, low-bias systems leveraged by humans who are educated on how they work — and how to account for their limitations — have the potential to dramatically reduce bias across a range of security, law enforcement, and criminal justice use cases.
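To show what a NIST-style evaluation measures, here is the minimal sketch referenced in item 1: it computes a false-match rate per demographic group from impostor comparison scores. The data is synthetic and purely illustrative; no real algorithm is being evaluated.

```python
import numpy as np

def false_match_rate(impostor_scores: np.ndarray, threshold: float) -> float:
    """Fraction of impostor comparisons (two different people) that score
    above the match threshold: the error rate NIST found varied by group."""
    return float(np.mean(impostor_scores >= threshold))

# Synthetic impostor scores per group; a real audit would use labeled test sets.
rng = np.random.default_rng(0)
impostor_scores = {
    "group_a": rng.beta(2, 8, 100_000),
    "group_b": rng.beta(2, 6, 100_000),
}

THRESHOLD = 0.5
rates = {g: false_match_rate(s, THRESHOLD) for g, s in impostor_scores.items()}
disparity = max(rates.values()) / min(rates.values())
print(rates)
print(f"worst-to-best false-match ratio: {disparity:.1f}x")  # the 'label' stat
```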


Misconception #4: All facial recognition systems are the same.

False.

Algorithms used to recognize and match faces can differ widely based on how they were developed and what data was used to train and test them. Humans still build and train these models, so human bias can creep in if not accounted for. Did the developers consider racial and gender bias when building the algorithm? Did they have access to a suitably diverse dataset of faces of varying ages, genders, races, and skin tones? Or did they feed the model primarily white, male faces when training it to recognize a human?  These factors directly affect the performance and accuracy of a facial recognition system. 
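One cheap sanity check during development is simply measuring the demographic composition of the training set. The metadata labels below are invented for illustration:

```python
from collections import Counter

# Hypothetical per-image metadata; real datasets carry annotations like these.
training_metadata = [
    {"race": "white", "gender": "male"},
    {"race": "black", "gender": "female"},
    {"race": "asian", "gender": "female"},
    # ... thousands more rows in a real dataset
]

for attribute in ("race", "gender"):
    counts = Counter(row[attribute] for row in training_metadata)
    total = sum(counts.values())
    # A heavily skewed distribution is an early warning that the model
    # may underperform on underrepresented groups.
    print(attribute, {value: f"{n / total:.0%}" for value, n in counts.items()})
```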

Beyond the way the algorithm performs, systems can also vary in how they handle sensitive data. Is data hosted on premises or in the cloud? Who owns and has access to the data — just the customer? Or the technology vendor as well? Can any part of the data be sold to third parties? How a facial recognition system protects the rights and privacy of individuals that interact with the system can be a determining factor when winning acceptance from civil rights organizations and consumers skeptical of the technology. Methods for ensuring privacy should be built into a facial recognition system by design — another area where legislation regulating biometric technology can help.
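Those questions translate naturally into deployment settings. A hypothetical configuration might look like the following; every field name here is invented for illustration.

```python
# Hypothetical privacy-by-design deployment settings (illustrative only).
PRIVACY_CONFIG = {
    "hosting": "on_premises",           # vs. "vendor_cloud": who holds the data?
    "data_owner": "customer",           # vendor gets no access to signatures
    "third_party_sale_allowed": False,  # signatures are never sold or shared
    "non_match_retention_seconds": 0,   # non-watchlist faces deleted immediately
    "audit_logging": True,              # every search is recorded for review
}
```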

Not all facial recognition systems are created equal. Accuracy and bias results vary widely. Some vendors sell data to third parties for profit. Good biometrics technology providers offer recommended usage policies, build accurate algorithms with low bias, follow privacy-by-design principles, and provide training that guides end users on responsible data capture, data retention, and transparent documentation of data collection practices.


Misconception #5: You could be wrongfully convicted of a crime solely from face recognition results identifying you as someone you are not.

False.

While face recognition can be a valuable tool for law enforcement, it does not operate without human oversight. This misconception assumes that facial recognition systems have the “last word” during a criminal investigation. In a law enforcement context, face recognition is used to present trained analysts with a selection of potential matches based on a similarity score. Humans use this data, along with independent corroborating evidence, to make a final determination.
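As a sketch of that workflow, the hypothetical function below returns ranked investigative leads rather than a verdict; the names and scoring scheme are illustrative.

```python
import numpy as np

def candidate_leads(probe: np.ndarray, gallery: dict, top_k: int = 10) -> list:
    """Return the top-k most similar gallery entries with their scores.

    Note what is absent: no 'guilty' flag, no automatic identification.
    A trained analyst reviews these leads against independent evidence
    before anyone is treated as a suspect.
    """
    scored = []
    for person_id, enrolled in gallery.items():
        score = float(np.dot(probe, enrolled) /
                      (np.linalg.norm(probe) * np.linalg.norm(enrolled)))
        scored.append((person_id, score))
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]
```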

Before face recognition, police had to manually look through hundreds of mugshots with fatigued or stressed victims, or canvass areas on foot with photos. Face recognition doesn’t do anything inherently new; it simply augments the existing investigative process, delivering a level of efficiency previously impossible to achieve. This allows agencies and security departments to investigate incidents they previously couldn’t, or that were of low enough severity that the resources needed to investigate couldn’t be allocated. Facial recognition can benefit victims of these types of crimes by quickly identifying potential suspects — bringing restitution and closure to those who have been harmed.

It’s important to educate the public, civil rights activists, and legislators about how facial recognition systems actually work and how this technology can be used for good. Face recognition can find missing children. It can speed up check-in at hospitals, enhance security at airports, and provide touchless access control. Face recognition isn’t inherently bad or inherently good — there’s potential for abuse, as with any technology. But with sensible legislation and the responsible development and deployment of systems designed with privacy in mind, we can realize all the benefits face recognition has to offer.