Biometrics, Facial Recognition, Privacy, Security and the Law
One danger in using biometrics and facial recognition is that they're not always accurate.
A recent article in the L.A. Times reported that facial recognition software proposed for use with police bodycams falsely matched about 20% of California legislators to a database of criminal mugshots (insert political joke here), just as an earlier test had "matched" 28 members of Congress to a criminal database. The use of facial recognition software on massive databases such as bodycam or dashcam footage has been challenged on the grounds that the software is inaccurate and might lead to the wrongful arrest, or even shooting, of individuals based on misidentification. Indeed, many states are banning facial recognition on bodycam footage outright, while others, such as Illinois, generally prohibit the collection and use of biometric information without a written policy and informed consent.
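Why does software confidently "match" a legislator to a mugshot? Because facial recognition is not a yes/no lookup: it reduces each face to a numeric template and compares a similarity score against a tunable threshold. A toy sketch in Python (the embeddings and the threshold below are entirely made up for illustration):

```python
# Toy illustration of why biometric matching is probabilistic, not exact:
# a "match" is a similarity score compared against a tunable threshold,
# so every threshold trades false accepts against false rejects.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norms

THRESHOLD = 0.95  # stricter -> fewer false accepts, more false rejects

enrolled = [0.12, 0.87, 0.45, 0.33]     # template captured at enrollment
same_person = [0.14, 0.85, 0.47, 0.30]  # same face, slightly different capture
impostor = [0.90, 0.10, 0.60, 0.20]     # a different face entirely

for name, probe in [("genuine user", same_person), ("impostor", impostor)]:
    score = cosine_similarity(enrolled, probe)
    verdict = "ACCEPT" if score >= THRESHOLD else "REJECT"
    print(f"{name}: score={score:.3f} -> {verdict}")
```

Loosen the threshold and you match more criminals, plus a fifth of the legislature; tighten it and you miss the people you are actually looking for. No setting eliminates both kinds of error.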
Passwords Must Die
For security professionals, authentication, access control and authorization are important both as concepts and as technologies. The goal of security is to "let good people in" and to "keep bad people out." OK, people and processes, not just people. "Good" means authorized people doing authorized (or at least "permitted") things, and "bad" means anything else.
We typically do "access control" by providing the user with a token: a user ID, a password, a dial-back, a multifactor "key" of some kind or a biometric. But mostly a password. Even strong passwords or passphrases have significant weaknesses for authentication and security. They are subject to theft and loss, whether in storage or in transmission (think keyloggers, etc.). They can be forgotten, reused and replayed. They require a reset option, which can be spoofed or fooled. They can be brute-forced. They are evil. Truly evil. They must die.
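To make the "evil" concrete: at a minimum, a system should never store the password itself, only a salted, deliberately slow hash of it. A minimal sketch using Python's standard library (illustrative only; production systems should use a vetted scheme such as bcrypt or argon2):

```python
# Minimal sketch of salted password hashing with Python's standard library.
import hashlib
import hmac
import os

ITERATIONS = 600_000  # PBKDF2 work factor; higher = slower brute-forcing

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, derived_key). A per-user salt defeats precomputed tables."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, key

def verify_password(password: str, salt: bytes, key: bytes) -> bool:
    """Constant-time comparison avoids timing side channels."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, key)

salt, key = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, key))  # True
print(verify_password("password123", salt, key))                   # False
```

Even done right, none of this helps once the password is phished, reused or keylogged. Hence: they must die.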
But the alternatives may be worse. Biometrics have the advantage that they may (or may not) be easy to present, are unique (mostly, and I say that as an identical twin) and provide for (mostly) strong authentication. If deployed correctly. But they can be spoofed or replayed, provide a false sense of "strong" authentication and may (depending on implementation) require the creation and storage of massive amounts of biometric data. Most recently, the biometric company Suprema was hacked and the attackers obtained biometric data on 27.8 million people. You think getting a new credit card is a pain? Imagine having to get a new face (with apologies to Nicolas Cage).
While biometrics have some promise for authentication, the better approach is to allow the user to retain the token and authenticate to a device they maintain control over (um, such as a phone). While this is not “true” biometric authentication, it is biometrically assisted authentication. It’s better than a password, but almost anything is better than a password.
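What does "biometrically assisted" look like in practice? Roughly the FIDO2/WebAuthn model: the fingerprint or face unlocks a private key that never leaves the phone, and the server only ever sees a signed challenge. A rough sketch, assuming the pyca/cryptography package (my illustration, not a spec-complete implementation):

```python
# Sketch of FIDO2/WebAuthn-style "biometrically assisted" authentication:
# the biometric only unlocks a device-bound private key, and the server
# sees a signature over a fresh challenge, never a face or fingerprint.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Enrollment: the device generates a keypair; the server keeps only the public key.
device_key = ed25519.Ed25519PrivateKey.generate()
server_public_key = device_key.public_key()

# Login: the server issues a random, single-use challenge...
challenge = os.urandom(32)

# ...the user's face or fingerprint unlocks the device key locally
# (simulated here), and the device signs the challenge.
signature = device_key.sign(challenge)

# The server verifies the signature. No biometric data was ever transmitted.
try:
    server_public_key.verify(signature, challenge)
    print("authenticated")
except InvalidSignature:
    print("rejected")
```

The point: a breach of the server exposes only public keys. There is no central pile of faces or fingerprints to steal, which is exactly the Suprema failure mode.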
Privacy
While most of the objections to biometrics in society focus on their errors, for privacy purposes the more troubling aspect of biometrics is their accuracy, either now or in the relatively near future. Right now, police use automated license plate readers (ALPRs) to scan cars as they drive and to compile a database not only of stolen cars, or of cars whose owners (not necessarily the operators) may have arrest warrants, but of every place every car is seen. They use them to issue speeding and parking tickets. And to solve crimes in the neighborhood. The result is a massive database of where every car is and has been. It can be used to identify cheating spouses, to sniff out business mergers or to repossess cars. Pretty cool. And pretty scary.
In China and elsewhere, this ALPR-style technology is being deployed against people. It's being used to keep track of protesters or Uighurs. It's being used to identify jaywalkers. To compile credit scores. To attract or reward customers.
In the U.S., the FBI uses facial recognition to identify criminals. It taps state and local DMV and other databases, and applies facial recognition to footage from surveillance cameras, both fixed and mobile. The bureau has used it at sporting events such as the Super Bowl without the knowledge or consent of the attendees. And the FBI may (or may not) tap social media sites such as Facebook, Twitter, Instagram or others to use their facial recognition software, or simply dump the data into its own database and run its own algorithms.
The problem with facial recognition and privacy is not that it doesn't work, or doesn't work very well. It's that it might work, and work very well. We can know where everybody is and was, who they were with and what they were doing. We can apply AI to profile people. It's "Minority Report" on steroids.
For U.S. security companies that provide this technology (hardware and software) to U.S. law enforcement and intelligence agencies, and potentially to oppressive foreign governments, this may present moral and ethical (and not just export control) questions. This is the ultimate "dual-use" technology. It's scary and creepy. In the words of Sgt. Phil Esterhaus (Hill Street Blues, for you young'uns): "Hey, let's be careful out there."