Biometrics, Facial Recognition, Privacy, Security and the Law

The danger in using biometrics and facial recognition is that they’re not always accurate.

A recent article in the L.A. Times reported that facial recognition software proposed for use with police bodycams falsely flagged about 20% of California legislators as criminals (insert political joke here), just as an earlier test on members of Congress "matched" 28 legislators against a database of criminal mugshots. The use of facial recognition software on massive databases like those of bodycams or dashcams has been challenged on the ground that such software is inaccurate and could lead to the wrongful arrest, or even shooting, of individuals based on misidentification. Some states are banning bodycam facial recognition outright, while others, such as Illinois, generally prohibit the collection and use of biometric information without a written policy and informed consent.
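The legislator "matches" above are an instance of the base-rate problem: when almost everyone scanned is innocent, even a small false-positive rate produces mostly false hits. Here is a back-of-the-envelope sketch; the population size, watchlist size and error rates are hypothetical numbers chosen for illustration, not figures from the studies cited above.

```python
# Why a face matcher that is "99% accurate" still floods police with
# false hits: almost everyone scanned is NOT on the watchlist, so the
# false positives swamp the true ones. All numbers are hypothetical.

def expected_matches(population, watchlist, false_positive_rate, true_positive_rate):
    """Return (expected true hits, expected false hits) when scanning
    `population` faces against a watchlist containing `watchlist` of them."""
    innocents = population - watchlist
    false_hits = innocents * false_positive_rate
    true_hits = watchlist * true_positive_rate
    return true_hits, false_hits

# Scan 1,000,000 faces; only 100 are actually on the watchlist;
# the matcher errs 1% of the time in each direction.
true_hits, false_hits = expected_matches(1_000_000, 100, 0.01, 0.99)
precision = true_hits / (true_hits + false_hits)
print(f"true hits: {true_hits:.0f}, false hits: {false_hits:.0f}")
print(f"chance a flagged person is really on the list: {precision:.1%}")
```

With these assumed numbers, roughly 99 true hits drown in about 10,000 false ones, so a "match" is almost always wrong; that is the statistical core of the wrongful-arrest objection.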

Passwords Must Die

For security professionals, authentication, access control and authorization are important both as concepts and as technologies. The goal of security is to "let good people in" and to "keep bad people out." OK, people and processes. "Good" means authorized people doing authorized (or at least "permitted") things; "bad" means anything else.

We typically do access control by providing the user with a token: a user ID, a password, a dial-back number, a multifactor "key" of some kind or a biometric. But mostly a password. Even strong passwords or passphrases have significant weaknesses for authentication and security. They are subject to theft and loss, whether in storage or in transmission (think keyloggers, etc.). They can be forgotten, reused and re-presented. They require a reset option, which can itself be spoofed or fooled. They can be brute-forced. They are evil. Truly evil. They must die.
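On the brute-forcing point: if an attacker steals even the hash of a short password, exhaustive search recovers it quickly. A minimal sketch, assuming a 4-character lowercase password and plain SHA-256 (real attacks use GPUs, wordlists and stolen salted-hash dumps, but the principle is the same):

```python
# Toy brute-force: recover a short password from its stolen hash by
# trying every candidate. 4 lowercase chars = 26**4 = 456,976 guesses,
# which takes well under a second on commodity hardware.
import hashlib
from itertools import product
from string import ascii_lowercase

def crack(target_hash, length=4):
    """Try every lowercase password of `length` characters until one
    hashes to target_hash. Returns the recovered password or None."""
    for combo in product(ascii_lowercase, repeat=length):
        guess = "".join(combo)
        if hashlib.sha256(guess.encode()).hexdigest() == target_hash:
            return guess
    return None

stolen = hashlib.sha256(b"pass").hexdigest()  # the "breached" hash
print(crack(stolen))  # prints "pass"
```

Longer passwords and slow, salted hash functions (bcrypt, scrypt, Argon2) raise the cost of this attack, but they only slow it down; they don't change the underlying weakness.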

But the alternatives may be worse. Biometrics have the advantage that they may (or may not) be easy to present, are mostly unique (and I say that as an identical twin) and, if deployed correctly, provide for mostly strong authentication. But they can be spoofed or replayed, provide a false sense of "strong" authentication and may, depending on implementation, require the creation and storage of massive amounts of biometric data. Most recently, biometric company Suprema was breached and the attackers obtained some 27.8 million records, including fingerprint and facial recognition data. You think getting a new credit card is a pain? Imagine having to get a new face (with apologies to Nicolas Cage).

While biometrics have some promise for authentication, the better approach is to allow the user to retain the token and authenticate to a device they maintain control over (um, such as a phone). While this is not “true” biometric authentication, it is biometrically assisted authentication. It’s better than a password, but almost anything is better than a password.
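The "user retains the token" model works roughly like WebAuthn/FIDO2: the biometric only unlocks a secret held on the user's device, and the server verifies a fresh challenge-response without ever seeing biometric data. The sketch below uses an HMAC shared secret as a stand-in for the public-key signature a real authenticator would use, and the class and method names are illustrative, not any actual API:

```python
# Sketch of biometrically ASSISTED authentication: the fingerprint check
# happens only on the device; the server verifies possession of a key,
# never the biometric itself. HMAC stands in for a real signature scheme.
import hmac, hashlib, os

class Device:
    def __init__(self):
        self._secret = os.urandom(32)   # never leaves the device
        self._unlocked = False

    def scan_fingerprint(self, finger_ok: bool):
        # Local-only biometric check; the result stays on the device.
        self._unlocked = finger_ok

    def sign(self, challenge: bytes) -> bytes:
        if not self._unlocked:
            raise PermissionError("biometric unlock required")
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()

class Server:
    def enroll(self, device: Device):
        # Sketch shortcut: a real server would store only a public key.
        self._shared = device._secret

    def authenticate(self, device: Device) -> bool:
        challenge = os.urandom(16)      # fresh nonce defeats replay
        response = device.sign(challenge)
        expected = hmac.new(self._shared, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(response, expected)

dev, srv = Device(), Server()
srv.enroll(dev)
dev.scan_fingerprint(True)
print(srv.authenticate(dev))  # prints True
```

Note what a breach of the server yields here: keys that can be revoked and reissued, not faces or fingerprints that can't.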

Privacy

While most of the objections to biometrics in society focus on their errors, for privacy purposes the more troubling aspect of biometrics is their accuracy, either now or in the relatively near future. Right now, police use automated license plate readers (ALPRs) to scan cars as they drive by and to compile a database not only of stolen cars or cars whose owners (not necessarily their operators) may have arrest warrants, but of every place every car is seen. They use them to issue speeding and parking tickets. And to solve crimes in the neighborhood. The result is a massive database of where any car has been. It can be used to identify cheating spouses, to detect pending business mergers or to repossess cars. Pretty cool. And pretty scary.

In China and elsewhere, this ALPR type of technology is being deployed against people. It’s being used to keep track of protesters or Uighurs. It’s being used to identify jaywalkers. To compile credit scores. To attract or reward customers.

In the U.S., the FBI uses facial recognition to identify criminals. It accesses state and local DMV and other databases and applies facial recognition to footage from both fixed and mobile video surveillance cameras. Law enforcement has used it at sporting events such as the Super Bowl without the knowledge or consent of the attendees. And the FBI may (or may not) access social media sites such as Facebook, Twitter, Instagram or others to use their facial recognition software, or simply dump the data into its own database and apply its own algorithms.

The problem with facial recognition and privacy is not that it doesn’t work—or doesn’t work very well—but that it might work—and works very well. We can know where everybody is and was, who they were with and what they were doing. We can apply AI protocols to profile people. It’s “Minority Report” on steroids.

For U.S. security companies that provide this technology (hardware and software) to U.S. law enforcement and intelligence agencies—and potentially to oppressive foreign governments—this may present moral or ethical (and not just export control) questions. This is the ultimate "dual-use" technology. It's scary and creepy. In the words of Sgt. Phil Esterhaus (Hill Street Blues, for you young'uns): "Hey, let's be careful out there."

Security Boulevard
Mark Rasch

Mark Rasch is a lawyer and computer security and privacy expert in Bethesda, Maryland, where he helps develop strategy and messaging for the Information Security team. Rasch's career spans more than 35 years of corporate and government cybersecurity, computer privacy, regulatory compliance, computer forensics and incident response. He is trained as a lawyer and was the Chief Security Evangelist for Verizon Enterprise Solutions (VES). He is a recognized author of numerous security- and privacy-related articles. Prior to joining Verizon, he taught courses in cybersecurity, law, policy and technology at various colleges and universities, including the University of Maryland, George Mason University, Georgetown University and the American University School of Law, and was active with the American Bar Association's Privacy and Cybersecurity Committees and the Computers, Freedom and Privacy Conference. Rasch has worked as cyberlaw editor for SecurityCurrent.com, as Chief Privacy Officer for SAIC, and as Director or Managing Director at various information security consulting companies, including CSC, FTI Consulting, Solutionary, Predictive Systems and Global Integrity Corp. Earlier in his career, Rasch was with the U.S. Department of Justice, where he led the department's efforts to investigate and prosecute cyber and high-technology crime, starting the computer crime unit within the Criminal Division's Fraud Section, efforts which eventually led to the creation of the Computer Crime and Intellectual Property Section of the Criminal Division. He was responsible for various high-profile computer crime prosecutions, including Kevin Mitnick, Kevin Poulsen and Robert Tappan Morris. He has been a frequent commentator in the media on issues related to information security, appearing on BBC, CBC, Fox News, CNN, NBC News and ABC News and in The New York Times, The Wall Street Journal and many other outlets.
