Face Off: When Biometrics, Technology and the Law Collide

In August, the FBI in Columbus, Ohio, executed a search warrant on a suspect, Grant Michalski, for evidence related to possession of child pornography. No big deal—happens all the time. What was unusual was what that warrant permitted.

A subsequent warrant application to authorize a detailed inspection of an iPhone seized in the initial search indicated that:

“On August 10, 2018, a search warrant was executed. Grant MICHALSKI was at the residence at the time that it was executed, and, pursuant to authorization provided in the search warrant, was required by law enforcement to place his face in front of an iPhone X that was found on MICHALSKl’s person when the search warrant was executed. The phone was unlocked pursuant to the facial recognition feature on the iPhone X, and your affiant was able to briefly review the contents of the phone.”

In other words, the police obtained a warrant not just to search for and seize the iPhone, but to compel the target of the investigation to “place his face in front of an iPhone X” to unlock the phone. Once FBI agents got the phone unlocked, they were able to paw through it and look for images covered by the warrant as well as the contents of a KIK messenger profile and messages. Unfortunately, because they had unlocked the phone only through facial recognition rather than a passcode, once it went into sleep mode they were unable to unlock it again without either the face (again) or a password. So, they went to the court for another warrant. (One confusing issue is that the “SUBJECT DEVICE” is referred to as an iPhone X, but the application for a warrant is for an iPhone 8, so it’s not clear whether the FBI wants to search one device or two.)

Notably, the FBI was not trying to get the second court to authorize a second “facial” unlock of the iPhone previously examined, or to get the subject to provide the passcode. The FBI agent indicated that both the Columbus Police Department and the Ohio Bureau of Investigation “have technological devices that are capable of obtaining forensic extractions from locked iPhones without the passcode.”

Face, Finger, Key or Testimony

Ask a technologist to rank the following methods of access to a device in order of level of protection: (1) password or passphrase; (2) token; (3) fingerprint; (4) voiceprint; (5) facial or iris scan or recognition. To a technologist, the “strongest” protection would be provided by the technology that is the most difficult to capture, reproduce and re-present. Simple passwords or PINs would be at the bottom of the list, as they are, well, simple. A four-digit PIN, while representing 10,000 possible combinations, can frequently be guessed based on some knowledge of the subject—birthdate, anniversary, etc.—or just by inputting 1234 or 2580 (top to bottom on a keypad). A simple password is similarly vulnerable to both a simple guess (“password” or P4ssw0rd!) and a brute-force attack. Tokens are better, since they add a second factor to authentication—something you know and something you have—and biometrics similarly add something you “are.” For biometrics, the complexities are introduced by what is measured, how it is measured, how the measurement is then digitized and tokenized, and how that token is presented. For example, an image—even a 3D image—of a face is easy to capture; that does not mean that biometric authentication based on a 3D image is easy to spoof.
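To make the PIN arithmetic concrete, here is a minimal Python sketch of a brute-force attack on a four-digit PIN. It assumes a deliberately weak, hypothetical scheme in which the device stores only an unsalted SHA-256 hash of the PIN, with no rate limiting or lockout; real phones add exactly those defenses to blunt this attack:

```python
import hashlib
import itertools

# Hypothetical weak scheme: the "device" stores an unsalted SHA-256
# hash of the PIN. Real devices use salted, rate-limited,
# hardware-backed key derivation; this is deliberately the bad case.
stored_hash = hashlib.sha256(b"2580").hexdigest()  # 2580: top to bottom on a keypad

def brute_force_pin(target_hash):
    """Try all 10,000 four-digit PINs; return the match, or None."""
    for digits in itertools.product("0123456789", repeat=4):
        candidate = "".join(digits)
        if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
            return candidate
    return None

print(brute_force_pin(stored_hash))
```

Exhausting all 10,000 candidates takes a fraction of a second on commodity hardware, which is why platforms pair short PINs with hardware-enforced attempt limits and escalating delays rather than relying on the keyspace itself.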

Now try asking lawyers—particularly those who do Fourth and Fifth Amendment law—the same question: Which is more secure, a biometric or a PIN? A good lawyer will rephrase that question into one that the courts would understand: Which is more difficult to compel, the production of a biometric or the testimony or “admission” of a password or phrase? You get the exact opposite result.

So what is reasonable security from a technological perspective is weak security from a legal perspective, and vice versa. Let me explain why.

Drunk Driving vs. Compelled Testimony

The Fifth Amendment prohibits a person from being compelled to testify against themselves in a criminal case. The specific wording provides that no person “shall be compelled in any criminal case to be a witness against himself.” As interpreted by the courts, this does not mean that a person cannot be compelled to provide incriminating evidence (mostly); rather, they cannot be compelled to provide incriminating testimony (again, mostly). So if you have voluntarily written a diary, incriminating e-mails or a 1982 calendar of your activities, the contents of those voluntarily created records are not protected by the Fifth Amendment. But if the cops asked you whether you kept a diary or wrote those e-mails, your testimony would be protected under the Fifth Amendment. One little wrinkle is that while production of the e-mails may be compelled, the act of producing them cannot be used against the target. The government can’t use the act of production (something it compelled you to do) as evidence of, well, the fact that you possessed the phone, sent the e-mails or were aware of their contents. It has to treat the documents like manna from heaven—it can’t force you to testify about how you produced them, and it can’t elicit testimony about the act of production (from an individual or sole proprietorship, not a corporation—there are different rules for some artificial entities).

So, what’s the difference between a biometric and a password, from a legal standpoint? The simple answer is, “the spoken (or written) word.” The concept of being forced to “testify” against yourself is embedded in the Fifth Amendment. Testify. To speak. To say or write something. Imagine being tied to a chair under hot lights with two cops (one good, one bad, right?) saying “OK, we have your phone—now TELL US the password …” A Fifth Amendment issue in almost every episode of Law and Order (dum dum).

On the other end of the spectrum is the biometric. The Supreme Court has found no Fifth Amendment violation in compelling a drunk driver to give a blood sample or to take hair, fingerprint or DNA samples from a suspect. Same with taking pictures of a suspect or requiring a voice exemplar. (No. 1, step forward and say, “Hand me the keys, you f-ing, c-sucker.”)
Something you are? Fine. Something you have? Fine. Something you know? Fifth Amendment.

Security Considerations

When we are building technologies to be “secure,” we have to consider, “Secure against what?” and “Secure for what purpose?” Security from a technological perspective is not the same thing as security from a legal perspective. If I want to unlock an iPhone X, I just have to wave it in front of the owner (well, they probably have to be alive and awake, but a coma might work, right?). For an iPhone 6 or 7, a fingerprint (alive, or recently deceased, but still attached) would suffice. But all of this can be compelled, just like “my voice is my password” or some other vocal recognition. Meanwhile, a lowly, stupid, simple, four-digit PIN may be the best protection of all. Under the law, a key to a lock can be compelled, but the combination to that lock cannot. Maybe.

All of this means that security persons have to consider what they are protecting, why and from what kind of attack. In the privacy arena, we already see the balkanization of data based on perceived legal protections of that data. When you consider how the confidentiality, integrity and availability of data might be compromised, you have to consider all of the potential threats to those features. It’s a game of “what ifs …” What if a traveller with a laptop was forced by customs to provide a password or passphrase? What if the laptop and token were both lost? What if the thief had access to the subject’s fingerprint? What if … what if?

Security is a process. You have to understand the entire process, because we know that we’re only as secure as the weakest link. So the next time the police stop you and ask you to unlock your iPhone 6, think twice before you give the cop the finger.

Mark Rasch

Mark Rasch is a lawyer and computer security and privacy expert in Bethesda, Maryland, where he helps develop strategy and messaging for the Information Security team. Rasch’s career spans more than 35 years of corporate and government cybersecurity, computer privacy, regulatory compliance, computer forensics and incident response. He is trained as a lawyer and was the Chief Security Evangelist for Verizon Enterprise Solutions (VES). He is a recognized author of numerous security- and privacy-related articles. Prior to joining Verizon, he taught courses in cybersecurity, law, policy and technology at various colleges and universities, including the University of Maryland, George Mason University, Georgetown University and the American University School of Law, and was active with the American Bar Association’s Privacy and Cybersecurity Committees and the Computers, Freedom and Privacy Conference. Rasch has worked as cyberlaw editor for SecurityCurrent.com, as Chief Privacy Officer for SAIC, and as Director or Managing Director at various information security consulting companies, including CSC, FTI Consulting, Solutionary, Predictive Systems and Global Integrity Corp. Earlier in his career, Rasch was with the U.S. Department of Justice, where he led the department’s efforts to investigate and prosecute cyber and high-technology crime, starting the computer crime unit within the Criminal Division’s Fraud Section, efforts that eventually led to the creation of the Computer Crime and Intellectual Property Section of the Criminal Division. He was responsible for various high-profile computer crime prosecutions, including Kevin Mitnick, Kevin Poulsen and Robert Tappan Morris. Prior to joining Verizon, Mark was a frequent commentator in the media on issues related to information security, appearing on BBC, CBC, Fox News, CNN, NBC News, ABC News, the New York Times, the Wall Street Journal and many other outlets.
