Security Lessons From the CBP Biometric Data Breach

On June 10, the U.S. Customs and Border Protection (CBP) announced that it—well, one of its contractors—had suffered a data breach involving photographs of persons and license plates of cars at its ports of entry. There are a bunch of lessons to be learned from this breach. In no particular order, they are:

If you have data, it will be breached

As I have often said, the goal of cybersecurity—well, of good cybersecurity—is to give “good” people access to data for “good” purposes and deny access to “bad” people or for “bad” purposes. Anytime something other than that happens, it’s, in the words of Dr. Peter Venkman, “bad.” The more data you have, the more people who have access to it, the more distributed it is, the longer you have it, the more complicated the networks or applications with access, the less trusted the individuals with access and a thousand other factors, the more likely it is to be “breached”—that is, accessed or used by a bad person or for a bad reason (or at least an unauthorized one). As the threat environment becomes more sophisticated, the capabilities of adversaries more advanced, the technology more refined and the landscape more—well, more threatening—the likelihood of a “breach” increases. Defenses that were perfectly fine 18 months ago may be inadequate to your needs today. And artificial intelligence (AI), data analytics and sophisticated analysis create a target-rich environment in which the attacker can go after either the raw data or the data analytics. So, really, it’s not if—it’s when.

The more sensitive the data, the more impact

Not only are we collecting exponentially more data than ever before, but the data we are collecting is exponentially more sensitive, more intrusive and, therefore, more valuable to hackers than ever before. The biometric (or biometric-capable) data purloined from CBP’s border records, coupled with license plate and travel records, makes for an attractive target for identity thieves and foreign governments alike. Add some AI analytics to the mix, and you’ve got some pretty valuable stuff. The more sensitive the data, the more attractive the target and the greater the impact.

You can’t uncork the genie

The remedies that the law provides for data breaches are both a blunt instrument and inadequate to the task. There are the Kübler-Ross stages of data breach: Denial (We didn’t suffer a breach), Anger (Who screwed up here? Not me), Bargaining (Counsel, we don’t have to tell people about the breach because a license plate isn’t “personal” data), Depression (well, more like resignation—actual or implied) and then Acceptance (Hey, you screwed up—you trusted us). The data breach disclosure laws, for example, relate to the specific and discrete types of information involved in the kinds of breaches that were prevalent 20 years ago: credit card numbers, expiration dates, account numbers, access codes. Not biometrics, DNA databases, user behavior data and the other things the GDPR and CCPA would call “personal” data.

Similarly, courts have found that consumers who have suffered a loss of privacy (or a potential loss of privacy) as a result of a breach have not suffered a tangible “injury” unless they can show something such as a lost job opportunity or a benefit denied as a result of the breach. The fact that their personal information is out there is not a sufficient “injury.” And the remedies of credit freezes or free credit reports don’t really help me get my Social Security number back, get my face out of hackers’ hands or prevent the (mis)use of my travel or browsing habits. Privacy is its own right—not just a hanger-on.

Stolen data migrates

CBP issued a statement that concluded by stating that none of the purloined records had found its way to the Deep Dark Web (DDW). Yet. That may not be true. In May, it was widely reported in security circles that data from the automated license plate reader provider Perceptics had found its way to the DDW after a breach at that company. A hacker operating under the nom de guerre “Boris Bullet-Dodger” reported the hack to The Register and reportedly provided proof of the stolen files. While the purloined contractor data may be different from the data stolen from CBP, the public statement by CBP announcing the breach was titled “CBP Perceptics Public Statement.” Probably just a coincidence, right?

Someone is being thrown under the bus

In “The Hitchhiker’s Guide to the Galaxy,” there is a powerful field of invisibility called the “SEP field”—Somebody Else’s Problem. One big goal of data breach response is to make the breach someone else’s problem. In this case, CBP pointed out that the breach occurred at an unnamed “contractor” that had transferred the photographic and license plate data without CBP’s knowledge or consent.

And how, exactly, did the contractor do that? If a contractor is downloading your data and you don’t know about it, that is in and of itself a data breach. You are supposed to know where your data is, why it’s there and what’s being done with it. If you don’t know, then you haven’t been doing security right. Know who is responsible for what. In the contract. And in reality.
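Knowing where your data goes is not just a contract clause; it is something you can instrument. As a purely illustrative sketch (not anything CBP or its contractor actually runs), the following assumes a hypothetical access-audit log with account, account type and records-exported fields, and flags any export that is both large in absolute terms and far above that account’s typical volume:

```python
from collections import defaultdict
from statistics import median

# Hypothetical audit-log entries: (account, account_type, records_exported).
# In practice these would come from a database audit trail or a DLP tool.
audit_log = [
    ("cbp_analyst_01", "internal",   120),
    ("cbp_analyst_01", "internal",   140),
    ("contractor_xyz", "contractor", 200),
    ("contractor_xyz", "contractor", 180),
    ("contractor_xyz", "contractor", 65000),  # a bulk pull that should raise questions
]

def flag_bulk_exports(entries, min_records=10_000, spike_factor=20):
    """Flag exports that are large in absolute terms and far above the
    median export size for that account."""
    per_account = defaultdict(list)
    for account, _, count in entries:
        per_account[account].append(count)

    flagged = []
    for account, acct_type, count in entries:
        typical = median(per_account[account])
        if count >= min_records and count >= spike_factor * typical:
            flagged.append((account, acct_type, count))
    return flagged

for account, acct_type, count in flag_bulk_exports(audit_log):
    print(f"ALERT: {acct_type} account '{account}' exported {count:,} records")
```

The particular heuristic is beside the point. If a contractor can move tens of thousands of images without tripping an alert along these lines, then you did not know where your data was.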

Consent won’t cut it

Much of the data in a modern data breach is collected, stored, sliced, diced and analyzed without the effective consent of the data subject. I recently tried to read an article online, and the site allowed me to “opt out” of the sharing of my IP address, search activity and other data. I counted more than 70 different entities that the publisher had listed as having access to my data, and almost all of them were advertisers and data aggregators, which means the true number of potential beneficiaries of analyzing my reading of the article was likely in the tens of thousands.

In the case of the CBP data, the travelers had no opportunity to “opt in” to or “opt out” of the collection of their images or those of their license plates. That’s why laws such as the GDPR don’t rely solely on consent for data collection; they ask whether there was a “lawful basis” for the collection. So, why was CBP collecting (and storing) photos of people legally entering the U.S.? What were (and are) they doing with that data? With whom is that data shared? How is it analyzed? How long is it kept? How is it validated? Remember Lesson 1: The more data, the longer it’s kept and the more it is shared, the more it will be abused.

We overvalue reward and undervalue risk

Whenever we want to collect and use data, we can point to thousands of wonderful benefits from the collection and analysis: self-driving cars. Pacemakers that can correct themselves. Targeted ads. Improved customer experience. Safety. Security. Profits. Whatever. I’m sure CBP thought that scanning and recording the license plate of every vehicle coming into the country provided some modicum of safety. Sure, you can check whether the license plate’s owner has a warrant out for their arrest. Or whether the car is stolen. But you don’t need automated license plate readers for that—and you don’t need to keep the data. Instead, some functionary at CBP figured that the creation of a massive database of travel patterns and images could root out terrorism, prevent drug smuggling, limit illegal immigration and reduce cholesterol. But what if the same database were in the hands of the cartels? Or the GRU? Or cyberstalkers? Or the Texas Democratic Party? Sauce, goose, gander.
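To make the “you don’t need to keep the data” point concrete, here is a purely hypothetical sketch (not any actual CBP or vendor system; the hotlist values, salt and function names are invented for illustration) of a check-and-discard pattern: a scanned plate is compared in memory against a hotlist of stolen or wanted vehicles, acted on if it matches, and never written to persistent storage.

```python
import hashlib

# Hypothetical hotlist of plates tied to warrants or stolen vehicles, held
# only as salted hashes so the lookup service never stores raw plates.
HOTLIST_SALT = b"rotate-this-salt-regularly"
HOTLIST = {
    hashlib.sha256(HOTLIST_SALT + plate.encode()).hexdigest()
    for plate in ("ABC1234", "XYZ9876")  # illustrative values only
}

def plate_is_flagged(plate: str) -> bool:
    """Return True if the scanned plate is on the hotlist. The raw plate is
    used only for this in-memory comparison; it is never logged or stored."""
    digest = hashlib.sha256(HOTLIST_SALT + plate.encode()).hexdigest()
    return digest in HOTLIST

# At the port of entry: act on a match, then let the plate go out of scope.
if plate_is_flagged("ABC1234"):
    print("Flag vehicle for secondary inspection")
# No database insert, no travel-pattern record, nothing to breach later.
```

Hashing a short plate number is weak protection on its own (the keyspace is small), so the real risk reduction comes from retaining nothing: there is no stockpile of travel histories for a contractor to copy or a hacker to steal.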

Secret things need more publicity

Some things absolutely must happen in secret. We don’t want those who pose a threat to know how we are surveilling them, who we are targeting or why. But the overall parameters of our surveillance programs must be known to the public. If we are creating a database of biometric data of travelers, we all—citizen and alien alike—must know why. If we are doing facial recognition on the streets, at the airports, etc.—the same thing. We need to know what data is being collected about us every day and why. And we need a vigorous and public debate about each of these programs. The free market can’t decide to give up liberty for security (apologies to B. Franklin) if the decision to take away liberty is hidden from it. “This breach comes just as CBP seeks to expand its massive face recognition apparatus and collection of sensitive information from travelers, including license plate information and social media identifiers,” said Neema Singh Guliani, senior legislative counsel at the American Civil Liberties Union. “This incident further underscores the need to put the brakes on these efforts and for Congress to investigate the agency’s data practices. The best way to avoid breaches of sensitive personal data is not to collect and retain it in the first place.”

That which is dead can never be killed

While there are degrees of privacy, once privacy has been invaded, you really can’t get it back. It’s like those old courtroom TV shows in which the judge instructs the jury to disregard the defendant’s confession. Can’t be done. Now that you know about the breach, are you able to get a new (free) license plate? A new face? Nope. Rewards are measurable; risks are distributed.

As I have said before, we don’t value privacy and security because we don’t put a value on them. What’s the value of one human face? What’s the value of some hacker knowing about my soiree to Suarez? We are aghast at the latest data breach and then move on to the next, after the lawyers, lawsuits and recriminations. And little by little we are inured to accepting invasions of privacy big and small.

And that’s the big lesson. At the end of the day, everyone moves on as privacy dies a little bit more.

Mark Rasch

Mark Rasch is a lawyer and computer security and privacy expert in Bethesda, Maryland, where he helps develop strategy and messaging for the Information Security team. Rasch’s career spans more than 35 years of corporate and government cybersecurity, computer privacy, regulatory compliance, computer forensics and incident response. He is trained as a lawyer and was the Chief Security Evangelist for Verizon Enterprise Solutions (VES). He is a recognized author of numerous security- and privacy-related articles. Prior to joining Verizon, he taught courses in cybersecurity, law, policy and technology at various colleges and universities, including the University of Maryland, George Mason University, Georgetown University and the American University School of Law, and was active with the American Bar Association’s Privacy and Cybersecurity Committees and the Computers, Freedom and Privacy Conference. Rasch has worked as cyberlaw editor for SecurityCurrent.com, as Chief Privacy Officer for SAIC, and as Director or Managing Director at various information security consulting companies, including CSC, FTI Consulting, Solutionary, Predictive Systems and Global Integrity Corp. Earlier in his career, Rasch was with the U.S. Department of Justice, where he led the department’s efforts to investigate and prosecute cyber and high-technology crime, starting the computer crime unit within the Criminal Division’s Fraud Section, efforts that eventually led to the creation of the Computer Crime and Intellectual Property Section of the Criminal Division. He was responsible for various high-profile computer crime prosecutions, including those of Kevin Mitnick, Kevin Poulsen and Robert Tappan Morris. Prior to joining Verizon, Mark was a frequent commentator in the media on issues related to information security, appearing on BBC, CBC, Fox News, CNN, NBC News, ABC News, The New York Times, The Wall Street Journal and many other outlets.
