It started with an email to a Capital One “responsible disclosure” address early Wednesday, July 17, at 1:25 a.m. The note was short and cryptic. It simply said that “there appears to be some leaked s3 data of yours on someone’s github/gist.” The sender’s address included the full name of the hacker. Not great operational security. The discloser went on to note, “Let me know if you want help tracking them down.”
Some four months earlier, starting March 12, this hacker (or someone working with them) attempted, and then managed, to log in, via VPN or Tor, to an Amazon cloud server that hosted Capital One data, using a “misconfiguration [which] permitted [the hacker] to reach and [have commands] be executed by that server, which enabled access to folders or buckets of data …” The file on the GitHub server contained several pieces of executable code: one that obtained “security credentials” for the cloud server, a “list buckets” command which—duh—listed the files available for download from those folders and a “sync” command which extracted the data from these “buckets.” Get in. Get access. Get data. Get out.
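The complaint quotes the commands only in general terms. As a rough illustration, the “list buckets” and “sync” steps map onto standard AWS CLI invocations like the ones below; this is my hypothetical reconstruction, not the actual commands from the filing, `example-bucket` is a placeholder, and it assumes the stolen temporary credentials (the first command’s output) have already been set in the environment.

```shell
# Hypothetical reconstruction of the pattern described in the complaint.
# Assumes AWS credentials are already present in the environment variables
# AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY / AWS_SESSION_TOKEN.

aws s3api list-buckets                   # the "list buckets" step: enumerate every bucket reachable with those credentials
aws s3 sync s3://example-bucket ./dump   # the "sync" step: bulk-copy a bucket's entire contents to local disk
```

Nothing exotic is required once the credentials are in hand, which is the point of the complaint’s description: the hard part was the misconfiguration, not the download.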
The evidence, provided in an FBI/Department of Justice complaint filed in Seattle July 29, showed that Paige Adele Thompson—hacker handle “erratic”—used her real full name on the GitHub account that hosted the crack files. The IP addresses of the multiple attacks on the Amazon server (through TOR or VPN) linked up to her account. Oh, and she bragged about her exploits (pun intended) on various Meetup and Slack groups. On June 27, erratic—afraid that she might “go to jail”—expressed her desire to find a place to host the purloined data (encrypted) and also informed the group that she had a “leak-proof iPredator router setup.”
Then things started heating up.
Using her Twitter handle, erratic posted online June 18: “I’ve basically strapped myself with a bomb vest, fucking dropping Capital One’s dox and admitting it.” She then noted that the documents contained Social Security numbers (SSNs), names and dates of birth, and that she wanted to “distribute those buckets first.”
On July 29, the FBI arrested Thompson, a 2006 graduate of Bellevue Community College who reportedly worked for Amazon’s Simple Storage Service (S3) from May 2015 to September 2016. Agents confiscated her electronic devices, and she has been charged with one count of computer crime—but you can expect that number to go up as the investigation continues.
Capital One’s Legal Repercussions
First, the bad. Capital One has admitted that erratic was able to obtain files on about 106 million individuals (100 million in the U.S. and 6 million in Canada), including full names and SSNs (and Canadian Social Insurance Numbers) for about 140,000 people. About 80,000 bank account numbers were also compromised. The information came from credit card applications and other sources and related to both individuals and small businesses.
Think of the data you put on a credit card application: full name, address, SSN or SIN, banking information, employment information, residence(s), etc.—all the information necessary for someone to create a synthetic you. That’s why Capital One, like every other company, proudly tells its customers and applicants for its credit cards, “Your security is a top priority.” The company explains that “Capital One associates are required to participate in annual security training” and it “prohibit[s] the unlawful disclosure of your Social Security number [and] restrict[s] access to your Social Security number except when required for authorized business purposes.” Finally, the company notes that it “build[s] information security into our systems and networks using internationally recognized security standards, regulations, and industry-based best practices.”
A Ponemon study earlier this year estimated the average cost of a data breach response at $150 per purloined record. That figure covers investigation, notification, mitigation (credit reporting, credit freezes, etc.), credit card reissuing, FTC investigations and fines, PCI DSS investigations and fines, and class action litigation and fines/damages, not to mention damage to reputation, lost income and frightened customers. (Jennifer Garner may be pulling out her Bank of America credit card just about now.) Just doing quick “back of the envelope” math, the losses may be northward of $16 billion.
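The back-of-envelope math is just the two figures from this article multiplied together; it ignores overlap between records, per-incident variation and everything else Ponemon would caveat, but it shows where the $16 billion order of magnitude comes from:

```shell
# Rough estimate only: Ponemon per-record cost x individuals affected.
records=106000000        # ~100M U.S. + ~6M Canadian individuals
per_record_cost=150      # Ponemon average cost per compromised record, in USD
total=$((records * per_record_cost))
echo "$total"            # 15900000000, i.e., roughly $15.9 billion
```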
This, from a cloud server misconfiguration detected and exploited by a former employee of the cloud company.
Now the not-so-bad.
Sure, it looks terrible for Capital One. But it may not be as bad as it looks. First, the vast majority of the losses and damages that result from this or any data breach go toward preventing future harm to customers. You know, preventing the identity fraud and identity theft that could occur if the names, SSNs, etc. are used to perpetrate fraud. In fact, while courts have found (and the recent Equifax settlement reinforces) that such costs are compensable, courts are also reluctant to award damages to individuals simply for “loss of privacy” or for what they term “speculative” damages that may result from customers’ “fear” that their data may later be used improperly. So things such as “pain and suffering” and mental distress or anxiety as a result of a data breach are generally not compensated in these settlements. Neither are other costs: the consumer’s time and energy spent canceling cards and reinputting card numbers into accounts, or even missed payments to creditors resulting from denied automatic payments to now-canceled cards.
Also working in Capital One’s favor is the fact that it owns (and therefore can reissue) the bulk of the potentially compromised credit cards. That means it doesn’t have to bill itself for the costs of reissuance; it just absorbs them—at a cost, but not at a markup. But remember, since it was credit card application data that was compromised, this would include banking and credit card information at other institutions. So it’s bigger than Capital One.
The other good news for Capital One—and it’s not that good, really—is that the hack appears to have occurred not on their site, but the site of their cloud provider. This means lawyers will be poring over the cloud contract to determine the scope and extent of liability of the cloud provider for the breach. That’s why getting cloud contracts right in advance is critically important.
Among the issues that will be decided (usually after tens of thousands of billable hours by fresh-faced lawyers) are these: Who had ultimate responsibility for the (mis)configurations on the system? What auditing did Capital One do of the security? What was Amazon required to do, and did the company do it? Who was monitoring access to the server(s), and what did they see or fail to see? How was the data exfiltrated, and who had a duty to observe, monitor or prevent it?
While to customers, Capital One is responsible for the breach, Capital One may see it as an Amazon breach. Whenever you have multiple entities responsible for something, there’s the risk that each one thinks that it’s the others’ responsibility. Amazon has, of course, denied any responsibility for the vulnerability or its mitigation, noting, “AWS was not compromised in any way and functioned as designed. The perpetrator gained access through a misconfiguration of the web application and not the underlying cloud-based infrastructure. As Capital One explained clearly in its disclosure, this type of vulnerability is not specific to the cloud.” Which (maybe) addresses the configuration issue, but not necessarily the monitoring and mitigation issue.
And there’s one final (possible) saving grace for Capital One and Amazon.
While erratic exfiltrated the stolen documents and records, sought to disseminate them broadly and may have stored them somewhere on the deep and dark web (DDW) or elsewhere, it’s not yet clear that anyone other than erratic has seen or has access to these records. Certainly, the “responsible discloser” saw something—but not necessarily the full data dump. This raises the moral, legal and philosophical question: If files are stolen, but nobody sees them, are they really stolen? Or, in legal parlance, are there any compensable damages resulting from a breach which neither does, nor is reasonably likely to, result in ID fraud and ID theft? That all hinges on whether erratic did or did not further disclose the data—something that we don’t know yet.
For incident responders, validating this information could be the difference between a data breach costing thousands or millions of dollars and one costing billions. However, the law is not too clear on this. Data breach disclosure and response laws generally define a data breach as the “unauthorized disclosure” of, or access to, certain data such as SSNs, account information, etc., and not based on the harm or damage that might result from that disclosure. If there’s been unauthorized access to the data, this triggers a host of notification and remediation efforts (including credit watch, credit freeze, etc.) regardless of whether there is a likelihood of harm under the specific facts of the case. Assuming that we could show—and I mean really show—that erratic did not further disseminate the stolen data and that nobody else exploited the server and got the same or similar data, are these mitigation and notification efforts REALLY necessary? Do they really protect the public, and, if so, against what?
This illustrates one problem with the “one size fits all” nature of our data breach and data mitigation laws—and perhaps an inevitable one. We force breach notification and mitigation costs on entities that suffer breaches (sometimes on their own, sometimes through third parties) regardless of whether those “mitigation” efforts are likely to actually mitigate anything. While some data breach laws (including HIPAA) permit entities to determine that unauthorized access to the data did not result in any actual real or potential damage to customers (and you make that determination at your own peril), most state laws don’t do that. You MUST disclose and mitigate every unauthorized access to the data.
Of course, if you gave companies the out of determining that the breach was “harmless,” then a lot of companies would conclude that every breach was “harmless” and the disclosure/mitigation requirements would soon become toothless. So we need a balance between spending billions to mitigate harms that may never occur and sweeping breaches under the rug.
In the end, erratic’s activities will cost someone a bucket of money. We just don’t know who or how much. Stay tuned.