Carbon Black has some customer data leakage issues

My buddy Jim Broome at Direct Defense is stirring the pot a bit today with his latest blog post. It seems that during a potential-breach investigation they were performing for a customer, they accidentally discovered that it is possible to harvest some very sensitive data from the Carbon Black Cb Response product.

They discovered it when someone at Direct Defense was “analyzing a potential piece of malware using the analyst interface of a large cloud-based multiscanner”. When they used that multiscanner to search for similar pieces of malware for context, they found files that weren’t related to their customer. Digging further, they found a bunch of very sensitive data that was being uploaded to the multiscanner from Carbon Black. The reason all these non-malware files are there is the way Carbon Black has structured its service. Jim explains it much better than I can, so I’ll just pull an excerpt from his post:

How could this happen? As previously stated, when a new file appears on a protected endpoint, a cryptographic hash is calculated. This hash is then used to look the file up in Carbon Black’s cloud. If Carbon Black has a score for this file, it gives the existing score, but if no entry exists, it requests an upload of the file. Since Carbon Black doesn’t know if this previously unseen file is good or bad, it then sends the file to a secondary cloud-based multiscanner for scoring. This means that all new files are uploaded to Carbon Black at least once.
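The lookup-then-upload flow Jim describes can be sketched in a few lines. This is purely illustrative: `cloud` is a hypothetical interface with `get_score()` and `upload()` methods I've made up for the sketch, not a real Carbon Black API.

```python
import hashlib

def handle_new_file(path, cloud):
    """Sketch of the hash-lookup-then-upload flow described above.

    `cloud` is an assumed object with get_score(sha256_hex) and
    upload(path) methods -- hypothetical, not a real Carbon Black API.
    """
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()

    score = cloud.get_score(digest)   # look the hash up in the cloud
    if score is not None:
        return score                  # known file: reuse the existing score
    cloud.upload(path)                # unseen file: full contents leave the endpoint
    return "pending"
```

The key consequence is in the last branch: any file whose hash the cloud has never seen gets uploaded in full.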

Generally speaking, this isn’t such a big deal. Take a Windows update for example. The first customer of Carbon Black that gets a Windows update and then uploads it doesn’t leak much information. However, let’s extrapolate these along real-world lines. Not every file is a Windows update, and many of them contain sensitive details and change frequently. This degree of change is what spurred Carbon Black in its Bit9 form to create this system in the first place.

Imagine you have this solution deployed on a developer workstation. Each time a new piece of code is compiled, that new compiled code is a file that nobody has ever seen. It gets uploaded. Now imagine a build or deployment system that packages up a bunch of executables (and configuration files). You could easily imagine the types of combined data that could constitute a “new file”.
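To see why every build counts as a “new file”, note that even a one-byte difference between two builds (an embedded timestamp, say) produces a completely unrelated hash. The byte strings below are illustrative stand-ins, not real binaries:

```python
import hashlib

# Two builds of the "same" program rarely hash the same: a single changed
# byte (e.g. an embedded build timestamp) yields an unrelated digest, so
# every rebuild looks brand-new to a hash-based cloud lookup.
build_a = b"\x7fELF...payload...2017-08-09 10:00"  # illustrative bytes only
build_b = b"\x7fELF...payload...2017-08-09 10:01"

hash_a = hashlib.sha256(build_a).hexdigest()
hash_b = hashlib.sha256(build_b).hexdigest()
assert hash_a != hash_b  # unseen hash -> candidate for upload
```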

Jim and I have talked about this in the past. One of the issues we’ve had with some AV solutions is that a few had default settings that uploaded suspected malware to VirusTotal (this is not on by default in Cb Response). As Jim says above, if something one of your developers wrote gets thrown up into the cloud because it smells like malware, then you are potentially exposing sensitive data to the world. What kind of sensitive data? I’ll steal from Jim’s post again to explain:

  • Cloud keys (AWS, Azure, Google Compute) – which could provide you with access to all cloud resources
  • App store keys (Google Play Store, Apple App Store) – letting you upload rogue applications that will be updated in place
  • Internal usernames, passwords, and network intelligence
  • Communications infrastructure (Slack, HipChat, SharePoint, Box, Dropbox, etc.)
  • Single sign-on/two factor keys
  • Customer data
  • Proprietary internal applications (custom algorithms, trade secrets)
Jim sums it up nicely (as he usually does) with this line: “Welcome to the world’s largest pay-for-play data exfiltration botnet.” Ouch.
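To make the exposure concrete: anyone with analyst access to the multiscanner can trivially mine uploaded files for secrets like cloud keys. The sketch below is my own illustration, not anything from Jim’s post; it greps raw bytes for AWS-style access key IDs (the documented `AKIA` prefix followed by 16 characters):

```python
import re

# Crude secret scan over a file's raw bytes: AWS access key IDs are 20
# characters starting with "AKIA". Real harvesting tools look for many
# more patterns (tokens, private keys, passwords) the same way.
AWS_KEY_RE = re.compile(rb"AKIA[0-9A-Z]{16}")

def find_aws_key_ids(blob: bytes) -> list[str]:
    """Return any AWS-style access key IDs found in the given bytes."""
    return [m.decode() for m in AWS_KEY_RE.findall(blob)]
```

Run that over a leaked config file or binary with an embedded key and the credential falls right out.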

    Edit: added clarification above that Carbon Black does not have the file upload feature turned on by default. Also, Carbon Black has responded to Direct Defense.

    Edit: Direct Defense’s response to Carbon Black’s response

    This is a Security Bloggers Network syndicated blog post authored by Michael Farnum. Read the original post at: An Information Security Place