Once again, accusations and hyperbole touch on important issues… and ultimately serve as a distraction
What’s going on? A security services firm called DirectDefense published a report, penned by the firm’s CEO, Jim Broome. This report accuses Carbon Black products of leaking customer data.
What’s the issue? Carbon Black’s Cb Response product can be configured to automatically upload samples to VirusTotal. Sometimes these samples contain customer-sensitive data.
What’s the accusation? DirectDefense asserts that, with the Share binaries with VirusTotal option enabled, Cb Response is “the world’s largest pay-for-play data exfiltration botnet”.
What’s the evidence? DirectDefense found that it could isolate samples uploaded to VirusTotal by filtering on Cb Response’s API key, leading them to attribute the leaked customer files to Carbon Black.
Conclusion: Is this bullshit? Short version? Yes.
As you might imagine, there’s a whole lot more to it. This post isn’t intended as a rebuttal, but as an analysis of the real issue at hand here that customers need to be educated on (hint: it isn’t Cb Response).
What should we do about it? There’s no TL;DR for this — you’ll have to scroll down and read it for yourself.
The tweet above and associated blog post by DirectDefense kicked off the conversation and analysis that led to this post. Steve Ragan of CSO also published a story based on the blog by DirectDefense, and Carbon Black has posted its own rebuttal. For additional context, go read each of these, in order.
Here’s how the issue DirectDefense is referring to would occur, step-by-step, with all the hyperbole stripped out.
- A customer buys the Carbon Black product, Cb Response.*
- The customer decides to enable a feature to upload suspicious binaries to VirusTotal.**
- Carbon Black decides a proprietary customer binary, executable or script might be malicious.***
- It uploads it to VirusTotal using a Carbon Black-owned API key
- The app or script is now accessible to anyone with a private VirusTotal account.
* This is the EDR product originally known as ‘Carbon Black’, not the product Bit9 is originally known for or the NGAV product from the Confer acquisition. Come on, it isn’t that confusing, is it? (Yes, it is.)
** This feature is disabled by default. Also, many instances of Cb Response are installed and managed by contractors or third parties, so the customer might not be the one making the decision to enable this feature.
*** It isn’t just binaries that Carbon Black uploads; anything determined to be executable and malicious might get uploaded: scripts or Java JAR/WAR files, for example.
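Mechanically, the upload step is unremarkable. Here is a rough sketch of what such an automated submission looks like against VirusTotal’s public v2 API. The endpoint is real, but the helper names, the placeholder filename, and the vendor-owned key are illustrative; this is not Carbon Black’s actual code.

```python
import hashlib

VT_SCAN_URL = "https://www.virustotal.com/vtapi/v2/file/scan"  # public v2 endpoint

def sha256_bytes(data: bytes) -> str:
    """Hash the sample; products typically check this hash before uploading."""
    return hashlib.sha256(data).hexdigest()

def build_submission(data: bytes, api_key: str) -> dict:
    """Assemble the multipart upload. Note that the whole file leaves the
    network, not just the hash, and it is tied to the vendor-owned API key."""
    return {
        "url": VT_SCAN_URL,
        "params": {"apikey": api_key},        # hypothetical vendor-owned key
        "files": {"file": ("sample.bin", data)},
    }

# An agent would then do something like:
#   requests.post(**build_submission(open(path, "rb").read(), API_KEY))
```

Everything uploaded under the same key is trivially groupable by that key, which is exactly how DirectDefense says it isolated these samples.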
Welcome to the world’s largest pay-for-play data exfiltration botnet
Is this a fair statement? No — of course not. It’s intended to capture clicks and maximize the marketing value of the post. I definitely don’t think it’s accurate to say ‘Carbon Black is leaking customer data’, as headlines are suggesting. I’ll admit to having fun with ‘click-baity’ titles myself, but as the wise Wendy Nather once instructed me, “you’d better be able to defend your statements”.
Harvesting Cb Response Data Leaks for fun and profit
Broome’s title suggests Carbon Black’s product is responsible for leaking customer data. Is it?
As it turns out, this isn’t a terribly simple question to answer. Kudos to Carbon Black for being quick on the response, pointing out that they’re aware of the risks, and feel that they give the customer adequate warning in the configuration screens (screenshot below). They also point out that this feature is disabled by default.
A few things are apparent at this point: Broome’s write-up isn’t terribly balanced, as it points out the negatives but none of the positives of using VirusTotal as a secondary detection mechanism. Strangest of all, however, is that Carbon Black was singled out in the first place. I believe Broome when he explains it was pure chance that they ran across binaries uploaded by Carbon Black; it’s one of the most popular post-AV products on the market right now.
However, Broome and his staff should be well aware that dozens of other security vendors either offer an option to automatically submit binaries (yes, whole binaries, not just the hash) to VirusTotal, or do so without the customer’s knowledge altogether. In singling out Carbon Black, DirectDefense opens itself up to criticism and closer scrutiny. As is typical of the security community, a thread of controversy didn’t take long to emerge.
I personally don’t believe DirectDefense is a shill for Cylance, but in singling out one of many vendors that do the same thing, they’ve stepped into a classic PR gaffe that makes them look like one.
The real kicker here is that I’ve been covering endpoint security as an industry analyst since 2013, and the ability to upload samples to VirusTotal is something customers were practically begging for a few years ago. It wouldn’t surprise me if Carbon Black only added this at customers’ requests.
What should we do about it?
First off, how do we even classify this issue? Is this a vulnerability in Carbon Black’s product (my opinion: no)? Is this irresponsible disclosure on DirectDefense’s part (my opinion: no)? Is this a natural risk/reward tradeoff associated with crowdsourced threat analysis (my opinion: bingo)?
So what, if anything, should Carbon Black do about this? Are they liable for sensitive data their product uploads to VirusTotal? Should they educate customers more about the risks? Honestly, I’m not sure yet, but here are a few considerations rolling around in my head:
- Carbon Black has a huge network of professional services firms, MSPs and MSSPs that use Cb Response, both reactively and proactively. The result is that a significant percentage of Carbon Black customers may be unaware of how the product is configured, and therefore, may be left out of the risk decision associated with enabling or using the ‘submit to VirusTotal’ feature.
- I’m no lawyer, but with the disclaimer in the product, it seems like the liability would fall to the customer or third party MSP to communicate the risks of enabling the feature. I’m not aware of any landmark cases involving MSP liability, but as we see more and more security functions outsourced, it’s only a matter of time before the issue comes up. Caveat Emptor.
- Should we consider a systematic way to ensure proprietary binaries and scripts don’t get uploaded? Most of the approaches that come to mind might be too much work for the customer to be practical: 1) a dynamically updated whitelist, built by appdev pipelines, 2) customer signs binaries, and Cb excludes files marked as ‘known good’ by checking certs, or even 3) an opportunity to queue up files to be submitted and give final approval to a human.
- This isn’t something that happens for every binary Cb Response comes across; it’s at the very end of a long decision tree. Cb Response has already checked against behavioral rules and checked the hash against known-bad sources (including checking VirusTotal for the binary’s hash) before deciding to send off a binary for analysis.
- Interestingly, this blog post comes just days after SafeBreach gave a talk about using anti-virus sandboxes to exfiltrate data. In the proof of concept, malware steals data, obfuscates it, and gets caught on purpose; getting caught gives it a one-way ticket to the AV sandbox, in theory.
- On the disclosure debate: The fact that DirectDefense notified affected organizations, but not Carbon Black is significant. This implies that DirectDefense realizes this wasn’t Carbon Black’s responsibility, but the responsibility of organizations that enabled the feature, accepting the risk. Otherwise, it would have made sense to disclose directly to Carbon Black, letting them notify the affected customers.
- I’m also still thinking this through. My first impression is that what DirectDefense has done doesn’t quite qualify as disclosure, since this isn’t a new finding; most who use publicly accessible sandboxes are well aware of the risks. That said, DirectDefense says it notified the affected organizations, which constitutes responsible disclosure in this case.
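To make that decision tree concrete, here is a schematic (and hypothetical: the verdict names and lookup interface are mine, not Carbon Black’s) version in Python. The point is the ordering: the full binary only leaves the network after local rules and hash-only lookups come up empty.

```python
def triage(sample_hash: str,
           behavioral_verdict: str,
           known_bad: set,
           vt_hash_report) -> str:
    """Simplified version of the decision chain: uploading the whole binary
    is the *last* step, after behavioral rules and hash lookups."""
    if behavioral_verdict == "benign":
        return "ignore"                      # local behavioral rules cleared it
    if sample_hash in known_bad:
        return "alert"                       # already known bad, no upload needed
    report = vt_hash_report(sample_hash)     # hash-only lookup; file stays local
    if report is not None:
        return "alert" if report["positives"] else "ignore"
    return "upload_for_analysis"             # only now does the binary leave
```

Only samples that reach the final branch ever hit the ‘submit to VirusTotal’ feature, which is why most customer binaries never leave.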
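And to sketch the first two mitigation ideas from the list above, a minimal ‘known good’ gate might look like this. The function names and the signing check are hypothetical, assuming the appdev pipeline can publish hashes of internally produced binaries.

```python
import hashlib

def load_allowlist(hash_lines) -> set:
    """A dynamically updated 'known good' list, e.g. emitted by the build
    pipeline for every internally produced binary (approach 1)."""
    return {line.strip().lower() for line in hash_lines if line.strip()}

def may_upload(sample: bytes, allowlist: set,
               is_signed_by_us=lambda b: False) -> bool:
    """Gate the VirusTotal submission: skip anything the customer has marked
    as proprietary, whether by hash (approach 1) or signing cert (approach 2)."""
    digest = hashlib.sha256(sample).hexdigest()
    if digest in allowlist:
        return False          # known internal binary, never submit
    if is_signed_by_us(sample):
        return False          # signed with the customer's own certificate
    return True               # unknown: eligible for submission (or human review)
```

Approach 3 (human approval) would simply queue anything that returns True instead of submitting it directly.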
Sharing is Carin… whoa, back up — that’s enough
You’ve probably heard it before, that the ‘bad guys’ share intelligence on targets, so it only behooves the ‘good guys’ to share details on what the attackers are doing.
For the most part, this is true and definitely helps defenders get that much better; however, it’s worth noting that not all data needs to be sent to a third-party provider for analysis. As this example of customers enabling the VirusTotal upload feature shows, care should be taken in determining what types of data to send. The same goes for other cloud-based analysis platforms, such as those built into NGFWs.
In one engagement, we had to carefully weigh the risk vs. reward of sending specific types of files from specific network segments and user groups to be analyzed by a vendor’s cloud-based sandbox. Remember, once the data leaves your network, there’s no ‘getting it back’. Sharing information can definitely help us all become better at defense, but remember to think carefully about exactly what it is you’re sharing.
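A minimal sketch of that kind of risk decision, assuming hypothetical segment names and a deny-by-default policy table (none of this is from any real product):

```python
# Hypothetical policy: which file types may be sent to a cloud sandbox, per segment.
POLICY = {
    "guest_wifi": {"exe", "dll", "js"},  # low-sensitivity segment: send freely
    "corp_users": {"exe", "dll"},        # no scripts; they often embed secrets
    "cardholder": set(),                 # PCI segment: never submit anything
}

def allowed_to_submit(segment: str, filename: str) -> bool:
    """Risk/reward gate: unknown segments are denied by default, because once
    a file leaves the network there is no getting it back."""
    ext = filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    return ext in POLICY.get(segment, set())
```

The table itself is the deliverable: it forces someone to make the risk/reward call per segment and file type, before the data leaves.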
The Bigger Issue
In my experience as a pentester, SIEM admin, chief incident handler and PCI QSA, I feel qualified to state that it is the norm for security tools to ‘leak’ sensitive data. The average SIEM, IPS or network forensics appliance is chock full of passwords, private keys, PII and payment data. I’ve seen cases where nearly every security product, and even the email servers, were technically in scope for PCI before we started cleaning things up.
Example 1: In the early days of breach detection products (think FireEye, Lastline, Cyphort), the product we used didn’t support LDAPS for Active Directory integration. We’d regularly find each other’s passwords in the SIEM and have to reset them. We ended up going back to local authentication until the security vendor implemented a secure way to integrate with AD.
Example 2: In even earlier days, when HIPS was hot stuff, I found that a particular product would save packet captures whenever a detection rule was triggered. HIPS products, like their network-oriented brethren, traditionally have a high false-positive rate. In this case, this HIPS product often alerted when employees were using an internal Chargebacks system. The result, I found, was that every workstation that was used to log into this system had PCAPs containing unencrypted cardholder data under c:\Program Files\Productname\*.pcap.
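Finding that kind of residue doesn’t require a DLP suite. A crude scan, pairing a digit-run regex with the standard Luhn checksum, is enough for a first pass over those capture files (the helper names are mine; a real scan would also handle track data and false positives):

```python
import re

def luhn_ok(digits: str) -> bool:
    """Standard Luhn checksum, used to weed out random digit runs."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:     # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_pans(blob: bytes) -> list:
    """Scan raw capture data (e.g. a HIPS-generated .pcap) for 13-16 digit
    runs that pass Luhn -- a cheap first pass, not a full DLP engine."""
    candidates = re.findall(rb"\d{13,16}", blob)
    return [c.decode() for c in candidates if luhn_ok(c.decode())]
```

Pointing something like this at a security product’s own data directories is an eye-opening exercise during PCI scoping.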
This leads me to my opinion that the bigger issue here is the lack of data visibility and control. This is such a difficult problem that the security industry has practically abandoned it for easier problems.
Someone should get an alert when a script with hardcoded AWS API keys leaves the organization. Most don’t. Alarm bells should go off when proprietary software leaves corporate-controlled systems. Ideally, these incidents should be automatically detected and prevented. Nearly every major breach you can quickly pull to mind involved copying gigabytes or terabytes of data off the victim’s network without anyone noticing. Simple statistical analysis can detect that stuff — we’re not even talking DLP at that level.
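The ‘simple statistical analysis’ really can be simple. As a sketch (the threshold and function names are my own, not any product’s): flag any host whose outbound volume for the day exceeds its own baseline by a few standard deviations.

```python
import statistics

def egress_alert(history_bytes, today_bytes, sigmas=3.0) -> bool:
    """Flag a host whose daily outbound volume is an outlier versus its own
    baseline (mean + N standard deviations). Crude, but it catches the
    'gigabytes copied off the network' pattern without a DLP deployment."""
    mean = statistics.mean(history_bytes)
    stdev = statistics.pstdev(history_bytes)
    return today_bytes > mean + sigmas * stdev
```

A per-host baseline like this wouldn’t catch a slow trickle, but it would have flagged most of the bulk-copy breaches that come to mind.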
When we are talking DLP, we still hear horror stories of implementations kicking out over 100,000 alerts every day. Until we can solve the problem of visibility into where corporate data is going, these kinds of issues will continue to take us by surprise.
This is a Security Bloggers Network syndicated blog post authored by Adrian Sanabria. Read the original post at: Savage Security Blog - Medium