Dow Jones Hack Highlights the Problems with Containers
Those who think that technology will solve their security problems understand neither technology nor security.
Dow Jones just announced that about 2.2 million customers’ details were exposed due to a misconfigured Amazon S3 storage container (what AWS itself calls a “bucket”). The problem is not with the containers. The problem is with how the containers were configured.
Containers are a method of operating system virtualization that enables you to run an application and its dependencies in isolated processes. Containers let you package an application’s code, configurations, and dependencies into easy-to-use building blocks that deliver consistency, operational efficiency, productivity, and version control. All of which is good stuff.
Containers can also help applications deploy quickly and reliably, regardless of the deployment environment, and give you more granular control over your resources.
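To make the packaging idea concrete, here is a minimal, purely illustrative Dockerfile (the file name, service, and dependency file are hypothetical, not from the Dow Jones case). It bundles a small Python service and its pinned dependencies into one reproducible image:

```dockerfile
# Illustrative example only: package a small Python service and its
# dependencies into a single, reproducible image.
FROM python:3.11-slim

WORKDIR /app

# Dependencies are pinned in requirements.txt, so every build and every
# deployment environment gets the same library versions.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY app.py .
CMD ["python", "app.py"]
```

The same image runs identically on a laptop, a test server, or a production cluster, which is precisely the consistency and portability benefit described above.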
The downside is that containers are less isolated from one another than virtual machines are. And of course, because containers are an easy way to package and distribute applications, many developers and programmers are doing just that. The problem is that not all of the containers available on the web can be trusted, and not all of the libraries and components included in those containers are patched and up-to-date.
As a result, they are not a universal replacement for every existing virtual machine (VM) deployment, nor are they self-contained the way VMs are. By contrast, containers share the host operating system’s kernel, along with many libraries and binaries. As a consequence, flaws and attacks have a much greater potential to carry down into the underlying OS and over into other containers, potentially propagating malicious activity far beyond the original event.
And because containers can be spun up and duplicated at an astonishing rate, they are often not spun down or deleted when they’re no longer needed. Orphaned containers left running can easily rack up significant (and unnecessary) cloud computing costs for the enterprise. This is of course why cloud providers love containers.
Although AWS S3 buckets are private by default, with access tightly restricted, many user organizations inadvertently misconfigure them to allow “public” or semi-public access, and data is exposed anyway.
In addition to Dow Jones, we have recently seen organizations including Verizon, World Wrestling Entertainment, Scottrade and Deep Root Analytics have the contents of their S3 buckets exposed, with user error taking the blame.
I believe that in the haste to seize any new technological advantage, companies put so much pressure on harried programmers that these sorts of human errors and failures to envision the broader context of their work are bound to happen, and will increase over time.
In this case, the exposed buckets containing customer data had been configured, via permission settings, to allow any AWS “authenticated user” to download the data via the repository’s URL. In AWS-speak, an authenticated user is anyone who has registered for a free AWS account, and there are now more than 1 million of them. Cloud-based storage configured by enterprises to allow public or semi-public access is by now an all-too-common story. Enterprise users should be able to prevent these types of errors with policies, practices and monitoring, but very few do.
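This kind of grant is straightforward to detect programmatically. As a sketch, the following Python function (a hypothetical helper, not an official AWS tool) inspects a bucket ACL in the shape that boto3’s `get_bucket_acl` returns and flags grants to the AllUsers or AuthenticatedUsers groups; the two group URIs are the real AWS identifiers, while the function name and sample data are illustrative:

```python
# Hypothetical audit helper: flag S3 ACL grants that expose a bucket
# to the public or to any authenticated AWS account holder.

# These two group URIs are AWS's actual identifiers for the
# "Everyone" and "Any AWS user" grantee groups.
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"
AUTH_USERS = "http://acs.amazonaws.com/groups/global/AuthenticatedUsers"

def risky_grants(acl):
    """Return (who, permission) pairs for grants beyond the bucket owner."""
    findings = []
    for grant in acl.get("Grants", []):
        uri = grant.get("Grantee", {}).get("URI")
        if uri == ALL_USERS:
            findings.append(("public", grant["Permission"]))
        elif uri == AUTH_USERS:
            findings.append(("any-aws-account", grant["Permission"]))
    return findings

# Sample ACL resembling the misconfiguration described above: the
# AuthenticatedUsers group -- anyone with a free AWS account -- can read.
example_acl = {
    "Owner": {"DisplayName": "acme-corp"},
    "Grants": [
        {"Grantee": {"Type": "CanonicalUser", "ID": "abc123"},
         "Permission": "FULL_CONTROL"},
        {"Grantee": {"Type": "Group", "URI": AUTH_USERS},
         "Permission": "READ"},
    ],
}

print(risky_grants(example_acl))  # [('any-aws-account', 'READ')]
```

Running a check like this across every bucket on a schedule is exactly the kind of routine monitoring that would have caught the misconfiguration before the data was exposed.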
The configuration problem is not limited to storage buckets. The Dow Jones hack simply spotlights one instance and use case. Configuration inconsistencies and mishaps abound throughout IT installations and are responsible for at least half of internally sourced vulnerabilities. Nor is it a recent or sudden problem. Back in 2013, Rapid7 audited over 12,000 Amazon S3 buckets and found that 15% of them were publicly accessible. An earlier audit found that 13% of buckets were publicly accessible, and that many contained sensitive data like Social Security numbers.
We now see the U.S. Senate urging Federal adoption of the DMARC standard for email, with little apparent regard for how difficult and tricky it is to configure and maintain DMARC policies and the SPF and DKIM records they depend on. Instead of phishing attacks, the new Federal plague will be a failure to receive critical email messages, or the unmonitored redirection of confidential and sensitive messaging to unsecured destinations.
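For context, a DMARC policy is published as a single DNS TXT record. The record below is an illustrative example (the domain and report address are placeholders), using the standard DMARC tags: `v` names the protocol version, `p` sets the policy receivers apply to mail that fails SPF/DKIM alignment, and `rua` is where aggregate reports are sent:

```
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```

The configuration risk is easy to see here: publish `p=reject` before every legitimate mail source is correctly aligned with SPF or DKIM, and valid messages start silently disappearing, which is exactly the failure mode described above.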
So, it is hardly a surprise that we see a cyber-attack or a hack every day now. Between the bad guys getting better, the attacks getting more sophisticated, the malware learning how to avoid detection and the copious amount of user error, we should have an attack every hour.
Until companies begin to address these issues by educating all engineers to properly secure cloud instances under every development scenario, establishing standards to protect data and access controls, and performing regular scans to verify security, we will continue to see a steady increase in the number of successful attacks and in the damage they inflict.
DMARC is another story entirely.
This is a Security Bloggers Network syndicated blog post authored by Steve King. Read the original post at: News and Views – Netswitch Technology Management