Poorly Managed Public Cloud Databases: A Public Health Hazard

Another week, another poorly configured cloud storage container exposes buckets of data to anyone with access to the internet. According to Lucian Constantin’s Security Boulevard story, “Another Cloud Storage Leak Exposes Verizon IT Files,” the exposed S3 bucket contained around 100MB of data, including internal files, usernames, passwords and email messages from U.S. telecommunications provider Verizon Wireless.

Many of the files were associated with Distributed Vision Services (DVS), an internal Verizon middleware application used to link front-end applications to billing data, Constantin wrote.

The story of poorly secured cloud-based data stores recurs all too often. It seems every few weeks a new report breaks about a trove of files exposed by poorly configured databases or Amazon S3 object storage.

A little more than a week before the most recent misconfigured Amazon S3 data exposure, thousands of misconfigured Elasticsearch servers, most of them running on Amazon Web Services, put the internet at risk by hosting malicious command-and-control servers, as Constantin also reported in his story, “Insecure Elasticsearch Nodes Host Malware Command-and-Control Servers.”

Organizations use Elasticsearch as a search engine for large data pools.

In that instance, security firm Kromtech Alliance, which found the unsecured servers, wrote in its post that it discovered them while researching Elasticsearch servers that lacked proper authentication and were publicly accessible. Kromtech found roughly 15,000 at-risk servers and identified about 4,000 that were infected with the AlinaPOS and JackPOS point-of-sale malware.

“The lack of authentication allowed the installation of malware on the ElasticSearch servers. The public configuration allows the possibility of cyber criminals to manage the whole system with full administrative privileges. Once the malware is in place criminals could remotely access the server’s resources and even launch a code execution to steal or completely destroy any saved data the server contains,” the Kromtech Alliance researchers wrote in their post.

Unfortunately, poorly secured Elasticsearch servers are an increasingly common problem. Earlier this year attackers began pillaging open Elasticsearch clusters on AWS, along with exposed MongoDB and Hadoop systems.

In those instances, attackers did everything from trying to extort ransom from users to destroying all the data they could find on vulnerable targets. The numbers of affected instances in those earlier events were also quite high. In a blog post at the time, the Fidelis Threat Research Team tallied the number of exposed Hadoop installations at between 8,000 and 10,000 worldwide. “A core issue is similar to MongoDB, namely the default configuration can allow access without authentication. This means an attacker with basic proficiency in HDFS can start deleting files,” they wrote.

That lack of authentication was the same problem affecting those 15,000 unsecured Elasticsearch servers discovered earlier this month.
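
To make that exposure concrete: an Elasticsearch node listening on its default HTTP port with no authentication configured will answer queries from anyone who can reach it. Here is a minimal sketch, using Python’s requests library and a placeholder address, of the kind of check a defender can run against their own hosts:

```python
# Minimal sketch: probe a single host for an unauthenticated Elasticsearch node.
# The address below is a placeholder; 9200 is Elasticsearch's default HTTP port.
import requests

HOST = "203.0.113.10"  # placeholder/example address


def is_open_elasticsearch(host: str, port: int = 9200, timeout: float = 3.0) -> bool:
    """Return True if the node answers its root endpoint without credentials."""
    try:
        resp = requests.get(f"http://{host}:{port}/", timeout=timeout)
    except requests.RequestException:
        return False
    # An open node returns 200 with cluster metadata; a secured one returns 401
    # or is unreachable from outside the private network.
    return resp.status_code == 200 and "cluster_name" in resp.text


if __name__ == "__main__":
    if is_open_elasticsearch(HOST):
        print(f"{HOST}: Elasticsearch is answering unauthenticated requests")
    else:
        print(f"{HOST}: no open Elasticsearch node detected")
```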

All organizations that run public cloud services should take steps to ensure those resources require proper authentication to access. Even if the owners of those cloud resources don’t value what is running in them, or don’t believe their data is of value to attackers and criminals, the cloud resources and capacity themselves are valuable to all types of attackers.

That means these resources can be used to launch denial-of-service attacks, attacks against point-of-sale (PoS) systems (as was the case discovered this week) or any other type of attack criminals can imagine.
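
For S3 in particular, a first step is simply auditing bucket ACLs for grants to “everyone” or “any authenticated AWS user” and removing them. The sketch below, with a placeholder bucket name and the standard boto3 SDK, shows one way to do that; it assumes credentials with permission to read and write the bucket ACL:

```python
# Minimal sketch: audit an S3 bucket's ACL for public grants and reset it to private.
# The bucket name is a placeholder; requires s3:GetBucketAcl and s3:PutBucketAcl.
import boto3

BUCKET = "example-internal-files"  # placeholder bucket name
PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",            # anyone on the internet
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",  # anyone with an AWS account
}

s3 = boto3.client("s3")
acl = s3.get_bucket_acl(Bucket=BUCKET)

public_grants = [
    grant for grant in acl["Grants"]
    if grant["Grantee"].get("URI") in PUBLIC_GRANTEES
]

if public_grants:
    print(f"{BUCKET} has {len(public_grants)} public grant(s); resetting ACL to private")
    s3.put_bucket_acl(Bucket=BUCKET, ACL="private")
else:
    print(f"{BUCKET} has no public ACL grants")
```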

How should teams secure these systems?

“Data stores host your company’s private data and they should be treated accordingly. Your company’s API may be public but its data stores should (almost) never be made public. Simply putting these Elasticsearch nodes behind a firewall would have prevented most of the issue by forcing attackers to attack Elasticsearch indirectly as opposed to directly,” says Naftuli Kaym, Lead Site Reliability Engineer at Grindr.
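
One way to act on that advice in AWS, where most of the exposed nodes were running, is to place the data tier in a security group that accepts Elasticsearch traffic only from the application tier’s private network. The following is a hedged boto3 sketch with placeholder VPC and CIDR values, not a drop-in configuration:

```python
# Minimal sketch: a security group that only admits Elasticsearch traffic (port 9200)
# from a private application subnet, instead of from the whole internet.
# The VPC ID and CIDR below are placeholders.
import boto3

VPC_ID = "vpc-0123456789abcdef0"  # placeholder VPC
APP_TIER_CIDR = "10.0.1.0/24"     # placeholder private subnet for the app tier

ec2 = boto3.client("ec2")

sg = ec2.create_security_group(
    GroupName="elasticsearch-internal-only",
    Description="Allow Elasticsearch HTTP only from the application tier",
    VpcId=VPC_ID,
)

ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 9200,
        "ToPort": 9200,
        "IpRanges": [{"CidrIp": APP_TIER_CIDR, "Description": "app tier only"}],
    }],
)
print(f"Created {sg['GroupId']}: port 9200 open only to {APP_TIER_CIDR}")
```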

Robert Reeves, chief technology officer at database automation provider Datical, says it’s time to treat the database as a first-class citizen, automate change deployments and enforce strict permissions. “It really is that simple. The database is the hardest part in the application stack to get right, so it just doesn’t make sense that it’s always the forgotten piece of the puzzle,” he says.

“We’re so focused on time-to-market and getting development to push out applications at the speed of light, but we’re still manually managing the change process of databases that contain massive amounts of information,” he adds.

Reeves contends that automating database management will help to eliminate human error. “By automating your database change process, you can limit who has access to the database and tighten restrictions to mitigate data loss. It takes the human error element [out] of the equation and makes sure no one does anything stupid.”

“Database change automation also can make sure grants are not misplaced and standards are enforced during app development to shift security to the left. Cloud providers like AWS and Azure already have enforcement standards that organizations can turn on to track which machines aren’t following protocol. Take advantage of these services,” Reeves says.
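
As an example of the enforcement standards Reeves mentions, AWS Config offers managed rules that continuously flag resources drifting from policy, such as S3 buckets that allow public reads. A minimal sketch, assuming AWS Config is already recording in the account (the rule name here is arbitrary):

```python
# Minimal sketch: enable an AWS-managed Config rule that flags S3 buckets
# permitting public read access. Assumes AWS Config is already recording.
import boto3

config = boto3.client("config")

config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "s3-no-public-read",  # arbitrary name for this rule
        "Description": "Flag S3 buckets that permit public read access",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED",
        },
    }
)
print("Config rule enabled: noncompliant buckets will be reported")
```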

All of us, as users of the internet and cloud resources, are put at risk by operators who don’t take security and good system hygiene seriously. It’s past time for everyone to secure the systems they control, if they aren’t doing so already.

With reporting from Lucian Constantin

George V. Hulme
