21 InfoSec and AWS Experts Reveal the #1 Mistake Companies Make When It Comes to AWS Security (and How to Avoid It)

More companies are moving to the cloud than ever before. Amazon Web Services (AWS) is one of the most popular cloud platforms, and for good reason: AWS provides a robust set of features and services that give it broad appeal among businesses of all sizes. But when it comes to security, many companies continue to fall short, putting their sensitive data at risk. In a recent Threat Stack study, for example, we discovered that 73% of companies have at least one critical AWS security misconfiguration that could enable an attacker to gain direct access to private services or the AWS console, or that could be used to mask criminal activity from monitoring technologies.

To gain some insight into the biggest (and potentially most devastating) mistakes companies are making related to AWS security as well as tips and strategies for avoiding them, we reached out to a panel of InfoSec pros and AWS experts and asked them to answer this question:

“What’s the number one mistake companies make when it comes to AWS security (and how can they avoid it)?”

Meet Our Panel of InfoSec Pros and AWS Experts

Read on to learn what our experts had to say about the biggest mistakes companies make when it comes to AWS security — and what you can do to avoid making the same mistakes.

Patrick Cable

@patcable
Patrick Cable is a Sr. Infrastructure Security Engineer at Threat Stack. Patrick works on building the bridge between Infrastructure Operations and Security by writing new tooling and implementing security technologies.

“In an age of credential theft, there is a solution…”

I was recently reading the Verizon Data Breach Investigations Report and noticed that credential theft is still one of the most common ways that infrastructure gets compromised. Machine-to-machine (and user-to-machine) credentials are complicated to get right. One way you can reduce your attack surface is to start using shorter-lived access tokens — and to make it easy for employees and applications to request and rotate them. Implementing something like HashiCorp's Vault can provide the backend infrastructure that makes this possible for people and machines alike. As an added bonus, you can use Vault to generate all kinds of temporary credentials, not just ones for AWS.
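
To make that concrete, here is a minimal sketch using the hvac Python client, assuming a Vault server with the AWS secrets engine mounted at aws/ and a role named "deploy" already configured (both names are hypothetical placeholders):

```python
import os

import hvac  # HashiCorp Vault API client

# Connect using the address and token from the environment.
client = hvac.Client(
    url=os.environ["VAULT_ADDR"],
    token=os.environ["VAULT_TOKEN"],
)

# Ask Vault to mint short-lived AWS credentials for the "deploy" role.
# Vault creates them on demand and revokes them when the lease expires.
resp = client.secrets.aws.generate_credentials(name="deploy")

creds = resp["data"]
print("AWS_ACCESS_KEY_ID=", creds["access_key"])
print("AWS_SECRET_ACCESS_KEY=", creds["secret_key"])
# For STS-backed roles, creds["security_token"] is also returned and
# can be exported as AWS_SESSION_TOKEN.
```

Because the credentials expire on their own, a stolen key is only useful for minutes or hours rather than indefinitely.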

Cody Swann

@CodySwannGT
Cody Swann is the CEO of Gunner Technology. They build custom software for the public and private sectors, as well as for entrepreneurs, using JavaScript and AWS.

“Prior to founding Gunner…”

I worked for one of the largest sports websites in the world, and we were early adopters of AWS.

The team I worked with made the mistake of putting all our applications and environments in a single AWS account, and everyone on that team had super user access. Long story short, while trying to modify the staging environment, we accidentally crippled the production environment as well.

Luckily, we were able to recover fairly quickly, but this is the biggest mistake I see even to this day, and it’s so easily preventable.

More than once, we've taken on a large client who fired their previous provider over something similar.

In one situation, the staging environment was compromised, and the attackers gained control over the entire AWS account, which caused a week-long outage.

The solution is simple: Silo your environments and applications. That way, you can restrict access to the production environments, and if a staging or test environment is compromised, it’s isolated to that single environment.
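
One hedged sketch of that siloing, assuming an AWS Organizations management account (the account names and email addresses are hypothetical): each environment gets its own member account, so a compromise of staging can't touch production.

```python
import boto3

# Create one AWS account per environment under an AWS Organization.
org = boto3.client("organizations")

for env in ("production", "staging", "testing"):
    status = org.create_account(
        # Each member account needs a unique root email address.
        Email=f"aws-{env}@example.com",
        AccountName=f"myapp-{env}",
    )["CreateAccountStatus"]
    print(env, status["State"])  # IN_PROGRESS / SUCCEEDED / FAILED
```

With separate accounts, you can hand developers broad access to staging while keeping production behind a much smaller set of roles.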

Margaret Ho

@SecInnovation
Margaret Ho is a security engineer at Security Innovation. She enjoys both breaking things and making things, and she is passionate about privacy, security, and equality.

“One of the most commonly seen and dangerous mistakes companies make when they use AWS services is…”

Allowing sensitive data stored in S3 buckets to be publicly accessible. There have been many widely reported and well-known instances of this error, which have led to the exposure of top secret Army intelligence, consumer data collected by Experian, and Accenture customer and proprietary data, to name just a few. Fortunately, with the new Permission Checks feature that was announced in November 2017, a prominent Public tag is now displayed next to any buckets that are publicly accessible, which should make this issue much easier to detect and fix. It should be noted, however, that individual objects are not similarly marked when they are made publicly accessible. This is one of the reasons security experts advocate for managing access policies in groups or sets instead of individually, since auditing individual access is tedious and allows more room for error.
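
For auditing at the bucket level, here is a minimal sketch that flags ACL grants to the AllUsers and AuthenticatedUsers groups, the two grantees behind most public-bucket incidents:

```python
import boto3

# Grantee URIs that make a bucket readable by the world (AllUsers) or
# by anyone with an AWS account (AuthenticatedUsers).
PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    acl = s3.get_bucket_acl(Bucket=name)
    for grant in acl["Grants"]:
        uri = grant["Grantee"].get("URI", "")
        if uri in PUBLIC_GRANTEES:
            print(f"{name}: {grant['Permission']} granted to {uri}")
```

As Margaret notes, object-level ACLs are not covered by a bucket-level check like this; each object's ACL would have to be audited separately.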

Mike Baker

@Mosaic451
Mike Baker is Founder and Managing Partner at Mosaic451, a managed cyber security service provider (MSSP) with expertise in building, operating, and defending some of the most highly secure networks in North America. Baker has decades of security monitoring and operations experience within the US government, utilities, and critical infrastructure.

“Keeping your AWS environment safe from hackers and out of the news is not rocket science…”

It’s preventable.

Barely a day goes by without news of yet another breach of an AWS S3 bucket, but these breaches are preventable. AWS is a powerful and highly secure cloud environment, but it must be configured and maintained properly. The most careless mistake many companies make is not understanding the default settings and not knowing what data they are actually making available.

The default privacy setting for AWS S3 buckets is owner-only. Most AWS breaches involve organizations choosing the "Authenticated Users" setting when expanding access to their buckets, not realizing that this grants access to all authenticated users of Amazon Web Services, not just users of their own account. This means that anyone with an AWS account can access that bucket with whatever permissions are granted at that level of access. It's a free-for-all.

Organizations must understand what level of access they’re granting to their data and who they are granting it to. A good rule of thumb is, if you’re not sure, don’t do it! Get help before you end up exposing your data to the world.
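
If an audit does turn up an over-broad grant, the remediation is a one-liner. A minimal sketch, with a hypothetical bucket name:

```python
import boto3

# Reset a bucket's ACL to owner-only, removing AllUsers and
# AuthenticatedUsers grants in one call.
s3 = boto3.client("s3")
s3.put_bucket_acl(Bucket="my-exposed-bucket", ACL="private")
```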

The other half of this critical, but preventable, mistake is not knowing what data you actually have. Data governance is one of the pillars of cloud security. You cannot secure your data if you don’t know what you have. Is your data important? Is it unique? Does it have value?

Many organizations fall prey to the "camel's nose under the tent" problem. They find that cloud computing is easy — so easy that they start migrating all manner of data into the cloud without evaluating it or considering whether it even belongs there. Eventually, really sensitive data ends up being stored in the cloud. Even worse, the IT people may not know this data exists, and it becomes a shadow IT problem. Always identify your data and run an assessment before putting it into the cloud. If you only have two levels of classification — Private or Public — assume everything is private until proven otherwise.

Jonathan LaCour

@cleverdevil
Jonathan LaCour is the CTO at Reliam, a managed services provider for AWS and Azure. Prior to joining Reliam in 2018, LaCour held several technical leadership positions at DreamHost, including, most recently, Senior Vice President of Product and Technology.

“Architecting and operating secure solutions on AWS requires…”

A strong grasp of the security primitives and best practices built into AWS — there's no way around that. The most common mistake AWS users make when it comes to security is going to production before attaining this full understanding. With proper preparation, application of best practices, and continuous auditing, businesses can create secure environments in AWS with confidence, but it takes a deeper initial understanding than many organizations realize. It's part of the reason we perform regular Well-Architected Reviews for our customers, delivering detailed audits across all five pillars of the AWS Well-Architected Framework, including security.

Mike Hall

@turbothq
Michael V.N. Hall is Director of Operations for Turbot. With two decades of technology leadership, Mr. Hall has been responsible for the development of products and services in the manufacturing, financial services, healthcare, and IT Security sectors, and is regarded as a transformational leader. Previously, Mr. Hall was Vice President of Information Technology for UnitedHealth Group.

“The number one mistake companies make when it comes to AWS security is…”

Assuming that it's a once-and-done process. Many organizations spend countless hours designing their platform, defining their governance model, and applying configurations to their landing zones when preparing to migrate their workloads to AWS. And then, weeks or months later, someone forgets about the governance, misconfigures a new service, and a data breach occurs. We see countless headlines about corporations whose misconfigured EC2 compute instances or S3 storage buckets led to data exposure. These data leaks are typically not an issue with defining a solid security baseline; most of the time, the problem is configuration drift. AWS security needs to be an ongoing effort to ensure continuous compliance and to eliminate configuration drift from those golden governance standards. The risk is not in the cloud platform; it's in what people do with the cloud platform.
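
One hedged sketch of guarding against that drift: an AWS Config managed rule that continuously flags any S3 bucket allowing public reads. This assumes an AWS Config recorder is already running in the account.

```python
import boto3

# Register an AWS-managed Config rule; AWS evaluates it automatically
# whenever a bucket's configuration changes, catching drift from the
# security baseline.
config = boto3.client("config")

config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "s3-no-public-read",
        "Description": "Flag S3 buckets that allow public read access.",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED",
        },
    }
)
```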

Bob Herman

@TropolisGroup
Bob Herman is the Co-Founder and President of IT Tropolis. He is an engineer with over 30 years of experience. His areas of expertise include managed IT services, data protection, cybersecurity, cloud computing, technology implementations, project management, IT operations, business continuity, network topology, and virtualization technologies.

“I think the number one mistake people make when setting up Amazon services is…”

Not creating and applying an AWS Security Group to resources. For example, if a server is provisioned to host a website, then a Security Group should be configured that allows only ports 80 and 443 inbound. Of course, since remote staff will need access, ports 22 and possibly 3389 could be allowed, but only from trusted remote IPs, or perhaps opened on demand if accessed infrequently. Locking down exposed resources in this manner helps protect against numerous automated attacks, including brute-force login attempts.
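
A minimal sketch of that configuration, with a hypothetical VPC ID and trusted office address:

```python
import boto3

ec2 = boto3.client("ec2")

# Security group for a web server: public 80/443, SSH from one IP.
sg = ec2.create_security_group(
    GroupName="web-server-sg",
    Description="Web traffic plus SSH from trusted IPs only",
    VpcId="vpc-0123456789abcdef0",
)

ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        # Public web traffic
        {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        # SSH restricted to a single trusted office address
        {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
         "IpRanges": [{"CidrIp": "203.0.113.10/32"}]},
    ],
)
```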

Tyler Riddell

@tgr_360
Tyler Riddell is Vice President of Marketing for eSUB Construction Software with over 15 years' experience in Marketing, Product Management, Advertising, and Public Relations. He has a proven track record of successful go-to-market and corporate communication programs in multiple vertical tech markets.

“The first step towards preventing problems with your AWS is…”

To know what type of setup is best for your organization. Cloud security is a shared responsibility between you and your provider. The problem is that many admins don't know what AWS takes care of and which security controls they themselves need to apply. Not all AWS default configurations are appropriate for your company's workloads, so you will need to check and manage some settings yourself.

Ryan Kroonenburg

@KroonenburgRyan
Ryan Kroonenburg is the founder of A Cloud Guru, a world leader in cloud computing training with Amazon, Google, and Azure. He holds every associate certification, is a certified AWS Solutions Architect Professional, and has 17 years' experience in IT. A Cloud Guru's courses are all cloud-based and designed to accommodate both absolute beginners and professionals.

“The number one security mistake companies make in using AWS is that…”

They fail to adequately secure their S3 object storage buckets.

These buckets are secure by default. Yet multiple, very public breaches of S3 security have made the news in recent weeks. These breaches are, without exception, the result of human error — not a failing of the basic security of the S3 service.

AWS administrators need to be sure to grant access to S3 (or any other AWS service) following the principle of “least privilege,” making certain that only those properly trained and authorized to make changes within S3 have the access necessary to do so.
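
As an illustration of least privilege in practice, here is a hedged sketch of an inline policy that lets one user read a single bucket and nothing else (the user and bucket names are hypothetical):

```python
import json

import boto3

# Scope the user to read-only access on one bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-reports",
                "arn:aws:s3:::example-reports/*",
            ],
        }
    ],
}

iam = boto3.client("iam")
iam.put_user_policy(
    UserName="report-reader",
    PolicyName="s3-read-example-reports",
    PolicyDocument=json.dumps(policy),
)
```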

Vivek Chugh

@chughtweets
Vivek Chugh is the Founder and CEO of Listables. He is an accomplished technology leader with domestic and international experience across all business cycles, and a recognized authority on the strategic application of technology to drive revenue, build and manage world-class development teams, enhance service quality, improve production, and control costs.

“The #1 mistake companies make when it comes to AWS security is…”

Logging into their root user account without two-factor authentication (2FA) and provisioning services from it.

To avoid this, they should set up two-factor authentication on the root user account and lock the credentials away. Provision users who need access to different parts of the AWS environment using AWS IAM (Identity and Access Management), and only allow services to be provisioned through those accounts.
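
One common way to enforce this for IAM users, sketched below under stated assumptions, is a policy that denies everything except MFA enrollment unless the request was made with MFA; it mirrors a pattern from AWS's own documentation. The group name is hypothetical.

```python
import json

import boto3

# Deny all actions (except enrolling an MFA device) for any request
# not authenticated with MFA.
deny_without_mfa = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAllExceptMFASetupWhenNoMFA",
            "Effect": "Deny",
            "NotAction": [
                "iam:CreateVirtualMFADevice",
                "iam:EnableMFADevice",
                "iam:ListMFADevices",
                "sts:GetSessionToken",
            ],
            "Resource": "*",
            "Condition": {
                "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
            },
        }
    ],
}

iam = boto3.client("iam")
arn = iam.create_policy(
    PolicyName="require-mfa",
    PolicyDocument=json.dumps(deny_without_mfa),
)["Policy"]["Arn"]

# Attach to a group containing all human users.
iam.attach_group_policy(GroupName="humans", PolicyArn=arn)
```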

Peter Ayedun

@TruGridApp
Peter Ayedun is the CEO and co-founder of TruGrid, a company that specializes in Simple & Secure Workspaces for businesses.

“One of the biggest mistakes companies make with AWS or any cloud server is…”

Opening ports and not locking them down to specific source IP addresses. This exposes their servers to constant hacking attempts, including automated scripts that scan for open ports to target. Companies should not open any ports to their servers except those that are required. One way companies can avoid opening ports on servers and networks is by using our product, TruGrid, to hide the location of their network from hackers, avoid opening ports, and add additional security measures on top of it, such as Multi-Factor Authentication and Dark Web scanning.

Rohit Akiwatkar

@RohitAkiwatkar
Rohit Akiwatkar is a cloud technology consultant at Simform. Rohit has deep experience with cloud technologies, and serverless in particular, helping organizations translate their vision into powerful software solutions.

“The #1 mistake that most companies make is…”

Not following the principle of least privilege and thus ending up exposing their credentials and access keys.

The concept is pretty simple: Every module must be able to access only the information and resources that are necessary for its legitimate purpose.

Every AWS service requires an IAM role. Most of the time, these roles are not properly managed, and wildcard access to everything is granted.

Access keys are leaked all the time. It might happen when someone checks them into GitHub, or hardcodes them into a script on a server that later gets compromised. Third-party modules can easily expose these keys as well.

Apart from following the principle of least privilege, here are three practices you need to exercise:

  • Every developer should have separate keys and limits on what they can do.
  • You should rotate your keys on a regular basis. At Simform we use a "keymaster" tool that handles this rigorous job (a rotation sketch follows this list).
  • Even if developers need to access EC2 instances or VPCs, which should be rare, provide them with separate keys.
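
As referenced in the list above, here is a minimal rotation sketch (not Simform's actual keymaster) for a hypothetical IAM user with a single active key:

```python
import boto3

iam = boto3.client("iam")
user = "alice"  # hypothetical IAM user

# Snapshot existing keys, then create the replacement. IAM allows at
# most two access keys per user, so this assumes one existing key.
old_keys = iam.list_access_keys(UserName=user)["AccessKeyMetadata"]
new_key = iam.create_access_key(UserName=user)["AccessKey"]
# ... distribute new_key["AccessKeyId"] / new_key["SecretAccessKey"] ...

for key in old_keys:
    # Disable first so a missed consumer fails loudly but reversibly,
    # then delete once nothing depends on the old key.
    iam.update_access_key(
        UserName=user, AccessKeyId=key["AccessKeyId"], Status="Inactive"
    )
    iam.delete_access_key(UserName=user, AccessKeyId=key["AccessKeyId"])
```
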
Zach Fierstadt

@lightcrest
Zach Fierstadt is the CEO of Lightcrest, the creator of the Kahu computing platform. Kahu provides customers with the most cost-effective means of deploying secure hybrid cloud environments by leveraging Kahu's hyperconverged, software-defined platform.

“The number one mistake companies make when it comes to AWS security is…”

Not taking into account the fact that the ease of provisioning and on-demand resources of the public cloud make security harder, not easier. VM sprawl from different groups within the organization, including development, operations, and shadow IT, makes it more difficult than ever to minimize an organization's attack surface. Leveraging IAM is not enough — what happens when your portal access is compromised? Your VMs can be deleted and trojaned just as fast as they're provisioned, which is why hybrid cloud architectures are gaining steam as a means to cleave the attack surface and maintain data sovereignty.

Mihai Corbuleac

@csITsupport
Mihai Corbuleac is the Senior IT Consultant at ComputerSupport.com, an IT support company providing professional IT support, AWS/Azure consulting, and information security services to businesses across the US since 2006.

“Cybersecurity is volatile and AWS Security is no exception…”

First, user access control, access keys, and assigned roles are extremely important to AWS security. We all know that broad roles can be risky. It's probably tempting to give developers admin rights to handle specific tasks, but you shouldn't. Policies can handle most situations. Assign the minimum permissions required to perform a given task. Studies have revealed that more than 30% of privileged users in AWS have full rights to a wide variety of services, including the ability to terminate the entire customer AWS environment. Admins often fail to set up rigorous policies for a variety of user scenarios; instead, they make their policies so unfocused that they lose their effectiveness. Applying policies and roles to limit and control access reduces the attack surface, and it greatly reduces the chance of the whole AWS environment being compromised because a certain key was disclosed, account credentials were stolen, or a team member made a configuration error. I would use AWS IAM (Identity and Access Management) to assign roles and then attach a policy to each role.
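
To illustrate the role-plus-policy approach, here is a hedged sketch of an EC2 instance role that can read one DynamoDB table, instead of a developer running with admin keys. All names and ARNs are hypothetical.

```python
import json

import boto3

iam = boto3.client("iam")

# Trust policy: only EC2 instances may assume this role.
trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="orders-reader",
    AssumeRolePolicyDocument=json.dumps(trust),
)

# Permissions policy: read access to a single table, nothing else.
iam.put_role_policy(
    RoleName="orders-reader",
    PolicyName="read-orders-table",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/orders",
        }],
    }),
)
```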

Pieter VanIperen

@code_defenders
Pieter VanIperen is a veteran programmer and security expert and a founding member of Code Defenders.

“The biggest mistake companies make when it comes to AWS security is…”

Management and use of IAM.

Mistakes range from smaller companies sharing the root account credentials to large companies not using roles, or not being granular enough in their permissions to actually adhere to least-privilege and separation-of-duties models.

Cris Daniluk

@RhythmicTech
Cris Daniluk leads Rhythmic Technologies, an innovative, compliance-oriented managed cloud and security services firm based in the Washington, D.C. area. Before founding Rhythmic, Cris was responsible for project management and business development at Claraview, where his work in securing projects worth over $100 million helped drive the company's acquisition by Teradata.

“Companies often mistake the investments AWS has made in protecting its infrastructure for…”

The protection of their own services built on top of AWS. While AWS clearly communicates where their security responsibility ends and the customer’s begins in its Shared Responsibility Model, many companies see what they want to see. Given the challenges of securing services built outside the company’s perimeter, inside of the environment of a third-party cloud provider, it is understandable that many adopt such a hopeful but ultimately dangerous attitude.

AWS provides a wealth of tools and best practices to help customers protect themselves in the cloud, but it is up to each customer to take advantage of them. AWS offers few protections out of the box, and because customers are running in the cloud, they've left behind nearly all of their existing security services and entered a brave new land. Companies must fully understand the Shared Responsibility Model and take responsibility for protecting their IP and their customers' data. Pairing the great information AWS has already provided with a cloud-oriented security mindset is critical to success.

Evaldas Alexander

@rankpay
Evaldas Alexander is the CTO at RankPay, a top-rated SEO service that helps thousands of small businesses earn higher rankings.

“The single biggest (and most common) mistake that companies make with AWS security is a failure to leverage IAM roles…”

They’re simple to set up and provide a high level of added security. It’s also best practice to use multifactor authentication (MFA) for the root account, and preferably other accounts as well. It’s all too common for companies to skip that step despite how easy it is to use.

Gary Watson

@GaryMWatson
Gary Watson is the Founder and CTO of Nexsan.

“The number one mistake companies make when it comes to AWS security has to be…”

Trusting the cloud platform with all their data, no matter how sensitive. Cloud providers like AWS are great for certain data — often short-term or low-sensitivity — but most existing implementations cannot provide the necessary security and availability for highly sensitive, mission-critical data. This has been proven on several occasions, including when AWS experienced a five-hour outage in 2017. To avoid data loss or corruption, companies should also keep an on-premises storage system that houses their most critical data in an environment where they have complete control. This provides higher security and reliability, lowering the risk of data corruption or loss.

Scott Penney

@BlueCatNetworks
Scott Penney spent 20 years in cybersecurity working with some of the world’s largest companies (like AT&T) to define powerful, practical security architectures. Scott currently drives security solutions innovation at BlueCat, using the power of DNS to proactively act on emerging security threats.

“Organizations mistakenly manage cloud-based infrastructure as…”

Something wholly separate from their on-premises environment. They assume it doesn’t need to emulate controls that their own system has in place. For example, they allow the cloud’s DNS servers to operate outside their purview — meaning that they have no access to DNS data from AWS-based activity and no control over the queries DNS indiscriminately resolves.

Organizations need visibility into all activity on their network, whether it’s on-prem or cloud. If an AWS server is querying your company’s data centers or resources and has ties to the external world, you want to be keeping an eye on it. This visibility is the foundation for all other layers of security that organizations add, so the better practice would be to centrally collect DNS data from all environments and comprehensively leverage it to detect, deny, and disrupt threats.
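
One way to get that visibility on the AWS side is Route 53 Resolver query logging. A minimal sketch, assuming a hypothetical CloudWatch log group and VPC:

```python
import uuid

import boto3

resolver = boto3.client("route53resolver")

# Log every DNS query the VPC's resolver answers to CloudWatch Logs,
# so cloud DNS activity feeds the same detection pipeline as on-prem.
config = resolver.create_resolver_query_log_config(
    Name="vpc-dns-audit",
    DestinationArn=(
        "arn:aws:logs:us-east-1:123456789012:log-group:/dns/queries"
    ),
    CreatorRequestId=str(uuid.uuid4()),  # idempotency token
)["ResolverQueryLogConfig"]

# Attach the logging config to the VPC whose DNS activity we want.
resolver.associate_resolver_query_log_config(
    ResolverQueryLogConfigId=config["Id"],
    ResourceId="vpc-0123456789abcdef0",
)
```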

John Baker

@DeployBotHQ
John Baker is an experienced DevOps Engineer at DeployBot who understands how to meld operations and development to deliver code to customers quickly, and who pairs that with deep knowledge of the cloud, monitoring processes, and DevOps development on Linux and Mac.

“The biggest mistake companies make when it comes to AWS security is…”

Not managing access and not setting up VPNs for privileged users.

Don't overlook Web Application Firewalls, either. Many SaaS companies, especially smaller ones, don't bother to use VPNs when privileged users are accessing their cloud, or they grant unnecessary permissions to certain users, leaving their environments vulnerable when best practices aren't implemented.

Jason Sinchak

@j_synack
Jason started his career in the early phases of cybersecurity at a Big 4 firm. Noticing the demand in the market, he left to become CEO and founder of two security firms: Emerging Defense and Sentegrity. Jason specializes in penetration testing, breach investigation, and mobile device security.

“In our experience performing hundreds of penetration assessments and forensic investigations resulting from data breaches…”

We often see organizations view AWS as outsourced everything. They often forget that they are still responsible and liable for the configuration of virtual servers and the various data storage mechanisms AWS provides. The #1 mistake companies make is the improper assignment of access controls on sensitive data storage in AWS. This has been evidenced by numerous large data breaches resulting from unsecured AWS S3 buckets. AWS typically deploys services secure by default, but customers often loosen these access measures to enable development. In many cases, the controls are never re-tightened once development is complete and the service goes live.

*** This is a Security Bloggers Network syndicated blog from Blog – Threat Stack authored by Pat Cable. Read the original post at: https://www.threatstack.com/blog/21-infosec-and-aws-experts-reveal-the-1-mistake-companies-make-when-it-comes-to-aws-security-and-how-to-avoid-it