Shared Responsibility Model
We have all heard why companies are moving to serverless: zero administration, automatic scaling, and pay-for-use consumption models. But where does security fit into serverless computing services, how has it evolved, and what considerations should be put in place to address these differences?
When deploying serverless computing services, it is critical to first understand the shared responsibility model as it applies to them. The application owner remains responsible for securing the application itself, including its data and access controls, while the cloud provider is responsible for securing the infrastructure and operating systems hosting those applications. For reference, see the shared responsibility model diagram AWS provided in a recent webinar.
Serverless Computing Risks
There is a distinct delineation of responsibilities between the cloud provider and the customer, and it is important to understand the attack vectors so you can secure the areas that fall within your responsibility. Risks take many forms: over-permissioned functions, long function timeouts, the sheer number of functions and their complexity, deficient testing during development, and a lack of monitoring at runtime. To summarize some of those risks, here is a list of serverless computing security risks identified in the OWASP Serverless Top 10:
- Injection
- Broken Authentication
- Sensitive Data Exposure
- XML External Entities (XXE)
- Broken Access Control
- Security Misconfiguration
- Cross-Site Scripting (XSS)
- Insecure Deserialization
- Using Components with Known Vulnerabilities
- Insufficient Logging and Monitoring
Serverless Computing Services Security Considerations
There is a lot to consider here, and while developers are responsible for the code they produce, the following serverless computing best practices can help enhance serverless security.
- Don’t rely solely on WAF protection: Application-layer firewalls can only inspect HTTP(S) traffic, so a WAF will only protect functions triggered through an API Gateway; it provides no protection against any other event trigger type. You should still have a WAF in place, but it cannot be the only line of defense in securing serverless applications.
- Customize Function Permissions: Setting up permissions is a cumbersome task for developers; in fact, roughly 90% of permissions in serverless applications are found to be over-permissioned. There is no one-size-fits-all approach: it is important to truly understand the role of each function and set policies around those roles so each function gets only the permissions it needs.
- Conduct a Code Audit: Serverless applications are composed of many modules and libraries, and those modules often pull in many others, so it is not uncommon for a single serverless function to include tens of thousands of lines of code from external sources, even when your developers wrote fewer than 100 of them. Attackers look to plant malicious code in popular projects, "poisoning the well" and waiting for their moment to strike. Having visibility into what each function is doing is critical for serverless computing security.
- Retain Control Over Your Functions: Malicious functions can slip in through a variety of means, so it is critical to mitigate this risk through a careful CI/CD process. Create a policy and strategy for code analysis during the build stage, before code reaches runtime, and make sure every function passes through CI/CD.
- Look at All Attack Indicators: The shift to serverless significantly increases the amount of information and the number of resources to watch. As the number of functions grows, it becomes even harder to determine whether everything is behaving as it should. Even if you are familiar with the attack patterns unique to serverless computing, scanning all of these signals manually is beyond human ability. Leverage AI-based tools for added serverless security visibility and efficiency.
- Time Out Your Functions: Functions should have a tight runtime profile. Admittedly, crafting appropriate serverless function timeouts is not intuitive; the maximum sensible duration can be quite specific to each function. As a serverless security best practice, shrink not just what a function can do, but how long it can run.
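To make the permissions advice above concrete, here is a minimal sketch of a least-privilege IAM policy built per function. The table ARN, table name, and helper name are illustrative assumptions, not from the article; the point is granting only the two actions the function actually uses rather than a wildcard like "dynamodb:*".

```python
import json

def least_privilege_policy(table_arn: str) -> str:
    """Return an IAM policy document allowing only GetItem/Query on one table.

    Hypothetical helper for a function that only ever reads a single
    DynamoDB table; every other action and resource is implicitly denied.
    """
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                # Only the specific actions this function needs, scoped
                # to one resource -- not "dynamodb:*" on "*".
                "Action": ["dynamodb:GetItem", "dynamodb:Query"],
                "Resource": table_arn,
            }
        ],
    }
    return json.dumps(policy)
```

A policy like this would be attached to a dedicated execution role per function, so compromising one function never grants access beyond that function's own narrow role.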
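The code-audit point is largely about knowing which third-party code ships with each function. As a starting point, a standard-library sketch like the following can inventory the packages bundled with a Python function so they can be cross-checked against vulnerability databases; the function name is an assumption for illustration.

```python
from importlib import metadata

def dependency_inventory() -> dict:
    """Map each installed distribution name to its version string.

    Gives visibility into the external code a function actually
    carries, which is usually far more than the code you wrote.
    """
    return {
        dist.metadata["Name"]: dist.version
        for dist in metadata.distributions()
        if dist.metadata["Name"]  # skip malformed metadata entries
    }
```

In practice this inventory would feed a build-stage check (for example, comparing versions against a known-vulnerabilities feed) rather than being run inside the function itself.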
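Finally, the timeout advice can be enforced inside the function as well as in the platform configuration. Below is a minimal sketch of a deadline decorator using SIGALRM (available on the Linux runtimes Lambda uses); the 2-second limit and handler body are example values, not from the article.

```python
import signal
from functools import wraps

def deadline(seconds: int):
    """Raise TimeoutError if the wrapped function runs past `seconds`."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            def _timed_out(signum, frame):
                raise TimeoutError(f"exceeded {seconds}s deadline")
            previous = signal.signal(signal.SIGALRM, _timed_out)
            signal.alarm(seconds)  # arm the per-invocation deadline
            try:
                return func(*args, **kwargs)
            finally:
                signal.alarm(0)  # cancel any pending alarm
                signal.signal(signal.SIGALRM, previous)
        return wrapper
    return decorator

@deadline(2)
def handler(event, context):
    # Normal function body; anything running longer than 2 seconds
    # is cut off, shrinking the window an attacker can exploit.
    return {"statusCode": 200}
```

Keeping this in-function limit tighter than the platform-level timeout means a hijacked or runaway invocation is stopped early instead of running for the platform maximum.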
Serverless computing is a great solution for organizations, and its security challenges are no different from those of traditional AppSec; they just require a different perspective on how they are addressed. For more information, check out this article on AWS Lambda Security Best Practices.
The post Serverless Computing Services Security Quick Guide appeared first on Protego.
This is a Security Bloggers Network syndicated blog from Blog – Protego authored by Trisha Paine. Read the original post at: https://www.protego.io/serverless-computing-services-security-quick-guide/