Broken Access Control in Serverless Deployments

Hi everyone. I’m glad to see you’re back for the next post in the series. As we continue, more and more people are becoming aware of the problem; I know that because more and more events are inviting me to talk about it. If you’re interested (obviously you are), you can join my talk at the OWASP Global event on May 30th, in which I will list the Top 10 risks according to the OWASP Project. If you would also like to participate, please help us by contributing to the Call for Data that will eventually help create a designated, industry-based Top 10 for Serverless.

Now, let’s get down to business. The original Top 10 listed Broken Access Control as number 5 on the list. I believe this risk is, and will remain, the most different from what we are used to in monolithic applications. For better and for worse, but mostly for the better.

Broken Access Control in Serverless

Maintaining good access control in traditional apps is one of the hardest tasks. It involves both code and configuration, at both the application and the infrastructure level. I want to concentrate on the application’s permissions rather than end-user permissions, which can usually be maintained well with RBAC (Role-Based Access Control). In traditional applications, most permissions are granted to the app as a whole, meaning that access to resources is granted according to the requirements of the entire application. You will rarely see a case where one file or function can do one thing in the database while another can do something else; that kind of separation is usually enforced in the code, according to the requesting user.

Why is this a problem? Because it means that if there is a vulnerability in the application, such as SQL injection, the attacker can do whatever the app itself is allowed to do, which usually means reading the entire database, and sometimes even gaining admin access.

If someone manages to run code (e.g., via RCE), it will run with the app’s permissions, which usually means access to all the resources available to the application.

[Image: You get an admin]

On the other hand, serverless brings a great opportunity for access control. The microservices nature of the application, which is built from dozens or even hundreds of functions, allows us, with the right tools, to understand what each and every function should be doing, and to allow it to do just that and nothing else.

Let me give you an example. I have the following (python) function:

[Image: Permissive Controls]
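The code itself appears only as a screenshot in the original post, so here is a minimal sketch of what such a handler might look like. The environment variable name, event field, bucket name, and file names are assumptions for illustration:

```python
# Illustrative sketch only -- the original function is shown as an image above.
import os
import subprocess
import urllib.request
import uuid

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Fetch the compressed package from the remote host defined as an environment variable
    url = f"https://{os.environ['REMOTE_HOST']}/{event['package']}"
    urllib.request.urlretrieve(url, "/tmp/pkg.tar.gz")

    # Unpack it under /tmp using the system tar binary
    subprocess.run(["/bin/tar", "-xzf", "/tmp/pkg.tar.gz", "-C", "/tmp"], check=True)

    # Upload a specific file from the archive to cloud storage under a random name
    s3.upload_file("/tmp/file.pdf", "designated-bucket", f"{uuid.uuid4()}.pdf")
```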

This simple function receives a compressed file as input, fetches it from a remote host (defined as an environment variable), unpacks it, and uploads a specific file to cloud storage under a random name.

If we have no awareness of security or access control, the AWS role for this function would probably include something like this:

[Image: No Access Control]
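The screenshot is not reproduced here, but such a role policy boils down to something like the following document (written as a Python dict purely for illustration; it is what you would json.dumps() onto the function’s role):

```python
# An over-permissive policy document of the kind described below (illustrative).
permissive_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",   # any storage action
            "Resource": "*",    # on any bucket in the account
        }
    ],
}
```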

This policy allows the function to do anything (Action) on any storage we have in the account (Resource). If an attacker gains access to the function or its keys, they can take full control over all of the account’s cloud storage. So, even though the function only uploads files into a certain bucket, attackers could read sensitive data out of this or any other bucket. They could even delete entire buckets. If you think a developer would have to be completely clueless to do such a thing, then you are in for a treat: Protego has found that almost all functions are over-privileged.

However, as I said, we have a great opportunity here. We can limit the function to doing only what it is supposed to do, which is to upload a file to a specific bucket. To do that, we would create the following AWS policy for the function:

[Image: AWS Function Policy]
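Again as an illustration (the bucket name is an assumption), the policy document now narrows both fields:

```python
# A least-privilege policy document for the upload function (illustrative).
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:PutObject",                       # upload only
            "Resource": "arn:aws:s3:::designated-bucket/*", # only objects in this bucket
        }
    ],
}
```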

As you can see, the Action is now limited to PutObject, which is the action used to upload files, and the Resource is set to a specific bucket. Now, even if an attacker gains control over the function, all the function is able to do is upload files into the designated bucket, and nothing else.

To show you what it looks like, I changed the last line of the code to the following, acting as a code injection:

[Image: code injection]
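The injected line itself is only shown as an image; the idea is to replace the upload with something along these lines (the target bucket name is made up):

```python
# Injected code: instead of uploading, enumerate another bucket and dump the keys
# into the function's log.
response = s3.list_objects(Bucket="some-other-bucket")
print([obj["Key"] for obj in response.get("Contents", [])])
```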

When using the first, insecure policy for the function, we could successfully run code that lists files on another bucket, using the following AWS CLI command (assuming I had compromised the Lambda function’s keys):

[Image: listing python files]
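The exact command is in the screenshot; its boto3 equivalent, run from my own machine with the keys harvested from the function, would be roughly the following (all values are placeholders):

```python
# Listing another bucket with the function's stolen credentials (placeholders).
# CLI equivalent: aws s3api list-objects --bucket some-other-bucket
import boto3

s3 = boto3.client(
    "s3",
    aws_access_key_id="<stolen AWS_ACCESS_KEY_ID>",
    aws_secret_access_key="<stolen AWS_SECRET_ACCESS_KEY>",
    aws_session_token="<stolen AWS_SESSION_TOKEN>",
)
for obj in s3.list_objects(Bucket="some-other-bucket").get("Contents", []):
    print(obj["Key"])
```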

Using the second, least-privileged policy, the exact same CLI command resulted in no data. If we look at the function’s logs, we can see the access error that prevents the function from running the list_objects() command:

[Image: Least privilege policy in AWS]
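If you wrapped the injected call yourself, the denial surfaces as a ClientError from boto3; a minimal way to see it (the exact log line from the screenshot is not reproduced):

```python
# With the least-privilege role, the same call is rejected with an AccessDenied error.
import botocore.exceptions

try:
    s3.list_objects(Bucket="some-other-bucket")
except botocore.exceptions.ClientError as err:
    print("Denied:", err.response["Error"]["Code"])  # "AccessDenied"
```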

Easy, no? Well, for one simple function it might be, but if you have dozens or even hundreds of functions, some of them more complex than others, it can be tedious and often even difficult to choose the right permissions. You could have an IAM team that works with the developers to create strict policies, but even that will not keep policies up to date as the code changes. You probably want to automate this.

Now, imagine that you could somehow limit the function to only its specific tasks:

  • Run process(es):
    • /bin/tar
  • Access file(s):
    • /tmp/file.pdf
    • /tmp/pkg.tar.gz
  • Connect to host(s):
    • trusted-host.com

If you’re a security professional, being able to enforce least-privilege permissions for code, without depending on the developer to know how to do that, is a dream come true.

I’ll let you in on a secret: it is possible! Since most functions repeat their exact behavior over and over, we can profile a function’s behavior and limit it to the required actions alone.
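To make the idea concrete, here is a toy sketch of enforcing such a learned profile inside the function itself; it is not how any particular product works, just an illustration built from the profile listed above:

```python
# Toy behavioral allow-list built from the example profile:
# one process, two files, one host. Purely illustrative.
import subprocess
from urllib.parse import urlparse

PROFILE = {
    "processes": {"/bin/tar"},
    "files": {"/tmp/file.pdf", "/tmp/pkg.tar.gz"},
    "hosts": {"trusted-host.com"},
}

def run_allowed(argv):
    """Run a process only if its binary is part of the learned profile."""
    if argv[0] not in PROFILE["processes"]:
        raise PermissionError(f"process not in profile: {argv[0]}")
    return subprocess.run(argv, check=True)

def open_allowed(path, mode="r"):
    """Open a file only if its path is part of the learned profile."""
    if path not in PROFILE["files"]:
        raise PermissionError(f"file not in profile: {path}")
    return open(path, mode)

def check_host(url):
    """Allow outbound connections only to hosts in the learned profile."""
    if urlparse(url).hostname not in PROFILE["hosts"]:
        raise PermissionError(f"host not in profile: {url}")
    return url
```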

In fact, Protego automates both the least-privilege policy generation and the function profiling. After learning the function’s behavior, Protego generates and enforces a whitelist profile for the function:

[Image: Protego Whitelist Profile]

Regardless of where the function executes, whether in the CI/CD pipeline or in production, Protego can detect permissive roles:

[Image: Permissive Controls]

and generate a tailored least-privilege role for the function:

[Image: Least Privilege Roles]

So, if this is all so easy, why is it a high risk? Well, it’s quite easy to mitigate if you can automate it. But if you’re sitting idle, it’s a ticking bomb. Over-privileged functions are everywhere, and if an attacker spots the right one, your whole cloud account is in danger.

If you’re a fan of DIY, there are plenty of resources to learn from, depending on the environment you’re running on.

TL;DR

Serverless gives you the chance to scope every function down to exactly what it needs. Over-privileged functions are everywhere, and a single vulnerable one can expose your whole cloud account, so profile what each function actually does and grant it those permissions alone, ideally automatically.
