How To Handle Secret Management for Serverless Applications

Like most applications, serverless apps often need access to configuration data in order to function properly. And while most configuration data is non-sensitive, some needs to remain confidential. These strings are known as secrets.

Common examples of secrets include:

  • API keys
  • Database credentials
  • Encryption keys
  • Sensitive configuration settings (email addresses, usernames, debug flags, etc.)
  • Passwords

Secret management is a fundamental part of application development, but it’s a little different in serverless environments. While most developers and InfoSec teams are accustomed to working with traditional secret management processes, serverless apps are stateless and require a different approach.

Top 3 Mistakes in Serverless Secret Management

There are plenty of ways to get secret management wrong for serverless applications, but here are the three we see most often:

Mistake #1: Storing secrets in configuration files

One of the most common errors when managing application secrets is storing them in plain text within configuration files. This approach is common in all types of development, but it’s particularly frequent in serverless applications because many developers aren’t accustomed to working with cloud-based key management systems.

Why is this an error? Because anyone with “read only” permissions to the project will have unfettered access to all stored secrets. If the project is hosted on a public repository this becomes an even more serious mistake, because now anybody can access what should be highly confidential information.

They are called “secrets” after all.

Mistake #2: Storing secrets in plain text as environment variables

Since serverless applications are stateless, data from one invocation is not stored for use in future invocations. Environment variables are a convenient way to persist configuration that a serverless function needs between invocations.

Environment variables are also commonly used to store secrets.

In a sense this is logical, because it keeps secrets out of application code and makes it easy to change them between deployments. Even better, practically every app, service, and platform can read environment variables, which is great from a functionality perspective.

But environment variables are not a secure option for secret management. Here’s why:

  1. Anybody who works on the project will have access to them.
  2. Environment variables are available to many child processes, making them susceptible to misuse.
  3. It’s easy for a hacker to inject malicious code into a process or dependency that has access to environment variables. This has happened many times already.
  4. Environment variables are often logged or printed in plain text following errors or crashes.

So as tempting as it may seem, environment variables are not suitable for storing secrets.

Mistake #3: Hard coding secrets

On the face of it, hard coding secrets seems like a reasonable idea. Unfortunately, it suffers from two major issues.

First, if secrets are hard coded in plain text, any developer who works on the project will have access to them. And again, if your project code is hosted on public repositories, your secrets become even more widely available. This is hardly secure.

Second, even if your hard coded secrets are encrypted, this can cause problems. Most notably, it becomes awkward to rotate secrets periodically, and impossible to change them without a full redeployment. This is a big problem, because rotating secrets is fundamental to application security.

If you already have projects underway with hard coded secrets, finding and removing them can prove challenging. This article by a security researcher at Adobe details how to use Gitrob to identify sensitive information stored in code repositories on GitHub.
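To get a feel for how such scanners work, here is a deliberately minimal sketch that matches one well-known signature, the AWS access key ID format. Real tools like Gitrob apply hundreds of signatures across a repository's full history; this single regex is only illustrative:

```python
import re

# AWS access key IDs start with "AKIA" followed by 16 uppercase
# alphanumeric characters. Real scanners use many more signatures.
AWS_KEY_RE = re.compile(r"AKIA[0-9A-Z]{16}")

def scan_text(text):
    """Return any substrings that look like AWS access key IDs."""
    return AWS_KEY_RE.findall(text)

# AWS's documented example key, safe to use in demos:
print(scan_text('aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"'))
```

Running a check like this in CI catches hard coded credentials before they ever reach a public repository.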

Best Practices for Serverless Secret Management

There are two fundamental rules for secret management:

  1. Secrets should be encrypted at rest
  2. Secrets should be encrypted in transit

These may seem like obvious requirements, but they are non-trivial to achieve. Thankfully AWS offers strong functionality for secret management which covers most of the heavy lifting.

Best Practice #1: Use Secrets Manager

AWS Secrets Manager encrypts and stores application secrets, and makes it easy to rotate, manage, and retrieve them. Applications retrieve secrets with an API call, which eliminates the need for hard coded secrets or custom lookup tables.
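In practice, the API call in question is `GetSecretValue`. The sketch below shows the typical pattern with boto3; the secret name `prod/myapp/db` and the JSON layout of the secret value are assumptions for illustration:

```python
import json

def get_secret(client, secret_id):
    """Fetch a secret via the GetSecretValue API and decode its JSON body."""
    resp = client.get_secret_value(SecretId=secret_id)
    return json.loads(resp["SecretString"])

# Inside a Lambda function you would pass a real client (not executed here):
#   import boto3
#   creds = get_secret(boto3.client("secretsmanager"), "prod/myapp/db")
#   connect(user=creds["username"], password=creds["password"])
```

Because the client is passed in as a parameter, the same function works against the real Secrets Manager service in production and a stub in unit tests.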

Secrets Manager also integrates with the Systems Manager (SSM) Parameter Store, allowing you to retrieve stored secrets when using other AWS services (including Lambda) using secure SSM parameters.

Secrets Manager is particularly useful for storing secrets that need to be changed regularly. It includes a secret rotation function, which can be used to periodically change the secrets associated with commonly used databases or services. New secrets are always encrypted before being stored.

Best Practice #2: Use AWS Key Management Service (KMS)

KMS makes it easy to create and manage encryption keys, and access them across AWS services. It’s an excellent option for serverless applications, and it’s fully configurable in AWS Lambda.

Note that KMS does not store secrets directly. It’s a secure way to store encryption keys, and allows applications to use keys via an API call without exposing the keys themselves.
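The `Encrypt` and `Decrypt` API calls are how applications use a key without ever holding it. The sketch below illustrates the pattern; the key alias `alias/app-config` is an assumption, and the client is injected so the logic can be exercised without AWS credentials:

```python
import json

def encrypt_config(kms, key_id, config):
    """Encrypt a config dict under a KMS key; only ciphertext leaves the app."""
    resp = kms.encrypt(KeyId=key_id, Plaintext=json.dumps(config).encode())
    return resp["CiphertextBlob"]

def decrypt_config(kms, blob):
    """Ask KMS to decrypt the blob; the key itself is never exposed."""
    resp = kms.decrypt(CiphertextBlob=blob)
    return json.loads(resp["Plaintext"])

# With boto3 (not executed here):
#   kms = boto3.client("kms")
#   blob = encrypt_config(kms, "alias/app-config", {"api_key": "..."})
```

The crucial property is that the application only ever sees plaintext and ciphertext; the key material stays inside KMS for every operation.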

Encryption keys stored in KMS are as close to being hack-proof as is realistically possible. They're stored in FIPS 140-2 validated hardware security modules and encrypted using the AES-256 algorithm, making them highly resistant to physical attacks and brute-force cracking techniques.


Secrets Manager + KMS = Secure Secret Management

Remember the rules: secrets must be encrypted at rest and in transit.

Secrets Manager and KMS combine to encrypt every version of a secret with a unique key that's protected by the customer's master key. The integration of these two services ensures your secrets are protected using keys that are always encrypted before leaving KMS. Whenever a secret changes, a new key is generated.

All communications between Secrets Manager and KMS are encrypted in transit.


Best Practice #3: Load secrets at deployment only, so developers can't access them

Strangely, developers are rarely considered a possible security risk, and secrets are often loaded into serverless applications much earlier than they need to be. It's not that we don't trust developers; it's just that giving humans access to sensitive information they don't need is an inherently bad idea, particularly when those humans may or may not remain with your company for the foreseeable future.

The use of temporary credentials during development is generally unavoidable, but there’s really no reason for developers to have access to an app’s actual secrets after go-live.
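One way to enforce this is through IAM: grant `secretsmanager:GetSecretValue` only to the Lambda execution role, and explicitly deny it to developer groups. The policy below is a hypothetical fragment you might attach to a developer group; the `prod/` secret name prefix is an assumption about your naming convention:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyDevelopersProdSecretAccess",
      "Effect": "Deny",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "arn:aws:secretsmanager:*:*:secret:prod/*"
    }
  ]
}
```

Because an explicit deny overrides any allow in IAM, this keeps production secrets readable only by the roles your deployed functions actually run under.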

A Practical Note on Serverless Security

The combination of Secrets Manager and KMS is the gold standard when it comes to managing secrets in a serverless application. Nonetheless, it’s important to understand that even with these precautions in place, bad things can happen.

If malicious code is injected into your application it may be able to intercept anything the infected function has access to.

So how do you avoid these situations? By rigorously vetting the code of your application and its dependencies prior to each release, and by protecting your application at runtime using an application firewall and behavioral protection.

The Spirit of Serverless

At first glance the advice we’ve given here about protecting secrets in serverless applications may seem as though it’s going to produce a lot of extra work. After all, leaving aside the security issues surrounding the use of environment variables, they certainly are an easy option.

In reality, once Secrets Manager and KMS are set up they actually save work. Instead of having to manually switch out secrets (or accept fundamental security weaknesses) you can allow AWS services to do the heavy lifting for you.

And using the combination of Secrets Manager and KMS is very much in the spirit of serverless technology – you’re using a cloud service to manage every part of your application.

*** This is a Security Bloggers Network syndicated blog from PureSec Blog authored by Pete Hugh. Read the original post at:
