Building Well-Architected Serverless Applications with Thundra (Part 1)


Building your application with serverless technology can save you a lot of time and money. This is because serverless architectures usually come with less of an administrative burden than regular cloud architectures. Less, though, doesn’t mean zero. You can’t just throw a few functions into the cloud and hope that everything will work out okay. There are a few things you still need to consider to keep your system operational, secure, reliable, performant, and reasonably priced.

AWS created the Well-Architected Framework (WAF) to help you get the most out of your cloud applications. The WAF is a catalog of best practices and questions about your application that you should be able to answer. It asks about issues that are often overlooked, like reliability and security, so you don’t miss them in the early stages of development.

While the WAF talks about system design on AWS in general, the AWS Serverless Application Lens (SAL) is a more specific document that concretizes the WAF questions for serverless technology. Thundra was a launch partner for the SAL and as such, provides technology that helps answer some of the questions the SAL document asks.

This is the first part of a three-part series.

  • Part I covers the basics and the architecture of the example application we'll use to answer the SAL questions.
  • Part II covers the two biggest pillars of the SAL: security and reliability.
  • Part III answers the questions of the remaining three pillars: operational excellence, performance efficiency, and cost optimization.

Things to Keep in Mind

The WAF and SAL documents ask very generic questions that can be answered in many ways. Sometimes the answer is just a matter of correct configuration; sometimes it requires a change to the architecture. While all of the questions can be answered with AWS technology alone, this article focuses on using Thundra.

The process here is as follows:

  1. Define the requirements with use cases.
  2. Define a serverless architecture by finding AWS services that satisfy the requirements, or at least allow us to implement something that satisfies them.
  3. Try to answer the SAL questions with respect to our architecture and see how Thundra can help.

Define Requirements with Use Cases

Let’s say we want to build a recipe site, a place where people can share their most beloved recipes with the world.

What are the use cases for such an application?

  • Users want to add their own recipes with images.
  • Users want to read recipes that are on the site.
  • Users want to search for recipes.
  • Users who are health conscious want nutritional information about ingredients.

These are the basic use cases we should cover, so let’s extract some requirements.

  • A way to host the frontend.
  • User management.
  • Storage for our recipe data and the corresponding images.
  • An API that allows users to modify the storage with CRUD operations and image uploads.
  • A way to index data for search.
  • A third-party API integration that provides nutritional data for our ingredients.

Define a Serverless Architecture that Satisfies the Requirements

The next step is searching for AWS services that allow you to satisfy the requirements you’ve defined, while focusing on serverless technology.

Frontend Hosting

You can upload your static web page assets to S3, AWS's object storage, and then distribute them via a content delivery network. AWS provides a globally distributed content delivery network called CloudFront, which lets you deploy your web frontend as close to your users as possible.

API

AWS offers two different serverless API services here, API Gateway and AppSync. API Gateway focuses on REST APIs, while AppSync is a GraphQL-based API service. We will go with API Gateway here to keep things simple, but both services integrate well with the rest of the AWS ecosystem.

User Management

For user management, Cognito is the way to go. It offers features like password verification, password recovery, and email activation out of the box.

Image Storage

For the image upload, you can also use S3 and CloudFront. S3 even allows uploads directly from your browser, so the image data doesn't have to pass through the API.

Recipe Storage

AWS has a multitude of databases to choose from: relational, document store, graph-based, and even an immutable ledger database. For our recipe data, we’ll use DynamoDB, because it’s serverless and reasonably flexible.
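As a rough sketch of what a recipe item might look like in DynamoDB (the key schema and attribute names here are assumptions, not part of the article's architecture), DynamoDB is schemaless apart from the key attributes, so only the partition key has to be fixed up front:

```python
import uuid

def build_recipe_item(title, author, ingredients, steps):
    """Build a DynamoDB item for a recipe (hypothetical key schema).

    Only the partition key is mandatory; the rest of the item can
    evolve freely because DynamoDB is schemaless beyond its keys.
    """
    return {
        "recipeId": str(uuid.uuid4()),  # partition key (assumed)
        "title": title,
        "author": author,
        "ingredients": ingredients,     # list of ingredient strings
        "steps": steps,                 # ordered preparation steps
    }

item = build_recipe_item(
    "Pancakes", "alice", ["flour", "milk", "eggs"], ["mix", "fry"]
)
# With boto3, this item could then be written via
# boto3.resource("dynamodb").Table("recipes").put_item(Item=item)
```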

Search

For indexing and search, AWS offers Elasticsearch Service, a managed version of Elasticsearch. While not serverless, it offers reasonable performance, and AWS handles at least some of the heavy lifting of setup and maintenance.
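To index recipes, a Lambda could batch documents into Elasticsearch's `_bulk` format, which is newline-delimited JSON with an action line before each document. A minimal sketch (the index name and document shape are illustrative assumptions):

```python
import json

def bulk_index_payload(index_name, recipes):
    """Build an Elasticsearch _bulk request body (newline-delimited JSON).

    Each document is preceded by an action line naming the target index
    and document id; _bulk bodies must end with a trailing newline.
    """
    lines = []
    for recipe in recipes:
        lines.append(json.dumps(
            {"index": {"_index": index_name, "_id": recipe["recipeId"]}}
        ))
        lines.append(json.dumps({
            "title": recipe["title"],
            "ingredients": recipe["ingredients"],
        }))
    return "\n".join(lines) + "\n"
```

This payload would be POSTed to the cluster's `_bulk` endpoint with a signed HTTP request.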

Nutritional Data

Recipe analysis is a rather high-level, domain-specific task, and as such is not something a general-purpose cloud provider like AWS offers. That means we need to integrate with a third-party API outside of the AWS Cloud.


The basic architecture we start with is shown in Figure 1.

Figure 1: Starting architecture

The clients download the website from CloudFront, which, in turn, gets it from the deployment bucket.

The clients authenticate with Cognito, which enables them to access the CRUD and search actions exposed by API Gateway.

API Gateway uses two Lambda functions to interact with DynamoDB and Elasticsearch Service.
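To illustrate the shape such a function takes, here is a minimal sketch of a handler behind API Gateway's Lambda proxy integration, which passes the HTTP method and body in the event and expects a `statusCode`/`body` dict back. The routing and response bodies are assumptions, not the article's actual implementation:

```python
import json

def handler(event, context):
    """Minimal Lambda proxy handler for API Gateway (sketch)."""
    method = event.get("httpMethod")
    if method == "GET":
        # A real handler would query DynamoDB here
        status, body = 200, {"recipes": []}
    elif method == "POST":
        recipe = json.loads(event.get("body") or "{}")
        # A real handler would write the recipe to DynamoDB here
        status, body = 201, {"created": recipe.get("title")}
    else:
        status, body = 405, {"message": "method not allowed"}
    return {
        "statusCode": status,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }
```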

The image bucket allows for direct uploads via signed URLs, which are generated by the CRUD Lambda on a create action. The uploaded images are also served with CloudFront.

The nutrition Lambda is triggered when a new recipe is added to DynamoDB and will enrich the recipe with nutritional data from an external API.
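A sketch of what that nutrition Lambda's event handling might look like. DynamoDB Streams deliver items in DynamoDB's typed JSON format (`{"S": ...}`, `{"L": [...]}`), which we flatten before enrichment; the attribute names and the nutrition-API call are assumptions:

```python
def extract_new_recipes(event):
    """Pull freshly inserted recipes out of a DynamoDB stream event."""
    recipes = []
    for record in event.get("Records", []):
        if record.get("eventName") != "INSERT":
            continue  # only enrich newly created recipes
        image = record["dynamodb"]["NewImage"]
        recipes.append({
            "recipeId": image["recipeId"]["S"],
            "ingredients": [i["S"] for i in image["ingredients"]["L"]],
        })
    return recipes

def handler(event, context):
    for recipe in extract_new_recipes(event):
        # A hypothetical fetch_nutrition(recipe["ingredients"]) would
        # call the third-party API here and write the enriched result
        # back to the DynamoDB table.
        pass
```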

Evolving the Architecture

In the next two articles, we will extend this architecture. Some of the SAL questions require us to add services to this architecture to get satisfying answers, so the architecture outlined in figure 1 isn’t final, but a draft that gives us an idea of what we want to do and what services we need to do it.

It’s always a good idea to split system design into multiple phases and to focus on specific design aspects in every phase. That way, you won’t forget important decisions that would be expensive or even impossible to change later. Iteration is the key.

What’s Coming Next…

In the second article we’ll focus on the security and reliability pillars of the SAL and try to answer the following questions:

SEC 1: How do you control serverless API access?

SEC 2: How are you managing the security boundaries of your serverless application?

SEC 3: How do you implement application security in your workload?

REL 1: How do you regulate inbound request rates?

REL 2: How are you building resiliency into your serverless application?

The third article will focus on the operational excellence, performance efficiency, and cost optimization pillars of the SAL. We'll try to answer the following questions:

OPS 1: How do you understand your serverless application’s health?

OPS 2: How do you approach lifecycle management for the application?

PER 1: How have you optimized serverless application performance?

COST 1: How do you optimize costs?

Wrapping Up

In this article we started with some basic use cases for a recipe application and defined a serverless architecture that solves them with AWS services. We also talked about the WAF and the SAL in particular, and how its nine questions help us improve our architecture. We now have a solid starting point for the next two articles, where we'll try to answer those questions and find out where Thundra can help.

*** This is a Security Bloggers Network syndicated blog from Thundra blog authored by Emrah Samdan. Read the original post at: