Monitoring Microservices on AWS with Thundra: Part III

4 minutes read | Posted Jan, 2021 | In DevOps

Written by Serkan Özal, Founder and CTO of Thundra

Welcome to the third and final part of our series on monitoring microservices on AWS with Thundra. In the first part of this series, we talked about monitoring a microservice architecture based on serverless technologies like AWS API Gateway and AWS Lambda. In the second part, we looked at how to perform the same task with a Kubernetes- and AWS EKS-based architecture. In this final installment, we’ll learn how to monitor an AWS ECS cluster with Thundra.

Elastic Kubernetes Service (EKS) is AWS’s managed Kubernetes solution. With EKS, we don’t have to worry about setting up and managing our Kubernetes nodes; we just have to keep track of our cluster.

AWS created Elastic Container Service (ECS) as an alternative to Kubernetes that integrates seamlessly with the AWS ecosystem. With an EKS cluster, everything is done the Kubernetes way, which makes it more accessible for people used to Kubernetes, but less accessible for people coming from an AWS background. An ECS cluster is built on AWS core services, like AWS EC2, which lets AWS professionals feel more at home.

Sample Project

Again, our sample project is an HTTP API that takes a number, performs a sample calculation (purely for demonstration purposes), and sends the result of that calculation to a given email address.

It consists of four microservices:

  • Backend: The HTTP entry point of our system
  • Calculator: The service that does the calculation work
  • Error: A sample service that always fails
  • Email: The service that uses AWS SES to send the result email

Figure 1: Heavy computation ECS architecture

The architecture diagram can be seen in Figure 1. The microservices are all Node.js-based containers inside an ECS cluster. The backend container is available to the internet through an application load balancer, and the email container uses the AWS SES mail API to send results.

If you’ve been following along in this series, much of this article will look familiar, because AWS EKS and AWS ECS can both run containers from Docker images. The Thundra monitoring client interfaces with the Express framework, so there is no need to change the Docker images. Again, the AWS Cloud Development Kit (CDK) is our infrastructure-as-code tool of choice.

Prerequisites

The following prerequisites are required to install this project:

  • An AWS account
  • A Thundra account connected to your AWS account
  • AWS CDK
  • Node.js
  • npm

Installing the Sample Project

Clone the GitHub repository to your local machine:

$ git clone https://github.com/thundra-io/heavy-computation-ecs.git

Install dependencies with the following command:

$ npm i

The next step is to replace the environment variables inside lib/env.vars.json.

<API_KEY> needs to be replaced with your Thundra API key.

<REGION> has to be replaced with the AWS region you want to deploy your cluster to, so the right SMTP server can be found.

To get the values for <SMTP_USER>, <SMTP_PASSWORD>, and <SOURCE_EMAIL>, you have to generate SMTP credentials in the Amazon SES console and add an email address to your verified email list.
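
For orientation, a filled-in file might look something like this. The key names here are assumptions based on the placeholders above, and the values are obviously fake; check the file itself for its exact structure:

{
  "API_KEY": "your-thundra-api-key",
  "REGION": "eu-west-1",
  "SMTP_USER": "AKIAEXAMPLEUSER",
  "SMTP_PASSWORD": "example-smtp-password",
  "SOURCE_EMAIL": "verified@example.com"
}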

Note: For security reasons, Amazon SES is in sandbox mode by default, which only allows you to send emails to verified addresses. Sandbox mode can only be deactivated by writing a support request to AWS.
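
If you prefer the command line to the console, you can kick off verification for an address with the AWS CLI, which sends a confirmation link to that address (the address below is a placeholder):

$ aws ses verify-email-identity --email-address you@example.com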

The cluster deployment takes a few minutes and can be done with the following commands:

$ cdk bootstrap
$ cdk deploy
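
When you’re done with the walkthrough, you can tear the stack down again so the cluster doesn’t keep accruing costs:

$ cdk destroy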

Thundra Integration

The container integration example below is largely unchanged from the Kubernetes example in our previous article. The only difference in the JavaScript code stems from how AWS ECS handles service discovery compared to Kubernetes.

Let’s look at the JavaScript code inside lib/containers/backend/index.js to see what this difference actually means.

const axios = require("axios");
const bodyParser = require("body-parser");
const express = require("express");
const thundra = require("@thundra/core");

// Hostnames of the other services, injected via ECS environment variables.
const calculatorHostname = process.env.CALCULATOR_HOSTNAME || "calculator";
const errorHostname = process.env.ERROR_HOSTNAME || "error";

const app = express();
// The Thundra Express middleware instruments every incoming request.
app.use(thundra.expressMW());
app.use(bodyParser.json());
app.post("/", async ({ body: { email, number } }, response) => {
  // Respond immediately; the downstream calls continue in the background.
  response.end(JSON.stringify({ email, number }));
  // Forward the work to the calculator service...
  await axios.post(
    `http://${calculatorHostname}:8000/`,
    { email, number },
    { timeout: 3000 }
  );
  // ...and to the error service, which always fails.
  await axios.post(
    `http://${errorHostname}:8000/`,
    { email, number },
    { timeout: 3000 }
  );
});
app.listen(8000);

Compared to the container code in the Kubernetes sample app, where no environment variables were used, we’re now using two. The reason for this change lies in the way Kubernetes and AWS ECS handle service discovery and DNS.

While Kubernetes lets us address a service inside the cluster by a plain hostname, AWS ECS requires us to use AWS Cloud Map for service discovery. AWS Cloud Map needs a root namespace under which it hands out domains.

In the Kubernetes version, this leads to cluster URLs looking like this: http://calculator:8000/, while in the AWS ECS version, we get cluster URLs looking like this: http://calculator.local:8000/. To accommodate this difference, we’re using environment variables to pass the other services’ hostnames to the containers.
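
How these variables reach the containers is defined in the CDK stack. The repository already wires this up for you, but as a rough, hypothetical sketch (CDK v1 JavaScript; the construct names and values here are illustrative, not the repository’s exact code), the relevant pieces could look like this:

// Inside a CDK Stack's constructor; assumes `vpc` is already defined.
const ecs = require("@aws-cdk/aws-ecs");

const cluster = new ecs.Cluster(this, "Cluster", {
  vpc,
  // Root Cloud Map namespace; registered services get <name>.local domains.
  defaultCloudMapNamespace: { name: "local" },
});

const backendTask = new ecs.FargateTaskDefinition(this, "BackendTask");
backendTask.addContainer("backend", {
  image: ecs.ContainerImage.fromAsset("lib/containers/backend"),
  environment: {
    // The hostnames read by index.js above.
    CALCULATOR_HOSTNAME: "calculator.local",
    ERROR_HOSTNAME: "error.local",
  },
});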

Using the Sample Project

After deployment, the CDK CLI will display the URL of your load balancer, which points to the backend microservice. You can use this hostname to send a sample request with cURL:

$ curl --header "Content-Type: application/json" \
    --request POST \
    --data '{"email":"<EMAIL>","number": 10}' \
    http://<LOAD_BALANCER_HOSTNAME>/

Replace <EMAIL> with the email you verified for AWS SES. If your email was correctly verified, you should have an email in your inbox informing you about the finished calculation.
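
Since the backend echoes the request body before calling the other services, cURL itself should print a response along these lines:

{"email":"you@example.com","number":10}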

Thundra Monitoring Insights

In the last article, we looked into the automatically generated architecture diagrams and application insights. In this article, we will look into another feature of the Thundra console: the Error Inspector.

Figure 2: Thundra console menu

In Figure 2, you see where to find the Error Inspector. It currently has a red notification icon that tells us something has gone wrong twice already! If we navigate to the Error Inspector, we can get more insights into our system’s problems.

Figure 3: Error Inspector overview

Figure 3 shows us the overview of all errors. In our sample application, we created two errors: one in the backend container and one in the error container. As a reminder, the backend container sends an HTTP request to the error container, and the error container always fails; thus, both containers end up with an error.

Since this is just an overview, we don’t get much information about what errors specifically happened. To get to the root of the problem, we need to click on one of the errors. Let’s click on the TypeError.

Figure 4: Error details

In the top left of Figure 4, we see a histogram that shows us when and how many errors happened. We see the transactions that included the error in the bottom left, and in the top right, we see a stack trace.

The stack trace is the most helpful feature for locating and fixing the error. In our case, a well-known JavaScript issue has occurred: accessing a property of a null or undefined value. If we look at line nine of lib/containers/error/index.js, we see the following:

  const error = null.a;

So Thundra’s Error Inspector wasn’t just right about the error, but also about the code’s location.

Monitoring AWS ECS with Thundra

Monitoring and debugging AWS ECS with Thundra is as easy as it gets. Thundra supports monitoring Java- and Node.js-based containers, helps you find errors swiftly, and comes with helpful insights about your architecture out of the box.

With Thundra’s application monitoring, your containers aren’t black boxes anymore. This is especially helpful for hybrid architectures that consist of serverless functions and containers. You can build with the technology your business requires and stop worrying about different monitoring tools because everything you need can be found in one place: at Thundra.


