Container Workloads on AWS, Azure, and Google Cloud Platforms

Written by Bruno Amaro Almeida
Principal Architect & Technology Advisor @ Futurice

Docker, an open-source project to automate deployment of applications as portable self-sufficient containers, has propelled an entire set of container technologies and a totally new way to design systems. As microservices architectures are adopted as a way to break the traditional monolithic software design pattern, containers and microservices are a perfect match, and the terms have become deeply intertwined.
With the public cloud now the primary choice for running systems, it’s important to understand the current landscape of cloud-managed services for running container workloads.
Choices for Running Container Workloads in the Public Cloud
When designing a container-based system in a public cloud, regardless of the provider you choose, it’s important to know what different services are available and which would best suit your workload needs.
Containers are a significant part of the compute offerings of the biggest cloud providers: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud.
The right container service depends heavily on the system architecture, business requirements, and overall workload complexity, but the options can be broken down into two categories: single-container workloads and multi-container orchestration (with or without Kubernetes).
Single Container
When your workload is rather simple and consists of an isolated container, the best approach is often to choose a service that lets you run the container with the lowest possible operational overhead.
Running a container without having to operate any servers is a fantastic way to quickly deliver value without much effort. It gives you the flexibility and customization of the container world while enabling you to reap some of the benefits of a serverless architecture.
AWS was the first provider to introduce the concept of running containers without operating servers, first with Elastic Beanstalk and later with the Fargate service. In all fairness, Elastic Beanstalk was designed for web application development, and containers are just one type of workload it supports. Fargate, which brought the concept of serverless containers to the mainstream, was the true game changer.
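As a rough illustration of the experience, the sketch below launches a container on Fargate with a single CLI call. It assumes an ECS cluster, a registered task definition, and a subnet ID, all of which are hypothetical placeholders:

```bash
# A sketch of launching a container on AWS Fargate. The cluster,
# task definition, and subnet ID are hypothetical placeholders that
# must already exist in your account.
aws ecs run-task \
  --cluster demo-cluster \
  --task-definition web-app \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-0abc1234],assignPublicIp=ENABLED}"
```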
Microsoft Azure’s offering for serverless containers is fairly similar, with Azure App Service targeting web application development and Azure Container Instances (ACI) available for serverless container execution. There are some interesting and noteworthy differences, though. In contrast to AWS, which has been gradually shifting its web and mobile development attention towards services that don’t offer container support, such as Amplify, Microsoft has been putting significant effort into developing App Service further.
ACI was launched in 2017, but in the past year or two it has truly jumped into the spotlight, benefiting from a significant maturity leap and readily available integrations with Azure Kubernetes Service (AKS) and with Azure Logic Apps for event-driven architectures.
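For comparison, here is a minimal sketch of starting a public container with ACI through the Azure CLI; the resource group, container name, and DNS label are illustrative, and the image is Microsoft’s public hello-world sample:

```bash
# A minimal sketch of running a single container on Azure Container
# Instances. The resource group, name, and DNS label are illustrative.
az container create \
  --resource-group demo-rg \
  --name hello-aci \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --dns-name-label hello-aci-demo \
  --ports 80
```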
Google Cloud has been a champion of Kubernetes, which is no surprise considering the project’s roots. Until last year, Google App Engine, a service tailored for web application development, was the only available option for simple application deployments. That changed when Google launched Cloud Run, a service designed specifically for single-container use cases. Cloud Run enables developers to bundle a web server in their container workload and get an HTTP endpoint automatically available and discoverable in a truly easy and serverless way.
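The deployment experience is equally compact. The sketch below uses Google’s public sample image; the service name and region are illustrative:

```bash
# A sketch of deploying a container to Google Cloud Run. The service
# name and region are illustrative; the image is Google's public sample.
gcloud run deploy hello-run \
  --image gcr.io/cloudrun/hello \
  --region europe-north1 \
  --allow-unauthenticated
```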
AWS Fargate, ACI, and Google Cloud Run are great examples of combining the flexibility of containers with the operational simplicity of serverless. However, it’s important to note that all these services have many limitations, especially when it comes to stateful use cases—i.e., cases that require data persistence—and therefore, they are mostly suitable for stateless workloads where data is not preserved within the container.
Orchestrating Multiple Containers
Quite often, when designing a microservices-based architecture, we end up with more than one container. Even with a fairly simple workload, this is likely to happen as the solution grows.
The ability to orchestrate multiple containers becomes a must, and the simplicity of single-container services falls short of delivering the capabilities needed to manage system complexity, such as service discovery, inter-service communication, and observability.
Kubernetes, an open-source project born at Google and currently part of the Cloud Native Computing Foundation, is the most popular solution for container orchestration. The platform ecosystem, vendor neutrality, and extensibility make it a great solution that can cover the most demanding and advanced use cases.
However, when your system has only a few containers and your team has no prior Kubernetes experience, the adoption and learning curve is often too steep. If you’re using Microsoft Azure or AWS, you can leverage provider-specific services for container orchestration which, while not as feature-rich as Kubernetes, are a lot simpler to get started with.
In Microsoft Azure, Service Fabric offers container orchestration capabilities and even makes it possible to extend that functionality outside Azure environments (e.g., on-premises or to other public cloud providers). In AWS, the Elastic Container Service (ECS), built on top of EC2 and other core AWS services, makes it possible to orchestrate containers spread across one or more virtual machines, removing some of the operational overhead while still letting you control the underlying instances.
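To make that concrete, the sketch below asks ECS to keep three copies of a container running on EC2-backed capacity; the cluster and task definition names are hypothetical and assumed to already exist:

```bash
# A sketch of running a replicated service on ECS with the EC2 launch
# type. The cluster and task definition are hypothetical placeholders.
aws ecs create-service \
  --cluster demo-cluster \
  --service-name web \
  --task-definition web-app:1 \
  --desired-count 3 \
  --launch-type EC2
```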
A key difference between Azure Service Fabric and AWS ECS is that while ECS is bound to AWS resources, Service Fabric can be used in hybrid and non-Azure environments. Also noteworthy is that no comparable service exists in Google Cloud, making Kubernetes the only choice for container orchestration on Google Cloud Platform (GCP).
Managed Kubernetes Services
Kubernetes has been growing rapidly in popularity in the past few years. It is notoriously difficult, however, to ramp up and gain enough in-depth knowledge to operate Kubernetes for a production system. To make it simpler to get started and to minimize the effort required to maintain and operate a Kubernetes cluster, all three cloud providers offer a managed Kubernetes service.
At a baseline level, Amazon Elastic Kubernetes Service (EKS), Google Kubernetes Engine (GKE), and AKS share the same approach: a highly available, fully managed Kubernetes control plane plus node groups based on virtual machine instances that are easy to set up and manage. This eases the deployment process and day-to-day operations. However, as expected, you are limited to the Kubernetes versions offered by the cloud provider, and you lose the ability to fully customize the control plane.
On the other hand, one clear benefit is that day-to-day maintenance (e.g., backups and upgrades) is a lot easier, and thanks to close out-of-the-box integrations with other services from the provider, it’s really easy to have features such as centralized logging, tracing, and encryption key management available across multiple managed cloud services.
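To illustrate how similar that baseline experience is across providers, here are hedged one-liner sketches for creating a small cluster on each; the cluster names, regions, and node counts are all illustrative:

```bash
# Sketches of creating a small managed Kubernetes cluster on each
# provider. Names, regions, and node counts are illustrative.

# Amazon EKS (using the eksctl helper tool)
eksctl create cluster --name demo --region eu-west-1 --nodes 3

# Azure AKS
az aks create --resource-group demo-rg --name demo --node-count 3 --generate-ssh-keys

# Google GKE
gcloud container clusters create demo --zone europe-north1-a --num-nodes 3
```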
While all cloud providers use a similar baseline approach towards managed Kubernetes, there are key aspects you should take into account based on the additional (and often unique) features each service has. One of these is the possibility of having serverless Kubernetes node groups. Microsoft Azure and AWS have out-of-the-box integration with ACI and AWS Fargate, respectively.
This makes it possible to reap the benefits of Kubernetes without having to operate and manage the underlying servers where your containers run. Similarly, GKE offers the same serverless capability via the Knative project. One key difference, however, is that Google launched Knative as an industry-wide movement, involving companies like IBM, Red Hat, and Pivotal, so the Knative concept is supported and available in other cloud environments and offerings.
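As a sketch of how these serverless node groups are enabled, the commands below add a Fargate profile to an EKS cluster and ACI-backed virtual nodes to an AKS cluster; all names, including the dedicated subnet, are illustrative:

```bash
# Sketches of adding serverless capacity to managed Kubernetes clusters.
# Cluster, resource group, and subnet names are illustrative placeholders.

# EKS: schedule pods in the "default" namespace onto Fargate
eksctl create fargateprofile --cluster demo --name fp-default --namespace default

# AKS: enable ACI-backed virtual nodes (requires a dedicated subnet)
az aks enable-addons \
  --resource-group demo-rg \
  --name demo \
  --addons virtual-node \
  --subnet-name virtual-node-subnet
```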
Hybrid and multi-cloud deployments are another differentiator. For those scenarios, Azure and Google launched Arc and Anthos, respectively. These enable their managed Kubernetes offerings, AKS and GKE, to be deployed in other environments, such as an on-premises data center or a competitor’s cloud.
This is a clear advantage for customers who require a hybrid platform, enabling them to keep the same development, integration, and deployment tooling and processes. AWS, on the other hand, does not provide a similar capability with EKS, instead directing customers with hybrid needs to its Outposts service, which brings core compute and networking capabilities into customer data centers.
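As one small example of what that looks like in practice, the sketch below attaches an existing Kubernetes cluster, running anywhere, to Azure Arc; it assumes the connectedk8s CLI extension is available, and the cluster and resource group names are illustrative:

```bash
# A sketch of attaching an existing Kubernetes cluster to Azure Arc.
# Assumes the connectedk8s extension; names are illustrative.
az extension add --name connectedk8s
az connectedk8s connect --name on-prem-cluster --resource-group demo-rg
```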
Considering the Benefits and Tradeoffs
When it comes to containers and the public cloud, there’s no shortage of options. For teams new to the public cloud (or to containers), there is a lot to take into account, and it can be hard to choose a technology stack. In addition to technical capabilities, business aspects such as total cost of ownership (TCO), development time, and the organization’s technology strategy need to be considered carefully.
Regardless of which public cloud provider you select, you’ll find good baseline options that can meet the needs of simple and complex use cases. The most important thing is to find the right fit for the system, organization, and development team, allowing you to quickly deliver value while making your system architecture resilient to future changes and growth.