Own Your Digital Future: Cloud Migration Best Practices

Many organizations are well underway on their journey to cloud platforms, aiming to become more agile, reduce costs, and drive efficiencies. Nobody wants to reinvent the wheel, though, and reusing established design patterns can be the difference between a delayed rollout and a fast, effective implementation of your cloud-based service or application. Read on to discover how Akamai fits into these prevalent design patterns while keeping your web application fast, reliable, and secure.

Design patterns are simply abstractions of repeatable solutions. The goal is to provide a high-level map for addressing common challenges, in this case when building or migrating an application to be hosted on a cloud platform. These common challenges include performance and scalability, availability, resiliency, and security.

Content Caching for Performance and Scalability

Performance and scalability are always going to be key considerations in a hyperconnected world. Akamai pioneered the technology to cache content close to users, avoiding the capacity and performance constraints that choke centralized compute resources. Even now, with the ability to autoscale the capacity of cloud services at the origin, caching provides the best opportunity to offload expensive compute resources, which translates to direct savings on those cloud services. Akamai's Ion platform maximizes the cacheability of static and dynamic content for scalability and optimal performance.
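The offload principle can be illustrated with a minimal in-process sketch. This is an illustrative TTL cache, not Akamai's implementation; edge caching does the same thing at network scale, keyed by URL and driven by Cache-Control headers, but the effect is identical: repeated requests never reach the expensive origin.

```python
import time

def ttl_cache(ttl_seconds):
    """Cache results for a fixed lifetime so repeated requests skip the origin."""
    def decorator(fn):
        store = {}  # key -> (expiry timestamp, cached value)
        def wrapper(key):
            now = time.monotonic()
            hit = store.get(key)
            if hit and hit[0] > now:
                return hit[1]            # cache hit: origin fully offloaded
            value = fn(key)              # cache miss: pay the origin cost
            store[key] = (now + ttl_seconds, value)
            return value
        return wrapper
    return decorator

origin_calls = 0

@ttl_cache(ttl_seconds=60)
def fetch_from_origin(path):
    global origin_calls
    origin_calls += 1                    # stands in for expensive cloud compute
    return f"content for {path}"

fetch_from_origin("/home")
fetch_from_origin("/home")               # second request served from cache
```

After both requests, the origin has been called only once; every additional hit within the TTL is pure compute savings.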

Retry and Circuit Breaker for Availability and Resiliency

Availability refers to the proportion of time that an application is functional; resiliency is the ability to gracefully handle and recover from failures. Traditionally, high availability was the domain of specialized infrastructure-based solutions, such as clustering. The goal was to ensure that if something failed, the application could continue to work. "Five nines", or 99.999% availability, was often used as a benchmark. However, in the world of cloud-based applications, which are often decomposed into microservices that may serve billions of requests in a short time frame, "five nines" can still result in thousands of failures. Cloud services are rarely completely down, but transient network errors lasting seconds or minutes are quite common. This is where resiliency matters: adding retry logic as a design pattern can boost overall availability when the initial failure is temporary and transient. The thousands of errors in the example above would then no longer be an issue, as the application would eventually receive a response.
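To make the scale concrete, here is the back-of-the-envelope arithmetic behind that claim:

```python
# "Five nines" still leaves a failure probability of 1 in 100,000 per request.
availability = 0.99999
requests = 1_000_000_000            # a billion requests in a short window

expected_failures = requests * (1 - availability)
print(round(expected_failures))     # 10000 failed requests, even at five nines
```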


What happens, though, if a failure lasts a little longer and is less transient? Retry logic is typically set to make its attempts within a short period of time, since you never want the end user to wait too long for a response. However, an application with a large number of clients all retrying quickly can create its own problem, a situation called dog piling, in which your own clients effectively mount a denial of service. The influx of retries can escalate the situation into a catastrophic failure.
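The standard mitigation is to spread retries out with exponential backoff plus random jitter, so a fleet of clients does not retry in lockstep. A minimal client-side sketch (the function name and parameters are illustrative, not a specific library's API):

```python
import random
import time

def retry_with_jitter(operation, max_attempts=4, base_delay=0.5):
    """Retry a transient failure with exponential backoff and full jitter,
    so many clients failing at once don't all retry at the same instant."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise                      # give up: surface the failure
            # full jitter: sleep anywhere in [0, base_delay * 2^attempt]
            time.sleep(random.uniform(0, base_delay * (2 ** attempt)))
```

Each client picks a random delay inside a growing window, so the retry load arrives smeared over time instead of as a synchronized spike.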

So how can Akamai solve this? Retry logic is handled natively within the Akamai platform, and it is configurable in both the number of attempts and the delay between retries. Furthermore, Akamai can be used as a circuit breaker, a mechanism designed to interrupt the retry pattern so that the application receives a proper failure response that it can handle accordingly. For API calls, Akamai can serve a preconfigured response in such instances, or can even act as an alternate origin, serving a simplified but usable version of the web page or portion of the site.
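The circuit breaker idea itself is simple enough to sketch. The class below is a hedged illustration of the general pattern, not Akamai's implementation: after a threshold of failures the circuit "opens", and further calls fail fast with a preconfigured fallback instead of piling onto a struggling origin; after a timeout the circuit "half-opens" and tries the origin again.

```python
import time

class CircuitBreaker:
    """Fail fast with a fallback once an origin looks unhealthy."""
    def __init__(self, failure_threshold=3, reset_timeout=30.0, fallback=None):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.fallback = fallback
        self.failures = 0
        self.opened_at = None            # None means the circuit is closed

    def call(self, operation):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                return self.fallback     # circuit open: skip the origin entirely
            self.opened_at = None        # half-open: give the origin one chance
            self.failures = 0
        try:
            result = operation()
        except ConnectionError:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()   # trip the breaker
            return self.fallback
        self.failures = 0                # success resets the failure count
        return result
```

A caller might construct it as `CircuitBreaker(fallback={"status": "degraded"})` so API clients receive a well-formed degraded response rather than a timeout.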

Gatekeeper for Security

The gatekeeper design pattern proxies access between the client and the remote service hosted on your cloud platform. Traditionally, this pattern was fulfilled by a firewall deployed close to the rest of the technology stack serving the application. However, even firewalls that can be deployed and scaled as virtual appliances are no match for the large-scale botnet attacks seen over the last few years. Akamai, by contrast, can act as a distributed gatekeeper, offering an initial layer of defense away from the origin. You can enforce that traffic is funnelled through Akamai servers, cloaking your cloud origin and limiting its attack surface. Large layer 7 DDoS attacks are blocked at the edge, with additional layer 3/4 network scrubbing services available. For API traffic, Akamai also provides governance capabilities, with the ability to throttle requests and enforce quotas.
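Throttling and quota enforcement are commonly built on a token bucket. The sketch below is an illustrative in-process version of that idea (Akamai enforces the equivalent at the edge, before traffic reaches your origin): each request consumes a token, tokens refill at a fixed rate, and bursts are capped by the bucket capacity.

```python
import time

class TokenBucket:
    """Admit requests at a sustained rate, allowing short bursts up to capacity."""
    def __init__(self, rate_per_second, capacity):
        self.rate = rate_per_second
        self.capacity = capacity
        self.tokens = float(capacity)    # start full: allow an initial burst
        self.updated = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last request.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True                  # request admitted
        return False                     # over quota: reject or queue

bucket = TokenBucket(rate_per_second=1, capacity=5)
results = [bucket.allow() for _ in range(8)]   # burst of 8 requests
```

With a capacity of 5, the first five requests of the burst are admitted and the rest are throttled until tokens refill.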


We’ve covered only a few prominent design patterns, along with their usage within cloud-based architectures. It’s important to note that once you have identified a pattern to address a need, there are myriad ways to implement it. Akamai Professional Services is well equipped to help you address both technical and business challenges, whether you are in the planning, build, or operate phase of your cloud journey.

*** This is a Security Bloggers Network syndicated blog from The Akamai Blog authored by Desmond Tam. Read the original post at:
