It’s no secret that businesses are rapidly adopting Cloud Service Providers such as Amazon Web Services, Microsoft Azure, or Google Cloud Platform for cost efficiency, agility, scalability, and global distribution to serve their customers more easily. A recent IDG survey indicates that 70% of businesses have at least one application in the cloud, and 16% plan to take their first app to the cloud in the next 12 months.
However, Cloud Service Providers present their own challenges. It’s neither easy nor straightforward for businesses to operate in the cloud. As you move applications and workloads to the cloud, you must carefully consider which supporting services to keep on-premises and which to replace entirely with a cloud service. Load balancing is an example of such a supporting service.
It is very common for businesses to adopt a hybrid cloud architecture, where not all the data is processed in the cloud. For example, sensitive information is sometimes stored on-premises, while non-sensitive information is processed and stored in the cloud. This means that your load balancing solution should be able to seamlessly balance traffic between your cloud and on-premises data centers.
Some companies use on-premises, hardware-based load balancers, but these are not designed for cloud technologies: they scale poorly and are expensive. Gartner notes that the range of use cases and requirements for load balancing technology has undergone a noticeable shift, as work styles have evolved to focus on software accessibility and standard, simpler feature sets, rather than the monolithic load balancing platforms that dominated traditional deployments. To get the most scalability and availability, you need to consider a cloud-based load balancer.
Many cloud providers offer their own load balancing solutions, but these solutions are platform-specific. They are great for balancing traffic within the same cloud platform, where all of the decisions are handled without accessing another provider or your own data centers. But most architectures today require a hybrid approach – using a mix of cloud and on-premises solutions, or multiple cloud providers – which in turn requires load balancers that can work seamlessly in hybrid environments for operational simplicity.
Cloud adoption is complex, and Akamai has designed a solution intended to deal with those complexities and simplify your traffic management.
Application Load Balancer is a cloud-based load balancing solution that combines Application Layer (Layer 7) and DNS-layer logic, and gives you granular control to balance traffic based on HTTP attributes (cookie value, URL path, query string) using weighted round-robin and performance-based routing algorithms.
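To make the routing model concrete, here is a minimal sketch of weighted round-robin selection of the kind described above. The data center names and weights are illustrative, not Akamai's API; `rnd` is injectable so the behavior is testable.

```python
import random

# Hypothetical weighted selection across data centers: each data center is
# chosen with probability proportional to its configured weight.
def weighted_choice(datacenters, weights, rnd=random.random):
    """Pick a data center; weights need not sum to 100."""
    total = sum(weights)
    point = rnd() * total          # random point on the cumulative weight line
    cumulative = 0.0
    for dc, weight in zip(datacenters, weights):
        cumulative += weight
        if point < cumulative:
            return dc
    return datacenters[-1]         # guard against floating-point edge cases

# Example: send roughly 70% of traffic to "us-east", 30% to "eu-west".
dc = weighted_choice(["us-east", "eu-west"], [70, 30])
```

In a real deployment the weights would be applied after matching the HTTP attributes (cookie, path, query string) that select a routing policy.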
Application Load Balancer provides a platform-agnostic load balancing solution, allowing you to balance traffic across any combination of data sources – both cloud-based and on-premises. It provides two layers of failover. First, instant retry on a per-request basis allows a user’s request to be retried immediately against the next available data center, without going back to the client, in the event of an error. Second, at the DNS layer, Application Load Balancer uses Layer 3 health checks to continuously verify the liveness of each data center, increasing the reliability of your applications and improving user experience. Here are some key use cases for Application Load Balancer.
Implement hybrid cloud architecture by customizing incoming HTTP requests for your data centers
Any hybrid cloud architecture needs to route traffic between on-premises data centers and data centers in the cloud. To enable these deployments, Application Load Balancer customizes incoming traffic to meet the needs of specific data centers – for example, by changing the incoming Host header or the URL path on a per-data-center basis. If your application is load balanced between AWS S3 and on-premises data centers, you can configure Application Load Balancer to rewrite the incoming request to include the S3 bucket path whenever a request is directed to the AWS data center, making it easy to balance traffic between any combination of data centers.
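The per-data-center rewrite idea can be sketched as a simple rule table. All names here (the bucket, hosts, and rule keys) are illustrative assumptions, not the product's configuration syntax.

```python
# Hypothetical per-data-center rewrite rules: the same client request is
# adapted to each backend's expectations before being forwarded.
REWRITE_RULES = {
    "aws-s3":  {"host": "my-bucket.s3.amazonaws.com", "path_prefix": "/my-bucket"},
    "on-prem": {"host": "origin.example.com",          "path_prefix": ""},
}

def rewrite_request(datacenter, path):
    """Return the (Host header, path) to use when forwarding to `datacenter`."""
    rule = REWRITE_RULES[datacenter]
    return rule["host"], rule["path_prefix"] + path
```

A request for `/images/logo.png` routed to the S3 data center would be forwarded with the bucket's host and a bucket-prefixed path, while the same request routed on-premises would pass through unchanged.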
Maximize availability with instant retry and automated failover of failed HTTP requests to backup data centers
All load balancing solutions offer liveness detection and failover mechanisms for origin servers. However, there is often a delay between the time an origin server fails and the time incoming traffic is redirected to backup data centers. Typically, this delay is on the order of tens of seconds, which can be disastrous, especially during periods of peak traffic.
With Application Load Balancer’s automated failover capability, a request can be configured to immediately retry against backup data centers upon receiving an HTTP error code from the origin server. In this scenario, Application Load Balancer can also be configured to drop session stickiness to the non-responsive origin server and reestablish session stickiness with the servers in a backup data center. This not only maintains business continuity, but also improves user experience.
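The instant-retry behavior described above can be sketched as follows. This is an illustrative model, not Akamai's implementation: `send` stands in for the actual request forwarding, and a 5xx status triggers a retry against the next data center before any error reaches the client.

```python
# Sketch of instant retry with session stickiness: the sticky data center is
# tried first; on a 5xx, stickiness is dropped and backups are tried in order.
def forward_with_retry(request, datacenters, send, sticky=None):
    """Return (datacenter, status, body) for the first non-5xx response,
    or the last failing response if every data center errors out."""
    order = list(datacenters)
    if sticky in order:                     # honor existing session stickiness
        order.remove(sticky)
        order.insert(0, sticky)
    last = None
    for dc in order:
        status, body = send(dc, request)
        if status < 500:
            return dc, status, body         # stickiness re-established on dc
        last = (dc, status, body)           # error: drop stickiness, try next
    return last
```

Because the retry happens inside the load balancer, the client sees a single successful response instead of an error followed by a manual reload.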
Application Load Balancer also allows you to specify a subset of data centers to which requests should fail over, providing additional control. For example, you can configure Application Load Balancer across all your data centers in both North America and Europe, and if a European data center goes down, requests fail over only to other data centers within Europe, ensuring data sovereignty.
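Constraining failover to a region, as in the data-sovereignty example, amounts to filtering the backup set by region. The data center names and region tags below are hypothetical.

```python
# Hypothetical region map: a failed data center may only fail over to healthy
# data centers in the same region (e.g. EU traffic stays in the EU).
DATACENTER_REGION = {
    "eu-fra": "eu", "eu-lon": "eu",
    "na-nyc": "na", "na-sfo": "na",
}

def failover_candidates(failed_dc, healthy):
    """Return healthy data centers in the same region as the failed one."""
    region = DATACENTER_REGION[failed_dc]
    return [dc for dc in healthy
            if dc != failed_dc and DATACENTER_REGION[dc] == region]
```

If Frankfurt goes down, only London is eligible to absorb its traffic, even though the North American data centers are healthy.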
Easily perform maintenance of your data centers by routing traffic to a highly available static version of your website
Many website owners create a static version of their site that can be displayed when the original website is down for maintenance. With Application Load Balancer, you can not only serve traffic to this static website during maintenance, but also use it for disaster recovery. For example, in the event that all your data centers are down due to maintenance or outages, your application continues to answer user requests from the static version of your website, maximizing the availability of your application.
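The static-version fallback is a last-resort rule layered on top of ordinary failover. A minimal sketch, assuming a pre-published static origin (the URL is illustrative):

```python
# Sketch of a last-resort fallback: if no data center is healthy, direct
# traffic to a static copy of the site instead of returning an error.
STATIC_ORIGIN = "https://static.example.com"

def resolve_origin(healthy_datacenters):
    """Return the origin to serve from: the first healthy data center,
    or the static site if every data center is down."""
    if healthy_datacenters:
        return healthy_datacenters[0]
    return STATIC_ORIGIN
```

The same rule serves both cases from the text: planned maintenance (you drain all data centers deliberately) and an unplanned outage (health checks mark them all down).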
Scale to meet traffic demands, whether planned or unplanned
Most public cloud platforms autoscale based on the demand for your applications. In reality, though, autoscaling is not instantaneous; it takes several minutes to spin up new instances. This delay can be disastrous during times of peak traffic or during a DDoS attack. With Application Load Balancer, you get the scalability needed to meet any traffic demands – whether planned or unplanned. Thus, whether you are dealing with Black Friday traffic or an unexpected DDoS attack, you can be assured that your customers will have superior user experiences.
In addition, Application Load Balancer ensures high performance by directing traffic to the best available data center by analyzing Internet traffic conditions in real time and avoiding congestion points and outages.
Application Load Balancer allows customers to regain reliability and control in the cloud by balancing traffic across the HTTP and DNS layers. This dual-layer load balancer helps customers avoid outages and vendor lock-in, while also providing session stickiness and instant failover. As a platform-agnostic load balancer, Application Load Balancer can balance traffic between on-premises data centers and any Cloud Service Provider.
Want to reduce cloud migration risks while ensuring high scalability, reliability, and performance? Try out Application Load Balancer free for 60 days!
This is a Security Bloggers Network syndicated blog post authored by Jennifer Layton. Read the original post at: The Akamai Blog