Many organizations, such as Netflix and Amazon, use microservice architecture to implement business applications as collections of loosely coupled services. Among the reasons to move to this distributed, loosely coupled architecture are hyperscale and continuous delivery for complex applications. Teams in these organizations have adopted Agile and DevOps practices to deliver applications quickly and to deploy them with a lower failure rate than traditional approaches allow. However, the complexity that comes with a distributed architecture has to be balanced against application needs, scale requirements, and time-to-market constraints.
As with a monolithic application, the distributed design raises service-level concerns: availability during instance failures, security for exposed surfaces such as APIs, on-demand scalability to absorb load increases, and latency and performance impacts during peak times. These concerns apply both to the business services and to the loosely coupled microservices that compose the business applications.
Enter the Load Balancer!
For many years, application delivery controllers (ADCs) have been integral to addressing service-level needs for enterprise applications deployed on premises or in the cloud.
To scale clients and microservices independently of each other, a client interacting with a microservices-based application should not need to know which instances are serving it. This is precisely the decoupling that a reverse proxy or a load balancer provides.
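As a rough illustration (not specific to any product), that decoupling can be reduced to a selection policy: clients always address one stable endpoint, while the balancer rotates requests across whatever instances currently exist. A minimal round-robin sketch in Python, with hypothetical instance addresses:

```python
import itertools

class RoundRobinBalancer:
    """Minimal reverse-proxy selection logic: clients see one stable
    endpoint while the balancer rotates across backend instances."""

    def __init__(self, instances):
        self._instances = list(instances)
        self._cycle = itertools.cycle(self._instances)

    def next_instance(self):
        # Each incoming request is forwarded to the next instance in turn.
        return next(self._cycle)

    def add_instance(self, instance):
        # Instances can be added (or removed) without clients noticing;
        # rebuilding the cycle picks up the new membership.
        self._instances.append(instance)
        self._cycle = itertools.cycle(self._instances)

balancer = RoundRobinBalancer(["10.0.0.1:8080", "10.0.0.2:8080"])
print(balancer.next_instance())  # 10.0.0.1:8080
print(balancer.next_instance())  # 10.0.0.2:8080
print(balancer.next_instance())  # 10.0.0.1:8080
```

A production ADC layers health checks, session persistence, and TLS termination on top of this selection step, but the client-facing contract stays the same single address.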
Again, load balancing is the answer to ensuring that the microservices can handle load, remain secure, and stay available.
The big gain comes when you combine the traditional North-South load balancer deployment, between clients and the microservices-based application, with an East-West deployment for horizontal scalability. The goal is to keep the secure and controlled environment mandated by IT without losing the development agility and automation that DevOps teams need.
Balancing Scale and Speed with Automation, Security, Manageability and Analytics
Although a loosely coupled, microservices-based design has many benefits, one challenge remains: how to quickly roll out, troubleshoot, and manage these microservices. Manually allocating resources for applications and reconfiguring the load balancer to incorporate new services is inefficient and error-prone, and it becomes downright problematic at scale. Automating the deployment of services quickly becomes a necessity. Automation tools transform the traditional manual approach into simpler automated scripts and tasks that do not require familiarity or expertise with the managed solution.
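As a sketch of what such automation can look like, the snippet below renders a load-balancer upstream block from a service-registry snapshot instead of hand-editing configuration. The nginx-style syntax and the service and instance names are illustrative assumptions, not any specific product's API:

```python
def render_upstream_config(service_name, instances):
    """Generate an nginx-style upstream block from a list of
    (host, port) pairs, so load-balancer reconfiguration is a
    scripted step rather than a manual edit."""
    lines = [f"upstream {service_name} {{"]
    for host, port in instances:
        lines.append(f"    server {host}:{port};")
    lines.append("}")
    return "\n".join(lines)

# Example: regenerate config from a (hypothetical) registry snapshot.
config = render_upstream_config(
    "orders", [("10.0.0.1", 8080), ("10.0.0.2", 8080)]
)
print(config)
```

In practice this rendering step would be triggered by a service-discovery event or a CI/CD pipeline, followed by a config reload, so new instances join the pool without operator intervention.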
It is very difficult to commit to a service level without an SLA, and it is impossible to manage an application’s SLA without first gaining visibility into it. Solutions for monitoring application performance and SLAs are expensive and often require inserting probes and/or integrating software agents into every application, and possibly into every microservice.
When deploying microservices that may affect many applications, proactive monitoring, analytics and troubleshooting become critical to catching issues before they become business disruptions. These may include service alerts when a microservice or an application is not meeting its SLA requirements, such as high latency, unavailable services, or access problems from a particular data center or a specific device type.
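To make the idea of a latency-based SLA alert concrete, here is a minimal sketch; the 250 ms p95 budget and the alert message format are assumptions for illustration, not a product feature:

```python
def check_latency_sla(samples_ms, p95_budget_ms=250.0):
    """Return an alert string when the observed 95th-percentile latency
    exceeds the SLA budget, or None when the service is within budget."""
    if not samples_ms:
        return None  # no traffic observed, nothing to alert on
    ordered = sorted(samples_ms)
    # Index of the 95th-percentile sample (nearest-rank style).
    idx = min(len(ordered) - 1, int(0.95 * len(ordered)))
    p95 = ordered[idx]
    if p95 > p95_budget_ms:
        return (f"SLA breach: p95 latency {p95:.0f} ms "
                f"exceeds {p95_budget_ms:.0f} ms budget")
    return None
```

A real monitoring pipeline would evaluate such checks per service, per data center, and per device type, and feed the resulting alerts into an incident workflow before users notice a disruption.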
Businesses have to support complex IT architectures for their application delivery in a secure manner. Configuring, deploying and maintaining cross-domain microservices can be error-prone, costly and time-consuming. To simplify configuration and management of these microservices, IT should adopt automation, visibility, analytics and orchestration best practices and tools that fit in with their Agile and DevOps processes, preserving the secure, controlled environment IT mandates without sacrificing development agility and automation.
For more details:
Application Delivery Products: https://www.radware.com/Products/#ApplicationDelivery
vDirect API Gateway for Orchestration: https://www.radware.com/Products/vDirect/
Read “Keep It Simple; Make It Scalable: 6 Characteristics of the Futureproof Load Balancer” to learn more.
Prakash Sinha, VP, ADC Solutions, Radware brings over 22 years of industry experience in strategy, product management and engineering.
Prior to Radware, Prakash led product management and ecosystem development for Citrix and was instrumental in introducing Citrix NetScaler VPX and SDX product lines to market. Prior to Citrix, Prakash held senior positions in architecture, engineering, and product management at leading technology companies such as Cisco, Informatica, and Tandem Computers.
Prakash holds a Bachelor’s degree in Electrical Engineering from India and an MBA from UC Berkeley.
*** This is a Security Bloggers Network syndicated blog from Radware Blog authored by Prakash Sinha. Read the original post at: https://blog.radware.com/applicationdelivery/2017/11/load-balancers-microservices/