Most of us use web applications in our daily lives, whether at work or for personal reasons. These applications include sites offering banking and financial services, payroll, utilities, and online training, to name just a few. Users get frustrated, sometimes annoyed, if the applications – bank account access, loading of a statement, emails, or bills – are slow to respond. Heaven help us if we lose these services right in the middle of a payment!


If you look at these applications from a service provider's perspective, especially a provider with web-facing applications, this customer frustration and loss of interest is expensive: it translates into real loss of revenue, almost $8,900 per minute of downtime, in addition to damage to customer satisfaction and reputation. And if your services are in the cloud and you don't have a fallback? Good luck…

Traditional Use of ADCs for Applications

This is where application delivery controllers (ADCs), commonly referred to as load balancers, come in. ADCs focus on a few aspects to help applications. ADCs make it seem to the user that the services being accessed are always up, and in doing so, reduce the latency that a user perceives when accessing the application. ADCs also help in securing and scaling the applications across millions of users.
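At its simplest, the load balancing an ADC performs means distributing incoming requests across a pool of identical application servers. The following is a minimal sketch of one common policy, round-robin scheduling; the backend addresses are hypothetical, and a real ADC tracks far more state (health, weights, sessions) than this shows.

```python
import itertools

# Hypothetical backend pool behind the ADC's virtual IP.
BACKENDS = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]

class RoundRobinBalancer:
    """Minimal round-robin scheduler: each request goes to the
    next backend in the pool, spreading load evenly."""

    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def pick(self):
        # Return the backend that should serve the next request.
        return next(self._cycle)

balancer = RoundRobinBalancer(BACKENDS)
choices = [balancer.pick() for _ in range(6)]
print(choices)  # each backend is chosen twice, in order
```

Because users connect to the ADC rather than to any single server, a server can fail or be drained without the service appearing to go down.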

[You may also like: Ensuring a Secure Cloud Journey in a World of Containers]

Traditionally, these load balancers were deployed as a redundant pair of physical devices; then, as virtualization took hold in the data center, ADCs began to be deployed as virtual appliances. Now, as applications move to cloud environments, ADCs are being deployed as a service in the cloud, or as a mix of virtual, cloud and physical devices (depending on cost and desired performance characteristics, as well as the familiarity and expertise of the administrators of these services – DevOps, NetOps or SecOps).

The ADC World is Changing

The world of ADCs is changing rapidly. Applications themselves are evolving fast – micro-services, agile approaches, continuous delivery and integration – and many changes are afoot in ADCs as a result.

ADCs still have the traditional job of making applications available locally in a data center or globally across data centers, and of providing redundancy to links in a data center. In addition to providing availability, these devices are still used for latency reduction – using caching, compression and web performance optimizations – but because of where they sit in the network, they've taken on the additional roles of security choreographer and single point of visibility across a variety of different applications.
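The latency-reduction techniques mentioned above, caching and compression, can be illustrated with a small sketch. This is a toy model, not any vendor's implementation: `render_page` is a hypothetical stand-in for an expensive origin fetch, and a real ADC would cache the origin's HTTP response and honor cache-control headers.

```python
import functools
import gzip

@functools.lru_cache(maxsize=128)
def render_page(path):
    # Stand-in for an expensive origin-server fetch; repeat
    # requests for the same path are answered from the cache.
    return f"<html>content for {path}</html>".encode()

def serve(path, accepts_gzip=True):
    body = render_page(path)           # cache hit on repeat requests
    if accepts_gzip:
        return gzip.compress(body)     # smaller payload, lower transfer latency
    return body

compressed = serve("/index")
```

Offloading this work to the ADC means the origin servers render each page once and never spend cycles compressing responses.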

[You may also like: Embarking on a Cloud Journey: Expect More from Your Load Balancer]

We are beginning to see additional use cases, such as web application firewalls for application protection, SSL inspection for preventing leaks of sensitive information, and single sign-on across many applications and services. The deployment topology of the ADC is also changing: it can run within a container to load balance and scale micro-services, or it can provide additional value-add capabilities to embedded ADCs or micro-services running within a container.

Providing high availability is one of the core use cases for an ADC. HA addresses the need for an application to recover from failures within and between data centers. SSL offload is also considered a core use case. As SSL and TLS become pervasive for securing and protecting web transactions, offloading this non-business function from application and web servers, so that they can be dedicated to business processing, is needed not only to reduce application latency but also to lower the cost of the application footprint needed to serve users.

As the number of users connecting to a particular application service grows, new instances of the application service are brought online to scale the application. Scaling in and scaling out in an automated way is one of the primary reasons why ADCs have built-in automation and integrations with orchestration systems. Advanced automation allows ADCs to discover and add or remove application instances in the load balancing pool without manual intervention. This not only reduces manual errors and lowers administrative costs, but also removes the requirement for every user of an ADC to be an expert.
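That orchestration integration can be sketched as an event-driven pool: the orchestrator notifies the ADC as instances come and go, and the pool updates itself with no manual steps. The class and callback names here are illustrative assumptions, not any product's API.

```python
class ElasticPool:
    """Sketch of automated scale-out/scale-in: orchestration events
    add and remove pool members without manual intervention."""

    def __init__(self):
        self.members = set()

    def on_instance_up(self, addr):
        # e.g. invoked via a webhook when the orchestrator
        # launches a new application instance
        self.members.add(addr)

    def on_instance_down(self, addr):
        # invoked when an instance is terminated or scaled in
        self.members.discard(addr)

pool = ElasticPool()
pool.on_instance_up("10.0.1.5")
pool.on_instance_up("10.0.1.6")
pool.on_instance_down("10.0.1.5")   # scale-in event
print(sorted(pool.members))          # -> ['10.0.1.6']
```

Because the pool is driven by events rather than by an administrator editing configuration, scaling decisions made by the orchestrator take effect immediately and error-free.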

[You may also like: Digital Transformation – Take Advantage of Application Delivery in Your Journey]

As we move to the cloud, other use cases are emerging and quickly becoming necessities. Elastic licensing, for example, is designed to cap the cost of licenses as organizations transition from physical or virtual deployments to the cloud. Another use case is analytics and end-to-end visibility, designed to pinpoint the root cause of an issue quickly, without finger-pointing between networking and application teams.

ADCs at the Intersection of Networking and Applications

Since ADCs occupy an important place between applications and the network, it is only logical to see them take on additional responsibilities in delivering applications to users. Application delivery and load balancing technologies have been the strategic components providing availability, optimization, security and latency reduction for applications. To enable seamless migration of business-critical applications to the cloud, the same load balancing and application delivery infrastructure has evolved to address the needs of continuous delivery/integration, and of hybrid and multi-cloud deployments.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.

Download Now