Bringing back your security from the ‘edge’ of the CDN has many advantages – particularly in multi-CDN deployment scenarios. We take a look at the various deployment models for creating a centralized security protection layer, and when each should be considered.
In the first part of this series, we discussed some of the unique security challenges that can arise from adopting a multi-CDN strategy – namely inconsistent protection among different CDNs and the lack of centralized reporting.
Many of these problems arise from the fact that CDN security is done at the ‘edge,’ i.e., security policies are ultimately propagated and executed at the points-of-presence (PoPs) of the CDN. And when ingress traffic originates from disparate CDNs that don’t talk to each other – as is the case in multi-CDN – the result can be gaps in protection and reduced transparency.
A security barrier between you and the multi-CDN
One potential solution in such cases is to bring back security from the edge and create a separate – and centralized – layer of security between the origin server and incoming traffic. As a result, ingress traffic – even from multiple CDNs – will all pass through a central focal point, which will make it possible to apply uniform security policies on all traffic.
There are several advantages to this technique, compared to the traditional CDN approach of ‘edge security’:
- Decouples cost and security. When deploying a multi-CDN solution (or even a standalone CDN), cost is one of the most important factors in determining which CDNs are chosen to begin with, and how traffic is routed. However, the cheapest CDN may not always be the one with the best security – or even any security at all. Decoupling cost from security therefore ensures that traffic cost considerations do not compromise the quality of protection.
- A single, unified security policy. As we pointed out in the previous installment, different CDN vendors offer different security features with different technologies and different policies. A unified security layer that aggregates all traffic ensures that there are no security gaps between different vendors, and that all traffic is subjected to the same rigorous inspection.
- A single pane of glass for all traffic. Using a multi-CDN solution frequently means splitting management and reporting across disparate management consoles with disparate configurations. This fragmentation is especially problematic for security, where attacks can come from multiple vectors and multiple sources. Centralizing security within a single layer ensures full security visibility and reporting, regardless of traffic origin.
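The idea behind the list above can be sketched in a few lines of code: one inspection point through which traffic from every CDN flows, applying a single rule set and writing to a single audit log. This is a minimal illustrative sketch only – the header name, rule patterns, and request shape are assumptions, not part of any vendor’s product.

```python
# Minimal sketch of a centralized security layer: requests forwarded by any
# CDN pass through one inspection point that applies the same (toy) policy
# and logs to a single place. All names and rules here are illustrative.

BLOCKED_PATTERNS = ("<script>", "../", "UNION SELECT")  # toy rule set

audit_log = []  # single, centralized log for all traffic, from all CDNs

def inspect(request):
    """Apply one uniform policy, regardless of which CDN relayed the request."""
    cdn = request.get("via", "unknown")  # hypothetical header naming the CDN
    payload = request.get("path", "") + request.get("body", "")
    allowed = not any(p.lower() in payload.lower() for p in BLOCKED_PATTERNS)
    audit_log.append({"cdn": cdn, "allowed": allowed})  # unified reporting
    return allowed

# Traffic arriving via two different CDNs is judged by the same rules:
assert inspect({"via": "cdn-a", "path": "/index.html", "body": ""}) is True
assert inspect({"via": "cdn-b", "path": "/search", "body": "q=1 UNION SELECT *"}) is False
```

The point is not the toy rules themselves, but the topology: because both CDNs feed the same `inspect` function, policy and reporting cannot drift apart between vendors.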
On the cloud or on-prem?
A key consideration in consolidating security layers is deciding whether the layer should run in the data center (on-premises) as a hardware appliance, or in the cloud.
There are advantages and drawbacks to each approach, and each organization can decide differently depending on their specific context and configuration.
The greatest advantage of on-premises security solutions is the high degree of control they afford organizations, along with the ability to configure them for any type of internal network topology. In addition, on-premises solutions tend to add near-zero latency, meaning that application performance is not impacted.
The flip side of using hardware-based solutions is that they tend to cost more than cloud solutions and require high upfront capital expenditure. Moreover, the higher degree of control comes with higher management overhead. Finally, they have limited capacity compared to cloud-based solutions.
On-premises security is best for organizations that operate physical data centers (as opposed to cloud infrastructure), prefer the higher degree of control it affords, and run applications that are sensitive to latency.
Perhaps the most noticeable advantage of cloud-based infrastructure is its lower cost compared to hardware-based solutions, coupled with lower management overhead. This is one of the primary drivers that has led application developers to increasingly move their infrastructure to the cloud. In addition, cloud services have higher capacity and are therefore able to absorb larger attacks, such as volumetric DDoS attacks. Finally, if your applications are already running in the cloud, it makes sense for your security to run in the cloud as well.
The drawbacks of cloud deployments are the mirror image of the advantages of hardware-based solutions: a small amount of additional latency, a lower degree of control over hardware managed by outside vendors, and compliance barriers in certain regulated industries such as healthcare and finance.
Cloud-based security is best for customers with existing cloud applications, as well as those whose applications can tolerate the minor additional latency.
Hybrid security model
Hybrid security deployments are arguably the most robust application security strategy. In a nutshell, hybrid security models combine on-prem and cloud-based defense layers, enabling organizations to adapt their security deployment to their particular network topology. Indeed – as more companies move infrastructure to the cloud – Gartner predicts that hybrid cloud deployments will soon be the most common usage type.
Hybrid deployments provide the greatest degree of flexibility, allowing organizations to protect applications wherever they are deployed. Moreover, they also retain a high degree of control for organizations. Hybrid deployments also resolve the capacity-vs-control tradeoff by allowing organizations to put in place cloud-based defense mechanisms that can be activated on-demand when hardware capacity isn’t sufficient.
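The on-demand overflow idea described above can be sketched as a simple routing decision: scrub traffic on the appliance while it is within capacity, and divert to the cloud when a volumetric attack exceeds it. This is a hedged sketch of the concept only – the capacity figure and function name are assumptions for illustration, not any vendor’s mechanism.

```python
# Sketch of the hybrid "on-demand" model: mitigate with on-premises hardware
# up to its capacity, and activate cloud-based scrubbing for overflow.
# The 10 Gbps appliance capacity below is an assumed, illustrative value.

ON_PREM_CAPACITY_GBPS = 10  # assumed capacity of the on-prem appliance

def route_mitigation(attack_gbps):
    """Decide where traffic is scrubbed under the hybrid model."""
    if attack_gbps <= ON_PREM_CAPACITY_GBPS:
        return "on-prem"  # within appliance capacity: keep latency near zero
    return "cloud"        # volumetric overflow: engage cloud scrubbing capacity

assert route_mitigation(3) == "on-prem"
assert route_mitigation(50) == "cloud"  # e.g. a large volumetric DDoS attack
```

This is what resolves the capacity-vs-control tradeoff: day-to-day traffic stays under local control, and cloud capacity is engaged only when the appliance would otherwise be overwhelmed.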
A hybrid model is usually best suited for larger organizations with complex network topologies, multiple data centers, or applications split between cloud and on-prem. Ultimately, the decision of whether to implement one depends heavily on the particulars of your network configuration and needs.
Choosing the right deployment for you
As we’ve seen, multi-CDN strategies can create some fairly complex security challenges that arise from multiple traffic origins, coupled with the fact that CDN security is usually executed at the edge of the network. The solution, therefore, is to bring security back from the edge and into a centralized security layer that applies a uniform security policy to all traffic.
Whether such solutions should be implemented as premise-based, cloud-based, or hybrid solutions is dependent on the particulars of each organization’s network configuration and applications.
Part III of this series takes a broader look at CDN security, and how taking security out of the ‘edge’ can enhance application defenses in general.
Eyal is a Product Marketing Manager in Radware’s security group, responsible for the company’s line of cloud security products, including Cloud WAF, Cloud DDoS, and Cloud Malware Protection. Eyal has an extensive background in security, having served in an elite technological unit of the Israel Defense Forces (IDF). Prior to joining Radware, Eyal worked in Product Management and Product Marketing roles at a number of companies in the enterprise computing and security space, from small-scale startups to large corporations, affording him a wide view of the industry. Eyal holds a BA in Management from the Interdisciplinary Center (IDC) Herzliya and an MBA from the UCLA Anderson School of Management.
*** This is a Security Bloggers Network syndicated blog from Radware Blog authored by Eyal Arazi. Read the original post at: https://blog.radware.com/security/2017/10/approaches-for-securing-protecting-multi-cdn/