One of the biggest challenges we continue to see in the evolving cloud and DevOps world concerns security and standards in general.

The general lack of accumulated infrastructure knowledge, coupled with the enthusiasm with which DevOps teams like to experiment, is causing significant standardization challenges for corporations. This is leading to two primary symptoms:

  • The first is large cost increases, which is ironic because the promise of the cloud was cost savings. In reality, there is seldom anything but increased cost: buying and maintaining disparate systems, combined with frictionless procurement, drives cost expansion. Bear in mind that the very constraint the cloud removes goes hand in hand with cost increase: private data centers mean limited resources and long lead times to expand; the cloud means unlimited, readily available resources.
  • The second symptom is the breaking of standardization and the disaggregation of traditional tools. This is the thing I personally struggle with; the outcome of this change seems so illogical to me. How can companies countenance the lack of standards? Never mind the fact that in a growing number of companies, DevOps teams are now allowed to bypass purchasing and use true-up budgeting.


In the traditional IT world, companies owned their own infrastructure, and there were strongly defined roles inside the IT department. The infrastructure and security groups ran their respective domains alongside separate application/development groups. Each team had a defined role with well-known boundaries, and each group had high-level domain expertise in its respective practice area. On top of this, each group usually had the ability to dictate product and technology standards within its realm. This most often led to corporate standards and a unified vendor strategy.

As a group, we have choices to make, and it seems like the agility-first crowd is winning, but I believe there is a way to put agility first and still have standards. This is especially important in security, where common reporting and the use of standardized solutions should be paramount in order to protect corporate value.

The solution is proxy-based, or man-in-the-middle, security. For years now, manufacturers have been experimenting with offering security solutions as a service. The first area where they succeeded was URL filtering in the cloud (several service providers had previously tried offering clean-pipe service, embedding security into data links). I was an early believer in these solutions because they solved problems for disparate enterprises and usually offered a good ROI. However, these solutions were not initially aimed at the data center; they targeted client machines/employees or branch offices. The advent of cloud and the increasing complexity of security in several areas have led to an explosion of cloud-based security offerings in the past five years. Many of these new solutions specifically target corporate apps or the traditional core data center. They are architected to have all your inbound and outbound traffic redirected through their points of presence, where they can bring all their security capabilities to bear. Essentially, dirty (unscanned) traffic comes into their man-in-the-middle site; once it passes through them, it is, in theory, clean coming to and from you.
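To make the dirty-in/clean-out flow concrete, here is a minimal, purely illustrative sketch of the scrubbing step such a point of presence performs. The rule names and signatures are my own simplified examples, not any vendor's actual policy engine:

```python
# Illustrative sketch of the man-in-the-middle scrubbing step: unscanned
# ("dirty") traffic arrives at the provider's POP, is inspected against a
# central rule set, and only clean requests are forwarded to the origin.
# Signatures below are toy examples, not a real vendor policy.

BLOCK_SIGNATURES = {
    "sql_injection": "' OR 1=1",
    "path_traversal": "../../",
}

def scrub(request_body: str) -> tuple[bool, str]:
    """Return (forward?, reason). Dirty traffic is dropped at the POP."""
    for rule, signature in BLOCK_SIGNATURES.items():
        if signature in request_body:
            return False, f"blocked by rule: {rule}"
    return True, "clean: forwarded to origin"

# Every request, regardless of which cloud or data center hosts the
# origin, passes through the same scrub() policy. That location
# independence is what makes a single security standard enforceable.
print(scrub("GET /index.html HTTP/1.1"))   # a clean request
print(scrub("name=' OR 1=1 --"))           # an injection attempt, dropped
```

The point of the sketch is the single choke point: because every origin, in every cloud, sits behind the same policy, the standard lives in one place rather than in each environment.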


There is an elegance to this solution, especially in a world where disparate clouds and hybrid data centers are becoming the norm. Why do I think it makes sense? Because it is infrastructure- and location-independent. No matter where the data is, we can still proxy through the POP to attain the maximum security benefit. This makes it easier to follow a standard. Also, no procurement is required on a per-environment basis. In this way, the architecture offers benefits similar to what we used to have with our own big data centers and their limited ingress and egress points. In a lot of ways, this not only answers the frictionless-procurement requirement, it makes the whole discussion obsolete. If designed properly, this SaaS solution architecture replaces frictionless procurement with one-time procurement and makes security an embedded overlay function.

By design, this recreates an environment that is much simpler to manage and report on: a single global view of your entire environment (at least per solution set, although manufacturers/solution providers are bundling more and more functionality into their offerings) and a single place to manage that same environment. It offers the best of both worlds. Oftentimes it comes fully managed (so you benefit from the infrastructure knowledge so often missing from DevOps teams), it allows better-than-frictionless procurement in many cases, and it often lets you take advantage of operating expenditure (OPEX) instead of capital expenditure (CAPEX).

Having said that, smaller, single-data-center companies that can work within the basic parameters of a single cloud, in a single location/segment of that cloud, may still find it beneficial to stay with the cloud's native solutions. In this specific environment, the cloud's native products can work, but oftentimes a product that carries a common industry name, like WAF, may not actually meet the criteria for a WAF.

All in all, I think that if security, agility and standards are important to you, the proxy approach is the way to go.
