Just a couple of years ago, developers were using container technologies to quickly provision systems for their prototyping and testing. Today, enterprises are implementing multiple technologies such as Kubernetes for orchestration and complementary technologies such as serverless functions from all of the big cloud vendors, then deploying them continuously into production. We are riding the next significant wave in IT operations, one that follows the maturation of virtualization and cloud computing. But cloud-native computing is unique in several ways, and securing these newer architectures requires a new approach in terms of both technology and enforcement.
For example, whether deployed in a public or private model, one way the underlying tools and technology improve developer productivity is by reusing existing code from open source repositories or from prior projects developed in house. Without proper systems in place to catch vulnerabilities that are known at build time or discovered later on, this lack of detailed knowledge of the entire code base introduces risk.
At the same time, organizations strive to leverage portability to preserve multi-cloud platform choice (and the ability to switch among providers), optimizing costs and/or gaining new functionality. The use of containers, so-called “serverless” architectures and microservices provides greater scale, redundancy and isolation. Developers now continuously update applications (or components of applications) and push those updates live to add new functionality, support the latest devices or simply fix bugs.
There are a number of models for deploying cloud-native applications:
- Containers managed either in the data center or on a cloud provider’s servers.
- Serverless containers, in which the cloud provider manages the entire container infrastructure.
- Serverless functions, which eliminate concerns about where the service is running.
- Hybrid: many applications are already using combinations of these approaches, implementing each where they fit best.
Regardless of the underlying technology, securing these applications—especially when they operate on sensitive data—is critical. However, they each pose unique challenges when it comes to implementing a consistent security policy and monitoring the overall system.
First is the dynamic nature of the environment. As new services are automatically provisioned to meet demand, the security team may no longer have adequate time to evaluate risks and provide late-stage guidance to ensure compliance. The window to properly review the application and its infrastructure is much shorter, as is the time for overall systems testing, since services are updated independently on a much more frequent schedule.
Another challenge is the loss of complete control over the physical network infrastructure as services are moved across data centers in different locations, or when the IT operations and security teams don’t know where they are running, as is the case in serverless models.
Traditional security tools cannot handle the velocity, scale and dynamic networking capabilities of containers and serverless infrastructure. Adapting to this new reality requires supporting three key requirements:
- Integration into the build process, or what some call “shift left,” to identify issues early and prevent vulnerable code from being introduced in the first place.
- Implementation of tight controls at the application level, for example by using whitelisting and baselining approaches to tightly limit the ability of services to behave in a way that is not consistent with their intent.
- Consistent rollout and enforcement of policies, regardless of the organization’s choice of cloud provider or technology “stack.”
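The “shift left” requirement above can be sketched as a build-time gate. This is a minimal illustration, not a real scanner: the advisory set and package names are hypothetical stand-ins for a real vulnerability feed, and the parser only handles simple `name==version` pins.

```python
# Minimal "shift left" sketch: block a build when a pinned dependency
# matches a known-vulnerable version. KNOWN_VULNERABLE is a hypothetical
# stand-in for a real advisory feed.
KNOWN_VULNERABLE = {
    ("examplelib", "1.0.2"),    # hypothetical advisory
    ("legacyparser", "0.9.0"),  # hypothetical advisory
}

def parse_requirements(text):
    """Parse 'name==version' lines into (name, version) tuples."""
    pins = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, _, version = line.partition("==")
        pins.append((name.strip().lower(), version.strip()))
    return pins

def scan(requirements_text):
    """Return pinned packages that appear in the advisory set."""
    return [pin for pin in parse_requirements(requirements_text)
            if pin in KNOWN_VULNERABLE]

if __name__ == "__main__":
    sample = "examplelib==1.0.2\nrequests==2.31.0\n"
    findings = scan(sample)
    if findings:
        print(f"build blocked: {findings}")
```

In a real pipeline the same check would run in CI against an advisory database, failing the job before a vulnerable image can ever be pushed to a registry.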
Cloud-native environments offer several traits that make them easier to secure if managed correctly. Containers are created as images and deployed, rather than installed, and are meant to be immutable—i.e., not undergo any changes or patching in runtime and only be refreshed from new images. Containers are often used in microservices applications, which means that each container performs a simple function. And finally, container images are declarative—meaning that their contents can be inspected to learn their intended use.
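The declarative point can be made concrete: a container’s intended behavior is readable straight from its image definition. The toy parser below extracts the ports, entrypoint and user a Dockerfile declares; the instruction handling is deliberately simplified, and the Dockerfile itself is an invented example.

```python
# Sketch: read a service's declared intent from its (declarative)
# Dockerfile. A policy engine could treat these values as the expected
# surface of the container. Only a few instructions are handled here.
def parse_dockerfile(text):
    intent = {"ports": [], "entrypoint": None, "user": "root"}
    for line in text.splitlines():
        parts = line.strip().split(None, 1)
        if len(parts) != 2:
            continue
        instruction, args = parts[0].upper(), parts[1]
        if instruction == "EXPOSE":
            intent["ports"].extend(args.split())
        elif instruction == "ENTRYPOINT":
            intent["entrypoint"] = args
        elif instruction == "USER":
            intent["user"] = args
    return intent

dockerfile = """\
FROM python:3.12-slim
EXPOSE 8080
USER app
ENTRYPOINT ["python", "server.py"]
"""
print(parse_dockerfile(dockerfile))
```

A container declared this way that later listens on a different port, or runs as a different user, is immediately suspicious.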
This creates an opportunity for organizations to adopt a new type of security model—one that examines container images or functions as they are created, vets their contents and then enforces immutability in runtime, not allowing any changes compared to the original images.
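Enforcing immutability in runtime amounts to comparing what is running against the digest of the vetted image. Real systems compare layer digests recorded in the registry; the content-hash sketch below is a simplified stand-in for that mechanism.

```python
import hashlib

# Sketch of runtime immutability enforcement: record a digest of the
# vetted image at deploy time, and refuse anything that drifts from it.
# Hashing raw bytes here stands in for comparing registry layer digests.
def digest(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

def verify_immutable(deployed_digest: str, running_content: bytes) -> bool:
    """Return True only if the running content still matches the image."""
    return digest(running_content) == deployed_digest

original = b"app-binary-v1"
d = digest(original)
print(verify_immutable(d, original))                   # untouched: allowed
print(verify_immutable(d, b"app-binary-v1-patched"))   # drifted: blocked
```

Because updates are only ever shipped as new images, any in-place change to a running container is by definition unauthorized and can be blocked automatically.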
In addition, the relative simplicity of these microservices allows for behavioral profiling, which can be used to impose a whitelist of permitted resources and actions, ensuring that each service only performs normal or approved functions. Machine learning techniques can model the application at runtime and flag suspicious activity that drifts from normal activity. The focus moves from specific malware and attack vectors to what is permitted behavior.
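The profiling idea can be sketched in a few lines: learn the set of actions a service performs during a trusted baseline run, then flag anything outside that set at runtime. The event names below are illustrative, not the API of any real monitoring agent.

```python
# Sketch of behavioral whitelisting: baseline a service's observed
# (process, action) events, then treat anything outside the learned
# set as drift. Event names are illustrative placeholders.
class BehaviorProfile:
    def __init__(self):
        self.allowed = set()

    def learn(self, events):
        """Add events observed during a trusted baseline run."""
        self.allowed.update(events)

    def check(self, event):
        """Return True if the event matches the learned baseline."""
        return event in self.allowed

profile = BehaviorProfile()
profile.learn([("nginx", "open:/etc/nginx/nginx.conf"),
               ("nginx", "listen:80")])

print(profile.check(("nginx", "listen:80")))     # within baseline
print(profile.check(("nginx", "exec:/bin/sh")))  # drift -> alert
```

Because each microservice does one simple thing, its baseline is small and stable, which is exactly what makes this whitelist approach practical where it would be hopeless for a monolith.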
Information security professionals are not strangers to the demands of securing new technologies. This entirely new approach to application security for cloud-native environments creates a highly controlled environment where the attack surface is greatly reduced before the application is deployed, and then during runtime it’s easy to detect and automatically respond to anomalies in a very granular way. The net result is that the next generation of applications is more secure and more reliable—all while reaching the market faster than ever.