Some of the most common security challenges organizations face today can be boiled down to a breakdown in communication between development, operations and security teams, with security often being the last to know about changes. Container technology can play an important role in how these teams work together to address security concerns.
Containers have broad appeal because they allow developers to easily package an application and all its dependencies into a single image that can be promoted from development to test and production—without change.
Containers make it easy to ensure consistency across environments and multiple deployment targets such as physical servers, virtual machines (VMs) and private or public clouds. This helps Ops teams more easily manage the applications that deliver business value.
This improved packaging means easier deployments, with less back and forth between Dev and Ops. Developers are happier, because the likelihood of problems being introduced due to misconfigured systems is significantly reduced. Ops teams are happier because they have a more standardized set of server configurations, making it easier to manage and secure the servers.
The best practice for patching or updating containerized applications is to rebuild the application container and redeploy. Following this best practice means that a record of any and all changes made is available through the CI/CD tools and process. And, since applications are never patched on a running system, but only rebuilt and redeployed, Ops teams also have the ability to limit access to production servers. No more ssh needed. This makes security teams happier.
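As a sketch of what rebuild-and-redeploy looks like in practice (the image name, package manager and paths here are hypothetical), a patch becomes an edit to the build definition that CI/CD turns into a fresh image:

```dockerfile
# Hypothetical build definition. To patch the application or its
# dependencies, change this file (or the base image) and let the
# CI/CD pipeline rebuild and redeploy -- never patch a running container.
FROM registry.example.com/base/os-minimal:8

# Apply the latest security errata at build time, not in production.
RUN microdnf update -y && microdnf clean all

# Layer the application content on top of the patched base.
COPY app/ /opt/app/
USER 1001
CMD ["/opt/app/run.sh"]
```

Because every change flows through this file and the pipeline, the image history doubles as the change record the security team wants to see.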
At the end of the day, containers change how we develop, deploy and manage applications. This provides measurable benefits, but as with any change, comes with its own set of challenges.
Securing Containers Throughout the Layers
Securing containers is a lot like securing any running process. Teams need to think about security throughout the layers of the solution stack before deploying and running containers. And while you can deploy and secure containers with a DIY solution, it’s a lot of work. Especially when working with containers at scale, you want an enterprise container platform that makes life easier for your application team and your infrastructure team.
You can build your own container management environment, or you can use a container orchestration platform such as Kubernetes, which automates scheduling and running application containers on clusters of physical servers or VMs. Because the orchestrator directs the container runtime environment, it’s a crucial part of maintaining a secure container infrastructure and needs to be treated as such.
Control, Defend, Extend
If you think about securing containers throughout the major layers of an enterprise container solution, you can break it down into three manageable buckets: You need to control the security of your containerized apps, defend your container platform infrastructure and extend the security of your overall solution by leveraging tools from the broader security ecosystem.
Managing security is a continuous process. As applications are deployed or updated, it’s critical to provide dynamic security controls to keep the business safe. Organizations need a platform that enables a software supply chain with security controls built-in and enables the development and operations teams to defend and extend their application platform throughout the complete application life cycle without reducing developer productivity.
There are several things organizations can do to take control of container security, some before application development even begins.
First, make sure to start with trusted content. These days applications typically include many open source components, such as the Linux operating system, Apache Web Server, JBoss Enterprise Application Platform and Node.js. Containerized versions of these packages are readily available, so that you don’t have to build your own. But, as with any code you download from an external source, you need to know where the packages originally came from, who built them and whether there’s any malicious code inside them. Use certified images that are maintained and updated when new security vulnerabilities are discovered.
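One way to enforce trusted content at build time is to reference base images by an immutable digest instead of a mutable tag, so a build can never silently pick up a different, unvetted image. A minimal sketch, where the registry path and digest are placeholders:

```dockerfile
# Pinning by digest: the build only succeeds if the registry serves
# exactly the image that was vetted. The digest below is a placeholder.
FROM registry.example.com/certified/base@sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
```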
Next, consider that your teams build containers that layer content on top of the public container images you download. You need to manage access to—and promotion of—the downloaded container images and the internally built images in the same way you manage other types of binaries. A number of private registries support storage of container images, so selecting a private registry that helps you automate policies for the use of the images it stores is the safest bet.
The CI/CD pipeline is at the core of a secure software supply chain. Organizations need to build security gates into the pipeline so that, as processes are automated, each build is checked against up-to-date component versions and known vulnerabilities before it is promoted.
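A security gate can be as simple as a script that reads an image scan report and fails the pipeline when findings exceed a severity threshold. The function name and report format below are assumptions for illustration, not any particular scanner's output:

```python
# Sketch of a CI/CD security gate: fail the pipeline if an image scan
# reports any vulnerability above an allowed severity. The report
# structure here is a hypothetical example, not a real scanner format.

SEVERITY_RANK = {"LOW": 1, "MEDIUM": 2, "HIGH": 3, "CRITICAL": 4}

def gate_passes(scan_report: dict, max_allowed: str = "MEDIUM") -> bool:
    """Return True only if no finding exceeds the allowed severity."""
    threshold = SEVERITY_RANK[max_allowed]
    for finding in scan_report.get("findings", []):
        severity = SEVERITY_RANK.get(finding.get("severity", "LOW"), 1)
        if severity > threshold:
            return False
    return True

if __name__ == "__main__":
    report = {"findings": [{"id": "CVE-2018-0001", "severity": "HIGH"}]}
    print(gate_passes(report))  # HIGH exceeds MEDIUM, so the gate fails
```

In a real pipeline, a non-passing result would stop the image from being pushed to the registry or promoted to the next environment.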
The next step is defending your container deployment, which includes ensuring that both the host OS and container orchestration platform are secure.
The host OS: Securing enterprise use of containers requires an OS that can manage multi-tenancy of the container runtime.
Use a host OS that can secure containers at the boundaries, securing the host kernel from container escapes and securing containers from each other.
SELinux is the brick wall at the OS layer that stops bad things from happening if a container process or user manages to break out of the Linux namespace abstraction.
You can further enhance security and minimize the attack surface for your applications and infrastructure by using a minimal, container-optimized host OS.
Container orchestration: There are a number of capabilities that can and should be implemented in your container platform. For example:
API access control (authentication and authorization) is critical for securing your container platform. Use Kubernetes Role-Based Access Control (RBAC) to ensure each user has access only to the commands and data needed for their role.
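A minimal RBAC sketch, with namespace, role and user names as placeholders: a Role granting read-only access to pods in one namespace, bound to a single user.

```yaml
# Placeholder names throughout; grants read-only pod access in one namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: app-team
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: app-team
  name: read-pods
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```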
Deploying modern microservices-based applications in containers often means deploying multiple containers distributed across multiple nodes. With network defense in mind, you want a network plugin that supports Kubernetes Network Policies for fine-grained, policy-based control and isolation at the level of individual pods.
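A Kubernetes NetworkPolicy sketch of that kind of pod-level isolation (labels, namespace and port are placeholders): only pods labeled as the frontend may reach the backend pods, and only on one port.

```yaml
# Placeholder labels and namespace; denies all other ingress to backend pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: app-team
spec:
  podSelector:
    matchLabels:
      role: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 8080
```

Note that a policy like this only takes effect if the cluster's network plugin actually enforces NetworkPolicy objects.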
Use pod security policies (beta in Kubernetes 1.10) to minimize the risk of breakouts by taking advantage of enterprise OS features such as SELinux, Linux capabilities and secure computing (seccomp) profiles.
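A sketch of a restrictive pod security policy, using the beta API mentioned above: no privileged pods, no privilege escalation, all Linux capabilities dropped, non-root users required and the runtime's default seccomp profile applied.

```yaml
# Restrictive PodSecurityPolicy sketch (policy/v1beta1, per the beta API above).
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
  annotations:
    seccomp.security.alpha.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  allowPrivilegeEscalation: false
  requiredDropCapabilities: ["ALL"]
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes: ["configMap", "secret", "emptyDir", "persistentVolumeClaim"]
```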
Configure etcd so that secrets are encrypted at rest.
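This is done by handing the API server an encryption configuration; a sketch follows, noting that the exact API version and command-line flag vary across Kubernetes releases, and the key value is a placeholder you must generate yourself.

```yaml
# Sketch of an encryption configuration for the Kubernetes API server
# (passed via the API server's encryption-provider-config flag) that
# encrypts Secret objects at rest in etcd with AES-CBC.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: <base64-encoded 32-byte key>   # placeholder, never commit a real key
  - identity: {}
```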
Finally, you need to extend your Kubernetes deployment with a broad ecosystem of security tools, such as PAM solutions, external vaults, web application firewalls (WAF) and SIEM systems, through standard interfaces and APIs.
Any container platform you choose should provide a wide network of partners whose solutions are certified as secure and compliant with those of the provider. It should also offer custom connections to tools you’re already using.
Of course, implementing containers isn’t just about security; your container platform needs to provide an experience that works for both your developers and your operations team. You need a secure, enterprise-grade, container-based application platform that supports application developers and operators without compromising the functions needed by either team, while also improving operational efficiency and infrastructure utilization.