Need to get up to speed quickly with containers, Kubernetes and security for work, but have been putting it off for a while? Why do today what you can put off until tomorrow? You’re not alone. As a security manager, my day-to-day responsibilities sometimes pull me away from planning for the future and keeping up my technical chops. So documenting my quick dive into containers and their application in security kills two birds with one stone.
As with any educational pursuit, creating a solid foundation from which to build is the key. That meant understanding what containers are, why they are needed, the security benefits and some key definitions. So let’s begin.
So what are containers?
I attempted to google a one-sentence definition to quickly understand containers; this was fruitless.
After researching for a while, combining numerous results into a simple explanation led to this:
“A common problem is having to worry about running applications, say a web site, on different computing environments that may not have everything to run that site, say a web server and a database. This is done by packaging all of the requirements for your application inside of a single self-contained entity. Everything then runs inside its own custom runtime environment called a container.”
This sounds like normal software that you download and run, but containers go a step further and can be thought of as similar to virtual machines but stripped down. And even better, this can include the networking as well. A very well-known container platform is Docker. They describe the technology as a way to “package software into standardized units for development, shipment and deployment.”
Docker further explains, “A container image is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings. Available for both Linux and Windows based apps, containerized software will always run the same, regardless of the environment.”
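To make that packaging idea concrete, here is a minimal, hypothetical Dockerfile for a small Python web application (the file names and application are my own assumptions, not from Docker's documentation):

```dockerfile
# Start from a trusted base image that provides the runtime
FROM python:3.11-slim

WORKDIR /app

# Package the application's dependencies into the image
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Package the application code itself
COPY . .

# Define how the container runs the application
CMD ["python", "app.py"]
```

Building this with `docker build` produces a single image containing code, runtime, libraries and settings, which then runs the same way on any host with `docker run`.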
Another key technology to understand is Kubernetes. Thankfully, their website offers this description: “Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.”
Containers package all of the requirements to run an application from the source code to libraries and even networking. Kubernetes helps you manage those containers, so that from one single source of truth you can run many.
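That “single source of truth” is typically a declarative manifest. A hypothetical Kubernetes Deployment sketch (the names and image are illustrative assumptions) from which Kubernetes runs and maintains three identical copies of a container might look like:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3            # run many containers from one spec
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: myapp:1.0   # hypothetical image name
        ports:
        - containerPort: 8080
```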
History of Developing Applications
To understand the importance of containers, we must look at the normal way that a business would build applications prior to containers.
First of all, there is the build pipeline. This is how developers get a repeatable way of producing consistent, deployable artifacts and ensuring the code has integrity. Automated testing allows for quality assurance. Combining these, you get ‘continuous integration’, which comes with many operational benefits.
Applications are commonly deployed on old, static infrastructure that rarely changes over the years; as one example, I found many legacy operating systems during penetration tests. This is where the idea of infrastructure as code came from. Infrastructure as code can be seen as cloud computing plus programmable configuration management: with cloud computing, a whole infrastructure can be brought up and torn down with nothing but scripts.
Now what about release management? Keeping the infrastructure between testing and production similar is extremely important, albeit difficult to do. It is much easier to update an environment when customers or users will not be affected. Production is another story.
I know this from first-hand experience: I once took down a web application in a production environment, and the client was the opposite of happy.
These improvements in developing applications have matured into agile development.
So why are organizations moving towards containers?
Containers, such as Docker containers, allow for smoother operations. They help with all of the above, from initial setup to deployment and running at scale. They are generally smaller and inherently more portable than virtual machines, and therefore much more efficient with your resources. For these reasons, containers are extremely valuable and change the way organizations develop applications.
Are containers more secure?
Docker has a white paper, “Introduction to Container Security,” that is full of great information. What I found was that, in general, containers are more secure by default. That is, if we ignore kubelet’s anonymous access being enabled by default, which leads to attacks like “Backdooring through kubelet,” but I digress.
Security benefits include:
- Isolation of applications
- Isolation from the host
- Install only what you need
- Least privilege
Escalation of privileges is a common step in penetration testing. It is mostly abused through incorrect permissions on files, software and users. For example, a normal Domain User may also be a Domain Admin, so dumping their credentials would allow compromise of the domain. Another example is abusing a Linux application that runs with root permissions but can be launched by a standard user. Containers help reduce these attacks. SELinux is great for this and is discussed further on.
So what helps containers be secure?
When ‘googling’ around, I came across two concepts that help make containers secure by default (in Linux): cgroups and namespaces, both inherited from advances in the Linux kernel. These are not new concepts, but within isolated containers they become much more powerful and fine-grained.
Cgroups (short for control groups) allow system administrators to allocate resources such as CPU time, system memory and network bandwidth. I like to think of cgroups as the settings in VirtualBox, where you can limit the system memory a VM has, the network it is on, and so on. I did the usual Wikipedia search for cgroups but found that a Red Hat guide was more helpful this time around.
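As a small illustration, cgroup membership is visible to any Linux process, and container runtimes expose cgroup limits through run flags (the docker flags below are a hypothetical example, left commented out because they require Docker to be installed):

```shell
# Every Linux process already lives in a set of cgroups; inspect the
# current shell's membership without any special privileges:
cat /proc/self/cgroup

# Hypothetical illustration (requires Docker): these flags are translated
# into cgroup limits on the container's memory and CPU share:
#   docker run --memory=256m --cpus=0.5 nginx
```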
Namespaces are used to partition kernel resources so that different ‘types’ of resources cannot see each other. I think of these as little virtual machines: processes do not know of each other because they are completely isolated, just as one VM does not know another is running unless they are connected via the network. Namespaces are not just for processes (PID); there are many different types, including Linux network namespaces.
A deep dive into the different types of namespaces can be found in the article “Separation Anxiety: A Tutorial for Isolating Your System with Linux,” which provides a nice diagram. For example, the article teaches that in the PID namespace, a child namespace holds the child processes, while the root namespace that holds the root processes has full view of the child namespaces. My concern is whether the root process in a child namespace can see out towards the parent processes, and whether that is exploitable.
However, as far as the processes in the child namespace are concerned, they are the only processes. This is the same premise as a chroot jail.
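A quick way to see namespaces on any Linux machine is to list a process's namespace memberships; a hypothetical `unshare` demo of PID isolation is included as a comment, since it typically needs root:

```shell
# Each process's namespace memberships appear as symlinks under
# /proc/<pid>/ns; listing them shows the different namespace types
# (pid, net, mnt, uts, ipc, user, ...):
ls -l /proc/self/ns

# Hypothetical demo (usually needs root): start a shell in fresh PID and
# mount namespaces, where `ps` sees only the processes inside them:
#   sudo unshare --pid --fork --mount-proc bash -c 'ps aux'
```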
SELinux (Security-Enhanced Linux)
SELinux is worthy of its own section because of its importance. SELinux is an implementation of Mandatory Access Control, Multi-Level Security (MLS) and Multi-Category Security (MCS). Like the previous two security protections, it lives within the Linux kernel.
The way I think of SELinux is like using Linux in ‘constrained mode’. It takes away what’s known as Discretionary Access Control (DAC), the idea that each individual user can decide on the security of the processes, files and other objects they own. The problem with DAC is that inexperienced users may inadvertently sacrifice security for the sake of usability. SELinux takes that ability away from the non-security-aware user and enforces a minimum level of security. The Windows analog might be considered Device Guard.
The value I see in SELinux, in addition to cgroups and namespaces, is that it lets files be labeled with categories, allowing different users to access those files but not the enclosing directory structure. I think of this like Google Drive, where you can view someone’s document but have no idea which folder it sits in. This is accomplished through Multi-Category Security, which further restricts DAC. “Learning Docker” by Vinod Singh, Jeeva S. Chelladhurai and Pethuru Raj uses the example of a label category Company_Confidential. This allows finer-grained access control to documents.
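The category labeling described above can be sketched with a couple of commands; the file name, category values and image are my own illustrative assumptions, and the commented commands only enforce anything on an SELinux-enabled host such as Fedora or RHEL:

```shell
# Report whether SELinux is active on this host (prints a fallback
# message on systems without SELinux tooling):
getenforce 2>/dev/null || echo "SELinux not available"

# Hypothetical MCS sketch: label a file with categories, then run a
# container confined to the same MCS level, so its processes can only
# read files carrying those categories:
#   chcon -l s0:c100,c200 secret.txt
#   docker run --security-opt label=level:s0:c100,c200 -it fedora bash
```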
Other Security Principles Matter
Throughout my learning I began to further understand the value of containers, especially to reduce vulnerabilities during a development lifecycle.
However I do believe in defense in depth and still value the old, tried and true security principles (or ‘security pillars’ as they could be seen). You will know of these from security frameworks such as CIS controls, the NIST framework and others. I do have a love affair with the Top 5 CIS controls, just because they make so much sense and are extremely foundational:
CSC 1: Inventory of Authorized and Unauthorized Devices
CSC 2: Inventory of Authorized and Unauthorized Software
CSC 3: Secure Configurations for Hardware and Software on Mobile Devices, Laptops, Workstations, and Servers
CSC 4: Continuous Vulnerability Assessment and Remediation
CSC 5: Controlled Use of Administrative Privileges
I find the following items are considered best practices for setting up container security for success:
- Ensuring images are from a trusted registry
- Scanning images for vulnerabilities prior to deployment
- Hardening the host
- Logging and monitoring of both the host and containers, allowing lessons learned when bugs or vulnerabilities are found
- Running containers and services as non-root
- A defense-in-depth mindset
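Several of the practices above can be sketched in a single hypothetical Dockerfile (the base image, package and user names are illustrative assumptions):

```dockerfile
# Pin a specific tag from a trusted registry rather than :latest
FROM alpine:3.19

# Install only what you need
RUN apk add --no-cache python3

# Create and switch to an unprivileged user so the service runs as non-root
RUN adduser -D appuser
USER appuser

CMD ["python3", "-m", "http.server", "8080"]
```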
My thought is that automation is fantastic for efficiency, but we need to consider security. Containers are extremely beneficial, but from a security perspective almost any automation is a double-edged sword: if a vulnerable image is deployed, every container running that image will share the same vulnerability.
An issue I always think of is key management. I remember a talk by Ken Johnson and Chris Gates entitled “DevOops Redux” from DerbyCon 2016, where they discuss finding root AWS keys uploaded to GitHub. The same premise applies when organizations adopt Docker or the cloud because it’s ‘safer’ and pushes security into someone else’s responsibility (totally faulty logic, but that’s a rant for another time and place), resulting in insecure keys. This includes securing the credentials, keys and secrets used to access containers. Essentially, this creates more ‘keys to the kingdom’ that, once compromised, make all other compensating controls moot.
But what about Kubernetes?
Kubernetes has become one of the most popular tools for managing containers. It has head nodes and worker nodes. The head node is the ‘god’: it manages, deploys and configures containers. Worker nodes run the containers according to what the head node tells them.
“Security Best Practices for Kubernetes Deployment” begins with security principles similar to those above, and gives an example regarding user permissions.
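A hypothetical sketch of such an explicit permission policy, using Kubernetes RBAC (the namespace, role and user names are illustrative assumptions):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader          # role that can only read Pods in "dev"
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
- kind: User
  name: jane                # hypothetical user granted the role
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```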
Being explicit with user permissions in this way is a great practice.
As with Infrastructure as Code mentioned above, Kubernetes allows network policies and segmentation to be set via ‘code’.
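As a hypothetical sketch of segmentation as ‘code’, a Kubernetes NetworkPolicy that only lets frontend pods reach backend pods on a single port (the labels and port are illustrative assumptions) might look like:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  podSelector:
    matchLabels:
      role: backend          # policy applies to backend pods
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend     # only frontend pods may connect
    ports:
    - protocol: TCP
      port: 8080
```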
As an auditor, it is much easier to review the Kubernetes config than to dig through firewalls, Windows hosts and the like to find the settings.
There is certainly great value, for both operations and security, in moving development to containers (namely Docker) and managing them with Kubernetes. Moving applications to isolated and hardened containers reduces the attack surface thanks to cgroups, namespaces and SELinux. But do not forget that basic security principles should be the foundation. Containers are fantastic but not a silver bullet for security. They may be inherently more secure from a philosophical viewpoint, but, as with anything, reality has a way of proving us wrong. In the end, the responsibility is yours. Hopefully this quick dive into containers, Kubernetes and security helps set a solid foundation.
Haydn Johnson advocates Purple Teaming principles as a powerful methodology for improving intra-organizational security and relationships. Having recently moved to internal security, he uses the offsec mindset to create impactful change within his organization. Committed to learning and sharing his skills, he has spoken at multiple conferences in America and Canada, and has published multiple online articles on offensive security. Haydn has a Masters in Information Technology, the OSCP and GXPN certifications. Originally hailing from Australia, Canada is now called home.
Container image source: Shipping containers Birthday, 26 April 1956
*** This is a Security Bloggers Network syndicated blog from The Ethical Hacker Network authored by Haydn Johnson. Read the original post at: http://feedproxy.google.com/~r/eh-net/~3/wyILhkzl0Ao/