It is not often that one runs into a situation that so purely fits a classic stereotype. Securing and monitoring Docker containers happens to be one of those conundrums: a textbook example of a “damned if you do, damned if you don’t” setup. On the surface, securing and monitoring containers seems like a straightforward affair: treat containers like mini virtual machines and run your security/monitoring agents in each container; or treat them like processes running on the host OS, and run your security/monitoring agents on the host OS. Sounds simple enough. However, both options run into some surprisingly knotty difficulties.
Running the agent in the container
Running security/monitoring/other agents in a container immediately runs into the problem that containers are designed to run a single process (which may spawn child processes). Once we employ our typical cleverness with shell scripts to circumvent the single-process-per-container limit, we realize why that constraint was created in the first place. Containers represent modular components, many sourced from third parties, and are thus effectively sealed; there is no way to modify their structure. Moreover, instantiating an entire security/monitoring solution for what amounts to securing and monitoring a single process invariably ends up consuming far more resources than the process being secured/monitored. It is easy to see that this is not a practicable solution.
So, just install on the host OS…right?
Installing a security/monitoring solution on the host OS that runs all the containers seems much more promising. After all (and yes, I am oversimplifying here), containers are ordinary processes separated into groups through tagging mechanisms supported by the kernel, namely cgroups and namespaces. A sufficiently well-engineered security/monitoring solution should be able to secure and monitor all the containers hosted by the host OS. It all actually works, until someone on your DevOps team decides to move to a managed orchestration engine like Google Kubernetes Engine, or the soon-to-be-released AWS EKS. Suddenly, you lose all access to the host OS, and with it, your ability to secure and manage containers.
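The claim that containers are just kernel-tagged processes is easy to verify from a shell on any Linux host. A minimal sketch (run from any process; on a Docker host, a containerized process's cgroup paths would additionally include the container ID):

```shell
#!/bin/sh
# Containers are ordinary processes: the kernel records each process's
# group membership (cgroups) and isolation contexts (namespaces) under
# /proc. Inspecting our own PID shows both.
pid=$$

echo "cgroup membership for PID $pid:"
cat "/proc/$pid/cgroup"

echo "namespaces for PID $pid:"
ls "/proc/$pid/ns"

# For a process inside a Docker container, the cgroup file would show
# paths containing the container ID, e.g. .../docker/4f3a2b...
```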
Osquery in a DaemonSet container
The answer to this lies in using Kubernetes’ DaemonSet construct, along with an intelligently designed endpoint agent like Facebook’s osquery. A DaemonSet is a Kubernetes construct that guarantees a copy of a given Pod (a Pod is a collection of containers, in Kubernetes terminology) runs on each node (a node is a host OS that is part of the Kubernetes cluster and available to run one or more Pods). Osquery, for its part, is an endpoint agent that is unique in that it has been designed to surface any type of system data as a set of normalized SQL tables. As opposed to endpoint agents purpose-built for an application like vulnerability management or intrusion detection, osquery is a general-purpose agent that is very good at systematically presenting system data for whatever purpose the user has in mind. This focus on efficiently collecting any and every type of system data makes osquery a particularly natural choice for collecting system data pertaining to containers as well.
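For the general shape of the approach, a minimal DaemonSet manifest might look like the following sketch; the names, namespace, and image reference are hypothetical placeholders, not values from the article:

```yaml
# Sketch of a DaemonSet that schedules one osquery container per node.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: osquery            # hypothetical name
  namespace: monitoring    # hypothetical namespace
spec:
  selector:
    matchLabels:
      app: osquery
  template:
    metadata:
      labels:
        app: osquery
    spec:
      containers:
      - name: osquery
        image: example.com/osquery:latest   # hypothetical image
```

Applying this with `kubectl apply -f` would place one copy of the agent on every node in the cluster, including nodes added later.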
Osquery has been designed for collecting system metrics at the OS level (hence its name). It does this by accessing the native capabilities and system APIs of the OS, and then casting the returned data into a set of SQL tables. On Linux systems, for example, it retrieves information about processes by reading the /proc filesystem and its typical child nodes.
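As a concrete illustration, osquery’s standard `processes` table can be queried with plain SQL; a scheduled query in an osquery configuration file might look like this (the query name and interval are arbitrary examples):

```json
{
  "schedule": {
    "running_processes": {
      "query": "SELECT pid, name, path, cmdline FROM processes;",
      "interval": 60
    }
  }
}
```

The same statement can be run interactively from the `osqueryi` shell, which is a convenient way to explore the available tables before scheduling anything.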
When osquery is run in a DaemonSet container, of course, it sees the “virtualized” version of /proc that is limited to the processes running in its own container. However, by mounting the host OS’s /proc filesystem into the container running osquery, and by suitably extending osquery to look at this new mount point for process information, it becomes possible for osquery to see the processes running on the host OS, even though osquery is running in its own container. And since all other containers running on the host OS are themselves processes, osquery running in a DaemonSet container can examine and monitor processes running in other containers on the same host OS as the DaemonSet. This idea of mounting host OS filesystems into the container namespace, and then pointing osquery at the mount points for system information, generalizes: for example, you can mount the host OS’s /dev filesystem into osquery’s container space and modify osquery to look at host devices through this mount point.
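The /proc mount described above can be expressed in the DaemonSet’s Pod spec with a read-only hostPath volume. This is a sketch: the mount path `/host/proc` is an arbitrary choice, and the premise that osquery has been extended to read process data from that path (rather than from `/proc`) follows the modification described above, not stock osquery behavior:

```yaml
# Pod spec fragment: expose the node's real /proc inside the agent container.
spec:
  containers:
  - name: osquery
    image: example.com/osquery:latest   # hypothetical image
    volumeMounts:
    - name: host-proc
      mountPath: /host/proc             # patched osquery reads here
      readOnly: true
  volumes:
  - name: host-proc
    hostPath:
      path: /proc                       # the host OS's /proc
```

The same pattern (a hostPath volume plus a matching volumeMount) applies to /dev or any other host filesystem you want the agent to observe.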
Of course, things are a bit more complicated than that – there are issues relating to permissions and so on, which have to be suitably resolved. More about that in a future article.
For now, I’d encourage you to explore the general idea of solving critical security, monitoring, compliance, and other management challenges in a containerized environment by modifying and running osquery in a DaemonSet.
I gave a presentation on this topic recently at QueryCon. Here are the slides for additional context:
Have questions about how or why this could work for you? Post a comment below or in the osquery slack channel and tag @milans100 and I’d be happy to chat. I’ll also be at the Uptycs booth (E7) at DockerCon June 12-15th.
*** This is a Security Bloggers Network syndicated blog from Uptycs Blog authored by Milan Shah. Read the original post at: https://www.uptycs.com/blog/securing-containers-running-in-hosted-orchestration-services