Managing Security and Privacy in the Public Cloud

As with any enterprise technology, there are benefits and challenges when creating IT environments in the public cloud. The benefits include cost savings and the ability to easily scale up and down, to name a few. The challenges are a double-edged sword: the reach of the public cloud enables massive scale, but it also raises regional privacy issues, such as complying with GDPR, and related visibility challenges (a company wouldn’t want to accidentally collect private information from the UK and then display it on a dashboard in Texas). Companies should look for monitoring tools that surface business-critical information in a way that doesn’t break any laws.
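
As a minimal sketch of what that kind of guardrail could look like (the event shape, region tags and allowed-region list here are hypothetical, not from any particular product), a dashboard backend might filter telemetry by its origin region before rendering it:

```go
package main

import "fmt"

// Event is a hypothetical telemetry record tagged with the region
// in which it was collected.
type Event struct {
	Name   string
	Region string // e.g. "eu-west-2" (London), "us-east-1"
	Value  float64
}

// filterByResidency keeps only events whose origin region is approved
// for display in the viewer's jurisdiction, so EU-collected data is
// never accidentally rendered on, say, a dashboard in Texas.
func filterByResidency(events []Event, allowed map[string]bool) []Event {
	var out []Event
	for _, e := range events {
		if allowed[e.Region] {
			out = append(out, e)
		}
	}
	return out
}

func main() {
	events := []Event{
		{"login_latency_ms", "eu-west-2", 120},
		{"login_latency_ms", "us-east-1", 95},
	}
	// A US-based dashboard is only approved to show US-collected data.
	usView := filterByResidency(events, map[string]bool{"us-east-1": true})
	fmt.Println(usView)
}
```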

Cloud technology’s evolution over the last decade culminated in containers, which in many ways exacerbated these challenges (for starters, there are more moving pieces to monitor). While studies conclude that container adoption has begun to plateau, the same can’t be said for technologies at the orchestration layer (namely Kubernetes, for which native adoption is up 43%), which is where organizations should focus their investment. By moving up a level from packaging, organizations make the format itself less important: as long as they interact with the orchestration layer in a consistent manner, it doesn’t matter how workloads are packaged. Another reason to focus on the orchestration layer is that containers tend to present their own challenges when it comes to security.
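
One way to read “interact with the orchestration layer in a consistent manner” is to code against a single deployment surface and let the packaging vary underneath it. A rough sketch of that idea, with an invented Workload type and orchestrator name rather than any real client library:

```go
package main

import "fmt"

// Workload describes what to run, independent of how it is packaged.
type Workload struct {
	Name     string
	Image    string // an OCI image, a lightweight-VM rootfs, etc.
	Replicas int
}

// Orchestrator is the one consistent surface teams interact with;
// the packaging and runtime details live behind it.
type Orchestrator interface {
	Deploy(w Workload) error
}

// kubernetesOrchestrator stands in for a real client (e.g. one built
// on client-go); here it just records the intent.
type kubernetesOrchestrator struct{}

func (kubernetesOrchestrator) Deploy(w Workload) error {
	fmt.Printf("deploying %s (%d replicas of %s)\n", w.Name, w.Replicas, w.Image)
	return nil
}

func main() {
	var o Orchestrator = kubernetesOrchestrator{}
	_ = o.Deploy(Workload{Name: "api", Image: "registry.example.com/api:1.2", Replicas: 3})
}
```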

While there are a number of different projects that try to make containers more secure, I predict we’ll see a shift to lightweight virtual machines (VMs), which combine the security isolation of VMs with the efficiency and portability of containers. They’re light and portable enough to provide what Docker does today, but with the added benefit of proper security isolation. Although Amazon’s Firecracker was a bit slow to gain traction, we’ll see increased adoption in 2020; as Firecracker and other lightweight VMs become more established, companies will be able to transition to a more secure infrastructure without having to rewrite how code is deployed.
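
To make the “without rewriting how code is deployed” point concrete: Firecracker is driven by a small REST API served over a Unix socket, the same kind of HTTP plumbing a container platform already speaks. A minimal sketch of booting a microVM (the socket location and file paths are placeholders; see the Firecracker docs for the full API):

```go
package main

import (
	"bytes"
	"context"
	"net"
	"net/http"
)

// apiCall PUTs a JSON body to the Firecracker API served on a Unix socket.
func apiCall(c *http.Client, path, body string) error {
	req, err := http.NewRequest(http.MethodPut, "http://localhost"+path,
		bytes.NewBufferString(body))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/json")
	resp, err := c.Do(req)
	if err != nil {
		return err
	}
	return resp.Body.Close()
}

func main() {
	// Firecracker listens on a Unix socket rather than a TCP port.
	client := &http.Client{Transport: &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return net.Dial("unix", "/tmp/firecracker.sock") // illustrative path
		},
	}}

	// Configure a kernel and root filesystem, then boot the microVM.
	_ = apiCall(client, "/boot-source",
		`{"kernel_image_path":"vmlinux","boot_args":"console=ttyS0 reboot=k"}`)
	_ = apiCall(client, "/drives/rootfs",
		`{"drive_id":"rootfs","path_on_host":"rootfs.ext4","is_root_device":true,"is_read_only":false}`)
	_ = apiCall(client, "/actions", `{"action_type":"InstanceStart"}`)
}
```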

That said, organizations should treat any public cloud infrastructure (whether built on hypervisors or containers) as a hostile public network. Trust no one, and carefully evaluate every potential tool and technology partnership. Tools that handle events from an organization’s infrastructure, including logging, monitoring and security auditing tools, have to be robust, secure and able to operate in these extremely hostile environments. They have to offer access control to limit what information is available to whom, as well as limit the actions people can perform with them; companies don’t want a visibility tool to become an attack vector. By treating every public infrastructure as hostile, assuming there’s always going to be a bad actor, organizations can design and build solutions that operate safely.
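
In practice, “able to operate in a hostile network” starts with refusing unauthenticated, unencrypted connections. Below is one sketch of that posture, a mutually authenticated TLS listener for a hypothetical visibility-tool endpoint (the certificate file names and port are placeholders):

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"net/http"
	"os"
)

func main() {
	// Trust only our own CA for client certificates (mutual TLS), so a
	// bad actor on the network can't talk to the tool at all.
	caPEM, err := os.ReadFile("ca.pem") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	srv := &http.Server{
		Addr: ":8443",
		TLSConfig: &tls.Config{
			MinVersion: tls.VersionTLS12,
			ClientAuth: tls.RequireAndVerifyClientCert, // no cert, no connection
			ClientCAs:  pool,
		},
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			w.Write([]byte("ok"))
		}),
	}
	log.Fatal(srv.ListenAndServeTLS("server.pem", "server-key.pem")) // placeholder certs
}
```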

This isn’t to say companies should completely separate performance and compliance data. Instead, they should bake security into their visibility tool, with a robust role-based access control (RBAC) model in place, for deep visibility while staying compliant. Too often, security and performance data are siloed, which overlooks both the criticality of security information and the responsibility to maintain visibility into an organization’s entire infrastructure. Surfacing security and compliance data alongside everything else, such as application and performance data, offers a well-rounded, holistic approach to monitoring and observability.

Implementing a strong RBAC policy and enforcement model from the outset helps bake security into a strong visibility strategy. From there, IT leaders can achieve scale by working with a solution that can operate in the aforementioned hostile public networks. Layering an access control model on top makes it safer to apply that one visibility tool more broadly, not only across more of the infrastructure but also across the organization, allowing many teams (such as security and compliance) to consume it safely.
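
A minimal sketch of what such an enforcement check could look like inside a visibility tool, deny-by-default, with invented roles, resources and verbs purely for illustration:

```go
package main

import "fmt"

// Rule grants a set of verbs on a resource type.
type Rule struct {
	Resource string // e.g. "metrics", "audit-logs"
	Verbs    map[string]bool
}

// Role is a named bundle of rules; users are bound to roles.
type Role struct {
	Name  string
	Rules []Rule
}

var bindings = map[string][]Role{ // user -> bound roles
	"alice": {{
		Name: "compliance-viewer",
		Rules: []Rule{{
			Resource: "audit-logs",
			Verbs:    map[string]bool{"get": true, "list": true},
		}},
	}},
}

// allowed enforces the policy: every request is denied unless some
// bound role explicitly grants the verb on the resource.
func allowed(user, verb, resource string) bool {
	for _, role := range bindings[user] {
		for _, rule := range role.Rules {
			if rule.Resource == resource && rule.Verbs[verb] {
				return true
			}
		}
	}
	return false
}

func main() {
	fmt.Println(allowed("alice", "get", "audit-logs")) // true
	fmt.Println(allowed("alice", "delete", "metrics")) // false: deny by default
}
```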

Another requirement for operating safely in the public cloud is a robust secrets management solution that’s also developer-friendly. This solution should integrate naturally and effectively with other tools, including monitoring. By its very nature, a monitoring tool needs access to some level of credentials to monitor public cloud infrastructure. One of the best ways a monitoring tool vendor can help organizations maintain a secure infrastructure on a public cloud, while maintaining visibility, is to integrate with a first-class secrets management solution.
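
As one illustration of that integration, the sketch below fetches a cloud credential from HashiCorp Vault’s KV version 2 HTTP API at query time instead of embedding it in the monitoring tool’s own configuration. The Vault address, secret path and environment-variable token are placeholders; a real deployment would use a short-lived auth method rather than a static token:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"os"
)

// readSecret fetches a secret from Vault's KV v2 engine. The token is
// taken from the environment here purely for brevity.
func readSecret(addr, path string) (map[string]interface{}, error) {
	req, err := http.NewRequest(http.MethodGet, addr+"/v1/secret/data/"+path, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("X-Vault-Token", os.Getenv("VAULT_TOKEN"))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	// KV v2 nests the key/value pairs under data.data.
	var body struct {
		Data struct {
			Data map[string]interface{} `json:"data"`
		} `json:"data"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
		return nil, err
	}
	return body.Data.Data, nil
}

func main() {
	// Placeholder address and path: the credential never touches the
	// monitoring tool's config files on disk.
	secret, err := readSecret("https://vault.example.com:8200", "monitoring/cloud-creds")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("loaded keys:", len(secret))
}
```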

If monitoring is all about identifying issues, then security and compliance data fall under that same umbrella. For organizations to gain a complete picture of what’s going on with their infrastructure, it’s critical they eliminate any siloing of performance and security data, creating a holistic view of their data coupled with a robust RBAC model to ensure no laws are being broken. It’s much easier to manage cloud infrastructure when all data is in one place and can be accessed safely. This holistic and compliant view leads to better management, less risk and better, more secure operations: an all-around net positive for businesses.

Sean Porter


Sean Porter is the creator of the Sensu project and the co-founder and CTO of Sensu Inc., a leader in open source monitoring. Sean is a seasoned systems operator and software developer with over a decade of experience in automating infrastructure. As CTO of Sensu Inc., he oversees the development of Sensu and works with users to better understand how Sensu can help them solve complex monitoring problems.