Why Microsegmentation is Critical for Securing CI/CD

Modern development environments are characterized by cloud-native technology, microservices architectures and DevOps or DevSecOps teams working in close coordination throughout the development life cycle. The continuous integration/continuous delivery (CI/CD) pipeline is the heart of this environment and is becoming a valuable target for cybercriminals. Global-scale supply chain attacks such as SolarWinds and Kaseya illustrate the grave danger of failing to properly secure CI/CD tooling.

In this article, I’ll focus on microsegmentation, a technology that is growing in importance as a foundation of zero-trust security implementations. By bringing microsegmentation to the CI/CD pipeline, especially in the context of Kubernetes, DevOps teams can achieve an unprecedented level of security and reduce the blast radius of successful breaches, if and when they occur.

What is Microsegmentation?

Microsegmentation is a way to control traffic between servers within the same network segment (the focus is on server-to-server, or east-west, traffic). For example, you can define that a specific server may communicate only with one designated server, or that a specific application may communicate only with a particular host, to reflect the roles and permissions in your organization.

Microsegmentation policies and permissions are based on resource identity and can be independent of the underlying infrastructure. This distinguishes microsegmentation from network segmentation, which relies on network IP addresses, and so is tightly coupled to the infrastructure.

Microsegmentation is therefore an ideal way to create intelligent groups of workloads based on their characteristics and define access rules within and between those groups.
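
To make this concrete, here is a minimal conceptual sketch in Python of identity-based rules (the labels, ports and rule format are illustrative and not tied to any particular product's API): traffic is denied by default and allowed only when the identities of both endpoints match an explicit rule, with no IP addresses involved.

# Illustrative identity-based allow list: (source identity, destination identity, port).
ALLOW_RULES = [
    ({"app": "checkout", "env": "prod"}, {"app": "payments", "env": "prod"}, 8443),
    ({"app": "ci-runner"}, {"app": "artifact-registry"}, 443),
]

def matches(identity: dict, selector: dict) -> bool:
    # A workload matches a selector when every selector label is present on it.
    return all(identity.get(key) == value for key, value in selector.items())

def is_allowed(src: dict, dst: dict, port: int) -> bool:
    # Default-deny: traffic passes only if an explicit rule matches.
    return any(matches(src, s) and matches(dst, d) and port == p for s, d, p in ALLOW_RULES)

# A production checkout workload may reach production payments on 8443 ...
print(is_allowed({"app": "checkout", "env": "prod"}, {"app": "payments", "env": "prod"}, 8443))  # True
# ... but it may not reach the artifact registry.
print(is_allowed({"app": "checkout", "env": "prod"}, {"app": "artifact-registry"}, 443))  # False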

Microsegmentation is a fundamental part of zero-trust network access (ZTNA), a technology component that underlies zero-trust security implementations. It provides stronger and more reliable network security because policies do not depend on dynamically changing networks or the technical requirements imposed on them. It also makes networks easier to manage, with a few identity-based policies replacing hundreds of address-based rules.

What are the Benefits of Microsegmentation?

Here are the notable benefits of microsegmentation:

● Robust east-west network traffic control—Microsegmentation enables you to control traffic within network perimeters in ways VLAN systems cannot. Software-defined networks (SDNs) and access control lists (ACLs) can control the resources made available to each user.
● Breach containment—Threat actors that breach a microsegmented internal network will find it difficult to reach sensitive resources and move laterally. This architecture enables you to locate, isolate and contain threat actors that bypass security.
● Smaller attack surface—Microsegmentation minimizes the attack surface, reducing the risk of a successful cyberattack. It typically involves deploying software agents across data centers and all endpoints, providing a more efficient alternative to firewall and VLAN options.
● Regulatory compliance—Organizations leverage microsegmentation for risk management, helping ensure compliance with regulations and standards such as HIPAA, PCI-DSS and ISO.
● Improved operational efficiency—Microsegmentation involves using software to protect networks—there is no need for access control lists and individual firewall appliances. Shifting to SDN helps make it easier to define, monitor and manage access control policies and network segmentation.

Microsegmentation for DevOps and CI/CD

While cloud-native application development has many benefits, traditional network architectures and security practices cannot keep up with DevOps practices like CI/CD. Microsegmentation reduces network risk and prevents lateral movement by isolating environments and applications. However, it can be a challenge to implement segmentation in a cloud-native environment.

Network security teams typically take a centralized approach, with one SecOps team responsible for all security management. For example, some networks have ticket-based approval systems in which the central team reviews each request against access policies. However, this process is slow and prone to human error.

Teams can use DevOps methods to operationalize microsegmentation, implementing policy as code. You can also leverage a microsegmentation solution that helps automate and secure the process. The security team enforces basic segmentation policies, while application owners create more granular policies.

This decentralized security approach preserves the agility of DevOps.
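
As a sketch of what policy as code might look like in practice, the snippet below assumes a Kubernetes environment and the official kubernetes Python client; the names, labels and namespace are hypothetical. The rule lives in the application's repository and a pipeline stage applies it, so segmentation policy is reviewed and deployed like any other change.

from kubernetes import client, config

def apply_ci_runner_policy(namespace: str = "ci") -> None:
    # Hypothetical baseline rule: CI runner pods may only reach artifact
    # registry pods, and only over TCP 443.
    config.load_kube_config()  # use config.load_incluster_config() inside a cluster
    policy = client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="ci-runner-egress"),
        spec=client.V1NetworkPolicySpec(
            pod_selector=client.V1LabelSelector(match_labels={"app": "ci-runner"}),
            policy_types=["Egress"],
            egress=[client.V1NetworkPolicyEgressRule(
                to=[client.V1NetworkPolicyPeer(
                    pod_selector=client.V1LabelSelector(
                        match_labels={"app": "artifact-registry"}))],
                ports=[client.V1NetworkPolicyPort(protocol="TCP", port=443)])],
        ),
    )
    client.NetworkingV1Api().create_namespaced_network_policy(namespace, policy)

Because the rule is plain code under version control, the security team can own the baseline while application owners propose more granular policies through ordinary pull requests.
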
Container Microsegmentation Strategy with Kubernetes

There are several important security considerations when deploying containers:

● Automating the management of microsegmentation across container workloads.
● Incorporating segmentation policies and automation into existing tools.
● Managing a separate microsegmentation solution for containers.

Kubernetes security requires a different approach to networks, given the complex and ephemeral nature of pods. Determining how to segment a containerized environment is challenging, especially when namespaces can span multiple entities. Microsegmentation could complicate container management further.

One way to apply microsegmentation to a hybrid cloud environment is to start from scale and evaluate the total number of workloads. Take stock of your virtual and physical infrastructure (hosts) and multiply by a growth factor to anticipate expansion; physical infrastructure is relatively static, while virtual machines can fluctuate unpredictably. This estimate is not exact, but it's useful as a benchmark for scaling.

However, this approach is more complex with containerized workloads, which behave differently from physical hosts and VMs. For instance, Kubernetes runs containers within pods on each node (Kubernetes recommends no more than 110 pods per node). You can easily end up with thousands of compute instances, each of which must be segmented somehow.
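
A rough, illustrative calculation shows how quickly the numbers grow (every figure below is an assumption for the sake of the example, not a measurement):

nodes = 50               # physical hosts and VMs acting as Kubernetes nodes
pods_per_node = 110      # the recommended per-node ceiling mentioned above
growth_factor = 1.5      # headroom for anticipated growth
workloads_to_segment = int(nodes * pods_per_node * growth_factor)
print(workloads_to_segment)  # 8250 endpoints that segmentation policy must cover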

The Solution

Some vendors allow you to segment container environments in OpenShift or Kubernetes—they usually support microsegmentation for containerized workloads only (not other workloads across a hybrid environment). Having separate solutions for containers and non-containerized workloads can work on a small scale.

A unified segmentation management approach across all workloads helps you scale more smoothly. Kubernetes can deploy a stable load-balancing Service in front of a group of pods; unlike pods, Services are rarely spun up and torn down. Thus, you can segment workloads by service instead of by individual container. This provides robust security while preserving the agility that DevOps teams sorely need.
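
As a sketch of this idea, the snippet below again assumes the official kubernetes Python client; the service and label names are hypothetical. It reads a Service's label selector and builds an ingress rule around it, so the policy tracks the stable service identity rather than the individual pods that come and go behind it.

from kubernetes import client, config

def allow_frontend_to_service(namespace: str, service_name: str) -> None:
    # Derive the segmentation rule from the Service's stable label selector.
    config.load_kube_config()
    svc = client.CoreV1Api().read_namespaced_service(service_name, namespace)
    backend_labels = svc.spec.selector  # e.g. {"app": "payments"}
    policy = client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name=f"{service_name}-allow-frontend"),
        spec=client.V1NetworkPolicySpec(
            # Selects every pod behind the service, however often the pods churn.
            pod_selector=client.V1LabelSelector(match_labels=backend_labels),
            policy_types=["Ingress"],
            ingress=[client.V1NetworkPolicyIngressRule(
                _from=[client.V1NetworkPolicyPeer(
                    pod_selector=client.V1LabelSelector(
                        match_labels={"app": "frontend"}))])],
        ),
    )
    client.NetworkingV1Api().create_namespaced_network_policy(namespace, policy)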


Gilad David Maayan

Gilad David Maayan is a technology writer who has worked with over 150 technology companies including SAP, Oracle, Zend, CheckPoint and Ixia, producing technical and thought leadership content that elucidates technical solutions for developers and IT leadership.
