
Why You Need Kubernetes Security Policy Enforcement

Securing Kubernetes is a serious topic, both for organizations already implementing Kube and those just getting ready to migrate to the open source container orchestration system. One of the challenges we see around security is not simply patching vulnerabilities but automating Kubernetes security policy enforcement, making it easier for organizations to protect themselves against vulnerabilities and other security issues. Problems can arise from misconfigurations in containers or the underlying Kubernetes infrastructure. Consider a few of the security challenges common in Kubernetes, why Kubernetes policy enforcement is needed, and how Fairwinds Insights can be used to implement and enforce these policies in Kube automatically.

Application Vulnerabilities

According to a DZone report back in 2020, seventy-eight percent of companies run part or all of their operations on open source software (up from 42% in 2010). The 2023 Open Source Security and Risk Analysis Report showed that all seventeen industry sectors analyzed contained open source in at least 92% of their codebases.

And the Kubernetes community, built around an open source project originally created by Google, almost certainly runs even more of its operations on open source software. The Cloud Native Computing Foundation (CNCF), which is part of the Linux Foundation, lists 153 open source projects in support of cloud native computing. The problem is that Common Vulnerabilities and Exposures (CVEs) in open source tools may be included in a container or even Kubernetes itself. And because new vulnerabilities can be disclosed at any time, a container image previously thought to be secure may later present a security risk in running containers and applications.

To address these risks, engineering teams must be able to scan containers to identify CVEs and open source components that have known vulnerabilities. Developers then need to upgrade or patch these components to address the vulnerabilities. Even after a vulnerability has been disclosed, it can be difficult to identify, particularly in a complex computing environment like Kubernetes. Creating a Kubernetes security policy and enforcing it automatically can help you ensure that all container policies are applied consistently in a dynamic, ever-changing environment.

Kubernetes Pod Security Policy

In earlier versions of Kube, Pod Security Policies (PSP) helped create Pod Security Standards. However, Kubernetes continues to evolve, as does the Kubernetes API. APIs are regularly reorganized, and as this happens, old API versions are deprecated and eventually removed.


Find your deprecated APIs using Pluto, an open source utility that helps users find deprecated Kubernetes apiVersions in their code repositories and Helm releases.

The PodSecurityPolicy is an excellent example of an API that was deprecated in version 1.21 and removed in version 1.25. Following the deprecation of Pod Security Policy, the Kubernetes documentation advises enforcing similar restrictions on pods using Pod Security Admission (the built-in PodSecurity admission controller) or a third-party admission controller plugin to enforce Pod Security Standards.
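With Pod Security Admission, you enforce a Pod Security Standard by labeling a namespace. A minimal sketch (the namespace name is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-restricted-ns   # illustrative name
  labels:
    # Reject pods that violate the "restricted" Pod Security Standard
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
    # Additionally warn (without rejecting) on "baseline" violations
    pod-security.kubernetes.io/warn: baseline
```

The `enforce`, `audit`, and `warn` modes can each be set to a different level, so you can surface violations before you start blocking workloads.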

Polaris is an open source policy engine for Kubernetes that can be run as an admission controller. It acts as a validating webhook, accepts the same configuration as the dashboard, and can run the same validations. The webhook rejects workloads that trigger a danger-level check, blocking workloads that do not conform to your configured policies. Polaris includes over thirty built-in configuration policies and the ability to build custom policies using JSON Schema. When you run Polaris on the command line or as a mutating webhook, it can also remediate issues automatically based on policy criteria.
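As a sketch of what a JSON Schema custom policy can look like, the configuration below follows the custom-check format in the Polaris documentation; the registry restriction it expresses is an illustrative policy, not a built-in one:

```yaml
checks:
  # Treat a failure of our custom check as danger-level (blocked by the webhook)
  imageRegistry: danger
customChecks:
  imageRegistry:
    successMessage: Image comes from an allowed registry
    failureMessage: Image should not be pulled from a disallowed registry
    category: Images
    target: Container
    schema:
      '$schema': http://json-schema.org/draft-07/schema
      type: object
      properties:
        image:
          type: string
          pattern: ^quay.io
```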

Pod Security Standards

These standards define three policies that govern the security spectrum. The policies range from highly permissive to highly restrictive.

Privileged: An unrestricted policy that provides the greatest level of permissions possible, allowing for known privilege escalations. It is intentionally open and completely unrestricted, intended for use by system- and infrastructure-level workloads that are managed by privileged, trusted users.

Baseline: This minimally restrictive policy prevents known privilege escalations; it allows the default Pod configuration (minimally specified). The goal of this policy is to make it easy to adopt common containerized workloads and it is targeted at application operators and developers of non-critical applications.

Restricted: This heavily restricted policy follows the current best practices for Pod hardening. This restricted policy does come at the expense of some compatibility; it is intended for operators and developers of security-critical applications and lower-trust users.

A Privileged policy has essentially no restrictions, so allow-by-default mechanisms may default to the Privileged profile. The controls offered in a Baseline policy include: HostProcess, host namespaces, privileged containers, capabilities, HostPath volumes, host ports, AppArmor, SELinux, /proc mount type, Seccomp, and Sysctls. A Restricted policy includes everything in the Baseline profile as well as volume types, privilege escalation (v1.8+), running as non-root, running as a non-root user (v1.23+), Seccomp (v1.19+), and capabilities (v1.22+).

The security policy you choose depends on your use cases. In some cases, you may want to run privileged pods, but be certain you are doing so intentionally. If you have questions, refer to the Kubernetes docs so you can make informed decisions about your pod security settings.

Configure a Security Context for a Pod or Container

You can use a security context to define privilege and access control settings for your pods and containers. A few of the security context settings include:

  • Discretionary Access Control: You can base permission to access an object, such as a file, on user ID (UID) and group ID (GID).

  • Security Enhanced Linux (SELinux): You can assign security labels to objects.

  • Running as privileged or unprivileged: For greater security, run as few pods and containers as possible in privileged mode.

  • Linux Capabilities: Use these capabilities to allow a process a subset of privileges, but not all the privileges of the root user.

  • AppArmor: You can use program profiles to restrict capabilities for individual programs.

  • Seccomp: Use this security context to filter the system calls for a process.

  • allowPrivilegeEscalation: This configuration controls whether a process can gain more privileges than its parent process; it controls whether the no_new_privs flag is set on the container process. allowPrivilegeEscalation is always true when the container runs as privileged or has the CAP_SYS_ADMIN capability.

  • readOnlyRootFilesystem: Mounts the root filesystem for a container as read-only.

The above bullets are not a complete set of security context settings; please see SecurityContext for a comprehensive list. Using policy-driven configuration validation can help identify misconfigurations and vulnerabilities in Docker containers.

If you want to specify security settings for a Pod, include the securityContext field in your Pod specification. When you specify security settings at the Pod level, they apply to all containers in the Pod.

Example of a YAML configuration file for a Pod with a securityContext:

apiVersion: v1
kind: Pod
metadata:
  name: security-context-example
spec:
  securityContext:
    runAsUser: 2000
    runAsGroup: 4000
    fsGroup: 3000
  volumes:
  - name: sec-ctx-vol
    emptyDir: {}
  containers:
  - name: sec-ctx-example
    image: busybox:1.28
    command: [ "sh", "-c", "sleep 2h" ]
    volumeMounts:
    - name: sec-ctx-vol
      mountPath: /data/demo
    securityContext:
      allowPrivilegeEscalation: false

Platform Vulnerabilities

Similarly, vulnerabilities may exist in the underlying Kubernetes cluster, plugins, and add-ons. Your Kubernetes clusters must be constantly scanned and monitored for new vulnerabilities and patched as necessary to fix problems. For example, it is important to control access to the Kubernetes API, which users access using kubectl, client libraries, or by making REST requests. Humans and Kubernetes service accounts can both be authorized for API access. A request then goes through multiple stages, including:

  • Transport security: the Kubernetes API server listens on port 6443 on the first non-localhost network interface by default. It is protected by Transport Layer Security (TLS); Kubernetes offers a certificates.k8s.io API that enables you to provision TLS certificates signed by a Certificate Authority (CA) that you control.
  • Authentication: after TLS is established, an HTTP request moves on to the next step, authentication. The cluster admin or cluster creation script configures the API server to run one or more Authenticator modules, which usually examine the request headers, the client certificate, or both. You can specify multiple authentication modules, which are tried in sequence until one succeeds.
  • Authorization: after the request has been authenticated as coming from a specific user, it must be authorized. The request must include the username, the requested action, and the object affected by the action; it is authorized if a policy declares that the user has the permissions needed to complete the requested action. For example, a policy may dictate that a given user can only read pods in a specified namespace. Kubernetes authorization requires the use of common REST attributes to interact with existing organization-wide or cloud-provider-wide access control systems.
  • Admission control: admission control modules can modify or reject requests. These modules can access the attributes available to Authorization modules as well as the contents of the object being created or modified. Admission controllers do not act on requests that only read objects, and when you have multiple admission controllers configured, they are called in order. If any admission controller module rejects a request, the request is immediately rejected.
  • Auditing: the Kubernetes cluster audits activities generated by users, by the control plane, and by applications that use the Kubernetes API. Auditing delivers a chronological set of records documenting the sequence of actions in a cluster, which is relevant in a security context.
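The auditing stage is driven by an audit policy passed to the API server. A minimal sketch (the rules shown are illustrative choices, not defaults):

```yaml
# audit-policy.yaml, referenced via the API server's --audit-policy-file flag
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Skip logging read-only requests against pods
- level: None
  verbs: ["get", "list", "watch"]
  resources:
  - group: "" # core API group
    resources: ["pods"]
# Record request metadata for everything else
- level: Metadata
```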

See the Kubernetes docs, Controlling Access to the Kubernetes API, to learn more about API server access controls.

Appropriate Kube Permissions

A common attack vector used by hackers is to exploit users or services that have access to system resources beyond what they actually need, for example, by taking advantage of privilege escalation, leveraging root access, and so on. Role-Based Access Control (RBAC) is a way to regulate access to network or computer resources based on the roles of users in your organization. RBAC authorization uses the API group rbac.authorization.k8s.io to make authorization choices, which enables you to configure policies dynamically via the Kubernetes API.

In the RBAC API, there are four Kubernetes objects declared: Role, ClusterRole, RoleBinding, and ClusterRoleBinding. You can amend or describe objects using kubectl and other tools. An RBAC Role or ClusterRole contains a set of permissions; a Role sets permissions within a specified namespace, and you must specify the namespace a Role belongs in when you create it. A ClusterRole is a non-namespaced resource; use it to define a role cluster-wide. Use ClusterRoles to define permissions:

  • on namespaced resources and grant access in individual namespaces

  • on namespaced resources and grant access across all namespaces

  • on cluster-scoped resources
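The RoleBinding example later in this section assumes a Role named "pod2-reader" already exists in the "fairwinds" namespace. A minimal sketch of such a Role might look like:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: fairwinds
  name: pod2-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
```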

A RoleBinding grants the permissions defined in a Role to a user or set of users. It contains a list of users, groups, or service accounts (subjects) in addition to a reference to the role being granted. A RoleBinding grants permissions in a specified namespace, while a ClusterRoleBinding grants that same access cluster-wide.

Example of a RoleBinding:

apiVersion: rbac.authorization.k8s.io/v1
# This RoleBinding allows "sally" to read pods in the "fairwinds" namespace.
# You must already have a Role named "pod2-reader" in that namespace.
kind: RoleBinding
metadata:
  name: read-pods2
  namespace: fairwinds
subjects:
# You can specify more than a single "subject"
- kind: User
  name: sally # "name" is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  # "roleRef" specifies the binding to a Role / ClusterRole
  kind: Role # this must be Role or ClusterRole
  name: pod2-reader # must match the name of the Role or ClusterRole you want to bind to
  apiGroup: rbac.authorization.k8s.io

It is also possible for a RoleBinding to reference a ClusterRole in order to grant the permissions that were defined in that ClusterRole to the resources inside the RoleBinding’s namespace. Using this type of reference allows you to define a set of common roles across your Kubernetes cluster and reuse them in multiple namespaces.
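A sketch of that pattern, assuming a cluster-wide ClusterRole named "secret-reader" already exists (the names here are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-secrets
  # Grants the ClusterRole's permissions only within this namespace
  namespace: fairwinds
subjects:
- kind: User
  name: dave
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io
```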

Using RBAC, you can enforce the concept of least privilege, that is, only giving access to the resources needed by the user or service and nothing more. However, discovering whether a Kubernetes deployment has been over-permissioned with root access requires teams responsible for security to go through each pod manually to check for misconfigured deployments. This process benefits from automated checks throughout the entire development lifecycle to ensure the right privileges are granted.

Ingress and Egress Controls

As application services communicate with other resources internally or externally outside of the application, appropriate safeguards must also be put in place to manage inbound and outbound communication. Ingress exposes HTTP and HTTPS routes from outside the cluster to services inside the cluster. You can configure an Ingress to give Services externally reachable URLs, load balance traffic, terminate SSL/TLS, and offer name-based virtual hosting. An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure an edge router or additional frontends to help handle the traffic. You must have an Ingress controller to satisfy an Ingress, and you may need to deploy one, for example, ingress-nginx. There are multiple Ingress controllers available.

Example of a minimal Ingress resource:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimum-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx-example
  rules:
  - http:
      paths:
      - path: /examplepath
        pathType: Prefix
        backend:
          service:
            name: example
            port:
              number: 80

Policies determine what data is allowed to go where and which services are allowed to communicate with one another. Similar to RBAC, the best practice is to establish a zero trust model for networking and permissions that enables communication to happen only where it is needed. These policies must be applied consistently in order to be effective. Policy-as-code is the best option in Kubernetes, but it presents a challenge: how do you check that the policy has been applied to every cluster? Again, this is a time-consuming and error-prone process without automation.
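A zero trust posture typically starts from a default-deny NetworkPolicy in each namespace, with communication then explicitly allowed where needed. A minimal sketch (the namespace name is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: fairwinds
spec:
  # An empty podSelector matches every pod in the namespace
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
```

With this in place, additional NetworkPolicies open only the specific paths each service requires.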

Certificate Management

Secure Sockets Layer (SSL) certificates are used to encrypt network traffic, safeguarding data as it is transmitted. These certificates need to be rotated, updated, and managed to ensure that data is being encrypted properly.

In Kubernetes, cert-manager runs within a cluster as a series of deployment resources. To configure Certificate Authorities and request certificates, it uses CustomResourceDefinitions. You should check this customization against your policies to make sure that CustomResource includes all the right security checks, from privileges to permissions to capabilities and more.
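As a sketch, a cert-manager Certificate resource requests a certificate from an issuer you have configured; the names and DNS entries below are illustrative, and the ClusterIssuer is assumed to already exist:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-tls
  namespace: fairwinds
spec:
  # Secret cert-manager creates and keeps renewed with the signed certificate
  secretName: example-tls-secret
  dnsNames:
  - example.com
  issuerRef:
    # Assumes a ClusterIssuer named "letsencrypt-issuer" has been configured
    name: letsencrypt-issuer
    kind: ClusterIssuer
```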

Kubernetes Security Policy Enforcement with Fairwinds Insights

To address the challenges around policy enforcement in Kubernetes, we developed Fairwinds Insights. Fairwinds Insights enables platform teams to standardize and enforce development best practices. Insights includes recommendations for establishing Kubernetes resource limits, which help ensure that an application can perform reliably without increasing costs unnecessarily; these limits can also help protect against malicious resource abuse.

Insights ensures that, throughout the entire development lifecycle, containers and pods are checked against security policies and other best practices. That means users do not accidentally expose a cluster to a CVE, privileges are in line with policy, and the entire environment adheres to policy. Not only does Fairwinds Insights use Kubernetes policy enforcement to improve security, it also enables platform engineers to put Kubernetes security guardrails in place at every stage of the development process, automate best practices, and develop a culture of service ownership and cost avoidance, all without slowing down app teams.

Check out our sandbox environment or sign up for the free tier, which is available for environments up to 20 nodes, two clusters, and one repo.

Originally published on 16 October, 2020

A Platform Engineer's Guide to Kubernetes - Automate Kubernetes Development Best Practices and Enable Your Developers

*** This is a Security Bloggers Network syndicated blog from Fairwinds | Blog authored by Bill Ledingham. Read the original post at: https://www.fairwinds.com/blog/kubernetes-security-policy