What’s New in Istio 1.4?
The Istio working group just released Istio 1.4.0 ahead of KubeCon + CloudNativeCon North America in San Diego this week. This post summarizes how this latest version continues the project’s recent focus on improving the operability and performance of Istio for production users.
- Continued work on performance improvements with alpha support for Mixer-less telemetry
- A complete update to the service authorization system with the new `AuthorizationPolicy` resource
- Support for Istio installation, control plane configuration, and upgrades in the `istioctl` command-line interface
- More troubleshooting support in `istioctl`
- Proxy sidecar stability and feature improvements
Laying the Groundwork for Performance Improvements
Istio 1.4.0 adds alpha support for Mixer-less telemetry. In previous versions, if a user wanted to collect connection telemetry data from the Envoy proxy, the istio-proxy sidecar had to make its own connection to Istio’s Mixer telemetry service for every connection it handled. This approach doubles the number of TCP connections the proxy must negotiate, increasing system resource requirements, and it also puts pressure on Mixer scalability as network traffic in the mesh increases.
The introduction of Mixer-less telemetry paves the way for using fewer CPU and memory resources in the proxy sidecar without degrading network service or metrics. When the new feature is enabled, the connection metrics are processed in the Envoy proxy, then made available for scraping by Prometheus. By making metrics collection passive from the point of view of the proxy, the bookkeeping load on the proxy drops.
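With in-proxy metrics, Prometheus scrapes each sidecar directly rather than receiving reports via Mixer. A minimal sketch of such a scrape job follows; the port and path shown (15090 and `/stats/prometheus`) are Envoy's defaults in Istio, while the job name and the container-port name used for filtering are assumptions you should verify against your deployment:

```yaml
# Illustrative Prometheus scrape job for in-proxy (Mixer-less) telemetry.
# Assumes the default istio-proxy stats endpoint: port 15090, path /stats/prometheus.
scrape_configs:
  - job_name: envoy-stats            # job name is arbitrary
    metrics_path: /stats/prometheus  # Envoy's Prometheus-format stats endpoint
    kubernetes_sd_configs:
      - role: pod                    # discover every pod, then filter below
    relabel_configs:
      # Keep only the istio-proxy sidecar's stats port (named http-envoy-prom
      # in default sidecar injection; confirm the name in your cluster).
      - source_labels: [__meta_kubernetes_pod_container_port_name]
        action: keep
        regex: http-envoy-prom
```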
Currently, Mixer-less telemetry does not offer feature parity with Mixer-based metrics. Only the default HTTP metrics are supported; neither TCP metrics (available experimentally) nor custom metrics are yet produced in the proxy. The Istio team does not plan to graduate in-proxy metrics to stable until they reach feature parity with Mixer-based metrics.
Maturing Security Controls
The Istio authorization policy API, the replacement for Istio’s RBAC policy implementation, graduates to beta in Istio 1.4.0. The new control system addresses confusion over how Istio RBAC policies apply to workloads rather than services, and it also simplifies the user experience while adding support for more use cases.
Even though Istio’s philosophy as a service mesh framework focuses on the service entity, the RBAC policy controls do not perfectly align with the Kubernetes resources Istio defines elsewhere. For example, even though Istio RBAC includes a `ServiceRole` Custom Resource Definition, `ServiceRole` policies actually apply to the workloads behind a Kubernetes/Istio Service.
Why is that confusing? First, the Istio `ServiceRoleBinding` doesn’t actually bind a Service to the role. While the `ServiceRoleBinding` supports several different attributes for identifying the subject, none of them specifies a Kubernetes Service.
The other source of confusion stems from the Kubernetes Service specification. Kubernetes does not require a one-to-one correlation between Services and workloads, even though that is the typical configuration. Kubernetes `Service` objects are associated with workload deployments by Kubernetes object labels, and a single `Service` object can route traffic to pods in multiple deployments, because the same label can be applied to multiple deployments in the same namespace. When a workload pod makes a request to another service, Istio RBAC evaluates the resulting connection against the source pod’s workload, even if it is one of multiple workloads associated with a shared Service.
For example, a cluster may have multiple deployments labeled `nginx`, all associated with the Service called `nginx`, and this `nginx` service is downstream of a `backend` service in the mesh. However, the deployment `nginx-old` binds to a different `ServiceRole` than the deployment `nginx-new`, and each `ServiceRole` has different RBAC rules. The same request from the same downstream source to the `backend` service may be handled successfully if it happens to route through an `nginx-old` pod, but it would fail for lack of RBAC permissions if it is routed to an `nginx-new` pod. The deployments and RBAC policies were probably not configured with the intention of letting requests succeed or fail based on a coin toss. While the issue is easily fixed, it still requires time spent debugging.
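Under the old model, the two-resource pairing that produces this ambiguity looks roughly like the following; the resource names and the service account subject are hypothetical, and the v1alpha1 field layout is sketched from memory, so verify against the pre-1.4 RBAC documentation:

```yaml
# Old-style Istio RBAC (alpha API): a role naming a destination Service,
# plus a separate binding naming the allowed subjects.
apiVersion: rbac.istio.io/v1alpha1
kind: ServiceRole
metadata:
  name: backend-viewer
  namespace: default
spec:
  rules:
    - services: ["backend.default.svc.cluster.local"]  # destination Service
      methods: ["GET"]
---
apiVersion: rbac.istio.io/v1alpha1
kind: ServiceRoleBinding
metadata:
  name: bind-backend-viewer
  namespace: default
spec:
  subjects:
    # Note: the subject is identified by service account,
    # not by a Kubernetes Service.
    - user: "cluster.local/ns/default/sa/nginx-old"
  roleRef:
    kind: ServiceRole
    name: backend-viewer
```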
The new Istio `AuthorizationPolicy` specification makes the authorization relationship between source and destination much clearer. Instead of having to create both a `ServiceRole` and a `ServiceRoleBinding` resource, both are combined into a single resource. Furthermore, the use of pod label selectors makes the relationship between the `AuthorizationPolicy` and the workload to which it applies much more explicit.
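For comparison, here is a minimal sketch of the beta API, with the role and binding collapsed into one resource and the target workload named directly by pod labels (the policy name, labels, and principal shown are illustrative):

```yaml
# New-style beta authorization API: one resource, targeted at workloads
# via a pod label selector rather than a Service name.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: backend-viewer
  namespace: default
spec:
  selector:
    matchLabels:
      app: backend        # applies to the backend workload's pods directly
  rules:
    - from:
        - source:
            principals: ["cluster.local/ns/default/sa/nginx-old"]
      to:
        - operation:
            methods: ["GET"]
```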
The `istioctl` command-line interface continues to centralize tooling for managing Istio service meshes. In v1.4.0, the `istioctl` command adds support for installing and managing the control plane configuration. (Note that in my testing with `istioctl manifest apply`, the control plane components did not deploy correctly. Whether this failure is due to a bug in 1.4.0 or the result of unclear instructions in the documentation, make sure to use the command with care, and test against non-production clusters first.)
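Customizations to the install can be passed to `istioctl manifest apply` either as `--set` flags or as an `IstioControlPlane` overlay file. The following is a hedged sketch of such an overlay; the `v1alpha2` field paths are reconstructed from memory, so check the Istio 1.4 operator documentation before relying on them:

```yaml
# Hypothetical IstioControlPlane overlay for: istioctl manifest apply -f <file>
# Field paths are a sketch of the 1.4-era v1alpha2 API; verify against the docs.
apiVersion: install.istio.io/v1alpha2
kind: IstioControlPlane
spec:
  profile: default        # start from the built-in default profile
  values:
    global:
      mtls:
        auto: true        # opt in to automatic mTLS
```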
Users trying to troubleshoot their Istio clusters can now also use the experimental `istioctl analyze` command, which can inspect either live clusters or the YAML manifests used to install and configure Istio, making it a very useful tool for preventing issues when installing Istio on a live Kubernetes cluster.
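Usage is straightforward; for example (as an experimental command in 1.4 it is invoked under the `experimental` subcommand, and the exact spelling may change in later releases):

```
# Analyze the live cluster in the current kubecontext
istioctl experimental analyze

# Analyze local manifests before applying them
istioctl experimental analyze my-istio-config.yaml
```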
The Envoy proxy, which powers the data plane via its placement in the istio-proxy sidecar in all pods in the mesh, should now run more smoothly, and it also supports additional metrics and enables mirroring a user-specified percentage of traffic rather than presenting an all-or-nothing choice.
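The percentage-based mirroring appears in the `VirtualService` API as a `mirrorPercent` field alongside the existing `mirror` destination. A brief sketch, with hypothetical host names:

```yaml
# Hypothetical VirtualService mirroring 10% of traffic to a shadow service.
# mirrorPercent is new in Istio 1.4; previously mirroring was all-or-nothing.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews        # live traffic goes here
      mirror:
        host: reviews-shadow     # request copies go here; responses are discarded
      mirrorPercent: 10          # mirror only 10% of requests
```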
The migration from permissive mTLS mode to mTLS enforcement has been simplified with automatic mTLS, also making gradual adoption of Istio and its security features easier. In past releases, users who were gradually adding the istio-proxy sidecar to deployments had to create `DestinationRule` resources and update them to reflect which upstream targets had the proxy sidecar and thus could support mTLS connections. With automatic mTLS, the Istio control plane tracks which deployments have the sidecar and updates the mesh’s sidecar proxies to connect to those workloads with or without mTLS as needed.
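For reference, this is the kind of per-destination rule (host and resource names hypothetical) that operators previously had to maintain by hand, and that automatic mTLS makes unnecessary:

```yaml
# Manual mTLS opt-in that automatic mTLS replaces: a DestinationRule
# telling client sidecars to use Istio mutual TLS for this host.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: backend-mtls
spec:
  host: backend.default.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL   # only valid once the destination pods have sidecars
```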
With these and other improvements, the Istio service mesh project continues to make its usability and management simpler and more predictable. While it is not yet a low-maintenance, easy-to-learn technology, this release highlights the continued focus on addressing challenges for new users and decreasing the maintenance workload for existing users.
If you’re just getting started learning about and using Istio, check out our “Getting Started with Istio” blog – it details what Istio is, how it works, and what use cases are best suited to it. You can also check out our architecture overview to learn more about the StackRox Kubernetes Security Platform.
*** This is a Security Bloggers Network syndicated blog from The Container Security Blog on StackRox authored by The Container Security Blog on StackRox. Read the original post at: https://www.stackrox.com/post/2019/11/whats-new-in-istio-1.4/