Kubernetes News

- Blog: User Namespaces: Now Supports Running Stateful Pods in Alpha!
Authors: Rodrigo Campos Catelin (Microsoft), Giuseppe Scrivano (Red Hat), Sascha Grunert (Red Hat)
Kubernetes v1.25 introduced support for user namespaces for stateless pods only. Kubernetes 1.28 lifted that restriction, after some design changes in 1.27.
The beauty of this feature is that:
- it is trivial to adopt (you just need to set a boolean in the pod spec; see the example below)
- it requires no changes for most applications
- it improves security by drastically enhancing the isolation of containers and mitigating CVEs rated HIGH and CRITICAL.
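For instance, here is a minimal sketch of what adoption looks like, assuming the user namespaces feature gate is enabled in your cluster and your container runtime supports it; the pod name and image are placeholders:
apiVersion: v1
kind: Pod
metadata:
  name: userns-demo
spec:
  hostUsers: false   # the single boolean that opts this pod into a user namespace
  containers:
  - name: app
    image: nginx     # any application image; the application itself needs no changes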
This post explains the basics of user namespaces and also shows:
- the changes that arrived in the recent Kubernetes v1.28 release
- a demo of a vulnerability rated as HIGH that is not exploitable with user namespaces
- the runtime requirements to use this feature
- what you can expect in future releases regarding user namespaces.
What is a user namespace?
A user namespace is a Linux feature that isolates the user and group identifiers (UIDs and GIDs) of the containers from the ones on the host. The identifiers in the container can be mapped to identifiers on the host in a way where the host UIDs/GIDs used for different containers never overlap. Furthermore, the identifiers can be mapped to unprivileged, non-overlapping UIDs and GIDs on the host. This basically means two things:
- As the UIDs and GIDs for different containers are mapped to different UIDs and GIDs on the host, containers have a harder time attacking each other even if they escape the container boundaries. For example, if container A is running with different UIDs and GIDs on the host than container B, the operations it can perform on container B's files and processes are limited: it can only read/write what a file allows to others, as it will never have permission for the owner or group (the UIDs/GIDs on the host are guaranteed to be different for different containers).
- As the UIDs and GIDs are mapped to unprivileged users on the host, if a container escapes the container boundaries, even if it is running as root inside the container, it has no privileges on the host. This greatly limits what host files it can read/write, which processes it can send signals to, and so on.
Furthermore, capabilities granted are only valid inside the user namespace and not on the host.
Without a user namespace, a container running as root has, in the case of a container breakout, root privileges on the node. And if some capabilities were granted to the container, those capabilities are valid on the host too. None of this is true when using user namespaces (modulo bugs, of course 🙂).
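To make the mapping concrete, here is an illustrative example of the /proc/<pid>/uid_map contents for two different containers; the actual ranges are chosen by the kubelet and runtime, so treat the numbers as placeholders (comments added here for explanation only):
# Format: <UID inside the namespace> <UID on the host> <length of the range>
# Container A: UID 0 inside the container corresponds to the unprivileged host UID 100000
0  100000  65536
# Container B: the same in-container UID 0 corresponds to a different, non-overlapping host range
0  165536  65536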
Changes in 1.28
As already mentioned, starting from 1.28, Kubernetes supports user namespaces with stateful pods. This means that pods with user namespaces can use any type of volume, they are no longer limited to only some volume types as before.
The feature gate to activate this feature was renamed: it is no longer UserNamespacesStatelessPodsSupport; from 1.28 onwards you should use UserNamespacesSupport. There were many changes done and the requirements on the node hosts changed, so with Kubernetes 1.28 the feature gate was renamed to reflect this.
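For illustration, here is one place the gate can be set, assuming you configure the kubelet via a KubeletConfiguration file; depending on your setup, the gate may also need to be enabled on other control plane components such as the kube-apiserver:
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  UserNamespacesSupport: true   # the renamed feature gate discussed above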
Demo
Rodrigo created a demo which exploits CVE-2022-0492 and shows how the exploit can occur without user namespaces. He also shows that this exploit is not possible from a Pod whose containers are using this feature.
This vulnerability is rated HIGH and allows a container with no special privileges to read/write to any path on the host and launch processes as root on the host too.
Most applications in containers run as root today, or as a semi-predictable non-root user (user ID 65534 is a somewhat popular choice). When you run a Pod with containers using a userns, Kubernetes runs those containers as unprivileged users, with no changes needed in your app.
This means two containers running as user 65534 will effectively be mapped to different users on the host, limiting what they can do to each other in case of an escape; and if they are running as root, the privileges on the host are reduced to those of an unprivileged user.
Node system requirements
There are requirements on the Linux kernel version as well as the container runtime to use this feature.
On the kernel side, you need Linux 6.3 or greater. This is because the feature relies on a kernel feature named idmap mounts, and support for using idmap mounts with tmpfs was merged in Linux 6.3.
If you are using CRI-O with crun, this is supported in CRI-O 1.28.1 and crun 1.9 or greater. If you are using CRI-O with runc, this is still not supported.
containerd support is currently targeted for containerd 2.0; it is likely that it won't matter if you use it with crun or runc.
Please note that containerd 1.7 added experimental support for user namespaces as implemented in Kubernetes 1.25 and 1.26. The redesign done in 1.27 is not supported by containerd 1.7, therefore it only works, in terms of user namespaces support, with Kubernetes 1.25 and 1.26.
One limitation present in containerd 1.7 is that it needs to change the ownership of every file and directory inside the container image during Pod startup. This means it has a storage overhead and can significantly impact the container startup latency. containerd 2.0 will probably include an implementation that eliminates the added startup latency and the storage overhead. Take this into account if you plan to use containerd 1.7 with user namespaces in production.
None of these containerd limitations apply to CRI-O 1.28.
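If you want to verify a node against these requirements, one quick, illustrative way is to check versions directly on the node; these are the standard version commands for the components mentioned above, and the output format varies by distribution:
uname -r          # kernel version: needs 6.3 or newer
crio version      # if using CRI-O: needs 1.28.1 or newer
crun --version    # if using crun: needs 1.9 or newer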
What’s next?
Looking ahead to Kubernetes 1.29, the plan is to work with SIG Auth to integrate user namespaces into Pod Security Standards (PSS) and the Pod Security Admission. For the time being, the plan is to relax checks in PSS policies when user namespaces are in use. This means that the spec[.*].securityContext fields runAsUser, runAsNonRoot, allowPrivilegeEscalation, and capabilities will not trigger a violation if user namespaces are in use. The behavior will probably be controlled by an API server feature gate, like UserNamespacesPodSecurityStandards or similar.
How do I get involved?
You can reach SIG Node by several means, or contact us directly:
- GitHub: @rata @giuseppe @saschagrunert
- Slack: @rata @giuseppe @sascha
- Blog: Comparing Local Kubernetes Development Tools: Telepresence, Gefyra, and mirrord
Author: Eyal Bukchin (MetalBear)
The Kubernetes development cycle is an evolving landscape with a myriad of tools seeking to streamline the process. Each tool has its unique approach, and the choice often comes down to individual project requirements, the team's expertise, and the preferred workflow.
Among the various solutions, a category we dubbed “Local K8S Development tools” has emerged, which seeks to enhance the Kubernetes development experience by connecting locally running components to the Kubernetes cluster. This facilitates rapid testing of new code in cloud conditions, circumventing the traditional cycle of Dockerization, CI, and deployment.
In this post, we compare three solutions in this category: Telepresence, Gefyra, and our own contender, mirrord.
Telepresence
The oldest and most well-established solution in the category, Telepresence uses a VPN (or more specifically, a tun device) to connect the user's machine (or a locally running container) and the cluster's network. It then supports the interception of incoming traffic to a specific service in the cluster, and its redirection to a local port. The traffic being redirected can also be filtered to avoid completely disrupting the remote service. It also offers complementary features to support file access (by locally mounting a volume mounted to a pod) and importing environment variables. Telepresence requires the installation of a local daemon on the user's machine (which requires root privileges) and a Traffic Manager component on the cluster. Additionally, it runs an Agent as a sidecar on the pod to intercept the desired traffic.
Gefyra
Gefyra, similar to Telepresence, employs a VPN to connect to the cluster. However, it only supports connecting locally running Docker containers to the cluster. This approach enhances portability across different OSes and local setups. However, the downside is that it does not support natively run uncontainerized code.
Gefyra primarily focuses on network traffic, leaving file access and environment variables unsupported. Unlike Telepresence, it doesn't alter the workloads in the cluster, ensuring a straightforward clean-up process if things go awry.
mirrord
The newest of the three tools, mirrord adopts a different approach by injecting itself into the local binary (utilizing LD_PRELOAD on Linux or DYLD_INSERT_LIBRARIES on macOS) and overriding libc function calls, which it then proxies to a temporary agent it runs in the cluster. For example, when the local process tries to read a file, mirrord intercepts that call and sends it to the agent, which then reads the file from the remote pod. This method allows mirrord to cover all inputs and outputs to the process – covering network access, file access, and environment variables uniformly.
By working at the process level, mirrord supports running multiple local processes simultaneously, each in the context of their respective pod in the cluster, without requiring them to be containerized and without needing root permissions on the user's machine.
Summary
Comparison of Telepresence, Gefyra, and mirrord:
|  | Telepresence | Gefyra | mirrord |
| --- | --- | --- | --- |
| Cluster connection scope | Entire machine or container | Container | Process |
| Developer OS support | Linux, macOS, Windows | Linux, macOS, Windows | Linux, macOS, Windows (WSL) |
| Incoming traffic features | Interception | Interception | Interception or mirroring |
| File access | Supported | Unsupported | Supported |
| Environment variables | Supported | Unsupported | Supported |
| Requires local root | Yes | No | No |
| How to use | CLI, Docker Desktop extension | CLI, Docker Desktop extension | CLI, Visual Studio Code extension, IntelliJ plugin |
Conclusion
Telepresence, Gefyra, and mirrord each offer unique approaches to streamline the Kubernetes development cycle, each having its strengths and weaknesses. Telepresence is feature-rich but comes with complexities, mirrord offers a seamless experience and supports various functionalities, while Gefyra aims for simplicity and robustness.
Your choice between them should depend on the specific requirements of your project, your team's familiarity with the tools, and the desired development workflow. Whichever tool you choose, we believe the local Kubernetes development approach can provide an easy, effective, and cheap solution to the bottlenecks of the Kubernetes development cycle, and will become even more prevalent as these tools continue to innovate and evolve.
- Blog: Kubernetes Legacy Package Repositories Will Be Frozen On September 13, 2023
Authors: Bob Killen (Google), Chris Short (AWS), Jeremy Rickard (Microsoft), Marko Mudrinić (Kubermatic), Tim Bannister (The Scale Factory)
On August 15, 2023, the Kubernetes project announced the general availability of the community-owned package repositories for Debian and RPM packages, available at pkgs.k8s.io. The new package repositories are a replacement for the legacy Google-hosted package repositories: apt.kubernetes.io and yum.kubernetes.io. The announcement blog post for pkgs.k8s.io highlighted that we will stop publishing packages to the legacy repositories in the future.
Today, we're formally deprecating the legacy package repositories (apt.kubernetes.io and yum.kubernetes.io), and we're announcing our plans to freeze the contents of the repositories as of September 13, 2023.
Please continue reading to learn what this means for you as a user or distributor, and what steps you may need to take.
How does this affect me as a Kubernetes end user?
This change affects users directly installing upstream versions of Kubernetes, either manually by following the official installation and upgrade instructions, or by using a Kubernetes installer that's using packages provided by the Kubernetes project.
This change also affects you if you run Linux on your own PC and have installed kubectl using the legacy package repositories. We'll explain later on how to check if you're affected.
If you use fully managed Kubernetes, for example through a service from a cloud provider, you would only be affected by this change if you also installed kubectl on your Linux PC using packages from the legacy repositories. Cloud providers are generally using their own Kubernetes distributions and therefore they don't use packages provided by the Kubernetes project; more importantly, if someone else is managing Kubernetes for you, then they would usually take responsibility for that check.
If you have a managed control plane but you are responsible for managing the nodes yourself, and any of those nodes run Linux, you should check whether you are affected.
If you're managing your clusters on your own by following the official installation and upgrade instructions, please follow the instructions in this blog post to migrate to the (new) community-owned package repositories.
If you're using a Kubernetes installer that's using packages provided by the Kubernetes project, please check the installer tool's communication channels for information about what steps you need to take, and eventually if needed, follow up with maintainers to let them know about this change.
(The original announcement includes a diagram showing who's affected by this change.)
How does this affect me as a Kubernetes distributor?
If you're using the legacy repositories as part of your project (e.g. a Kubernetes installer tool), you should migrate to the community-owned repositories as soon as possible and inform your users about this change and what steps they need to take.
Timeline of changes
- 15th August 2023: Kubernetes announces a new, community-managed source for Linux software packages of Kubernetes components
- 31st August 2023 (this announcement): Kubernetes formally deprecates the legacy package repositories
- 13th September 2023 (approximately): Kubernetes will freeze the legacy package repositories (apt.kubernetes.io and yum.kubernetes.io). The freeze will happen immediately following the patch releases that are scheduled for September 2023.
The Kubernetes patch releases scheduled for September 2023 (v1.28.2, v1.27.6, v1.26.9, v1.25.14) will have packages published both to the community-owned and the legacy repositories.
We'll freeze the legacy repositories after cutting the patch releases for September, which means that we'll completely stop publishing packages to the legacy repositories at that point.
For the v1.28, v1.27, v1.26, and v1.25 patch releases from October 2023 and onwards, we'll only publish packages to the new package repositories (pkgs.k8s.io).
What about future minor releases?
Kubernetes 1.29 and onwards will have packages published only to the community-owned repositories (pkgs.k8s.io).
Can I continue to use the legacy package repositories?
The existing packages in the legacy repositories will be available for the foreseeable future. However, the Kubernetes project can't provide any guarantees on how long that will be. The deprecated legacy repositories, and their contents, might be removed at any time in the future and without further notice.
The Kubernetes project strongly recommends migrating to the new community-owned repositories as soon as possible.
Given that no new releases will be published to the legacy repositories after the September 13, 2023 cut-off point, you will not be able to upgrade to any patch or minor release made from that date onwards.
Whilst the project makes every effort to release secure software, there may one day be a high-severity vulnerability in Kubernetes, and consequently an important release to upgrade to. The advice we're announcing will help you be prepared for any future security update, whether trivial or urgent.
How can I check if I'm using the legacy repositories?
The steps to check if you're using the legacy repositories depend on whether you're using Debian-based distributions (Debian, Ubuntu, and more) or RPM-based distributions (CentOS, RHEL, Rocky Linux, and more) in your cluster.
Run these instructions on one of your nodes in the cluster.
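Before walking through the distribution-specific steps below, a quick, illustrative way to spot legacy definitions is a simple search; adjust the paths to match your system:
grep -rsE "apt\.kubernetes\.io|yum\.kubernetes\.io|packages\.cloud\.google\.com" /etc/apt/sources.list /etc/apt/sources.list.d/ /etc/yum.repos.d/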
Debian-based Linux distributions
The repository definitions (sources) are located in /etc/apt/sources.list and /etc/apt/sources.list.d/ on Debian-based distributions. Inspect these two locations and try to locate a package repository definition that looks like:
deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main
If you find a repository definition that looks like this, you're using the legacy repository and you need to migrate.
If the repository definition uses pkgs.k8s.io, you're already using the community-hosted repositories and you don't need to take any action.
On most systems, this repository definition should be located in /etc/apt/sources.list.d/kubernetes.list (as recommended by the Kubernetes documentation), but on some systems it might be in a different location.
If you can't find a repository definition related to Kubernetes, it's likely that you don't use package managers to install Kubernetes and you don't need to take any action.
RPM-based Linux distributions
The repository definitions are located in /etc/yum.repos.d if you're using the yum package manager, or /etc/dnf/dnf.conf and /etc/dnf/repos.d/ if you're using the dnf package manager. Inspect those locations and try to locate a package repository definition that looks like this:
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
If you find a repository definition that looks like this, you're using the legacy repository and you need to migrate.
If the repository definition uses pkgs.k8s.io, you're already using the community-hosted repositories and you don't need to take any action.
On most systems, that repository definition should be located in /etc/yum.repos.d/kubernetes.repo (as recommended by the Kubernetes documentation), but on some systems it might be in a different location.
If you can't find a repository definition related to Kubernetes, it's likely that you don't use package managers to install Kubernetes and you don't need to take any action.
How can I migrate to the new community-operated repositories?
For more information on how to migrate to the new community-managed packages, please refer to the announcement blog post for pkgs.k8s.io.
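For illustration only, the community-owned repository definitions look roughly like the following; the version path (here v1.28) and key locations depend on your setup, so follow the steps in that announcement rather than copying these verbatim:
# Debian-based systems (e.g. /etc/apt/sources.list.d/kubernetes.list)
deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /
# RPM-based systems (e.g. /etc/yum.repos.d/kubernetes.repo)
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/repodata/repomd.xml.key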
Why is the Kubernetes project making this change?
Kubernetes has been publishing packages solely to the Google-hosted repository since Kubernetes v1.5, or for the past seven years! Following in the footsteps of migrating to our community-managed registry, registry.k8s.io, we are now migrating the Kubernetes package repositories to our own community-managed infrastructure. We're thankful to Google for their continuous hosting and support all these years, but this transition marks another big milestone for the project's goal of migrating to completely community-owned infrastructure.
Is there a Kubernetes tool to help me migrate?
We don't have any announcement to make about tooling there. As a Kubernetes user, you have to manually modify your configuration to use the new repositories. Automating the migration from the legacy to the community-owned repositories is technically challenging and we want to avoid any potential risks associated with this.
Acknowledgments
First of all, we want to acknowledge the contributions from Alphabet. Staff at Google have provided their time; Google as a business has provided both the infrastructure to serve packages, and the security context for giving those packages trustworthy digital signatures. These have been important to the adoption and growth of Kubernetes.
Releasing software might not be glamorous but it's important. Many people within the Kubernetes contributor community have contributed to the new way that we, as a project, have for building and publishing packages.
And finally, we want to once again acknowledge the help from SUSE. OpenBuildService, from SUSE, is the technology that powers the new community-managed package repositories.
- Blog: Gateway API v0.8.0: Introducing Service Mesh Support
Authors: Flynn (Buoyant), John Howard (Google), Keith Mattix (Microsoft), Michael Beaumont (Kong), Mike Morris (independent), Rob Scott (Google)
We are thrilled to announce the v0.8.0 release of Gateway API! With this release, Gateway API support for service mesh has reached Experimental status. We look forward to your feedback!
We're especially delighted to announce that Kuma 2.3+, Linkerd 2.14+, and Istio 1.16+ are all fully-conformant implementations of Gateway API service mesh support.
Service mesh support in Gateway API
While the initial focus of Gateway API was always ingress (north-south) traffic, it was clear almost from the beginning that the same basic routing concepts should also be applicable to service mesh (east-west) traffic. In 2022, the Gateway API subproject started the GAMMA initiative, a dedicated vendor-neutral workstream, specifically to examine how best to fit service mesh support into the framework of the Gateway API resources, without requiring users of Gateway API to relearn everything they understand about the API.
Over the last year, GAMMA has dug deeply into the challenges and possible solutions around using Gateway API for service mesh. The end result is a small number of enhancement proposals that subsume many hours of thought and debate, and provide a minimum viable path to allow Gateway API to be used for service mesh.
How will mesh routing work when using Gateway API?
You can find all the details in the Gateway API Mesh routing documentation and GEP-1426, but the short version for Gateway API v0.8.0 is that an HTTPRoute can now have a parentRef that is a Service, rather than just a Gateway. We anticipate future GEPs in this area as we gain more experience with service mesh use cases -- binding to a Service makes it possible to use the Gateway API with a service mesh, but there are several interesting use cases that remain difficult to cover.
As an example, you might use an HTTPRoute to do an A-B test in the mesh as follows:
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: bar-route
spec:
  parentRefs:
  - group: ""
    kind: Service
    name: demo-app
    port: 5000
  rules:
  - matches:
    - headers:
      - type: Exact
        name: env
        value: v1
    backendRefs:
    - name: demo-app-v1
      port: 5000
  - backendRefs:
    - name: demo-app-v2
      port: 5000
Any request to port 5000 of the demo-app Service that has the header env: v1 will be routed to demo-app-v1, while any request without that header will be routed to demo-app-v2 -- and since this is being handled by the service mesh, not the ingress controller, the A/B test can happen anywhere in the application's call graph.
How do I know this will be truly portable?
Gateway API has been investing heavily in conformance tests across all features it supports, and mesh is no exception. One of the challenges that the GAMMA initiative ran into is that many of these tests were strongly tied to the idea that a given implementation provides an ingress controller. Many service meshes don't, and requiring a GAMMA-conformant mesh to also implement an ingress controller seemed impractical at best. This resulted in work restarting on Gateway API conformance profiles, as discussed in GEP-1709.
The basic idea of conformance profiles is that we can define subsets of the Gateway API, and allow implementations to choose (and document) which subsets they conform to. GAMMA is adding a new profile, named Mesh and described in GEP-1686, which checks only the mesh functionality as defined by GAMMA. At this point, Kuma 2.3+, Linkerd 2.14+, and Istio 1.16+ are all conformant with the Mesh profile.
What else is in Gateway API v0.8.0?
This release is all about preparing Gateway API for the upcoming v1.0 release where HTTPRoute, Gateway, and GatewayClass will graduate to GA. There are two main changes related to this: CEL validation and API version changes.
CEL Validation
The first major change is that Gateway API v0.8.0 is the start of a transition from webhook validation to CEL validation using information built into the CRDs. That will mean different things depending on the version of Kubernetes you're using:
Kubernetes 1.25+
CEL validation is fully supported, and almost all validation is implemented in CEL. (The sole exception is that header names in header modifier filters can only do case-insensitive validation. There is more information in issue 2277.)
We recommend not using the validating webhook on these Kubernetes versions.
Kubernetes 1.23 and 1.24
CEL validation is not supported, but Gateway API v0.8.0 CRDs can still be installed. When you upgrade to Kubernetes 1.25+, the validation included in these CRDs will automatically take effect.
We recommend continuing to use the validating webhook on these Kubernetes versions.
Kubernetes 1.22 and older
Gateway API only commits to supporting the 5 most recent versions of Kubernetes. As such, these versions are no longer supported by Gateway API, and unfortunately Gateway API v0.8.0 cannot be installed on them, since CRDs containing CEL validation will be rejected.
API Version Changes
As we prepare for a v1.0 release that will graduate Gateway, GatewayClass, and HTTPRoute to the v1 API version from v1beta1, we are continuing the process of moving away from v1alpha2 for resources that have graduated to v1beta1. For more information on this change and everything else included in this release, refer to the v0.8.0 release notes.
How can I get started with Gateway API?
Gateway API represents the future of load balancing, routing, and service mesh APIs in Kubernetes. There are already more than 20 implementations available (including both ingress controllers and service meshes) and the list keeps growing.
If you're interested in getting started with Gateway API, take a look at the API concepts documentation and check out some of the Guides to try it out. Because this is a CRD-based API, you can install the latest version on any Kubernetes 1.23+ cluster.
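For example, one common way to install the v0.8.0 standard channel CRDs is with kubectl; the manifest URL below follows the project's usual release-asset naming, so double-check the v0.8.0 release notes if it has moved:
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v0.8.0/standard-install.yaml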
If you're specifically interested in helping to contribute to Gateway API, we would love to have you! Please feel free to open a new issue on the repository, or join in the discussions. Also check out the community page which includes links to the Slack channel and community meetings. We look forward to seeing you!!
Further Reading:
- GEP-1324 provides an overview of the GAMMA goals and some important definitions. This GEP is well worth a read for its discussion of the problem space.
- GEP-1426 defines how to use Gateway API route resources, such as HTTPRoute, to manage traffic within a service mesh.
- GEP-1686 builds on the work of GEP-1709 to define a conformance profile for service meshes to be declared conformant with Gateway API.
Although these are Experimental patterns, note that they are available in the standard release channel, since the GAMMA initiative has not needed to introduce new resources or fields to date.
- Blog: Kubernetes 1.28: A New (alpha) Mechanism For Safer Cluster Upgrades
Author: Richa Banker (Google)
This blog describes the mixed version proxy, a new alpha feature in Kubernetes 1.28. The mixed version proxy enables an HTTP request for a resource to be served by the correct API server in cases where there are multiple API servers at varied versions in a cluster. For example, this is useful during a cluster upgrade, or when you're rolling out the runtime configuration of the cluster's control plane.
What problem does this solve?
When a cluster undergoes an upgrade, the kube-apiservers existing at different versions in that scenario can serve different sets (groups, versions, resources) of built-in resources. A resource request made in this scenario may be served by any of the available apiservers, potentially resulting in the request ending up at an apiserver that may not be aware of the requested resource; consequently, the request is incorrectly served a 404 (Not Found) error. Furthermore, incorrectly served 404 errors can lead to serious consequences such as namespace deletion being blocked incorrectly or objects being garbage collected mistakenly.
How do we solve the problem?
The new feature “Mixed Version Proxy” provides the kube-apiserver with the capability to proxy a request to a peer kube-apiserver which is aware of the requested resource and hence can serve the request. To do this, a new filter has been added to the handler chain in the API server's aggregation layer.
- The new filter in the handler chain checks whether the request is for a group/version/resource that the apiserver doesn't know about (using the existing StorageVersion API). If so, it proxies the request to one of the apiservers listed in the ServerStorageVersion object. If the identified peer apiserver fails to respond (due to reasons like network connectivity, or a race between the request being received and the controller registering the apiserver-resource info in the ServerStorageVersion object), then a 503 ("Service Unavailable") error is served.
- To prevent indefinite proxying of the request, a (new for v1.28) HTTP header X-Kubernetes-APIServer-Rerouted: true is added to the original request once it is determined that the request cannot be served by the original API server. Setting that to true marks that the original API server couldn't handle the request and it should therefore be proxied. If a destination peer API server sees this header, it never proxies the request further.
- To set the network location of a kube-apiserver that peers will use to proxy requests, the value passed in --advertise-address or (when --advertise-address is unspecified) the --bind-address flag is used. For users with network configurations that would not allow communication between peer kube-apiservers using the addresses specified in these flags, there is an option to pass the correct peer address via the --peer-advertise-ip and --peer-advertise-port flags introduced by this feature.
How do I enable this feature?
Following are the required steps to enable the feature (a combined example follows this list):
- Download the latest Kubernetes project (version v1.28.0 or later).
- Switch on the feature gate with the command line flag --feature-gates=UnknownVersionInteroperabilityProxy=true on the kube-apiservers.
- Pass the CA bundle that will be used by the source kube-apiserver to authenticate the destination kube-apiserver's serving certs, using the flag --peer-ca-file on the kube-apiservers. Note: this is a required flag for this feature to work; there is no default value for this flag.
- Pass the correct IP and port of the local kube-apiserver that will be used by peers to connect to this kube-apiserver while proxying a request. Use the flags --peer-advertise-ip and --peer-advertise-port on the kube-apiservers upon startup. If unset, the value passed to either --advertise-address or --bind-address is used. If those, too, are unset, the host's default interface will be used.
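Putting those flags together, a kube-apiserver invocation might look roughly like this; the CA path, IP, and port are placeholders for your environment:
# Illustrative only: combine these flags with your existing kube-apiserver configuration.
kube-apiserver \
  --feature-gates=UnknownVersionInteroperabilityProxy=true \
  --peer-ca-file=/etc/kubernetes/pki/ca.crt \
  --peer-advertise-ip=10.0.0.11 \
  --peer-advertise-port=6443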
What’s missing?
Currently, we only proxy resource requests to a peer kube-apiserver when it's determined to be necessary. Next, we need to address how discovery requests should work in such scenarios. Right now, we are planning to have the following capabilities for beta:
- Merged discovery across all kube-apiservers
- Use an egress dialer for network connections made to peer kube-apiservers
How can I learn more?
How can I get involved?
Reach us on Slack: #sig-api-machinery, or through the mailing list.
Huge thanks to the contributors that have helped in the design, implementation, and review of this feature: Daniel Smith, Han Kang, Joe Betz, Jordan Liggitt, Antonio Ojea, David Eads and Ben Luddy!