Authors: James Strong, Ricardo Katz
As with all Kubernetes APIs, there is a process for creating, maintaining, and ultimately deprecating them once they become GA. The networking.k8s.io API group is no different. The upcoming Kubernetes 1.22 release will remove several deprecated APIs that are relevant to networking:
- the networking.k8s.io/v1beta1 API version of IngressClass
- all beta versions of Ingress: extensions/v1beta1 and networking.k8s.io/v1beta1
On a v1.22 Kubernetes cluster, you'll be able to access Ingress and IngressClass objects through the stable (v1) APIs, but access via their beta APIs won't be possible. This change has been in discussion since 2017, resurfaced in 2019 with the Kubernetes 1.16 API deprecations, and was most recently covered in KEP-1453: Graduate Ingress API to GA.
During community meetings, the networking Special Interest Group decided to continue supporting Kubernetes versions older than 1.22 with Ingress-NGINX version 0.47.0. Support for Ingress-NGINX will continue for six months after Kubernetes 1.22 is released. Any additional bug fixes and CVEs for Ingress-NGINX will be addressed on an as-needed basis.
The Ingress-NGINX project will maintain separate branches and releases to support this model, mirroring the Kubernetes project process. Future releases of the Ingress-NGINX project will track and support the latest versions of Kubernetes.
Ingress-NGINX supported versions with Kubernetes versions

| Kubernetes version | Ingress-NGINX version | Notes |
|---|---|---|
| v1.22 | v1.0.0-alpha.2 | New features, plus bug fixes. |
| v1.21 | v0.47.x | Bugfixes only, and just for security issues or crashes. No end-of-support date announced. |
| v1.20 | v0.47.x | Bugfixes only, and just for security issues or crashes. No end-of-support date announced. |
| v1.19 | v0.47.x | Bugfixes only, and just for security issues or crashes. Fixes only provided until 6 months after Kubernetes v1.22.0 is released. |
Because of the API removals in Kubernetes 1.22, Ingress-NGINX v0.47.0 will not work with that release.
What you need to do
The team is currently in the process of upgrading ingress-nginx to support the v1 migration; you can track the progress here.
We're not making feature improvements to ingress-nginx until after the support for Ingress v1 is complete.
In the meantime, to ensure there are no compatibility issues:
- Update to the latest version of Ingress-NGINX; currently v0.47.0
- After Kubernetes 1.22 is released, ensure you are using the latest version of Ingress-NGINX that supports the stable APIs for Ingress and IngressClass.
- Test Ingress-NGINX version v1.0.0-alpha.2 with cluster versions >= 1.19 and report any issues to the project's GitHub page.
The community’s feedback and support in this effort are welcome. The Ingress-NGINX Sub-project regularly holds community meetings where we discuss this and other issues facing the project. For more information on the sub-project, please see SIG Network.
Authors: Celeste Horgan, Adolfo García Veytia, James Laverack, Jeremy Rickard
On April 23, 2021, the Release Team merged a Kubernetes Enhancement Proposal (KEP) changing the Kubernetes release cycle from four releases a year (once a quarter) to three releases a year.
This blog post provides a high level overview about what this means for the Kubernetes community's contributors and maintainers.
What's changing and when
Starting with the Kubernetes 1.22 release, a lightweight policy will drive the creation of each release schedule. This policy states:
- The first Kubernetes release of a calendar year should start in the second or third week of January, to give contributors time to return from the end-of-year holidays.
- The last Kubernetes release of a calendar year should be finished by the middle of December.
- A Kubernetes release cycle has a length of approximately 15 weeks.
- The week of KubeCon + CloudNativeCon is not considered a 'working week' for SIG Release. The Release Team will not hold meetings or make decisions in this period.
- An explicit SIG Release break of at least two weeks between each cycle will be enforced.
As a result, Kubernetes will follow a three releases per year cadence. Kubernetes 1.23 will be the final release of the 2021 calendar year. This new policy results in a very predictable release schedule, allowing us to forecast upcoming release dates:
Proposed Kubernetes Release Schedule for the remainder of 2021
| Week Number in Year | Release Number | Release Week | Note |
|---|---|---|---|
| 35 | 1.23 | 1 (August 23) | |
| 50 | 1.23 | 16 (December 07) | KubeCon + CloudNativeCon NA Break (Oct 11-15) |
Proposed Kubernetes Release Schedule for 2022
| Week Number in Year | Release Number | Release Week | Note |
|---|---|---|---|
| 1 | 1.24 | 1 (January 03) | |
| 15 | 1.24 | 15 (April 12) | |
| 17 | 1.25 | 1 (April 26) | KubeCon + CloudNativeCon EU likely to occur |
| 32 | 1.25 | 15 (August 09) | |
| 34 | 1.26 | 1 (August 22) | KubeCon + CloudNativeCon NA likely to occur |
| 49 | 1.26 | 14 (December 06) | |
These proposed dates reflect only the start and end dates, and they are subject to change. The Release Team will select dates for enhancement freeze, code freeze, and other milestones at the start of each release. For more information on these milestones, please refer to the release phases documentation. Feedback from prior releases will feed into this process.
What this means for end users
The major change end users will experience is a slower release cadence and a slower rate of enhancement graduation. Kubernetes release artifacts, release notes, and all other aspects of any given release will stay the same.
Prior to this change, an enhancement could graduate from alpha to stable in 9 months. With the change in cadence, this will stretch to 12 months. Additionally, graduation of features over the last few releases has in some part been driven by release team activities.
With fewer releases, users can expect to see the rate of feature graduation slow. Users can also expect releases to contain a larger number of enhancements that they need to be aware of during upgrades. However, with fewer releases to consume per year, it's intended that end user organizations will spend less time on upgrades and gain more time on supporting their Kubernetes clusters. It also means that Kubernetes releases are in support for a slightly longer period of time, so bug fixes and security patches will be available for releases for a longer period of time.
What this means for Kubernetes contributors
With a lower release cadence, contributors have more time for project enhancements, feature development, planning, and testing. A slower release cadence also provides more room for maintaining their mental health, preparing for events like KubeCon + CloudNativeCon, or working on downstream integrations.
Why we decided to change the release cadence
The Kubernetes 1.19 cycle was far longer than usual. SIG Release extended it to lessen the burden on both Kubernetes contributors and end users due to the COVID-19 pandemic. Following this extended release, the Kubernetes 1.20 release became the third, and final, release for 2020.
As the Kubernetes project matures, the number of enhancements per cycle grows, along with the burden on contributors and the Release Engineering team. Downstream consumers and integrators also face increased challenges keeping up with ever more feature-packed releases. Wider project adoption means that the complexity of supporting a rapidly evolving platform affects a bigger downstream chain of consumers.
Changing the release cadence from four to three releases per year balances a variety of factors for stakeholders: while it's not strictly an LTS policy, consumers and integrators will get longer support terms for each minor version as the extended release cycles lead to the previous three releases being supported for a longer period. Contributors get more time to mature enhancements and get them ready for production.
Finally, the management overhead for SIG Release and the Release Engineering team diminishes, allowing the team to spend more time on improving the quality of the software releases and the tooling that drives them.
How you can help
Join the discussion about communicating future release dates, and be sure to be on the lookout for post-release surveys.
Where you can find out more
Author: Kunal Kushwaha, Civo
Are you interested in learning about what SIG Usability does and how you can get involved? Well, you're at the right place. SIG Usability is all about making Kubernetes more accessible to new folks, and its main activity is conducting user research for the community. In this blog, we have summarized our conversation with Gaby Moreno, who walks us through the various aspects of being a part of the SIG and shares some insights about how others can get involved.
Gaby is a co-lead for SIG Usability. She works as a Product Designer at IBM and enjoys working on the user experience of open, hybrid cloud technologies like Kubernetes, OpenShift, Terraform, and Cloud Foundry.
A summary of our conversation
Q. Could you tell us a little about what SIG Usability does?
A. SIG Usability started, at a high level, because there was no dedicated user experience team for Kubernetes. The SIG focuses on the end-user usability of the Kubernetes project. Its main activity is user research for the community, which includes speaking to Kubernetes users.
This covers points like user experience and accessibility. The objectives of the SIG are to ensure that the Kubernetes project is usable by people from a wide range of backgrounds and abilities, for example by incorporating internationalization and ensuring the accessibility of documentation.
Q. Why should new and existing contributors consider joining SIG Usability?
A. There are plenty of areas where new contributors can begin. For example:
- User research projects, where people can help understand the usability of the end-user experiences, including error messages, end-to-end tasks, etc.
- Accessibility guidelines for Kubernetes community artifacts, examples include: internationalization of documentation, color choices for people with color blindness, ensuring compatibility with screen reader technology, user interface design for core components with user interfaces, and more.
Q. What do you do to help new contributors get started?
A. New contributors can get started by shadowing one of the user interviews, going through user interview transcripts, analyzing them, and designing surveys.
SIG Usability is also open to new project ideas. If you have an idea, we’ll do what we can to support it. There are regular SIG Meetings where people can ask their questions live. These meetings are also recorded for those who may not be able to attend. As always, you can reach out to us on Slack as well.
Q. What does the survey include?
A. In simple terms, the survey gathers information about how people use Kubernetes, such as trends in learning to deploy a new system, error messages they receive, and workflows.
One of our goals is to standardize the responses accordingly. The ultimate goal is to analyze survey responses for important user stories whose needs aren't being met.
Q. Are there any particular skills you’d like to recruit for? What skills are contributors to SIG Usability likely to learn?
A. Contributing to SIG Usability has no prerequisites as such, but experience with user research, qualitative research, or conducting interviews would be a great plus. Quantitative research, like survey design and screening, is also helpful and something that we expect contributors to learn.
Q. What are you getting positive feedback on, and what’s coming up next for SIG Usability?
A. We have had new members joining and coming to monthly meetings regularly, showing interest in becoming contributors and helping the community. We have also had a lot of people reach out to us via Slack to express their interest in the SIG.
Currently, we are focused on finishing the study mentioned in our talk, also our project for this year. We are always happy to have new contributors join us.
Q: Any closing thoughts/resources you’d like to share?
A. We love meeting new contributors and helping them explore the different Kubernetes project spaces. We will partner with other SIGs to facilitate engaging with end users, run studies, and help them integrate accessible design practices into their development practices.
Here are some resources for you to get started:
SIG Usability hosted a KubeCon talk about studying Kubernetes users' experiences. The talk focuses on updates to the user study projects, understanding who is using Kubernetes, what they are trying to achieve, how the project is addressing their needs, and where we need to improve the project and the client experience. Join the SIG's update to find out about the most recent research results, what the plans are for the forthcoming year, and how to get involved in the upstream usability team as a contributor!
Authors: Krishna Kilari (Amazon Web Services), Tim Bannister (The Scale Factory)
As the Kubernetes API evolves, APIs are periodically reorganized or upgraded. When APIs evolve, the old APIs they replace are deprecated, and eventually removed. See Kubernetes API removals to read more about Kubernetes' policy on removing APIs.
We want to make sure you're aware of some upcoming removals. These are beta APIs that you can use in current, supported Kubernetes versions, and they are already deprecated. The reason for all of these removals is that they have been superseded by a newer, stable (“GA”) API.
Kubernetes 1.22, due for release in August 2021, will remove a number of deprecated APIs. Kubernetes 1.22 Release Information has details on the schedule for the v1.22 release.
API removals for Kubernetes v1.22
The v1.22 release will stop serving the API versions we've listed immediately below. These are all beta APIs that were previously deprecated in favor of newer and more stable API versions.
- Beta versions of the MutatingWebhookConfiguration and ValidatingWebhookConfiguration APIs (the admissionregistration.k8s.io/v1beta1 API versions)
- The beta CustomResourceDefinition API (apiextensions.k8s.io/v1beta1)
- The beta APIService API (apiregistration.k8s.io/v1beta1)
- The beta TokenReview API (authentication.k8s.io/v1beta1)
- Beta API versions of SubjectAccessReview, LocalSubjectAccessReview, and SelfSubjectAccessReview (API versions from authorization.k8s.io/v1beta1)
- The beta CertificateSigningRequest API (certificates.k8s.io/v1beta1)
- The beta Lease API (coordination.k8s.io/v1beta1)
- All beta Ingress APIs (the extensions/v1beta1 and networking.k8s.io/v1beta1 API versions)
The Kubernetes documentation covers these API removals for v1.22 and explains how each of those APIs change between beta and stable.
What to do
We're going to run through each of the resources that are affected by these removals and explain the steps you'll need to take.
- Migrate to use the networking.k8s.io/v1 API version of Ingress and IngressClass, available since v1.19.
The related API IngressClass is designed to complement the Ingress concept, allowing you to configure multiple kinds of Ingress within one cluster. If you're currently using the deprecated kubernetes.io/ingress.class annotation, plan to switch to using the .spec.ingressClassName field instead.
On any cluster running Kubernetes v1.19 or later, you can use the v1 API to retrieve or update existing Ingress objects, even if they were created using an older API version.
When you convert an Ingress to the v1 API, you should review each rule in that Ingress. Older Ingresses use the legacy ImplementationSpecific path type. Instead of ImplementationSpecific, switch path matching to either Prefix or Exact. One of the benefits of moving to these alternative path types is that it becomes easier to migrate between different Ingress classes.
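To make this concrete, here is a hypothetical Ingress manifest written against the stable v1 API; the class, host, and service names are placeholder assumptions, not values from any real cluster:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  # Replaces the deprecated kubernetes.io/ingress.class annotation
  ingressClassName: example-class
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /
        # Prefer Prefix or Exact over the legacy ImplementationSpecific
        pathType: Prefix
        backend:
          service:
            name: example-service
            port:
              number: 80
```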
ⓘ As well as upgrading your own use of the Ingress API as a client, make sure that every ingress controller that you use is compatible with the v1 Ingress API. Read Ingress Prerequisites for more context about Ingress and ingress controllers.
- Migrate to use the admissionregistration.k8s.io/v1 API versions of MutatingWebhookConfiguration and ValidatingWebhookConfiguration, available since v1.16.
You can use the v1 API to retrieve or update existing objects, even if they were created using an older API version.
- Migrate to use the CustomResourceDefinition apiextensions.k8s.io/v1 API, available since v1.16.
You can use the v1 API to retrieve or update existing objects, even if they were created using an older API version. If you defined any custom resources in your cluster, those are still served after you upgrade.
If you're using external CustomResourceDefinitions, you can use kubectl convert to translate existing manifests to use the newer API. Because there are some functional differences between beta and stable CustomResourceDefinitions, our advice is to test out each one to make sure it works how you expect after the upgrade.
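As an illustrative sketch (the group and resource names below are placeholders), a minimal CustomResourceDefinition in the stable API shows one of the most visible differences: apiextensions.k8s.io/v1 requires a structural schema for every served version, whereas v1beta1 made validation optional:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # Name must match <plural>.<group>
  name: crontabs.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
  - name: v1
    served: true
    storage: true
    # v1 requires a structural schema per served version; in v1beta1,
    # validation was optional and declared at the top level
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec:
                type: string
```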
- Migrate to use the apiregistration.k8s.io/v1 APIService API, available since v1.10.
You can use the v1 API to retrieve or update existing objects, even if they were created using an older API version. If you already have API aggregation using an APIService object, this aggregation continues to work after you upgrade.
- Migrate to use the authentication.k8s.io/v1 TokenReview API, available since v1.10.
As well as serving this API via HTTP, the Kubernetes API server uses the same format to send TokenReviews to webhooks. The v1.22 release continues to use the v1beta1 API for TokenReviews sent to webhooks by default. See Looking ahead for some specific tips about switching to the stable API.
- Migrate to use the authorization.k8s.io/v1 versions of those authorization APIs, available since v1.6.
- Migrate to use the certificates.k8s.io/v1 CertificateSigningRequest API, available since v1.19.
You can use the v1 API to retrieve or update existing objects, even if they were created using an older API version. Existing issued certificates retain their validity when you upgrade.
- Migrate to use the coordination.k8s.io/v1 Lease API, available since v1.14.
You can use the v1 API to retrieve or update existing objects, even if they were created using an older API version.
There is a plugin to kubectl that provides the kubectl convert subcommand. It's an official plugin that you can download as part of Kubernetes. See Download Kubernetes for more details.
You can use kubectl convert to update manifest files to use a different API version. For example, if you have a manifest in source control that uses the beta Ingress API, you can check that definition out and run kubectl convert -f <manifest> --output-version <group>/<version> to automatically convert it.
For example, to convert an older Ingress definition to networking.k8s.io/v1, you can run:
kubectl convert -f ./legacy-ingress.yaml --output-version networking.k8s.io/v1
The automatic conversion uses a similar technique to how the Kubernetes control plane updates objects that were originally created using an older API version. Because it's a mechanical conversion, you might need to go in and change the manifest to adjust defaults etc.
Rehearse for the upgrade
If you manage your cluster's API server component, you can try out these API removals before you upgrade to Kubernetes v1.22.
To do that, add the following to the kube-apiserver command line arguments:
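Based on the list of beta APIs removed in v1.22 given above, a sketch of the kube-apiserver setting might look like this; treat the exact group/version list as an assumption to verify against the APIs your cluster actually serves:

```
--runtime-config=admissionregistration.k8s.io/v1beta1=false,apiextensions.k8s.io/v1beta1=false,apiregistration.k8s.io/v1beta1=false,authentication.k8s.io/v1beta1=false,authorization.k8s.io/v1beta1=false,certificates.k8s.io/v1beta1=false,coordination.k8s.io/v1beta1=false,extensions/v1beta1/ingresses=false,networking.k8s.io/v1beta1=false
```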
(as a side effect, this also turns off v1beta1 of EndpointSlice - watch out for that when you're testing).
Once you've switched all the kube-apiservers in your cluster to use that setting, those beta APIs are removed. You can test that API clients (kubectl, deployment tools, custom controllers, etc.) still work how you expect, and you can revert if you need to without having to plan a more disruptive downgrade.
Advice for software authors
Maybe you're reading this because you're a developer of an addon or other component that integrates with Kubernetes?
If you develop an Ingress controller, webhook authenticator, an API aggregation, or any other tool that relies on these deprecated APIs, you should already have started to switch your software over.
You can use the tips in Rehearse for the upgrade to run your own Kubernetes cluster that only uses the new APIs, and make sure that your code works OK. For your documentation, make sure readers are aware of any steps they should take for the Kubernetes v1.22 upgrade.
Where possible, give your users a hand to adopt the new APIs early - perhaps in a test environment - so they can give you feedback about any problems.
There are some more deprecations coming in Kubernetes v1.25, so plan to have those covered too.
Kubernetes API removals
Here's some background about why Kubernetes removes some APIs, and also a promise about stable APIs in Kubernetes.
Kubernetes follows a defined deprecation policy for its features, including the Kubernetes API. That policy allows for replacing stable (“GA”) APIs from Kubernetes. Importantly, this policy means that a stable API can only be deprecated when a newer stable version of that same API is available.
That stability guarantee matters: if you're using a stable Kubernetes API, there won't ever be a new version released that forces you to switch to an alpha or beta feature.
Earlier stages are different. Alpha features are under test and potentially incomplete. Almost always, alpha features are disabled by default. Kubernetes releases can and do remove alpha features that haven't worked out.
After alpha, comes beta. These features are typically enabled by default; if the testing works out, the feature can graduate to stable. If not, it might need a redesign.
Last year, Kubernetes officially adopted a policy for APIs that have reached their beta phase:
For Kubernetes REST APIs, when a new feature's API reaches beta, that starts a countdown. The beta-quality API now has three releases … to either:
- reach GA, and deprecate the beta, or
- have a new beta version (and deprecate the previous beta).
At the time of that article, three Kubernetes releases equated to roughly nine calendar months. Later that same month, Kubernetes adopted a new release cadence of three releases per calendar year, so the countdown period is now roughly twelve calendar months.
Whether an API removal is because of a beta feature graduating to stable, or because that API hasn't proved successful, Kubernetes will continue to remove APIs by following its deprecation policy and making sure that migration options are documented.
There's a setting that's relevant if you use webhook authentication checks. A future Kubernetes release will switch to sending TokenReview objects to webhooks using the authentication.k8s.io/v1 API by default. At the moment, the default is to send authentication.k8s.io/v1beta1 TokenReviews to webhooks, and that's still the default for Kubernetes v1.22. However, you can switch over to the stable API right now if you want: add --authentication-token-webhook-version=v1 to the command line options for the kube-apiserver, and check that webhooks for authentication still work how you expected.
Once you're happy it works OK, you can leave the --authentication-token-webhook-version=v1 option set across your control plane.
The v1.25 release that's planned for next year will stop serving beta versions of several Kubernetes APIs that are stable right now and have been for some time. The same v1.25 release will remove PodSecurityPolicy, which is deprecated and won't graduate to stable. See PodSecurityPolicy Deprecation: Past, Present, and Future for more information.
The official list of API removals planned for Kubernetes 1.25 is:
- The beta CronJob API (batch/v1beta1)
- The beta EndpointSlice API (discovery.k8s.io/v1beta1)
- The beta PodDisruptionBudget API (policy/v1beta1)
- The beta PodSecurityPolicy API (policy/v1beta1)
Want to know more?
For information on the process of deprecation and removal, check out the official Kubernetes deprecation policy document.
Authors: Divya Mohan
Kubernetes is a large open source project. With over 100,000 commits just to the main k/kubernetes repository, hundreds of other code repositories in the project, and thousands of contributors, there's a lot going on. In fact, there are 37 contributor groups at the time of writing. We also value all forms of contribution, not just code changes. Given that growth and scale, the existing reporting mechanisms were proving to be inadequate and challenging.
With that context in mind, the challenge of reporting on all this activity was a call to action for exploring better options. Therefore, inspired by the Apache Software Foundation’s open guide to PMC Reporting and the CNCF project Annual Reporting, the Kubernetes project is proud to announce the Kubernetes Community Group Annual Reports for Special Interest Groups (SIGs) and Working Groups (WGs). In its flagship edition, the 2020 Summary report focuses on bettering the Kubernetes ecosystem by assessing and promoting the health of the groups within the upstream community.
Previously, the Kubernetes project relied on devstats, GitHub data, and issues to report on groups and their activities and to measure the health of a given UG/WG/SIG/Committee. As a project spanning several diverse communities, it was essential to have something that captured the human side of things. With 50,000+ contributors, it’s easy to assume that the project has enough help; this report surfaces more information than /help-wanted and /good-first-issue for end users. This is how we sustain the project. Paraphrasing one of the Steering Committee members, Paris Pittman, “There was a requirement for tighter feedback loops - ones that involved more than just GitHub data and issues. Given that Kubernetes, as a project, has grown in scale and number of contributors over the years, we have outgrown the existing reporting mechanisms."
The existing communication channels between the Steering Committee members and the folks leading the groups and committees also needed to become as open and as bi-directional as possible. To achieve this, every group and committee has been assigned a liaison from among the Steering Committee members for kick-off, help, or guidance needed throughout the process. According to Davanum Srinivas a.k.a. dims, “... That was one of the main motivations behind this report. People (leading the groups/committees) know that they can reach out to us and there’s a vehicle for them to reach out to us… This is our way of setting up a two-way feedback for them." Progress on these action items will be updated and tracked at the monthly Steering Committee meetings, ensuring that this is not a one-off activity. Quoting Nikhita Raghunath, one of the Steering Committee members, “... Once we have a base, the liaisons will work with these groups to ensure that the problems are resolved. When we have a report next year, we’ll have a look at the progress made and how we could still do better. But the idea is definitely to not stop at the report.”
With this report, we hope to empower our end user communities with information that they can use to identify ways in which they can support the project as well as a sneak peek into the roadmap for upcoming features. As a community, we thrive on feedback and would love to hear your views about the report. You can get in touch with the Steering Committee via Slack or via the mailing list.