
Changes to the Kubernetes Container Image Registry

Introduction

With the release of Kubernetes 1.25, the Kubernetes project first announced that it would be updating its official container image registry endpoint from k8s.gcr.io to the community-owned registry, registry.k8s.io, with the goal of sunsetting the old registry over time. However, as highlighted on the official Kubernetes website, this changeover has since been expedited to adopt a more sustainable infrastructure model, with the first set of changes happening on Monday, March 20, 2023. This post covers what changes are happening, why they're happening, important dates to keep in mind, and what actions you need to take.

What changes are happening?

Beginning March 20, 2023, all traffic served from the k8s.gcr.io endpoint will be redirected to registry.k8s.io. Then, on April 3, 2023, the old registry will be frozen, preventing any images for Kubernetes and its sub-projects from being pushed to k8s.gcr.io.

All images in the k8s.gcr.io registry will be impacted by this change, including those of sub-projects such as Kubernetes DNS (dns/k8s-dns-node-cache) and the Ingress NGINX Controller (ingress-nginx/controller). The new registry endpoint is designed to spread traffic across a number of regions and cloud providers. As such, clients pulling images from registry.k8s.io will be securely redirected to fetch images from a storage service (e.g., Amazon Simple Storage Service [Amazon S3]) in the closest region of the relevant cloud provider.
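To see the change in practice, you can compare what the two endpoints serve for the same image, for example with crane. This is a minimal check only; the image name and tag below are illustrative, so substitute an image your clusters actually use.

# Compare the manifest digest returned by the old and new endpoints
# (image name and tag are illustrative)
crane digest k8s.gcr.io/pause:3.9
crane digest registry.k8s.io/pause:3.9

# Or pull directly from the new endpoint
docker pull registry.k8s.io/pause:3.9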

Why the change?

Since the inception of Kubernetes, Google has hosted the project’s official container image registry (k8s.gcr.io). As the project has grown, this model has become increasingly unsustainable for the Cloud Native Computing Foundation (CNCF) because of the egress charges associated with image pulls from other cloud providers. The Kubernetes community recognized that a better long-term strategy would be to extend the infrastructure usage to include other cloud providers that could also host the project’s image layers and repositories. The aim of using a distributed cloud infrastructure model is to improve the overall speed and experience for the various projects’ end users, allowing them to take advantage of closer servers and infrastructure from other cloud providers like AWS.

Making these changes now will help mitigate the end-user network traffic costs associated with image pulls from the previous image registry. At the same time, it will make the project more cost efficient, allowing the Kubernetes team to lower the egress bandwidth and storage costs that come from serving distributed Kubernetes end-users. In addition to this, it allows the Kubernetes project team to make better use of resources at their disposal, such as the AWS donation announced at last year’s KubeCon NA 2022 in Detroit.

When is the change happening?

Traffic redirect from old container image registry – March 20, 2023

Beginning on March 20th, all traffic targeting the legacy k8s.gcr.io registry will be redirected to the new container image registry at the registry.k8s.io endpoint.

Freeze of old container image registry – April 3, 2023

On April 3rd, the legacy registry (k8s.gcr.io) will be frozen. This will impact all container images currently hosted in the old registry and will prevent any new images from being pushed to it. The registry will remain available for image pulls to assist end users in their migration away from k8s.gcr.io. However, the community cannot make long-term guarantees around the old registry: even if your organization isn't impacted now, it will be in the future. We recommend updating to the new registry, registry.k8s.io, as soon as possible.

What actions should you take?

Detect images from k8s.gcr.io

A good place to start is to find all the container images used by Pods in your cluster that are dependent on the old image registry. Below are some of the different approaches you can take to accomplish this:

  1. Using OPA Gatekeeper or Kyverno – If you're running either of these policy admission controllers in your Kubernetes cluster, you can use them to detect images that have been pulled from k8s.gcr.io, as well as to prevent new image pulls from the old registry. For examples, have a look at the Amazon Elastic Kubernetes Service (Amazon EKS) best practices guides.
  2. Using the kubectl community-images plugin – This is a command line interface (CLI) tool that displays the container images running in your Kubernetes cluster that were pulled from community-owned repositories. It can be used to scan your cluster and warn you about images that need to switch repositories. You can watch an example of how to use this plugin to update the registry used by the images in your cluster.
  3. Using kubectl to check different resources – You can run a kubectl command to filter through and list the images of the different resources in your cluster that are dependent on the old registry:
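# List the unique container images used by Pods in all namespaces and
# flag those pulled from k8s.gcr.io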
kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' |\
sort |\
uniq -c |\
grep "k8s.gcr.io"

The above command specifically checks for images used by running Pods. If you're using this method, you'll also need to repeat the check for the Pod templates of other resources, such as DaemonSets, Jobs, and CronJobs.
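A sketch of the same approach applied to other workload kinds, using kubectl's recursive jsonpath filter, is shown below. The list of resource kinds is illustrative, so add any others you use:

# Scan the image fields of common workload controllers for the old registry
# (add or remove resource kinds to match your cluster)
kubectl get deployments,daemonsets,statefulsets,jobs,cronjobs --all-namespaces -o jsonpath="{..image}" |\
tr -s '[[:space:]]' '\n' |\
sort |\
uniq -c |\
grep "k8s.gcr.io"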

Update manifests

After detecting the old dependencies, you should update the relevant Helm charts and manifests that still point to k8s.gcr.io to use the new registry endpoint, registry.k8s.io.
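If your manifests live in a local repository, a simple search-and-replace can handle most of the work. The following is a minimal sketch that assumes your YAML files sit under a hypothetical ./manifests directory; review the resulting diff before committing or applying anything.

# Rewrite references to the old registry in place
# (./manifests is a placeholder path; review the changes before applying)
grep -rl "k8s.gcr.io" ./manifests | xargs sed -i 's#k8s.gcr.io#registry.k8s.io#g'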

Review IP address restriction policies

If you currently have strict domain name or IP address access policies in place that limit image pulls to k8s.gcr.io, you should revise them to accommodate the changes described above. Starting March 20th, clusters on networks with these restrictions in place will no longer be able to pull images because of the redirect. Furthermore, customers running Kubernetes in such restricted setups should carefully review their workloads' registry dependencies to mitigate the risk of anomalous behavior in their environments after the changes take place.
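Keep in mind that registry.k8s.io is a redirecting front end backed by multiple cloud providers, so the addresses behind it can change and image layers may be served from provider-specific storage endpoints. As a quick, illustrative check of what the registry name currently resolves to:

# Show the addresses registry.k8s.io currently resolves to; these can change,
# so avoid pinning image pulls to specific IP addresses
dig +short registry.k8s.io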

Copy images to private registry

If you host your own image registry, you can copy the relevant images to your self-hosted repositories using tools like crane. Customers running hosted private image registries like Amazon Elastic Container Registry (Amazon ECR) can similarly copy their images from the public repositories to the private ones in Amazon ECR. If you currently mirror images to a private registry from k8s.gcr.io, then you will need to update this to pull from the new public registry, registry.k8s.io.
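As an illustration, the following sketch uses crane to copy a single image into a private Amazon ECR repository. The account ID, Region, repository name, and image tag are all placeholders; adjust them for your environment.

# Authenticate crane against your private ECR registry (values are placeholders)
aws ecr get-login-password --region us-east-1 | \
  crane auth login 123456789012.dkr.ecr.us-east-1.amazonaws.com -u AWS --password-stdin

# Create the target repository and copy the image from the public registry
aws ecr create-repository --repository-name pause --region us-east-1
crane copy registry.k8s.io/pause:3.9 123456789012.dkr.ecr.us-east-1.amazonaws.com/pause:3.9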

How will AWS customers be impacted?

We strongly recommend that customers running Amazon EKS and self-managed Kubernetes clusters on AWS scan for and update any image dependencies that are hosted in the old registry. A number of workloads, operators, and sub-projects use images stored in the old registry, which may impact both Amazon EKS and self-managed Kubernetes customers. As such, it's important to run through the checks above to avoid being impacted by this registry update.

Conclusion

To read more about the new Kubernetes container image registry, registry.k8s.io, the freezing of the old registry, and a timeline of other related changes, please refer to the official announcements on the Kubernetes blog.

Lukonde Mwila

Lukonde is a Senior Developer Advocate at AWS. He has years of experience in application development, solution architecture, cloud engineering, and DevOps workflows. He is a lifelong learner and is passionate about sharing knowledge through various mediums. Nowadays, Lukonde spends the majority of his time contributing to the Kubernetes and cloud-native ecosystem.

Chris Short

Chris Short has been a proponent of open source solutions throughout his 20+ years in various IT disciplines, including systems, security, networks, DevOps management, and cloud native advocacy across the public and private sectors. He currently works as a Developer Advocate at Amazon Web Services, and is an active Kubernetes contributor. Chris is a disabled US Air Force veteran living with his wife and son in Metro Detroit. Chris writes about Cloud Native, DevOps, and other topics at ChrisShort.net.