Amazon EKS launches IPv6 support

The ongoing growth of the internet, particularly in mobile applications, IoT, and application modernization, has led to an industry-wide move to IPv6. With 128 bits of address space, IPv6 can provide 340 undecillion IP addresses, compared to 4.3 billion IPv4 addresses. Over the last several years, Amazon Web Services (AWS) has added IPv6 support to a variety of services, including Elastic Load Balancing, AWS IoT Core, AWS Direct Connect, Amazon Route 53, Amazon CloudFront, AWS WAF, Amazon S3 Transfer Acceleration, and Amazon Elastic Container Service (Amazon ECS).

Following this momentum, Amazon Elastic Kubernetes Service (Amazon EKS) recently announced support for IPv6. EKS’s IPv6 support focuses on resolving the IP exhaustion problem caused by the limited size of the IPv4 address space, a significant concern raised by a number of our customers, and is distinct from Kubernetes’ “IPv4/IPv6 dual-stack” feature. With IPv6, administrators of Kubernetes clusters can concentrate on migrating and scaling applications instead of spending time working around IPv4 limitations.

With the launch of IPv6 support in EKS, you can now create IPv6 Kubernetes clusters. In an IPv6 EKS cluster, pods and services receive IPv6 addresses, while legacy IPv4 endpoints can still connect to services running on IPv6 clusters and pods can still connect to legacy IPv4 endpoints outside the cluster. All pod-to-pod communication within a cluster is always IPv6. Within a VPC, which receives a /56 IPv6 CIDR block, the IPv6 CIDR block size for subnets is fixed at /64. This provides 2^64 (approximately 18 quintillion) IPv6 addresses per subnet, allowing you to scale your deployments on EKS.

In this blog post, you will learn how to create an IPv6 EKS cluster within a dual-stack Amazon Virtual Private Cloud (VPC). You will deploy a sample IPv6-only service and explore IPv4 and IPv6 ingress and egress behavior.

What is changing?

At cluster creation, you have the option to specify either IPv4 or IPv6 as the IP address family for the cluster. When you do not specify an IP address family, IPv4 is chosen by default. When you configure your cluster to run in IPv6 mode, Kubernetes pods and services receive IPv6 addresses.
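
As a minimal sketch, assuming placeholder subnet IDs and an IAM role you have already created, the equivalent AWS CLI call looks like the following (the walkthrough later in this post uses eksctl instead):

# Create an EKS cluster with the IPv6 address family
aws eks create-cluster \
  --name my-ipv6-cluster \
  --role-arn arn:aws:iam::111122223333:role/eks-cluster-role \
  --resources-vpc-config subnetIds=subnet-aaaa1111,subnet-bbbb2222 \
  --kubernetes-network-config ipFamily=ipv6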

Amazon EKS IPv6 support leverages native VPC IPv6 capabilities. IPv6 support works for new and existing VPCs; you can opt in on a VPC-by-VPC basis. Each VPC is given an IPv4 address prefix (CIDR block size can be from /16 to /28) and a unique, fixed-size /56 IPv6 address prefix from within Amazon’s GUA (Global Unicast Address) space; you can assign a /64 address prefix to each subnet in your VPC. VPC features such as security groups, route tables, network ACLs, peering, and DNS resolution within a VPC operate the same way as with IPv4. Every instance gets both IPv4 and IPv6 addresses, along with corresponding DNS entries. For a given instance, only a single IPv4 address from the VPC address range is consumed. Please refer to the EKS user guide for complete VPC considerations.
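
For example, a hedged sketch of opting an existing VPC and subnet into IPv6 with the AWS CLI (the IDs and the /64 value are placeholders; the /64 must come from the VPC’s assigned /56):

# Request an Amazon-provided /56 IPv6 CIDR block for an existing VPC
aws ec2 associate-vpc-cidr-block --vpc-id vpc-0abc1234 \
  --amazon-provided-ipv6-cidr-block

# Carve a /64 out of that /56 and assign it to a subnet
aws ec2 associate-subnet-cidr-block --subnet-id subnet-0abc1234 \
  --ipv6-cidr-block 2600:1f13:400:1d00::/64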

In the IPv6 world, every address is internet routable; the IPv6 addresses associated with nodes and pods are public. Private subnets are supported by implementing an egress-only internet gateway (EIGW) in a VPC, allowing outbound traffic while blocking all incoming traffic. Best practices for implementing IPv6 subnets can be found in the VPC user guide.
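
As an illustrative sketch (IDs are placeholders), a private IPv6 subnet’s route table sends the default IPv6 route through an EIGW:

# Create an egress-only internet gateway for the VPC
aws ec2 create-egress-only-internet-gateway --vpc-id vpc-0abc1234

# Route all outbound IPv6 traffic from the private subnet through it
aws ec2 create-route --route-table-id rtb-0abc1234 \
  --destination-ipv6-cidr-block ::/0 \
  --egress-only-internet-gateway-id eigw-0abc1234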

Pod Networking

IPv6 is supported in prefix assignment mode. The Amazon VPC Container Network Interface (CNI) plugin is configured to assign an address from the prefix attached to the primary ENI. In contrast to IPv4, IPv6 prefix assignment occurs only at node startup. This approach significantly increases performance by removing networking-related AWS API throttling for large clusters. A single IPv6 prefix-delegation prefix has many addresses (a /80 prefix yields 2^48, roughly 10^14, addresses per ENI) and is big enough to support large clusters with millions of pods, which also removes the need for warm prefix and minimum IP configurations. The VPC CNI currently supports only prefix assignment mode for IPv6 clusters and only works with AWS Nitro-based EC2 instances.
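
One way to verify how the CNI is configured on your cluster is to inspect the aws-node DaemonSet’s environment; on IPv6 clusters you should find IPv6 and prefix delegation enabled (exact variable names vary by VPC CNI version, so treat this as a sketch):

# List the VPC CNI's environment variables
kubectl get daemonset aws-node -n kube-system \
  -o jsonpath='{.spec.template.spec.containers[0].env}'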

A single IPv6 prefix is sufficient to run many pods on a single node, which effectively removes the max-pods limitations tied to ENI and IP limits. Although IPv6 removes this direct dependency on max-pods, when using prefix attachments with smaller instance types like the m5.large, you’re likely to exhaust the instance’s CPU and memory resources long before you exhaust its IP addresses. When using managed node groups and Amazon EKS optimized AMIs, the recommended maximum pods value is automatically computed and set based on instance type and VPC CNI configuration values. If you are using self-managed node groups or a managed node group with a custom AMI ID, you must set the EKS recommended maximum pods value manually. To simplify this process for self-managed and managed node group custom AMI users, we’ve introduced a max-pods-calculator.sh script that finds the Amazon EKS recommended number of maximum pods based on your instance type and VPC CNI configuration settings.
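
Assuming the script’s location in the amazon-eks-ami repository and its current flag names, usage looks roughly like this:

# Download the calculator script and compute the recommended max pods
curl -O https://raw.githubusercontent.com/awslabs/amazon-eks-ami/master/files/max-pods-calculator.sh
chmod +x max-pods-calculator.sh
./max-pods-calculator.sh --instance-type m5.large \
  --cni-version 1.10.1-eksbuild.1 --cni-prefix-delegation-enabled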

Packet Flow

Pod to External IPv6

Private subnets in IPv6 VPCs are configured with an egress-only internet gateway. Any pod communication from within private subnets to IPv6 endpoints outside the cluster will be routed via an egress-only internet gateway by default.

Pod to External IPv4

While industry-wide efforts to migrate entirely to IPv6 are underway, IPv6 and IPv4 will continue to coexist. This also implies that Kubernetes pods must establish connections to IPv4 endpoints external to the cluster. To support connecting to IPv4 endpoints outside the cluster, EKS introduces an egress-only IPv4 model.

EKS implements a host-local CNI plugin, chained with the VPC CNI, to allocate and configure an IPv4 address for a pod. The host-local plugin configures a node-specific, non-routable IPv4 address for a pod from the 169.254.172.0/22 range. The IPv4 address assigned to the pod is unique to the node and is not advertised to the Kubernetes control plane.

Pods perform a DNS lookup for an endpoint and, upon receiving an IPv4 “A” response, establish a connection with the IPv4 endpoint using an IPv4 address from the host-local 169.254.172.0/22 range. The pod’s node-local IPv4 address is translated through network address translation (NAT) to the IPv4 (VPC) address of the primary network interface attached to the node. The private IPv4 address of the node is then translated by a NAT gateway to the public IPv4 address of the gateway and routed to and from the internet by an internet gateway.
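
A quick, hedged way to observe this from inside the cluster is to run a throwaway pod (the image and endpoint below are illustrative) and force an IPv4 connection; the printed address is the public IPv4 address the traffic was NATed to:

# Run a one-off pod and make an IPv4-only request
kubectl run ipv4-egress-test --rm -it --restart=Never \
  --image=curlimages/curl -- curl -4 -s https://checkip.amazonaws.com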

As of November 2021, IPv6 AWS resources in Amazon VPC can use NAT64 (on the AWS NAT Gateway) and DNS64 (on the Amazon Route 53 Resolver) to communicate with IPv4 services. EKS IPv6 clusters currently use the egress-only IPv4 model described above while we work to leverage NAT64 capabilities.

Pod to Pod

Any pod-to-pod communication within or across nodes always uses the pod’s IPv6 address. The VPC CNI configures iptables to handle IPv6 while blocking any IPv4 connections.
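
You can confirm this by listing pod IPs; in an IPv6 cluster, the IP column shows IPv6 addresses:

kubectl get pods -A -o wide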

Ingress into the Cluster

In an IPv6 EKS cluster, services receive only IPv6 addresses, allocated from Unique Local IPv6 Unicast Addresses (ULA). The ULA service CIDR for an IPv6 cluster is automatically assigned at cluster creation and cannot be modified.
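
For example, the cluster’s built-in kubernetes service shows a ULA cluster IP (an fd… address):

kubectl get svc kubernetes -n default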

You can expose a Kubernetes service outside the cluster by deploying a load balancer. The AWS Load Balancer Controller manages AWS Elastic Load Balancers for a Kubernetes cluster. The controller provisions an AWS Network Load Balancer (NLB) when you create a Kubernetes service of type LoadBalancer and an AWS Application Load Balancer (ALB) for Kubernetes Ingress resources.

In the current phase of their IPv6 support, ALB and NLB allow dual-stack internet-facing (frontend) endpoints, so both IPv4 and IPv6 clients can connect to an NLB or ALB in dual-stack mode. EKS IPv6 clusters provision ALBs and NLBs in dual-stack IP mode when you add the annotation service.beta.kubernetes.io/aws-load-balancer-ip-address-type: dualstack to your service or ingress manifests. NLB and ALB use target types to define the destination targets. As of today, you can configure the ingress target type only with the annotation alb.ingress.kubernetes.io/target-type: ip. Targets of type instance are not supported.
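
As an illustration, a minimal Service manifest requesting a dual-stack NLB with IP targets from the AWS Load Balancer Controller might look like the following (the app name and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-ip-address-type: dualstack
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080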

Note that the legacy Kubernetes in-tree service controller does not support IPv6.

EKS Cluster Communication

An EKS cluster consists of two VPCs: one managed by AWS that hosts the Kubernetes control plane, and a second VPC managed by customers that hosts workloads as well as other AWS infrastructure (like load balancers) used by the cluster (the data plane). Control plane and worker node communication continues to follow the IPv4 model. EKS provisions managed cross-account elastic network interfaces (X-ENIs) in dual-stack mode (IPv4/IPv6). Kubernetes node components such as kubelet and kube-proxy are configured to support dual stack and bind to both the IPv4 and IPv6 addresses attached to the primary network interface of a node. The Kubernetes apiserver communicates with pods and node components via these EKS-managed ENIs over IPv6; pods communicate with the apiserver via the same ENIs, and pod-to-apiserver communication always uses IPv6.

EKS in IPv6 mode continues to support the existing methods of access control for the cluster endpoint: public, private, and both public and private. The cluster endpoint (an NLB) is configured in IPv4 mode when you provision an IPv6 cluster. Configuring a cluster endpoint in dual-stack mode will become possible when NLB adds support for the instance target group type.

Connections to the Kubernetes cluster endpoint are determined by the endpoint settings you have configured for the cluster. When the private endpoint, or both the public and private endpoints, are enabled, any non-pod Kubernetes apiserver requests from within the customer VPC by node components such as kubelet and kube-proxy always go via the EKS-managed ENIs in IPv4 mode. When only the public endpoint is enabled, requests that originate from within the customer’s VPC use IPv4 to communicate with the cluster endpoint.

Communication from outside the VPC, such as running kubectl against the public endpoint, goes through the NLB and is IPv4 at the moment. When using a private endpoint, there is no public access to your API server from the internet; any kubectl commands must come from within the VPC or a connected network, and use IPv4. For connectivity options, see Accessing a private-only API server.

Walkthrough

In this section, you will provision an EKS cluster in IPv6 mode, deploy a sample application, and demonstrate ingress and egress communication mechanisms.

Prerequisites

  • An AWS account with admin privileges
  • AWS CLI with appropriate credentials
  • A key pair in your account for remote access (this walkthrough assumes a key pair named ipv6-ssh-key)
  • Amazon EKS vended kubectl
  • eksctl version 0.79.0 or later

Create Cluster

In this section, you will use eksctl to create an IPv6 EKS cluster and a managed node group. Make sure you are using the latest version of eksctl for this example. IPv6 is supported in prefix assignment mode and only on AWS Nitro-based EC2 instance types, so choose Nitro-based instance types with the Amazon Linux 2 AMI family.

IPv6 requires version 1.10.1 or later of the Amazon VPC CNI add-on deployed to your cluster. Once deployed, you can’t downgrade your Amazon VPC CNI add-on to a version lower than 1.10.1 without removing all nodes in all node groups in your cluster.
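
After you create the cluster in the steps below, you can verify the deployed add-on version with a command like:

# Confirm the VPC CNI add-on version running on the cluster
aws eks describe-addon --cluster-name my-ipv6-cluster \
  --addon-name vpc-cni --query addon.addonVersion --output text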

Copy the following configuration and save it to a file called cluster.yaml:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: my-ipv6-cluster
  version: "1.21"
  region: us-west-2

kubernetesNetworkConfig:
  ipFamily: IPv6

vpc:
  clusterEndpoints:
    publicAccess: true
    privateAccess: true

iam:
  withOIDC: true

addons:
  - name: vpc-cni
    version: v1.10.1-eksbuild.1 # optional
  - name: coredns
    version: v1.8.4-eksbuild.1 # optional
  - name: kube-proxy
    version: v1.21.2-eksbuild.2 # optional

managedNodeGroups:
  - name: x86-al2-on-demand-xl
    amiFamily: AmazonLinux2
    instanceTypes: [ "m6i.xlarge", "m6a.xlarge" ]
    minSize: 1
    desiredCapacity: 2
    maxSize: 3
    volumeSize: 100
    volumeType: gp3
    volumeEncrypted: true
    ssh:
      allow: true
      publicKeyName: ipv6-ssh-key
    updateConfig:
      maxUnavailablePercentage: 33
    labels:
      os-distribution: amazon-linux-2

Create a cluster:

eksctl create cluster -f cluster.yaml

Deploy Bastion Host

Deploy a bastion host on an IPv6 public subnet created as part of cluster creation. For steps, see Linux Bastion Hosts on the AWS Cloud.

Deploy AWS Load Balancer Controller

AWS Load Balancer Controller is responsible for the management of AWS Elastic Load Balancers in a Kubernetes cluster. To deploy an AWS Load Balancer Controller, follow the steps outlined in the EKS user guide.

We’re going to upgrade the AWS Load Balancer Controller to create the IngressClass resource, because our sample application uses the ingress class annotation.

helm upgrade aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=my-ipv6-cluster \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller \
  --set createIngressClassResource=true 

Deploy Sample Application

Deploy a sample 2048 game application into your Kubernetes cluster and use the Ingress resource to expose it to traffic:

kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/examples/2048/2048_full_dualstack.yaml

Wait a few minutes for the service to become active. You can access your newly deployed 2048 game via the load balancer endpoint, retrieved with the following commands.

export GAME_2048=$(kubectl get ingress/ingress-2048 -n game-2048 -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo http://${GAME_2048}

You can sign in to the AWS Load Balancers console and find the ALB created for the service. You can confirm both A (IPv4) and AAAA (IPv6) records under its DNS name, for example with a utility like dig:

dig <ALB DNS name> A
dig <ALB DNS name> AAAA

Check IPv4 connectivity from your browser:

To view the sample application, navigate to the GAME 2048 URL from the browser running on the IPv4 network.

Check IPv6 connectivity from your Bastion host:

Connect to the instance using the ssh command in a terminal window.

ssh -i /path/ipv6-ssh-key.pem ec2-user@BastionPublicIp

You will see the following response:

The authenticity of host 'ec2-198-51-100-1.compute-1.amazonaws.com (198.51.100.1)' can't be established.
ECDSA key fingerprint is l4UB/neBad9tvkgJf1QZWxheQmR59WgrgzEimCG6kZY.
Are you sure you want to continue connecting (yes/no)

Now curl the GAME_2048 URL. From the IPv6-enabled bastion host, curl resolves the ALB’s IPv6 (AAAA) address and connects over IPv6.

while true; do curl http://${GAME_2048}/; echo; sleep 5; done
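
To confirm explicitly that the connection uses IPv6, you can force curl onto IPv6 and inspect the verbose output, which prints the AAAA address it connected to:

# Force IPv6 and show connection details
curl -6 -sv http://${GAME_2048}/ -o /dev/null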

Key Considerations

Migrate to IPv6

IPv6 is currently supported only for new EKS clusters; upgrading an existing cluster from IPv4 to IPv6 is not supported. If you are facing IP depletion or pod density issues, we advise migrating to IPv6. At the same time, EKS suggests that you perform a thorough evaluation of your applications, EKS add-ons, and AWS services before migrating to IPv6 clusters.

We suggest a blue/green cluster migration strategy: continue to run your cluster and workloads in IPv4 mode (blue) while also deploying them to a new IPv6 cluster (green). Amazon EKS recommends setting up a new VPC per the IPv6 guidelines. You can use a canary-based testing methodology to redirect a small percentage (typically 10%) of production traffic to the green IPv6 cluster, gradually raise the traffic share based on your test findings, and eventually complete the migration by sending 100% of traffic to the green environment.
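
One hedged way to implement the canary split is with Route 53 weighted records pointing at each cluster’s load balancer (the hosted zone ID, record names, and DNS values below are placeholders):

# Send 90% of traffic to the blue (IPv4) cluster and 10% to the green (IPv6) cluster
aws route53 change-resource-record-sets --hosted-zone-id Z123EXAMPLE \
  --change-batch '{
    "Changes": [
      {"Action": "UPSERT", "ResourceRecordSet": {
        "Name": "app.example.com", "Type": "CNAME", "TTL": 60,
        "SetIdentifier": "blue-ipv4", "Weight": 90,
        "ResourceRecords": [{"Value": "blue-alb.us-west-2.elb.amazonaws.com"}]}},
      {"Action": "UPSERT", "ResourceRecordSet": {
        "Name": "app.example.com", "Type": "CNAME", "TTL": 60,
        "SetIdentifier": "green-ipv6", "Weight": 10,
        "ResourceRecords": [{"Value": "green-alb.us-west-2.elb.amazonaws.com"}]}}
    ]}'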

IPv6 and AWS Fargate

Amazon EKS now supports IPv6 for pods running on Fargate; as part of this release, each pod running on Fargate receives an IPv6 address. This permits communication between Fargate-based pods and the rest of the cluster’s pods. Fargate utilizes a modified version of the VPC CNI that does not use prefix assignment. Because each pod runs on its own hardware unit, the CNI assigns each pod a unique IPv6 address from the VPC CIDR range. To address the IPv4 egress issue, the underlying hardware unit that runs a Fargate pod also gets a unique IPv4 address from the VPC IPv4 address range, in addition to the IPv6 address. All of the ingress and egress modes described above apply to Fargate pods as well. To configure Fargate profiles in IPv6 mode, see the EKS Fargate user guide.
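
As a sketch, creating a Fargate profile for the sample application’s namespace with eksctl (the profile name is illustrative) looks like:

# Schedule pods in the game-2048 namespace onto Fargate
eksctl create fargateprofile --cluster my-ipv6-cluster \
  --name fp-game-2048 --namespace game-2048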

IPv6 and Windows

IPv6 is not supported on Windows nodes at the moment. We are currently working on adding IPv6 support to Windows. When supported, pods scheduled on a Windows node will receive an IPv6 address, and all of the ingress and egress modes discussed previously will apply to Windows nodes as well.

IPv6 with Custom Networking

Amazon VPC CNI custom networking enables network interfaces for pods to be assigned a separate subnet and security groups from the primary interface of the node. One of the primary use cases of custom networking is to solve the IPv4 exhaustion problem, along with security and network isolation. You are no longer required to use custom networking with IPv6 if IP exhaustion is your problem. Within the IPv6-enabled VPC, there are enough addresses available to meet your IP needs. Custom networking with IPv6 is not supported as part of the current launch. Submit feedback on the AWS Containers Roadmap if you believe custom networking is required to meet any security or networking needs.

IPv6 and Security Groups for Pods

You can use Amazon EC2 security groups to define rules that allow inbound and outbound network traffic to and from pods directly. Security groups for pods use a separate ENI, called a branch interface, which is associated with the main trunk interface attached to the node. As noted earlier, IPv6 is supported in prefix mode only, while security groups for pods require managing branch ENIs in non-prefix mode. As a result, you cannot use the security groups for pods functionality during the initial deployment phase of an IPv6 cluster. If you have a strong use case for security groups per pod, EKS recommends running clusters in IPv4 mode, or IPv6 clusters with EKS Fargate. Please refer to the AWS Containers Roadmap for updates on security groups per pod support for IPv6 clusters.

Cleanup

To avoid incurring future charges, delete all resources created during this exercise. Because the cluster was created with eksctl, delete it with eksctl as well. You may also wish to delete the bastion host created for this exercise using the AWS console.

eksctl delete cluster --name my-ipv6-cluster --region us-west-2

Conclusion

In this post, we have demonstrated how to provision an EKS cluster in IPv6 mode to address the issue of IP depletion and increase pod density. The summary of changes and key considerations should provide sufficient guidance for using EKS in IPv6 mode. While we are thrilled to announce IPv6 support for EKS, we remain committed to improving your experience as more AWS services add IPv6 support.

You can subscribe to What’s New at AWS to be notified when further AWS services add IPv6 support. On the AWS Containers Roadmap, you may review our roadmaps, provide feedback, and request new features.

Sheetal Joshi

Sheetal Joshi is a Principal Developer Advocate on the Amazon EKS team. Sheetal worked for several software vendors before joining AWS, including HP, McAfee, Cisco, Riverbed, and Moogsoft. For about 20 years, she has specialized in building enterprise-scale, distributed software systems, virtualization technologies, and cloud architectures. At the moment, she is working on making it easier to get started with, adopt, and run Kubernetes clusters in the cloud, on-premises, and at the edge.

Apurup Chevuru

Apurup is a Software Development Engineer (SDE) in the container service team, working on EKS Networking.

Mike Stefaniak

Mike Stefaniak is a Principal Product Manager at Amazon Web Services focusing on all things Kubernetes and delivering features that help customers accelerate their modernization journey on AWS.