Containers

A deep dive into simplified Amazon EKS access management controls

Introduction

Since the initial Amazon Elastic Kubernetes Service (Amazon EKS) launch, it has supported AWS Identity and Access Management (AWS IAM) principals as entities that can authenticate against a cluster. This was done to remove the burden on administrators of maintaining a separate identity provider. Using AWS IAM also lets AWS customers apply their existing AWS IAM knowledge and experience, and enables administrators to use AWS IAM security features, such as AWS CloudTrail audit logging and multi-factor authentication.

Until now, administrators used Amazon EKS APIs to create clusters, then switched to the Kubernetes API to manage mappings of AWS IAM principals to their Kubernetes permissions. This manual, multi-step process complicated the way users were granted access to Amazon EKS clusters. It prevented administrators from revoking cluster-admin (root-like) permissions from the principal that was used to create the cluster, and the need to call different APIs (AWS and Kubernetes) to manage access increased the likelihood of misconfiguration.

Feature Overview

The Amazon EKS team has improved the cluster authentication (AuthN) and authorization (AuthZ) user experience with simplified cluster access management controls. As of the date of this post, cluster administrators can grant AWS IAM principals access to all supported versions (v1.23 and beyond) of Amazon EKS clusters and Kubernetes objects directly through Amazon EKS APIs. This new functionality relies on two new concepts: access entries and access policies. An access entry is a cluster identity, directly linked to an AWS IAM principal (user or role), that is used to authenticate to an Amazon EKS cluster. An Amazon EKS access policy authorizes an access entry to perform specific cluster actions.

Cluster access management API

The new cluster access management API objects and commands allow administrators to define access management configurations—including during cluster creation—using familiar infrastructure as code (IaC) tools such as AWS CloudFormation, Terraform, or the AWS Cloud Development Kit (CDK).

The improved cluster access management controls enable administrators to completely remove, or refine, the permissions automatically granted to the AWS IAM principal used to create the cluster. If a misconfiguration occurs, cluster access can be restored simply by calling an Amazon EKS API, as long as the caller has the necessary permissions. The aim of these new controls is to reduce the overhead associated with granting users and applications access to clusters and objects within those clusters.

Note: We have always recommended using AWS IAM roles as the principals that create Amazon EKS clusters. Roles provide a layer of indirection that decouples users from permissions: users can be removed from a role without having to adjust the AWS IAM policies that grant permissions to the cluster creator role.

Kubernetes authorizers

Access policies are Amazon EKS-specific policies that assign Kubernetes permissions to access entries. At launch, Amazon EKS supports only predefined and AWS managed policies. Access policies are not AWS IAM entities and are defined and managed by Amazon EKS.

In Kubernetes, different AuthZ services—known as authorizers—are chained together in a sequence to make AuthZ decisions about inbound API server requests. This allows custom AuthZ services to be used with the Kubernetes API server. The new feature allows you to use upstream RBAC (Role-based access control) in combination with access policies. Both the upstream RBAC authorizer and the Amazon EKS authorizer support allow and pass (but not deny) AuthZ decisions. When an access entry is created with Kubernetes usernames or groups, the upstream RBAC authorizer evaluates the request first and immediately returns an AuthZ decision on an allow outcome. If the RBAC authorizer can't determine the outcome, it passes the decision to the Amazon EKS authorizer. If both authorizers pass, a deny decision is returned.

Walkthrough

Getting started

Cluster access management using the access entry API is an opt-in feature for new and existing Amazon EKS clusters running Kubernetes v1.23 and above. By default, Amazon EKS uses the latest Amazon EKS platform version when you create a new cluster, and it automatically upgrades all existing clusters to the latest Amazon EKS platform version for their corresponding Kubernetes minor version. You can start using the new cluster access management controls once the automatic platform version upgrade has been rolled out to your existing cluster, or you can update your cluster to the next supported Kubernetes minor version to take advantage of this feature.

To get started with this feature, cluster administrators create Amazon EKS access entries for the desired AWS IAM principals. See IAM policy control for access entries to configure AWS IAM permissions for administrators. After these access entries are created, administrators grant them access by assigning access policies. Amazon EKS access policies include permission sets that support common use cases such as administration, editing, or read-only access to Kubernetes resources.
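
For illustration, the administrator's AWS IAM permissions might look like the following sketch. The policy name and wildcard scope are assumptions made for this example; the action names mirror the access entry CLI operations used throughout this post, and the Resource element should be scoped to your own clusters:

# Sketch: hypothetical IAM policy granting an administrator access entry management
$ cat <<'EOF' > access-entry-admin-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "eks:CreateAccessEntry",
        "eks:DeleteAccessEntry",
        "eks:DescribeAccessEntry",
        "eks:ListAccessEntries",
        "eks:AssociateAccessPolicy",
        "eks:DisassociateAccessPolicy",
        "eks:ListAccessPolicies",
        "eks:ListAssociatedAccessPolicies"
      ],
      "Resource": "*"
    }
  ]
}
EOF

$ aws iam create-policy --policy-name eks-access-entry-admin \
  --policy-document file://access-entry-admin-policy.json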

The following command and output provide an up-to-date list of supported access policies for managing cluster access:

# List all access policies
$ aws eks list-access-policies
{
    "accessPolicies": [
        {
            "name": "AmazonEKSAdminPolicy",
            "arn": "arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminPolicy"
        },
        {
            "name": "AmazonEKSClusterAdminPolicy",
            "arn": "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"
        },
        {
            "name": "AmazonEKSEditPolicy",
            "arn": "arn:aws:eks::aws:cluster-access-policy/AmazonEKSEditPolicy"
        },
        {
            "name": "AmazonEKSViewPolicy",
            "arn": "arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy"
        }
    ]
}

The following Amazon EKS access policies are based on these user-facing roles published in the Kubernetes documentation:

  • AmazonEKSClusterAdminPolicy – cluster-admin
  • AmazonEKSAdminPolicy – admin
  • AmazonEKSEditPolicy – edit
  • AmazonEKSViewPolicy – view

With the cluster access management controls, only AWS IAM principals with the appropriate permissions can authorize other AWS IAM principals to access Amazon EKS clusters. Permission is granted by creating access entries and associating access policies with those access entries. Be aware that the access granted to AWS IAM principals by Amazon EKS access policies is separate from the permissions defined in any AWS IAM policy associated with the AWS IAM principal.

In short, only the AWS IAM principal and the applied Amazon EKS access entry policies are used by the cluster access management authorizer. The following diagram illustrates the workflow:

In the next sections, we’ll explore several use cases that are now possible via the new Amazon EKS cluster access management APIs.

Create or update a cluster to use access management API

With the introduction of this feature, Amazon EKS supports three authentication modes: CONFIG_MAP, API_AND_CONFIG_MAP, and API. You can enable a cluster to use the access entry APIs by setting authenticationMode to API or API_AND_CONFIG_MAP, or use authenticationMode CONFIG_MAP to continue using the aws-auth configMap exclusively. When API_AND_CONFIG_MAP is enabled, the cluster sources authenticated AWS IAM principals from both the Amazon EKS access entry APIs and the aws-auth configMap, with priority given to the access entry API.

aws eks create-cluster \
  --name <CLUSTER_NAME> \
  --role-arn <CLUSTER_ROLE_ARN> \
  --resources-vpc-config subnetIds=<value>,endpointPublicAccess=true,endpointPrivateAccess=true \
  --logging '{"clusterLogging":[{"types":["api","audit","authenticator","controllerManager","scheduler"],"enabled":true}]}' \
  --access-config authenticationMode=API

Amazon EKS cluster access management is now the preferred means to manage access of AWS IAM principals to Amazon EKS clusters. While we made access management easier and more secure, we did so without disrupting cluster operations or current configurations. With this approach you can explore cluster access management for your needs, and plan subsequent migrations to cluster access management when it best fits your schedule.

We suggest updating existing clusters to authenticationMode API_AND_CONFIG_MAP and creating equivalent access entries that specify the same identities, usernames, and groups currently used in the aws-auth configMap. In this mode, an access entry (along with its associated username and groups) is evaluated for authentication before the configuration map; only when no access entry exists for the principal is the ConfigMap inspected for that principal and its associated username and groups.
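
As an example, a role currently mapped in the aws-auth configMap could be recreated as an access entry along the lines of the following sketch; the role name, username, and group shown here are hypothetical placeholders:

# Sketch: recreate an aws-auth mapRoles entry as an access entry with the
# same username and Kubernetes groups (placeholder values)
$ aws eks create-access-entry --cluster-name <CLUSTER_NAME> \
  --principal-arn arn:aws:iam::<AWS_ACCOUNT_ID>:role/DevTeamRole \
  --username dev-team-user \
  --kubernetes-groups dev-group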

You can update an existing cluster's configuration to enable the API authenticationMode. Make sure the cluster is on a platform version that supports access entries before you run the update-cluster-config command. For existing clusters using CONFIG_MAP, you'll have to first update the authenticationMode to API_AND_CONFIG_MAP and then to API.
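
If the cluster is still in CONFIG_MAP mode, that intermediate step might look like the following sketch:

# Update existing cluster to API_AND_CONFIG_MAP authentication mode
aws eks update-cluster-config \
   --name <CLUSTER_NAME> \
   --access-config authenticationMode=API_AND_CONFIG_MAP

Once that update completes, you can move the cluster to API mode: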

aws eks update-cluster-config \
   --name <CLUSTER_NAME> \
   --access-config authenticationMode=API

Switching authentication modes on an existing cluster is a one-way operation. You can switch from CONFIG_MAP to API_AND_CONFIG_MAP, and from API_AND_CONFIG_MAP to API, but you cannot revert these operations: you cannot switch back to CONFIG_MAP or API_AND_CONFIG_MAP from API, and you cannot switch back to CONFIG_MAP from API_AND_CONFIG_MAP.
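
Before attempting a switch, you can check which mode a cluster is currently using. This sketch assumes the accessConfig field returned by describe-cluster for clusters on a platform version that supports access entries:

# Check the cluster's current authentication mode
$ aws eks describe-cluster --name <CLUSTER_NAME> \
  --query 'cluster.accessConfig.authenticationMode' --output text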

Removing the default cluster administrator

Until now, when an Amazon EKS cluster was created, the principal used to provision the cluster was permanently granted Kubernetes cluster-admin privileges. From this scenario emerged the best practice of using an AWS IAM role to create Amazon EKS clusters. Using an AWS IAM role provided a layer of indirection to control who could assume the role using AWS IAM: by removing the ability to assume the role, or by removing the role altogether, you could revoke a user's access to the cluster.

As of the date of this post, you can create a cluster and then grant cluster access to the AWS IAM principals of your choosing, without the creating principal receiving any permissions at all. The example below uses the bootstrapClusterCreatorAdminPermissions=false flag in access-config to prevent the principal used to create the cluster from being granted cluster administrator access.

# Create Amazon EKS cluster with no cluster administrator
$ aws eks create-cluster --name <CLUSTER_NAME> \
  --role-arn <CLUSTER_ROLE_ARN> \
  --resources-vpc-config subnetIds=<value>,securityGroupIds=<value>,endpointPublicAccess=true,endpointPrivateAccess=true \
  --logging '{"clusterLogging":[{"types":["api","audit","authenticator","controllerManager","scheduler"],"enabled":true}]}' \
  --access-config authenticationMode=API_AND_CONFIG_MAP,bootstrapClusterCreatorAdminPermissions=false

To verify that no access entries exist for the cluster, the following AWS CLI command can be used to list existing cluster access entries:

# List access entries for cluster
$ aws eks list-access-entries --cluster-name <CLUSTER_NAME>

{
    "accessEntries": []
}

If we try to use the AWS IAM principal with the kubectl auth can-i --list command, we see that the principal—even with a properly configured kube config file—is not authenticated to the cluster:

# Verify cluster creator cannot access cluster
$ kubectl auth can-i --list
error: You must be logged in to the server (Unauthorized)

To remove the cluster creator administrator role from an existing cluster, execute the following command on the associated access entry, which will appear once you've updated your cluster to an authentication mode that supports access entries.

# Delete access entry
$ aws eks delete-access-entry --cluster-name <CLUSTER_NAME> \
  --principal-arn <IAM_PRINCIPAL_ARN>

Adding cluster administrators to existing clusters

Now that we’ve seen how to handle a cluster administrator during cluster creation, we’ll explore how to add cluster administrators to existing clusters. The following AWS CLI commands can be used to perform these tasks:

  • Create a cluster access entry to be granted cluster administrator access
  • Associate the cluster administrator access policy to the aforementioned cluster access entry

# Create cluster access entry
$ aws eks create-access-entry --cluster-name <CLUSTER_NAME> \
  --principal-arn <IAM_PRINCIPAL_ARN>
  
{
    "accessEntry": {
        "clusterName": "<value>",
        "principalArn": "<value>",
        "kubernetesGroups": [],
        "accessEntryArn": "<ACCESS_ENTRY_ARN>",
        "createdAt": "2023-03-30T21:38:24.185000-04:00",
        "modifiedAt": "2023-03-30T21:38:24.185000-04:00",
        "tags": {},
        "username": "arn:aws:sts::<AWS_ACCOUNT_ID>:assumed-role/<ROLE_NAME>/{{SessionName}}"
    }
}

With the access entry created and tied to an AWS IAM principal, the AmazonEKSClusterAdminPolicy is assigned by running the following AWS CLI command. Since we are creating a cluster administrator entry, we set the --access-scope type=cluster argument in the command:

# Associate access policy to access entry
$ aws eks associate-access-policy --cluster-name <CLUSTER_NAME> \
  --principal-arn <IAM_PRINCIPAL_ARN> \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
  --access-scope type=cluster

{
    "clusterName": "<CLUSTER_NAME>",
    "principalArn": "<AWS_IAM_PRINCIPAL_ARN>",
    "associatedAccessPolicy": {
        "policyArn": "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy",
        "accessScope": {
            "type": "cluster",
            "namespaces": []
        },
        "associatedAt": "2023-04-03T13:44:09.788000-04:00",
        "modifiedAt": "2023-04-03T13:44:09.788000-04:00"
    }
}

Adding namespace administrators

Namespace administrators have administrator permissions that are scoped to specific namespaces; they aren’t able to create cluster-scoped resources, such as namespaces themselves. To illustrate this use case, we’ll create an access entry based on a read-only AWS IAM role. While this example may seem contrived, it illustrates the difference between AWS IAM policies and Amazon EKS cluster access policies. For reference, the ReadOnly role has one attached AWS IAM policy—arn:aws:iam::aws:policy/ReadOnlyAccess—that gives the role read-only access to the underlying AWS account.

# Create access entry
$ aws eks create-access-entry --cluster-name <CLUSTER_NAME> \
  --principal-arn arn:aws:iam::<AWS_ACCOUNT_ID>:role/ReadOnly

{
    "accessEntry": {
        "clusterName": "<CLUSTER_NAME>",
        "principalArn": "<IAM_PRINCIPAL_ARN>",
        "kubernetesGroups": [],
        "accessEntryArn": "arn:aws:eks:<REGION>:<AWS_ACCOUNT_ID>:accessEntry/<CLUSTER_NAME>/role/<AWS_ACCOUNT_ID>/ReadOnly/40c3cb02-38ed-3edc-4f8c-0043d4639029",
        "createdAt": "2023-04-18T16:18:06.556000-04:00",
        "modifiedAt": "2023-04-18T16:18:06.556000-04:00",
        "tags": {},
        "username": "arn:aws:sts::<AWS_ACCOUNT_ID>:assumed-role/ReadOnly/{{SessionName}}"
    }
}

The command above created a cluster access entry underpinned by the aforementioned ReadOnly AWS IAM role. Next, we’ll associate the AmazonEKSAdminPolicy to the newly-created access entry.

# Associate access policy to access entry
$ aws eks associate-access-policy --cluster-name <CLUSTER_NAME> \
  --principal-arn arn:aws:iam::<AWS_ACCOUNT_ID>:role/ReadOnly \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminPolicy \
  --access-scope type=namespace,namespaces=test*

{
    "clusterName": "<CLUSTER_NAME>",
    "principalArn": "arn:aws:iam::<AWS_ACCOUNT_ID>:role/ReadOnly",
    "associatedAccessPolicy": {
        "policyArn": "arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminPolicy",
        "accessScope": {
            "type": "namespace",
            "namespaces": [
                "test*"
            ]
        },
        "associatedAt": "2023-04-18T16:42:57.754000-04:00",
        "modifiedAt": "2023-04-18T16:42:57.754000-04:00"
    }
}

After executing this command, the AWS IAM ReadOnly role—that only has read-only access to the underlying AWS account—now has namespace-administrator access to the test* namespaces.

# Namespace admin cannot access namespaces not on the allowed list
$ kubectl -n kube-system get pods
Error from server (Forbidden): pods is forbidden: 
User "arn:aws:sts::<AWS_ACCOUNT_ID>:assumed-role/ReadOnly/<SESSION_ID>" 
cannot list resource "pods" in API group "" in the namespace "kube-system"

# Namespace admin can access namespaces on the allowed list
# This step assumes the namespaces exist; use a cluster admin to create them
# kubectl create ns test-1
$ kubectl auth can-i get pods -n test-1
yes

# Namespace admin can create pods in test-1
$ kubectl create deployment nginx --image=nginx -n test-1
deployment.apps/nginx created

Adding read-only access users

To get started with this use case, we reuse the AWS IAM read-only role from the preceding use case. To do that, we first need to disassociate its existing access policy. The following commands remove the access policy from the namespace admin access entry, and then list any policies still associated with the access entry.

# Disassociate access policy from access entry
aws eks disassociate-access-policy --cluster-name <CLUSTER_NAME> \
--principal-arn arn:aws:iam::<AWS_ACCOUNT_ID>:role/ReadOnly \
--policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminPolicy

# List associated access policies to access entry
aws eks list-associated-access-policies --cluster-name <CLUSTER_NAME> \
--principal-arn arn:aws:iam::<AWS_ACCOUNT_ID>:role/ReadOnly
{
    "clusterName": "<CLUSTER_NAME>",
    "principalArn": "arn:aws:iam::<AWS_ACCOUNT_ID>:role/ReadOnly",
    "associatedAccessPolicies": []
}

As you can see, the preceding commands disassociated the access policy that granted our AWS IAM read-only role namespace admin access to the cluster. With the following commands, we’ll associate the AmazonEKSViewPolicy with the access entry to provide cluster-wide read-only access to the AWS IAM role principal, and then verify that the access entry has read-only access across the cluster.

# Associate access policy to access entry
aws eks associate-access-policy --cluster-name <CLUSTER_NAME> \
  --principal-arn arn:aws:iam::<AWS_ACCOUNT_ID>:role/ReadOnly \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy \
  --access-scope type=cluster
{
    "clusterName": "<CLUSTER_NAME>",
    "principalArn": "arn:aws:iam::<AWS_ACCOUNT_ID>:role/ReadOnly",
    "associatedAccessPolicy": {
        "policyArn": "arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy",
        "accessScope": {
            "type": "cluster",
            "namespaces": []
        },
        "associatedAt": "2023-04-20T10:08:17.503000-04:00",
        "modifiedAt": "2023-04-20T10:08:17.503000-04:00"
    }
}

# Cluster read-only user can GET pods in kube-system namespace
kubectl -n kube-system get po
NAME                       READY   STATUS    RESTARTS   AGE
aws-node-b9cpr             1/1     Running   0          2d20h
...

# Cluster read-only user can GET pods in test-1 namespace
kubectl -n test-1 get po
NAME                     READY   STATUS    RESTARTS   AGE
nginx-7854ff8877-wnzfs   1/1     Running   0          47s

# Cluster read-only user cannot DELETE pods in test-1 namespace
kubectl -n test-1 delete deployment nginx
Error from server (Forbidden): error when deleting "nginx": deployment
"nginx" is forbidden: User 
"arn:aws:sts::<AWS_ACCOUNT_ID>:assumed-role/ReadOnly/<SESSION_ID>" 
cannot delete resource "deployments" in API group "apps" in the namespace "test-1"

Using cluster access entries with Kubernetes Role-Based Access Control (RBAC)

As previously mentioned, the cluster access management controls and associated APIs don’t replace the existing RBAC authorizer in Amazon EKS. Rather, Amazon EKS access entries can be combined with the RBAC authorizer to grant cluster access to an AWS IAM principal while relying on Kubernetes RBAC to apply desired permissions.

For example, the following Amazon EKS API command creates a cluster access entry and subsequently adds a Kubernetes group to that entry. The kubectl apply command applies a cluster role binding resource which binds the Kubernetes group to the cluster-admin cluster role resource. The result is a cluster access entry with permissions granted using Kubernetes RBAC.

# Create cluster access entry
$ aws eks create-access-entry --cluster-name <CLUSTER_NAME> \
  --principal-arn <IAM_PRINCIPAL_ARN> \
  --kubernetes-groups eks-admins
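
# (Sketch) Create crb.yaml, binding the eks-admins group used above to the
# built-in cluster-admin ClusterRole; this manifest matches the kubectl get
# output shown further below
$ cat <<'EOF' > crb.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-admin-ae
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: eks-admins
EOF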
  
# Apply cluster role binding
# This command assumes crb.yaml contains the ClusterRoleBinding manifest shown below
$ kubectl apply -f crb.yaml

# Get newly created cluster role binding
$ kubectl get clusterrolebinding cluster-admin-ae -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-admin-ae
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: eks-admins

You can use the kubectl auth can-i --list command to verify that the cluster access entry has cluster administrator permissions and can perform all actions on all Kubernetes resources.

# List Kubernetes permissions for authenticated user
$ kubectl auth can-i --list
...
Resources                                       Non-Resource URLs   Resource Names   Verbs
*.*                                             []                  []               [*]
                                                [*]                 []               [*]
...

Deleting the AWS IAM principal from under the access entry

The reference of a cluster access entry to its underlying AWS IAM principal is unique, as seen in the accessEntryArn in the following create-access-entry output snippet:

"accessEntryArn": "arn:aws:eks:us-west-2:<AWS_ACOUNT_ID>:accessEntry/<CLUSTER_NAME>/role/<AWS_ACCOUNT_ID>/ekstest/c8c3cfab-ad74-8943-9741-1297bb3885b6",

Once an access entry is created, the underlying AWS IAM principal cannot be changed while keeping the cluster access; the access entry and associated access policies must instead be recreated. In the following scenario, these setup steps were completed (a command sketch follows the list):

  • An AWS IAM role called ekstest was created
  • A cluster access entry was created using the ekstest role
  • The cluster access AmazonEKSViewPolicy was associated with the access entry underpinned by the ekstest AWS IAM role
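
Those setup steps might look like the following sketch; the trust policy file name is a placeholder used for illustration:

# (Sketch) Create the ekstest role, its access entry, and the view policy association
$ aws iam create-role --role-name ekstest \
  --assume-role-policy-document file://ekstest-trust-policy.json

$ aws eks create-access-entry --cluster-name <CLUSTER_NAME> \
  --principal-arn arn:aws:iam::<AWS_ACCOUNT_ID>:role/ekstest

$ aws eks associate-access-policy --cluster-name <CLUSTER_NAME> \
  --principal-arn arn:aws:iam::<AWS_ACCOUNT_ID>:role/ekstest \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy \
  --access-scope type=cluster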

After setup, the access was verified:

# Use whoami to get authenticated principal
$ kubectl whoami
arn:aws:sts::<AWS_ACCOUNT_ID>:assumed-role/ekstest/<SESSION_NAME>

# GET pods from kube-system namespace
kubectl -n kube-system get po
NAME                       READY   STATUS    RESTARTS   AGE
aws-node-b9cpr             1/1     Running   0          2d22h
...

The kubectl whoami plugin indicates the currently authenticated Kubernetes cluster principal.
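
If you don't already have the plugin, and you use the krew plugin manager, it can typically be installed with the following command (this assumes krew is configured in your environment):

# Install the whoami kubectl plugin via krew (assumes krew is installed)
$ kubectl krew install whoami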

Next, the ekstest AWS IAM role was deleted, recreated, and reused to authenticate to the Amazon EKS cluster. The following commands show that while the ekstest AWS IAM role successfully authenticated to the Amazon EKS cluster, the access entry no longer authorizes the new ekstest role instance:

# Use whoami to get authenticated principal
kubectl whoami
Error: Unauthorized

# Fail to GET pods from the kube-system namespace
kubectl -n kube-system  get po
error: You must be logged in to the server (Unauthorized)

The new ekstest role may look the same, with the same ARN, but the RoleId—returned by the following aws iam get-role command—is different. This RoleId—UserId in the case of a user principal—is used by the cluster access entry datastore to link the access entry to the AWS IAM role or user principal.

Note: Due to the separation of Amazon EKS and AWS IAM command line interface (CLI) permissions, the Amazon EKS API doesn’t expose the AWS IAM principal identifiers—RoleId or UserId—that are used to reference AWS IAM principals.

# Get AWS IAM role
$ aws iam get-role --role-name ekstest
{
    "Role": {
        "Path": "/",
        "RoleName": "ekstest",
        "RoleId": "<ROLE_ID>",
        "Arn": "arn:aws:iam::<AWS_ACCOUNT_ID>:role/ekstest",
...
}

To prevent non-deterministic behavior and avoid incorrect security settings, the best practice for changing or recreating the underlying AWS IAM principal is to first delete the access entry from the specific Amazon EKS cluster via the delete-access-entry command. Then when the AWS IAM principal is deleted and recreated, the access entry can be recreated, and the required access policies can be associated.
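
In command form, that sequence might look like the following sketch, reusing the ekstest role and view policy from the example above; the trust policy file name is a placeholder:

# 1. Delete the access entry before changing the AWS IAM principal
$ aws eks delete-access-entry --cluster-name <CLUSTER_NAME> \
  --principal-arn arn:aws:iam::<AWS_ACCOUNT_ID>:role/ekstest

# 2. Delete and recreate the AWS IAM role
#    (any attached policies must be detached before the role can be deleted)
$ aws iam delete-role --role-name ekstest
$ aws iam create-role --role-name ekstest \
  --assume-role-policy-document file://ekstest-trust-policy.json

# 3. Recreate the access entry and re-associate the required access policies
$ aws eks create-access-entry --cluster-name <CLUSTER_NAME> \
  --principal-arn arn:aws:iam::<AWS_ACCOUNT_ID>:role/ekstest
$ aws eks associate-access-policy --cluster-name <CLUSTER_NAME> \
  --principal-arn arn:aws:iam::<AWS_ACCOUNT_ID>:role/ekstest \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy \
  --access-scope type=cluster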

Conclusion

Amazon EKS cluster access management is now the preferred means of managing AWS IAM principals' access to Amazon EKS clusters. With cluster access management you can continue to leverage principals maintained by AWS IAM, as Amazon EKS access entries, and apply Kubernetes permissions with cluster access policies. Cluster access management uses standard API approaches to extend the Kubernetes AuthZ model with Amazon EKS authorizers. Together, this feature set provides AWS IAM integration without disrupting the existing Kubernetes security schemes currently used in Amazon EKS. Your Kubernetes RBAC schemes will still work, but you no longer have to edit the aws-auth configMap.

With cluster access management you can also remove the cluster creator from newly-created clusters without losing access to the cluster. This feature provides better DevSecOps practices through automation, and least-privileged and time-based access.

While cluster access management allows cleaner integration with AWS IAM principals for AuthN, AuthZ permissions are separate from AWS IAM and are modeled after well-known Kubernetes permissions. This means that while you can use AWS IAM to manage your AuthN principals, your Amazon EKS permissions are separate from your AWS IAM permissions. The result is a more flexible AuthZ model in which AWS IAM permissions do not impact Amazon EKS cluster permissions.

Finally, cluster access management allows Amazon EKS administrators to use the Amazon EKS API for cluster access management without having to switch to the local Kubernetes API to perform the last-mile AuthZ settings for cluster user permissions. This is also a better approach for automated processes—DevSecOps pipelines—that build and update Amazon EKS clusters.

Try cluster access management!

If you are looking for a way to move off the aws-auth configMap while still using standard Kubernetes AuthZ approaches, then you should try cluster access management. You can run both models in tandem, with a cutover based on your needs and schedule, for the least disruption to your Amazon EKS operations.

In a future, yet to be determined, Kubernetes version of Amazon EKS, the aws-auth configMap will be removed as a supported authentication source, so migrating to access entries is strongly encouraged.

Check out our Containers Roadmap!

If you have ideas about how we can improve Amazon EKS and our other container services, then please use our Containers Roadmap to give us feedback and review our existing roadmap items.

Sheetal Joshi

Sheetal Joshi is a Principal Developer Advocate on the Amazon EKS team. Sheetal worked for several software vendors before joining AWS, including HP, McAfee, Cisco, Riverbed, and Moogsoft. For about 20 years, she has specialized in building enterprise-scale, distributed software systems, virtualization technologies, and cloud architectures. At the moment, she is working on making it easier to get started with, adopt, and run Kubernetes clusters in the cloud, on-premises, and at the edge.

Rodrigo Bersa

Rodrigo is a Specialist Solutions Architect for Containers and AppMod, with a focus on Security and Infrastructure-as-Code automation. In this role, Rodrigo aims to help customers achieve their business goals by leveraging best practices on AWS Containers Services, such as Amazon EKS, Amazon ECS, and Red Hat OpenShift on AWS (ROSA) during their Cloud Journey, when building new environments, or migrating existing technologies.

Mike Stefaniak

Mike Stefaniak is a Principal Product Manager at Amazon Web Services focusing on all things Kubernetes and delivering features that help customers accelerate their modernization journey on AWS.