
Building a Low-Maintenance Kubernetes Cluster with EKS Auto Mode

Introduction

Node scaling and upgrades are an unavoidable operational burden when running Kubernetes in production. Tools like Karpenter and the Cluster Autoscaler automate scaling on EKS, but node group design and lifecycle management have always remained the user's responsibility.

EKS Auto Mode, introduced in late 2024, fully delegates node management to EKS itself. This post walks through creating an Auto Mode EKS cluster with eksctl and clarifies the design philosophy differences from the traditional approach.

What Is EKS Auto Mode?

EKS Auto Mode automatically manages cluster infrastructure — compute (EC2 instances), networking, and storage. In traditional EKS, you had to handle:

  • Node group design — choosing instance types, min/max counts
  • Scaling configuration — setting up Karpenter or Cluster Autoscaler
  • Node upgrades — AMI updates and rolling updates
  • Security patches — OS-level patching

With Auto Mode, EKS handles all of this. You just deploy workloads (Pods), and the required nodes are provisioned automatically.

Comparison with Traditional Approach

Aspect                  | Managed Node Groups                     | Auto Mode
Node management         | User                                    | EKS
Scaling                 | Requires Karpenter / Cluster Autoscaler | Automatic
Instance type selection | User-specified                          | Auto-selected based on workload
OS patching             | User                                    | EKS
Pricing model           | EC2 + control plane pricing             | EC2 + control plane + Auto Mode management fee

Auto Mode is ideal when you want Kubernetes without the infrastructure overhead.

Setup Steps

Prerequisites

  • AWS CLI configured
  • eksctl installed (brew install eksctl)
  • Appropriate IAM permissions

Creating the Cluster

Creating an EKS cluster with Auto Mode is remarkably simple — just add the --enable-auto-mode flag.

eksctl create cluster \
  --name sandbox \
  --region ap-northeast-1 \
  --version 1.32 \
  --enable-auto-mode
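The same cluster can also be declared in an eksctl config file, which is easier to version-control than a long command line. A minimal sketch using eksctl's autoModeConfig block (field names follow the eksctl config schema; verify against your eksctl version):

```yaml
# cluster.yaml — declarative equivalent of the command above
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: sandbox
  region: ap-northeast-1
  version: "1.32"
autoModeConfig:
  enabled: true
```

Apply it with eksctl create cluster -f cluster.yaml.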

This single command creates:

  • VPC — A new VPC with public/private subnets
  • EKS Cluster — Kubernetes 1.32 with Auto Mode enabled
  • IAM Roles — Roles for cluster and nodes
  • Security Groups — Rules for cluster communication

The process takes about 15-20 minutes. Progress is visible in real-time through eksctl logs.

What Gets Created Under the Hood

Despite being a single command, eksctl creates numerous resources via CloudFormation. Understanding these helps with troubleshooting and production deployments.

VPC and Networking

eksctl creates a new VPC with CIDR 192.168.0.0/16, placing one public and one private subnet in each of 3 AZs.

Subnet     | AZ              | CIDR             | Public IP
Public 1d  | ap-northeast-1d | 192.168.0.0/19   | Yes
Public 1c  | ap-northeast-1c | 192.168.32.0/19  | Yes
Public 1a  | ap-northeast-1a | 192.168.64.0/19  | Yes
Private 1d | ap-northeast-1d | 192.168.96.0/19  | No
Private 1c | ap-northeast-1c | 192.168.128.0/19 | No
Private 1a | ap-northeast-1a | 192.168.160.0/19 | No

Public subnets route through an Internet Gateway, while private subnets route through a single NAT Gateway (with an EIP). Nodes are placed in private subnets, so they're never directly exposed to the internet.

IAM Roles

Auto Mode creates two IAM roles.

Cluster Service Role (ServiceRole) — Used by the EKS control plane, assumed by eks.amazonaws.com. Compared to traditional setups, Auto Mode adds several policies:

  • AmazonEKSClusterPolicy — Core cluster management
  • AmazonEKSComputePolicy — Auto Mode compute management
  • AmazonEKSNetworkingPolicy — Auto Mode networking management
  • AmazonEKSBlockStoragePolicy — Auto Mode storage management
  • AmazonEKSLoadBalancingPolicy — Load balancer management
  • AmazonEKSVPCResourceController — VPC resource control

Traditional setups only need AmazonEKSClusterPolicy and AmazonEKSVPCResourceController. The additional compute, networking, and storage policies are what enable EKS to automatically manage infrastructure.
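For reference, the cluster service role's trust policy allows the EKS service principal to assume the role; per the AWS documentation, Auto Mode additionally requires sts:TagSession alongside the usual sts:AssumeRole. A sketch of the trust policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": ["sts:AssumeRole", "sts:TagSession"]
    }
  ]
}
```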

Auto Mode Node Role (AutoModeNodeRole) — Assigned to nodes (EC2 instances), assumed by ec2.amazonaws.com.

  • AmazonEKSWorkerNodeMinimalPolicy — Minimal permissions for node operation
  • AmazonEC2ContainerRegistryPullOnly — ECR image pull permissions

The node-side policies follow least privilege. This is simpler than the traditional combination of AmazonEKSWorkerNodePolicy, AmazonEKS_CNI_Policy, and AmazonEC2ContainerRegistryReadOnly.

Security Groups

Two security groups are created, with ingress rules that control cluster communication.

ControlPlaneSecurityGroup — For control plane to node communication. No ingress rules, all outbound allowed.

ClusterSharedNodeSecurityGroup — For inter-node communication. Allows all traffic from itself and from the control plane, enabling unrestricted node-to-node communication.

This configuration ensures both control plane ↔ node and node ↔ node communication paths are established.

How This Differs from Traditional Setup

With traditional managed node groups, you had to explicitly specify node configuration:

# Traditional approach (managed node groups)
eksctl create cluster \
  --name sandbox \
  --region ap-northeast-1 \
  --version 1.32 \
  --nodegroup-name sandbox-nodes \
  --node-type t3.medium \
  --nodes 2 \
  --nodes-min 1 \
  --nodes-max 3 \
  --managed

Auto Mode removes all node group specifications. EKS decides instance types and counts based on workload requirements.
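That said, Auto Mode still exposes Karpenter-style NodePool resources if you later need to constrain provisioning, for example to certain instance categories or capacity types. A sketch of a custom NodePool based on the Auto Mode documentation (the pool name "cost-optimized" is hypothetical; the built-in NodeClass named "default" is assumed):

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: cost-optimized   # hypothetical name
spec:
  template:
    spec:
      nodeClassRef:
        group: eks.amazonaws.com   # Auto Mode's NodeClass API group
        kind: NodeClass
        name: default              # built-in NodeClass
      requirements:
        - key: eks.amazonaws.com/instance-category
          operator: In
          values: ["c", "m", "r"]  # compute/general/memory-optimized families
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand"]
```

Note that the requirement keys use the eks.amazonaws.com prefix rather than the karpenter.k8s.aws prefix used by self-managed Karpenter.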

Verification

Checking Cluster Status

After cluster creation, start by verifying the control plane and nodes.

kubectl cluster-info
Kubernetes control plane is running at https://xxxxx.gr7.ap-northeast-1.eks.amazonaws.com
kubectl get nodes
NAME                  STATUS   ROLES    AGE   VERSION
i-0a3b6967393de6f9d   Ready    <none>   18s   v1.32.11-eks-ac2d5a0
i-0bc928a12acfe91a2   Ready    <none>   18s   v1.32.11-eks-ac2d5a0

Right after creation, Auto Mode has already provisioned 2 nodes automatically. The key point is that we never specified instance types or node counts.

Deploying a Sample Workload

The real power of Auto Mode lies in dynamic node provisioning based on workload demand. Let's deploy nginx with 3 replicas to see it in action.

kubectl create deployment nginx --image=nginx:latest --replicas=3

Pods start in Pending state, but after about 30 seconds all are Running.

kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP                NODE
nginx-54c98b4f84-7sc9x   1/1     Running   0          54s   192.168.163.242   i-04012a4d55813e76a
nginx-54c98b4f84-8kp22   1/1     Running   0          54s   192.168.163.240   i-04012a4d55813e76a
nginx-54c98b4f84-mc9lw   1/1     Running   0          54s   192.168.163.241   i-04012a4d55813e76a

Checking nodes again reveals a third node (i-04012a4d55813e76a) was automatically added.

kubectl get nodes
NAME                  STATUS   ROLES    AGE   VERSION
i-04012a4d55813e76a   Ready    <none>   35s   v1.32.11-eks-ac2d5a0
i-0a3b6967393de6f9d   Ready    <none>   81s   v1.32.11-eks-ac2d5a0
i-0bc928a12acfe91a2   Ready    <none>   81s   v1.32.11-eks-ac2d5a0

Auto Mode added a node in response to workload demand with no Karpenter or Cluster Autoscaler configuration required.
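Since Auto Mode picks instance types and counts based on pod resource requests, in practice you should set requests explicitly rather than relying on defaults. The declarative equivalent of the imperative command above, with hypothetical request values:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          resources:
            requests:
              cpu: "500m"     # hypothetical sizing; Auto Mode provisions
              memory: 512Mi   # nodes to satisfy these requests
```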

Workload Deletion and Scale-Down

Let's verify scale-down as well. Delete the nginx deployment we just created.

kubectl delete deployment nginx

Checking nodes after deletion, the cluster has been automatically consolidated back down to 2 nodes. Note that it isn't simply the newest node that gets removed: here the workload node i-04012a4d55813e76a remains, while one of the original nodes (i-0a3b6967393de6f9d) was terminated.

kubectl get nodes
NAME                  STATUS   ROLES    AGE   VERSION
i-0bc928a12acfe91a2   Ready    <none>   79m   v1.32.11-eks-ac2d5a0
i-04012a4d55813e76a   Ready    <none>   78m   v1.32.11-eks-ac2d5a0

Both scale-up and scale-down are fully managed by Auto Mode. With traditional setups, you'd need to tune parameters like Karpenter's consolidationPolicy or Cluster Autoscaler's scale-down-unneeded-time — Auto Mode eliminates all of that.

When to Choose Auto Mode

Auto Mode isn't a silver bullet. Use these criteria to decide:

Auto Mode is a good fit when:

  • You want a dev/staging cluster without infrastructure management overhead
  • Your team has limited Kubernetes infrastructure experience
  • Your workloads are standard and don't require specialized instance types

Traditional approach is better when:

  • You need specific hardware like GPU instances
  • You want fine-grained control over spot instances for cost optimization
  • You need OS-level customization on nodes

Takeaways

  • One flag eliminates infrastructure management: --enable-auto-mode delegates node group design, scaling, and patching to EKS, letting you focus on workload development.
  • eksctl makes the barrier to entry even lower — VPC, IAM roles, and everything else are created with a single command, making environment setup dramatically faster.
  • The key decision is "how much do you want to manage yourself?" — Auto Mode simplifies operations but limits fine-grained control. Evaluate your use case to choose between Auto Mode and traditional node groups.

Shinya Tahara

Solutions Architect @ AWS

I'm a Solutions Architect at AWS, providing technical guidance primarily to financial industry customers. I share learnings about cloud architecture and AI/ML on this blog.
