EKS Add-ons

EKS add-ons are curated Kubernetes software components that extend cluster functionality. EKS provides managed add-ons for essential components like VPC CNI, CoreDNS, kube-proxy, and EBS CSI driver, plus support for installing popular third-party add-ons for monitoring, networking, security, and more.

EKS Add-ons Overview

EKS add-ons are managed Kubernetes components that AWS maintains and updates:

graph TB
    subgraph eks_addons[EKS Managed Add-ons]
        VPC_CNI[VPC CNI]
        COREDNS[CoreDNS]
        KUBE_PROXY[kube-proxy]
        EBS_CSI[EBS CSI Driver]
        EFS_CSI[EFS CSI Driver]
    end
    subgraph third_party[Third-Party Add-ons]
        ALB[AWS Load Balancer<br/>Controller]
        EXTERNAL_SECRETS[External Secrets<br/>Operator]
        CLUSTER_AUTOSCALER[Cluster Autoscaler]
        METRICS_SERVER[Metrics Server]
    end
    EKS_CLUSTER[EKS Cluster] --> VPC_CNI
    EKS_CLUSTER --> COREDNS
    EKS_CLUSTER --> KUBE_PROXY
    EKS_CLUSTER --> EBS_CSI
    EKS_CLUSTER --> ALB
    EKS_CLUSTER --> EXTERNAL_SECRETS
    style EKS_CLUSTER fill:#e1f5ff
    style VPC_CNI fill:#fff4e1
    style ALB fill:#e8f5e9

EKS Managed Add-ons:

  • Maintained and updated by AWS
  • Tested for EKS compatibility
  • Automatic version management
  • Integrated with EKS console

Third-Party Add-ons:

  • Community-maintained
  • Manual installation and updates
  • More flexibility and customization

EKS Managed Add-ons

VPC CNI

The VPC Container Network Interface plugin provides networking for pods using VPC IP addresses.

Installation:

# Install VPC CNI add-on (omitting --addon-version installs the default
# version for the cluster's Kubernetes version; "latest" is not a valid value)
aws eks create-addon \
  --cluster-name my-cluster \
  --addon-name vpc-cni \
  --resolve-conflicts OVERWRITE

Configuration:

# Update VPC CNI configuration
aws eks update-addon \
  --cluster-name my-cluster \
  --addon-name vpc-cni \
  --addon-version v1.15.0-eksbuild.1 \
  --configuration-values '{
    "env": {
      "ENABLE_PREFIX_DELEGATION": "true",
      "WARM_PREFIX_TARGET": "1"
    }
  }'

Key Features:

  • Pod networking with VPC IPs
  • Security group integration
  • Custom networking mode support
  • IP address management
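Once the add-on is active, the settings above can be verified against a live cluster. A sketch, assuming `kubectl` access and the standard `aws-node` DaemonSet that the VPC CNI runs as:

```shell
# Show the VPC CNI environment variables, including prefix delegation
# settings, from the aws-node DaemonSet in kube-system
kubectl describe daemonset aws-node -n kube-system | grep -i prefix

# Confirm the add-on version actually running on the cluster
kubectl describe daemonset aws-node -n kube-system | grep Image
```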

CoreDNS

CoreDNS provides DNS resolution for pods and services within the cluster.

Installation:

# Install CoreDNS add-on (the default version for the cluster's
# Kubernetes version is installed when --addon-version is omitted)
aws eks create-addon \
  --cluster-name my-cluster \
  --addon-name coredns

Configuration:

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }

Key Features:

  • Service discovery
  • Pod DNS resolution
  • Custom DNS entries
  • Health checks
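A quick way to confirm CoreDNS is resolving correctly is to run a short-lived pod and query a well-known service name:

```shell
# Launch a throwaway busybox pod and resolve the API server's service
# name through CoreDNS; the pod is deleted when the command exits
kubectl run dns-test --rm -it --restart=Never \
  --image=busybox:1.36 -- nslookup kubernetes.default.svc.cluster.local
```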

kube-proxy

kube-proxy maintains network rules for service networking and load balancing.

Installation:

# Install kube-proxy add-on (omit --addon-version to install the default
# version for the cluster's Kubernetes version)
aws eks create-addon \
  --cluster-name my-cluster \
  --addon-name kube-proxy

Key Features:

  • Service IP management
  • Load balancing
  • Network rules
  • iptables/ipvs mode
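Which proxy mode is in effect can be checked on a running cluster; this sketch assumes the `kube-proxy-config` ConfigMap that EKS creates in kube-system (the name can vary across cluster versions):

```shell
# Inspect the configured proxy mode (iptables or ipvs)
kubectl get configmap kube-proxy-config -n kube-system -o yaml | grep "mode:"

# Alternatively, check what mode kube-proxy logged at startup
kubectl logs -n kube-system -l k8s-app=kube-proxy --tail=100 | grep -i proxy
```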

EBS CSI Driver

The EBS Container Storage Interface driver provides persistent block storage using Amazon EBS volumes.

Installation:

# Create IAM role for EBS CSI driver
eksctl create iamserviceaccount \
  --name ebs-csi-controller-sa \
  --namespace kube-system \
  --cluster my-cluster \
  --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
  --approve \
  --role-only \
  --role-name AmazonEKS_EBS_CSI_DriverRole

# Install EBS CSI driver add-on (omit --addon-version for the default version)
aws eks create-addon \
  --cluster-name my-cluster \
  --addon-name aws-ebs-csi-driver \
  --service-account-role-arn arn:aws:iam::123456789012:role/AmazonEKS_EBS_CSI_DriverRole

Key Features:

  • Dynamic volume provisioning
  • Volume snapshots
  • Volume expansion
  • Encryption support
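Dynamic provisioning can be exercised with a StorageClass and a PersistentVolumeClaim. A minimal sketch, assuming gp3 volumes and hypothetical resource names (`ebs-gp3`, `data-claim`):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp3
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer  # delay creation until a pod is scheduled
parameters:
  type: gp3
  encrypted: "true"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
    - ReadWriteOnce        # EBS volumes attach to a single node
  storageClassName: ebs-gp3
  resources:
    requests:
      storage: 10Gi
```

With `WaitForFirstConsumer`, the EBS volume is created in the same Availability Zone as the first pod that uses the claim.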

EFS CSI Driver

The EFS Container Storage Interface driver provides shared file storage using Amazon EFS.

Installation:

# Create IAM role for EFS CSI driver
eksctl create iamserviceaccount \
  --name efs-csi-controller-sa \
  --namespace kube-system \
  --cluster my-cluster \
  --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEFSCSIDriverPolicy \
  --approve \
  --role-only \
  --role-name AmazonEKS_EFS_CSI_DriverRole

# Install EFS CSI driver add-on (omit --addon-version for the default version)
aws eks create-addon \
  --cluster-name my-cluster \
  --addon-name aws-efs-csi-driver \
  --service-account-role-arn arn:aws:iam::123456789012:role/AmazonEKS_EFS_CSI_DriverRole

Key Features:

  • Shared file storage
  • Multiple pod access
  • Automatic mount target management
  • Encryption support
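Shared storage typically uses dynamic provisioning via EFS access points. A sketch, assuming an existing file system (the `fileSystemId` below is a hypothetical placeholder):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap               # create an EFS access point per volume
  fileSystemId: fs-0123456789abcdef0     # hypothetical; use your file system ID
  directoryPerms: "700"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-claim
spec:
  accessModes:
    - ReadWriteMany        # EFS supports concurrent access from many pods
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi         # required by the API, but EFS capacity is elastic
```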

AWS Load Balancer Controller

The AWS Load Balancer Controller manages Application Load Balancers (ALB) and Network Load Balancers (NLB) for Kubernetes services and ingresses.

Installation

Using Helm:

# Add Helm repository
helm repo add eks https://aws.github.io/eks-charts
helm repo update

# Install AWS Load Balancer Controller
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=my-cluster \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller

Using YAML:

# Create IAM role
eksctl create iamserviceaccount \
  --name aws-load-balancer-controller \
  --namespace kube-system \
  --cluster my-cluster \
  --attach-policy-arn arn:aws:iam::123456789012:policy/AWSLoadBalancerControllerIAMPolicy \
  --approve

# Install controller (the full manifest requires cert-manager for its
# webhook certificates; edit the manifest to set your cluster name first)
kubectl apply -f https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases/download/v2.7.0/v2_7_0_full.yaml

Usage

Create LoadBalancer Service:

apiVersion: v1
kind: Service
metadata:
  name: web-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080

Create Ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
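The controller provisions the ALB asynchronously; its DNS name appears in the ingress status once ready. Assuming the `web-ingress` resource above:

```shell
# The ADDRESS column is populated when the ALB has been created
kubectl get ingress web-ingress

# Send a test request with the expected Host header once the ALB is active
curl -H "Host: example.com" "http://$(kubectl get ingress web-ingress \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')"
```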

External Secrets Operator

External Secrets Operator syncs secrets from AWS Secrets Manager and Parameter Store into Kubernetes secrets.

Installation

Using Helm:

# Add Helm repository
helm repo add external-secrets https://charts.external-secrets.io
helm repo update

# Install External Secrets Operator
helm install external-secrets external-secrets/external-secrets \
  -n external-secrets-system \
  --create-namespace

Create IAM Role:

# Note: these AWS-managed policies are broad; in production, scope access
# down to specific secret ARNs with a customer-managed policy
eksctl create iamserviceaccount \
  --name external-secrets-sa \
  --namespace external-secrets-system \
  --cluster my-cluster \
  --attach-policy-arn arn:aws:iam::aws:policy/SecretsManagerReadWrite \
  --attach-policy-arn arn:aws:iam::aws:policy/AmazonSSMReadOnlyAccess \
  --approve

Usage

Create SecretStore:

apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: aws-secrets-manager
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-west-2
      auth:
        jwt:
          serviceAccountRef:
            name: external-secrets-sa

Create ExternalSecret:

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-secret
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets-manager
    kind: SecretStore
  target:
    name: app-secret
    creationPolicy: Owner
  data:
  - secretKey: password
    remoteRef:
      key: prod/database/password
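Whether the sync succeeded can be confirmed from the ExternalSecret's status and the resulting Kubernetes secret:

```shell
# READY should become True once the secret has been fetched and created
kubectl get externalsecret app-secret

# The synced secret holds the value pulled from Secrets Manager
kubectl get secret app-secret -o jsonpath='{.data.password}' | base64 -d
```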

Cluster Autoscaler

Cluster Autoscaler automatically adjusts node group sizes based on pod scheduling demands.

Installation

Using Helm:

# Add Helm repository
helm repo add autoscaler https://kubernetes.github.io/autoscaler
helm repo update

# Install Cluster Autoscaler
helm install cluster-autoscaler autoscaler/cluster-autoscaler \
  -n kube-system \
  --set autoDiscovery.clusterName=my-cluster \
  --set aws.region=us-west-2 \
  --set rbac.serviceAccount.annotations."eks\.amazonaws\.com/role-arn"=arn:aws:iam::123456789012:role/ClusterAutoscalerRole

Create IAM Role:

eksctl create iamserviceaccount \
  --name cluster-autoscaler \
  --namespace kube-system \
  --cluster my-cluster \
  --attach-policy-arn arn:aws:iam::123456789012:policy/ClusterAutoscalerPolicy \
  --approve
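Auto-discovery only finds node groups carrying the expected tags. EKS managed node groups get them automatically; for self-managed Auto Scaling groups they can be added with the AWS CLI (the group name `my-asg` below is a hypothetical placeholder):

```shell
# Tag the Auto Scaling group so Cluster Autoscaler's auto-discovery finds it
aws autoscaling create-or-update-tags --tags \
  "ResourceId=my-asg,ResourceType=auto-scaling-group,Key=k8s.io/cluster-autoscaler/enabled,Value=true,PropagateAtLaunch=true" \
  "ResourceId=my-asg,ResourceType=auto-scaling-group,Key=k8s.io/cluster-autoscaler/my-cluster,Value=owned,PropagateAtLaunch=true"
```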

Metrics Server

Metrics Server collects resource usage metrics from nodes and pods for HPA and kubectl top.

Installation

Using kubectl:

# Install Metrics Server
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

Using Helm:

# Add Helm repository
helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
helm repo update

# Install Metrics Server
helm install metrics-server metrics-server/metrics-server \
  -n kube-system

Configuration for EKS:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
spec:
  template:
    spec:
      containers:
      - name: metrics-server
        args:
        - --kubelet-insecure-tls  # Only if kubelet TLS verification fails; usually not needed on EKS
        - --kubelet-preferred-address-types=InternalIP
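Metrics typically become available about a minute after the Metrics Server pods are ready:

```shell
# Verify the metrics pipeline end to end
kubectl top nodes
kubectl top pods -n kube-system
```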

Prometheus and Grafana

Monitoring and alerting stack:

# Add Helm repository
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# Install Prometheus and Grafana
helm install prometheus prometheus-community/kube-prometheus-stack \
  --namespace monitoring \
  --create-namespace

Calico

Network policies and advanced networking:

# Install the Tigera operator (on EKS, Calico typically runs in
# policy-only mode alongside the VPC CNI; check the Calico docs
# for the current version and the follow-up Installation resource)
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/tigera-operator.yaml

Karpenter

Next-generation node autoscaler:

# Karpenter charts are published as OCI artifacts
# (the legacy charts.karpenter.sh Helm repo is deprecated)
helm install karpenter oci://public.ecr.aws/karpenter/karpenter \
  --namespace karpenter \
  --create-namespace \
  --set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"=arn:aws:iam::123456789012:role/KarpenterControllerRole \
  --set settings.clusterName=my-cluster
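Karpenter provisions nodes according to NodePool resources. A minimal sketch, assuming the v1beta1 API (older releases use the Provisioner CRD) and a separately defined EC2NodeClass named `default`:

```yaml
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]   # allow Spot with On-Demand fallback
      nodeClassRef:
        name: default                     # EC2NodeClass with AMI/subnet/SG settings
  limits:
    cpu: 100                              # cap total CPU Karpenter may provision
```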

Cert-Manager

Automatic TLS certificate management:

# Add Helm repository
helm repo add jetstack https://charts.jetstack.io
helm repo update

# Install cert-manager
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set installCRDs=true
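After installation, certificates are requested through Issuer or ClusterIssuer resources. A sketch of a Let's Encrypt ClusterIssuer using HTTP-01 validation through the ALB ingress class (the email address is a hypothetical placeholder):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com          # hypothetical contact address
    privateKeySecretRef:
      name: letsencrypt-prod          # secret storing the ACME account key
    solvers:
      - http01:
          ingress:
            ingressClassName: alb     # solve challenges via the ALB ingress
```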

Add-on Management and Updates

Listing Add-ons

# List installed add-ons
aws eks list-addons --cluster-name my-cluster

# Describe add-on
aws eks describe-addon \
  --cluster-name my-cluster \
  --addon-name vpc-cni

Updating Add-ons

Update to Latest Version:

# Look up the newest available version, then pass it explicitly
# ("latest" is not a valid value for --addon-version)
aws eks describe-addon-versions \
  --addon-name vpc-cni \
  --kubernetes-version 1.28 \
  --query 'addons[0].addonVersions[0].addonVersion' --output text

# Update the add-on to that version
aws eks update-addon \
  --cluster-name my-cluster \
  --addon-name vpc-cni \
  --addon-version v1.15.0-eksbuild.1 \
  --resolve-conflicts OVERWRITE

Update to Specific Version:

# List available versions
aws eks describe-addon-versions \
  --addon-name vpc-cni \
  --kubernetes-version 1.28

# Update to specific version
aws eks update-addon \
  --cluster-name my-cluster \
  --addon-name vpc-cni \
  --addon-version v1.15.0-eksbuild.1

Resolving Conflicts

When updating add-ons, conflicts may occur:

# Resolve conflicts automatically (EKS defaults overwrite local changes)
aws eks update-addon \
  --cluster-name my-cluster \
  --addon-name vpc-cni \
  --addon-version v1.15.0-eksbuild.1 \
  --resolve-conflicts OVERWRITE

# Or preserve your changes
aws eks update-addon \
  --cluster-name my-cluster \
  --addon-name vpc-cni \
  --addon-version v1.15.0-eksbuild.1 \
  --resolve-conflicts PRESERVE

Deleting Add-ons

# Delete add-on
aws eks delete-addon \
  --cluster-name my-cluster \
  --addon-name vpc-cni

# Preserve add-on resources
aws eks delete-addon \
  --cluster-name my-cluster \
  --addon-name vpc-cni \
  --preserve

Custom Add-on Installation

Using Helm

Helm is the most common way to install custom add-ons:

# Add Helm repository
helm repo add <repo-name> <repo-url>
helm repo update

# Install add-on
helm install <release-name> <repo-name>/<chart-name> \
  --namespace <namespace> \
  --create-namespace \
  --set key=value

Using kubectl

Install directly from YAML manifests:

# Apply manifest
kubectl apply -f https://example.com/addon.yaml

# Or from local file
kubectl apply -f addon.yaml

Using Operators

Many add-ons use Kubernetes operators:

# Install operator
kubectl apply -f operator.yaml

# Create custom resource
kubectl apply -f addon-instance.yaml

Best Practices

  1. Use EKS Managed Add-ons - For core components when possible

  2. Keep Add-ons Updated - Regularly update to latest versions

  3. Test Updates - Test add-on updates in non-production first

  4. Document Customizations - Keep track of configuration changes

  5. Use IRSA - Use IAM roles for service accounts for AWS integrations

  6. Monitor Add-on Health - Set up monitoring for add-on components

  7. Version Control - Store add-on configurations in Git

  8. Namespace Isolation - Install add-ons in appropriate namespaces

  9. Resource Limits - Set resource limits for add-on pods

  10. Backup Configurations - Backup add-on configurations before updates
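Several of these practices (managed add-ons, IRSA, version control) come together when add-ons are declared in an eksctl cluster config kept in Git. A sketch, assuming eksctl's `addons` schema (eksctl does accept `latest` here, unlike the raw AWS CLI):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster
  region: us-west-2
addons:
  - name: vpc-cni
    version: latest
  - name: coredns
  - name: kube-proxy
  - name: aws-ebs-csi-driver
    serviceAccountRoleARN: arn:aws:iam::123456789012:role/AmazonEKS_EBS_CSI_DriverRole
```

Applying the file (`eksctl create addon -f cluster.yaml`) keeps the installed add-ons reproducible from source control.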

Common Issues

Add-on Installation Fails

Problem: Add-on fails to install

Solutions:

  • Check IAM permissions
  • Verify service account configuration
  • Check cluster version compatibility
  • Review add-on logs
  • Verify network connectivity

Add-on Update Conflicts

Problem: Add-on update conflicts with customizations

Solutions:

  • Use PRESERVE to keep customizations
  • Use OVERWRITE to apply EKS defaults
  • Document customizations before updating
  • Test updates in non-production

Add-on Not Working

Problem: Add-on installed but not functioning

Solutions:

  • Check pod status
  • Review add-on logs
  • Verify configuration
  • Check IAM role permissions
  • Verify network policies

See Also