EKS Security
Security on EKS involves multiple layers: authentication and authorization, network security, encryption, secrets management, and compliance. EKS integrates deeply with AWS security services like IAM, KMS, and Secrets Manager to provide enterprise-grade security for Kubernetes workloads.
Security Architecture
EKS security operates at multiple levels: identity (IAM, IRSA, and RBAC), network (security groups and network policies), data (encryption and secrets management), and audit (logging and compliance). The sections below cover each in turn.
IAM Roles for Service Accounts (IRSA)
IRSA allows pods to assume IAM roles, eliminating the need to store AWS credentials in pods or use instance profiles.
How IRSA Works
Components:
- OIDC Provider - Links EKS cluster to AWS IAM
- Service Account - Kubernetes service account with IAM role annotation
- IAM Role - AWS IAM role with necessary permissions
- Trust Policy - Allows OIDC provider to assume the role
- Pod - Uses service account to get temporary credentials
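The flow above can be sketched from inside a pod: EKS injects two environment variables, and the AWS SDK exchanges the projected token for temporary credentials. A minimal sketch, with illustrative values matching the examples in this section:

```shell
# Injected automatically into IRSA-enabled pods (illustrative values):
export AWS_ROLE_ARN="arn:aws:iam::123456789012:role/my-app-role"
export AWS_WEB_IDENTITY_TOKEN_FILE="/var/run/secrets/eks.amazonaws.com/serviceaccount/token"

# The SDK/CLI performs this exchange for you; shown commented for reference:
# aws sts assume-role-with-web-identity \
#   --role-arn "$AWS_ROLE_ARN" \
#   --role-session-name debug-session \
#   --web-identity-token "$(cat "$AWS_WEB_IDENTITY_TOKEN_FILE")"

# The role name is the last path segment of the ARN (handy when debugging):
ROLE_NAME="${AWS_ROLE_ARN##*/}"
echo "$ROLE_NAME"
```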
Setting Up IRSA
Step 1: Create OIDC Provider
# Check if OIDC provider exists
aws eks describe-cluster --name my-cluster --query "cluster.identity.oidc.issuer" --output text
# Create OIDC provider (if needed)
eksctl utils associate-iam-oidc-provider \
--cluster my-cluster \
--approve
Step 2: Create IAM Role
# Create IAM role with trust policy
cat > trust-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/oidc.eks.us-west-2.amazonaws.com/id/EXAMPLED539D4633E53DE1B716D3041E"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.us-west-2.amazonaws.com/id/EXAMPLED539D4633E53DE1B716D3041E:sub": "system:serviceaccount:default:my-service-account",
          "oidc.eks.us-west-2.amazonaws.com/id/EXAMPLED539D4633E53DE1B716D3041E:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
EOF
aws iam create-role \
--role-name my-app-role \
--assume-role-policy-document file://trust-policy.json
# Attach permissions policy
aws iam attach-role-policy \
--role-name my-app-role \
--policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
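The managed AmazonS3ReadOnlyAccess policy grants read access to every bucket in the account; in line with least privilege, a scoped inline policy is usually preferable. A minimal sketch (the bucket name is illustrative):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::my-app-bucket",
        "arn:aws:s3:::my-app-bucket/*"
      ]
    }
  ]
}
```

Attach it with `aws iam put-role-policy --role-name my-app-role --policy-name s3-read --policy-document file://policy.json`.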
Step 3: Create Service Account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-service-account
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/my-app-role
Step 4: Use in Pod
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  serviceAccountName: my-service-account
  containers:
  - name: app
    image: my-app:latest
    # Pod automatically receives IAM role credentials via the projected token;
    # no AWS credentials need to be configured
Using eksctl for IRSA
Simplified creation with eksctl:
# Create service account with IAM role
eksctl create iamserviceaccount \
--name my-service-account \
--namespace default \
--cluster my-cluster \
--attach-policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess \
--approve
This automatically:
- Creates the IAM role
- Sets up trust policy
- Creates service account
- Adds annotation to service account
Pod Identity and Authentication
AWS SDK Credential Chain
Pods using IRSA automatically get credentials through the AWS SDK credential chain:
- Environment Variables - AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY
- Web Identity Token - From service account (IRSA)
- Instance Profile - From EC2 instance (fallback)
The AWS SDK in your application automatically uses IRSA credentials when available.
Testing IRSA
# Deploy test pod (the --serviceaccount flag was removed in kubectl 1.24;
# use --overrides to set the service account instead)
kubectl run aws-cli-test \
--image=amazon/aws-cli:latest \
--overrides='{"spec": {"serviceAccountName": "my-service-account"}}' \
--command -- sleep 3600
# Test access
kubectl exec -it aws-cli-test -- aws s3 ls
# Check credentials
kubectl exec -it aws-cli-test -- env | grep AWS
Network Security
Security Groups
Security groups provide network-level access control:
Node Security Group:
- Controls traffic to/from worker nodes
- Applied to all pods on the node (default)
- Managed by EKS during cluster creation
Pod Security Groups:
- Applied at the pod level via the VPC CNI's SecurityGroupPolicy custom resource
- Fine-grained network isolation
- Pods matched by different policies can have different security groups
apiVersion: vpc.resources.k8s.aws/v1beta1
kind: SecurityGroupPolicy
metadata:
  name: web-sg-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web
  securityGroups:
    groupIds:
    - sg-1234567890abcdef0
Security Group Best Practices:
- Use least privilege principle
- Separate security groups for different tiers
- Restrict ingress to necessary ports only
- Use security groups with network policies
Network Policies
Network policies provide pod-to-pod firewall rules:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: database
    ports:
    - protocol: TCP
      port: 5432
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
    ports:
    - protocol: UDP
      port: 53
Network Policy Use Cases:
- Isolate namespaces
- Restrict pod-to-pod communication
- Allow only specific ingress/egress
- Defense in depth with security groups
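A common baseline before adding targeted allow rules like the policy above is a default-deny for the namespace; a minimal sketch:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: default
spec:
  podSelector: {}     # an empty selector matches every pod in the namespace
  policyTypes:
  - Ingress
  - Egress
```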
Secrets Management
Kubernetes Secrets
Basic secret management. Secret values are only base64-encoded in the API; on EKS they are encrypted at rest in etcd, and KMS envelope encryption can be added for defense in depth:
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
stringData:
  username: admin
  password: secretpassword
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: my-app:latest
    env:
    - name: USERNAME
      valueFrom:
        secretKeyRef:
          name: app-secret
          key: username
    - name: PASSWORD
      valueFrom:
        secretKeyRef:
          name: app-secret
          key: password
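Note that `stringData` values are stored base64-encoded, not encrypted, in the Secret object. The kubectl command below is shown for reference; the encoding itself is plain base64 and can be demonstrated locally:

```shell
# Read a secret value back from the cluster (reference only):
# kubectl get secret app-secret -o jsonpath='{.data.password}' | base64 -d

# The stored form is just base64 -- anyone who can read the Secret
# object can decode it:
encoded=$(printf '%s' 'secretpassword' | base64)
echo "$encoded"
printf '%s' "$encoded" | base64 -d
```

This is why RBAC access to Secret objects should itself be treated as sensitive.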
AWS Secrets Manager Integration
Use External Secrets Operator to sync AWS Secrets Manager:
# Install External Secrets Operator
helm repo add external-secrets https://charts.external-secrets.io
helm install external-secrets \
external-secrets/external-secrets \
-n external-secrets-system \
--create-namespace
Create Secret Store:
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: aws-secrets-manager
  namespace: default
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-west-2
      auth:
        jwt:
          serviceAccountRef:
            name: external-secrets-sa
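The external-secrets-sa service account referenced above is assumed to be an IRSA-enabled service account (created as in the IRSA section) whose role can read the relevant secrets; a minimal sketch of such a permissions policy (the resource ARN pattern is illustrative):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "secretsmanager:GetSecretValue",
        "secretsmanager:DescribeSecret"
      ],
      "Resource": "arn:aws:secretsmanager:us-west-2:123456789012:secret:prod/*"
    }
  ]
}
```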
Sync Secret:
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-secret
  namespace: default
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets-manager
    kind: SecretStore
  target:
    name: app-secret
    creationPolicy: Owner
  data:
  - secretKey: password
    remoteRef:
      key: prod/database/password
      property: password
AWS Systems Manager Parameter Store
Similar integration with Parameter Store:
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: aws-parameter-store
spec:
  provider:
    aws:
      service: ParameterStore
      region: us-west-2
      auth:
        jwt:
          serviceAccountRef:
            name: external-secrets-sa
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-config
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-parameter-store
    kind: SecretStore
  target:
    name: app-config
  data:
  - secretKey: api-key
    remoteRef:
      key: /prod/app/api-key
Encryption
Control Plane Encryption
EKS encrypts control plane data by default:
- Encryption at Rest - etcd volumes are encrypted with AWS KMS
- Encryption in Transit - TLS for all API server communication
- Envelope Encryption - Kubernetes secrets can additionally be envelope-encrypted with a customer-managed KMS key
Enable Custom KMS Key:
# Create KMS key
aws kms create-key \
--description "EKS cluster encryption key" \
--key-usage ENCRYPT_DECRYPT
# Create cluster with envelope encryption of Kubernetes secrets
aws eks create-cluster \
--name my-cluster \
--role-arn arn:aws:iam::123456789012:role/eks-service-role \
--resources-vpc-config subnetIds=subnet-123,subnet-456 \
--encryption-config '[{"resources":["secrets"],"provider":{"keyArn":"arn:aws:kms:us-west-2:123456789012:key/12345678-1234-1234-1234-123456789012"}}]'
EBS Volume Encryption
Enable encryption in storage class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3-encrypted
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  encrypted: "true"
  kmsKeyId: arn:aws:kms:us-west-2:123456789012:key/12345678-1234-1234-1234-123456789012
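Any PersistentVolumeClaim that references this class then provisions an encrypted volume; for example:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-encrypted
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: gp3-encrypted
  resources:
    requests:
      storage: 10Gi
```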
EFS Encryption
Enable encryption when creating EFS:
aws efs create-file-system \
--creation-token eks-efs \
--encrypted \
--kms-key-id arn:aws:kms:us-west-2:123456789012:key/12345678-1234-1234-1234-123456789012
OIDC Provider Configuration
OIDC provider links EKS to AWS IAM for IRSA:
Create OIDC Provider:
# Get cluster OIDC issuer URL
CLUSTER_NAME=my-cluster
OIDC_ISSUER=$(aws eks describe-cluster --name $CLUSTER_NAME --query "cluster.identity.oidc.issuer" --output text)
# Extract OIDC provider ID
OIDC_ID=$(echo $OIDC_ISSUER | cut -d '/' -f 5)
# Create OIDC provider
aws iam create-open-id-connect-provider \
--url $OIDC_ISSUER \
--client-id-list sts.amazonaws.com \
--thumbprint-list 9e99a48a9960b14926bb7f3b02e22da2b0ab7280
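The cut step above relies on the shape of the issuer URL; with the example issuer used throughout this page, it extracts the provider ID like so:

```shell
# Issuer URL of the form returned by `aws eks describe-cluster`:
OIDC_ISSUER="https://oidc.eks.us-west-2.amazonaws.com/id/EXAMPLED539D4633E53DE1B716D3041E"

# Fields split on '/': 1="https:", 2="", 3=host, 4="id", 5=provider ID
OIDC_ID=$(echo "$OIDC_ISSUER" | cut -d '/' -f 5)
echo "$OIDC_ID"
```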
RBAC with AWS IAM Integration
Mapping IAM Users to Kubernetes Users
Use aws-auth ConfigMap to map IAM users/roles:
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    - userarn: arn:aws:iam::123456789012:user/john
      username: john
      groups:
      - system:masters
    - userarn: arn:aws:iam::123456789012:user/jane
      username: jane
      groups:
      - system:authenticated
  mapRoles: |
    - rolearn: arn:aws:iam::123456789012:role/eks-admin
      username: eks-admin
      groups:
      - system:masters
Using eksctl:
# Add IAM user
eksctl create iamidentitymapping \
--cluster my-cluster \
--arn arn:aws:iam::123456789012:user/john \
--username john \
--group system:masters
# Add IAM role
eksctl create iamidentitymapping \
--cluster my-cluster \
--arn arn:aws:iam::123456789012:role/eks-admin \
--username eks-admin \
--group system:masters
Kubernetes RBAC
Combine IAM mapping with Kubernetes RBAC:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developer-binding
  namespace: default
subjects:
- kind: User
  name: john
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer
  apiGroup: rbac.authorization.k8s.io
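Rather than binding individual users, the groups field in aws-auth can map an IAM user or role to a Kubernetes group, which a RoleBinding then targets; a sketch (the developers group name is illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developers-binding
  namespace: default
subjects:
- kind: Group
  name: developers        # group assigned via aws-auth mapUsers/mapRoles
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer
  apiGroup: rbac.authorization.k8s.io
```

This keeps the aws-auth ConfigMap small: new team members only need a group entry, not a new RoleBinding.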
Compliance and Audit Logging
CloudTrail Integration
EKS API calls are logged to CloudTrail:
# View EKS API calls
aws cloudtrail lookup-events \
--lookup-attributes AttributeKey=ResourceName,AttributeValue=my-cluster \
--max-results 10
Logged Events:
- Cluster creation/deletion
- Node group operations
- Add-on installations
- Configuration changes
VPC Flow Logs
Monitor network traffic:
# Enable VPC Flow Logs
aws ec2 create-flow-logs \
--resource-type VPC \
--resource-ids vpc-12345678 \
--traffic-type ALL \
--log-destination-type cloud-watch-logs \
--log-group-name /aws/vpc/flowlogs
EKS Audit Logs
Enable cluster audit logging:
# Enable control plane logs
aws eks update-cluster-config \
--name my-cluster \
--logging '{
  "clusterLogging": [
    {
      "types": ["api", "audit", "authenticator", "controllerManager", "scheduler"],
      "enabled": true
    }
  ]
}'
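Once enabled, the logs land in CloudWatch Logs under a group named after the cluster; the naming convention can be derived as:

```shell
# Control plane logs go to /aws/eks/<cluster-name>/cluster
CLUSTER_NAME=my-cluster
LOG_GROUP="/aws/eks/${CLUSTER_NAME}/cluster"
echo "$LOG_GROUP"

# Then query them, e.g. (reference only; requires AWS credentials):
# aws logs filter-log-events --log-group-name "$LOG_GROUP" \
#   --log-stream-name-prefix kube-apiserver-audit
```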
Log Types:
- api - API server logs
- audit - Audit logs
- authenticator - Authentication logs
- controllerManager - Controller manager logs
- scheduler - Scheduler logs
Security Best Practices
- Enable IRSA - Use IAM roles for service accounts instead of node instance profiles
- Least Privilege - Grant the minimum necessary permissions
- Encrypt Everything - Enable encryption for the control plane, volumes, and secrets
- Use Private Endpoints - Restrict API server access to the VPC
- Network Policies - Implement pod-to-pod network isolation
- Security Groups - Use pod-level security groups for fine-grained control
- Secrets Management - Use AWS Secrets Manager or Parameter Store
- Regular Updates - Keep the cluster and nodes patched
- Audit Logging - Enable all control plane log types for compliance
- Multi-Factor Authentication - Require MFA for IAM users accessing clusters
- Separate Environments - Use different clusters or namespaces for dev/staging/prod
- Image Scanning - Scan container images for vulnerabilities
- Pod Security Standards - Enforce them with the built-in Pod Security Admission controller (PodSecurityPolicy was removed in Kubernetes 1.25)
- Regular Reviews - Review IAM roles, RBAC, and network policies periodically
Common Security Issues
IRSA Not Working
Problem: Pod can’t assume IAM role
Solutions:
- Verify OIDC provider is created
- Check service account annotation
- Verify trust policy conditions
- Check IAM role permissions
- Review pod logs for credential errors
Access Denied Errors
Problem: kubectl access denied
Solutions:
- Verify IAM user/role in aws-auth ConfigMap
- Check RBAC permissions
- Verify cluster endpoint access
- Check IAM user permissions
Network Policy Not Enforcing
Problem: Network policies not working
Solutions:
- Verify a network policy engine is enabled (Amazon VPC CNI network policy support or Calico)
- Check network policy syntax
- Verify pod selectors and namespace labels
- Check the policy engine's logs (aws-node or Calico pods)
See Also
- Cluster Setup - Initial security configuration
- Networking - Network security
- Troubleshooting - Security issues