The Managed Kubernetes Trifecta: EKS, AKS, and GKE Compared

Introduction
By mid-2018, the managed Kubernetes landscape had reached a critical inflection point. Amazon EKS and Azure Kubernetes Service (AKS) both reached general availability in June 2018, joining Google Kubernetes Engine (GKE), which had been generally available since 2015 as Google Container Engine and took its current name in late 2017. For the first time, teams had three mature, cloud-native options for managed Kubernetes, each with distinct strengths and trade-offs.
This comparison mattered because it represented the industry’s validation of the managed control plane model. Teams no longer needed to choose between self-managed complexity (kops, kubeadm) and vendor lock-in to a single cloud—they could evaluate managed Kubernetes across AWS, Azure, and GCP based on features, pricing, and operational fit.
Historical note: EKS and AKS both launched GA within days of each other in June 2018, creating a competitive landscape that would drive rapid feature development in subsequent years.
Feature Comparison (Mid-2018)
| Capability | EKS (AWS) | AKS (Azure) | GKE (Google) |
|---|---|---|---|
| GA Launch | June 2018 | June 2018 | August 2015 (as Google Container Engine) |
| Control Plane SLA | 99.9% | No financially backed SLA (SLO only) | 99.5% zonal / 99.95% regional |
| Control Plane Cost | $0.20/hour per cluster | Free | Free |
| Node Management | Self-managed EC2 worker nodes | VMs provisioned and managed by AKS (agent pools) | Managed node pools (auto-repair, auto-upgrade) |
| Networking | AWS VPC CNI (pod IPs from the VPC); Calico for network policy | Azure CNI or kubenet | VPC-native (alias IPs) or routes-based |
| Load Balancing | ELB/NLB via the in-tree cloud provider; ALB Ingress Controller | Azure Load Balancer | Google Cloud Load Balancing |
| Identity Integration | IAM + RBAC | Azure AD + RBAC | Google Cloud IAM + RBAC |
| Auto-Scaling | Cluster Autoscaler (self-deployed) | Cluster Autoscaler (self-deployed) | Built-in Cluster Autoscaler |
| Upgrades | Manual (console/CLI) | Manual (single az aks upgrade command) | Automatic masters; node auto-upgrade with maintenance windows |
| Multi-Zone HA | Yes (control plane across multiple AZs) | No availability-zone support yet (availability sets) | Yes (regional clusters) |
| Private Clusters | Not yet (public API endpoint only) | Not yet | Yes (beta) |
| Windows Support | Not yet available | Not yet (preview arrived in 2019) | Not yet available |
| GPU Support | Yes (EC2 GPU instances) | Yes (GPU-enabled VMs) | Yes (GPU node pools) |
| Fargate/Serverless | Announced, not yet available | ACI integration via Virtual Kubelet (experimental) | Not available (Autopilot arrived in 2021) |
EKS Highlights
- AWS Native Integration: Deep integration with EC2, VPC, IAM, CloudWatch, and other AWS services.
- CNI Flexibility: Ships with the AWS VPC CNI (a native VPC IP per pod); Calico can be layered on for network policy.
- Fargate on the Horizon: AWS had announced Fargate support for EKS (serverless container execution without managing nodes), but it did not ship until late 2019.
- IAM Integration: Kubernetes RBAC maps to AWS IAM identities through the aws-auth ConfigMap, enabling fine-grained access control (see the sketch after this list).
- Operational Model: Control plane managed by AWS; worker nodes managed by customers on EC2.
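To make the IAM-to-RBAC mapping concrete, here is a minimal sketch of the aws-auth ConfigMap that EKS reads to map IAM principals onto Kubernetes users and groups. The account ID, role names, and the developers group are placeholders for illustration, not values from this article.

```bash
# Minimal aws-auth sketch: maps IAM roles to Kubernetes users/groups.
# ACCOUNT_ID, the role names, and the "developers" group are placeholders.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::ACCOUNT_ID:role/eks-worker-node-role
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
    - rolearn: arn:aws:iam::ACCOUNT_ID:role/dev-team
      username: dev-team
      groups:
        - developers
EOF
```

A normal RoleBinding can then grant the developers group namespace-scoped permissions.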
AKS Highlights
- Azure AD Integration: Native integration with Azure Active Directory for authentication and authorization (a minimal RBAC binding sketch follows this list).
- Azure CNI: Pods receive IP addresses directly from an Azure Virtual Network subnet, giving them first-class VNet connectivity; plan address space accordingly.
- Free Control Plane: No charge for the managed control plane (GKE's was also free at the time; only EKS billed $0.20/hour per cluster).
- DevOps Integration: Tight integration with Azure DevOps, Azure Monitor, and Azure Policy.
- Windows Roadmap: Windows Server container support alongside Linux was a headline roadmap item; it reached public preview in 2019.
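To make the Azure AD point concrete, the sketch below grants an Azure AD group read-only access on a cluster created with AAD integration enabled. The group object ID is a placeholder; look yours up with az ad group show.

```bash
# Grant an Azure AD group cluster-wide read-only access on an AAD-enabled AKS cluster.
# The group object ID below is a placeholder; find the real one with: az ad group show --group dev-team
cat <<'EOF' | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: aad-dev-team-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: "00000000-0000-0000-0000-000000000000"  # Azure AD group object ID (placeholder)
EOF
```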
GKE Highlights
- Mature Platform: Longest production history of the three (generally available since 2015, originally as Google Container Engine).
- Automatic Upgrades: Most automated upgrade process with maintenance windows.
- Regional Clusters: Native support for multi-zone regional clusters with automatic failover.
- Node Auto-Repair and Auto-Upgrade: GKE can automatically replace unhealthy nodes and keep node pools on supported versions (a gcloud example follows this list).
- Google Cloud Integration: Seamless integration with Cloud SQL, Cloud Storage, Cloud Monitoring.
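As a sketch of that automation, the command below creates a regional cluster with node auto-repair, node auto-upgrade, and a daily maintenance window. Flags reflect the 2018-era gcloud CLI; the cluster name, region, and window time are illustrative.

```bash
# Regional GKE cluster with node auto-repair, auto-upgrade, and a daily maintenance window.
gcloud container clusters create my-cluster \
  --region us-central1 \
  --num-nodes 1 \
  --enable-autorepair \
  --enable-autoupgrade \
  --maintenance-window 03:00
```

With --region, --num-nodes is per zone, so this yields three nodes spread across the region's zones.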
Migration Paths from Self-Managed
From kops (AWS) to EKS
- Networking: kops clusters often run Calico or Weave; EKS uses the AWS VPC CNI by default, with Calico available for network policy, so plan for a CNI change as part of the migration.
- IAM Roles: Map existing kops node roles to the EKS worker node instance role; for pod-level permissions, tools such as kube2iam or kiam filled the gap in 2018 (native IAM Roles for Service Accounts arrived in 2019).
- Node Groups: Replace kops instance groups with EC2 Auto Scaling groups of self-managed workers; eksctl automates this (see the command after this list), and EKS managed node groups did not arrive until 2019.
- Add-Ons: Most kops add-ons (CoreDNS, metrics-server) work on EKS with minimal changes.
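For the node-group step, a hypothetical kops instance group of three t3.medium workers might map to an eksctl-created node group roughly like this; the names and sizes are illustrative.

```bash
# Create a worker node group on an existing EKS cluster, mirroring a kops instance group.
eksctl create nodegroup \
  --cluster my-cluster \
  --name workers \
  --node-type t3.medium \
  --nodes 3 \
  --nodes-min 3 \
  --nodes-max 6
```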
From ACS/DIY to AKS
- Networking: Azure CNI provides similar capabilities to ACS networking; plan IP address space carefully.
- Identity: Migrate from Kubernetes RBAC-only to Azure AD integration for enhanced security.
- Storage: Azure Disks and Azure Files behave much as they did under ACS; verify StorageClass definitions and the in-tree volume plugins your workloads rely on (a sample StorageClass follows this list).
- Load Balancing: Azure Load Balancer replaces ACS load balancer configurations.
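For the storage step, a minimal StorageClass using the 2018-era in-tree Azure Disk provisioner might look like this; the class name and disk SKU are illustrative, not prescriptive.

```bash
# Managed-disk StorageClass using the in-tree azure-disk provisioner (pre-CSI era).
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-premium-example   # illustrative name
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Premium_LRS
  kind: Managed
EOF
```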
From kubeadm or One Managed Service to Another
- Application Portability: Most Kubernetes workloads are portable; focus on cloud-specific integrations (storage, networking, IAM).
- Add-On Migration: CNI plugins, ingress controllers, and monitoring stacks may need reconfiguration.
- Cost Analysis: Compare control plane and node costs across providers before migrating.
Choosing the Right Managed Service
Choose EKS When:
- AWS Ecosystem: Already heavily invested in AWS services (EC2, RDS, S3, Lambda).
- Enterprise IAM: Need fine-grained IAM integration for compliance or security requirements.
- Fargate Interest: Drawn to the announced Fargate integration for serverless container execution without node management (it shipped in late 2019).
- Multi-Cloud Strategy: Running workloads across AWS and other clouds.
Choose AKS When:
- Azure Ecosystem: Using Azure services (Azure SQL, Azure Storage, Azure Functions).
- Cost Sensitivity: Free control plane makes AKS attractive for cost-optimized deployments.
- Windows Workloads: Planning for Windows Server containers alongside Linux; support was on the AKS roadmap and reached preview in 2019.
- Azure AD Integration: Need tight integration with Azure Active Directory for enterprise auth.
Choose GKE When:
- Operational Simplicity: Want the most automated upgrade and maintenance experience.
- Google Cloud Native: Using Google Cloud services (Cloud SQL, BigQuery, Cloud Functions).
- Regional HA: Need robust multi-zone high availability out of the box.
- Mature Platform: Prefer the longest production history and most battle-tested platform.
Practical Considerations
Networking Differences
- EKS: The AWS VPC CNI assigns each pod an IP address from the VPC, enabling direct AWS service integration but consuming large IP ranges (a rough sizing sketch follows this list).
- AKS: Azure CNI provides similar VNet-native networking, while kubenet gives pods IPs from a separate CIDR routed through the node (simpler, but pods are not first-class VNet citizens).
- GKE: VPC-native clusters assign pod IPs from alias IP ranges in the VPC subnet, enabling direct access to Google Cloud services.
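To make the IP-range point concrete, here is a back-of-the-envelope sketch for the AWS VPC CNI. The per-instance limits are the commonly cited figures for t3.medium (3 ENIs, 6 IPv4 addresses per ENI); the node count is an assumption for illustration.

```bash
# Rough pod density and subnet consumption under the AWS VPC CNI.
# Max pods per node = ENIs * (IPv4 addresses per ENI - 1) + 2
ENIS=3; IPS_PER_ENI=6; NODES=20   # t3.medium limits; node count is illustrative
MAX_PODS=$(( ENIS * (IPS_PER_ENI - 1) + 2 ))
echo "Max pods per node:             ${MAX_PODS}"   # 17 for t3.medium
echo "VPC IPs reserved (worst case): $(( NODES * ENIS * IPS_PER_ENI ))"
```

Twenty small nodes can reserve a few hundred VPC addresses before a single pod is scheduled, which is why subnet sizing matters.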
Cost Reality
While AKS and GKE both offered free control planes in 2018, total cost of ownership includes more than the control-plane fee (a rough comparison follows this list):
- Node Costs: Usually the largest expense; compare VM pricing across providers.
- Networking: Egress costs, load balancer fees, and data transfer charges vary significantly.
- Storage: Persistent volume costs differ; EBS (AWS) vs Azure Disks vs GCE Persistent Disks.
- Operational Overhead: Factor in time saved from managed control planes vs self-managed.
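As a rough illustration of scale, the arithmetic below compares the mid-2018 control-plane charges with a small worker-node bill. The node hourly rate is a placeholder, not a quoted price.

```bash
# Back-of-the-envelope monthly comparison (730 hours/month).
# EKS charged $0.20/hour per cluster in mid-2018; AKS and GKE control planes were free.
HOURS=730
NODE_RATE=0.0832   # placeholder hourly rate for one general-purpose VM
NODES=3
awk -v h="$HOURS" -v r="$NODE_RATE" -v n="$NODES" \
  'BEGIN { printf "EKS control plane: $%.0f/mo\nWorker nodes (x%d): $%.0f/mo\n", 0.20 * h, n, r * h * n }'
```

The control-plane fee is real money, but node, egress, and storage costs usually dominate.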
Upgrade Experiences
- EKS: Upgrades are manual and require planning; AWS publishes supported upgrade paths, but operators coordinate control plane and node timing (commands for all three services are sketched after this list).
- AKS: Upgrades are triggered with a single az aks upgrade command that rolls the control plane and nodes; simpler than EKS, but still operator-initiated in 2018.
- GKE: Most automated; upgrades happen during maintenance windows with minimal operator intervention.
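A hedged sketch of operator-initiated upgrades on each service, using placeholder versions; CLI support for these commands evolved during 2018, so verify against your installed tools and each provider's supported versions.

```bash
# EKS: upgrade the control plane first, then roll worker nodes separately (version is a placeholder).
aws eks update-cluster-version --name my-cluster --kubernetes-version 1.11

# AKS: a single command upgrades the control plane and agent nodes (version is a placeholder).
az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.11.5

# GKE: upgrade the master explicitly; node pools follow, or auto-upgrade during maintenance windows.
gcloud container clusters upgrade my-cluster --master --cluster-version 1.11 --region us-central1
```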
Getting Started
EKS
```bash
# Create EKS cluster
eksctl create cluster \
  --name my-cluster \
  --region us-west-2 \
  --nodegroup-name workers \
  --node-type t3.medium \
  --nodes 3

# Or use AWS CLI
aws eks create-cluster \
  --name my-cluster \
  --role-arn arn:aws:iam::ACCOUNT:role/eks-service-role \
  --resources-vpc-config subnetIds=subnet-xxx,securityGroupIds=sg-xxx
```
AKS
```bash
# Create AKS cluster
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-count 3 \
  --enable-addons monitoring \
  --generate-ssh-keys

# Get credentials
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
```
GKE
```bash
# Create GKE cluster
gcloud container clusters create my-cluster \
  --region us-central1 \
  --num-nodes 3 \
  --machine-type n1-standard-2

# Get credentials
gcloud container clusters get-credentials my-cluster --region us-central1
```
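Whichever service you choose, the same sanity check applies once credentials are in place:

```bash
# Confirm the worker nodes registered and system pods are healthy.
kubectl get nodes -o wide
kubectl get pods -n kube-system
```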
Caveats & Lessons Learned
- Regional Availability: Not all features available in all regions; verify before committing to a provider.
- Version Lag: Managed services may lag behind upstream Kubernetes releases; check version support.
- Add-On Compatibility: Some CNI plugins or operators may not work on all managed services; test before migrating.
- Vendor Lock-In: While Kubernetes is portable, cloud-specific integrations (IAM, storage, networking) create lock-in.
Common Failure Modes
- “IP Address Exhaustion”: EKS and AKS with VPC/VNet-native CNIs draw pod IPs from your subnets and need large address ranges; plan subnet sizes carefully (a quick check is sketched below).
- “Upgrade Failures”: Automatic upgrades can fail if node pools have incompatible configurations; monitor upgrade status.
- “IAM Permission Issues”: Cloud IAM integration requires careful role mapping; test permissions before production.
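For the EKS flavor of IP exhaustion, a quick way to check how much headroom the worker subnets have left (subnet IDs are placeholders):

```bash
# Show remaining free IPs in the subnets backing the worker nodes.
aws ec2 describe-subnets \
  --subnet-ids subnet-xxx subnet-yyy \
  --query 'Subnets[].{Subnet:SubnetId,FreeIPs:AvailableIpAddressCount}' \
  --output table
```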
Conclusion
The mid-2018 GA launches of EKS and AKS, alongside the already mature GKE, marked the beginning of the managed Kubernetes era. Teams could finally choose a managed service based on their cloud provider, feature needs, and operational preferences rather than being forced into self-management.
The competition between the three providers would drive rapid innovation: EKS would add Fargate support and improved networking, AKS would enhance Azure AD integration and Windows support, and GKE would pioneer Autopilot and advanced automation. This competitive landscape benefited all users, as each provider pushed the boundaries of what managed Kubernetes could offer.
For teams still running self-managed clusters (kops, kubeadm), 2018 was the year to seriously evaluate migration. The operational burden of managing control planes was no longer justified for most use cases, and the managed services offered better reliability, security, and feature velocity than most teams could achieve on their own.