Multi-Cluster Bootstrapping at Scale

Introduction

By late 2022, managing multiple Kubernetes clusters had become the norm rather than the exception. Organizations deployed clusters across regions, cloud providers, and environments (dev, staging, production), requiring tools and patterns for bootstrapping and managing clusters at scale. Cluster API v1beta1, Rancher, and GitOps tools had matured to support fleet management, enabling teams to manage hundreds of clusters from centralized control planes.

This mattered because single-cluster management didn’t scale. Teams needed fleet management capabilities: consistent bootstrapping, centralized configuration, automated updates, and unified observability across all clusters. The tools and patterns that emerged in 2022 would become the foundation for platform engineering and internal developer platforms.

Historical note: Cluster API reached v1beta1 in 2021, marking significant maturity. Rancher had been managing multi-cluster deployments since 2016, but 2022 saw broader adoption of fleet management patterns.

Multi-Cluster Challenges

Bootstrapping at Scale

  • Consistency: Ensuring all clusters are bootstrapped with identical configurations.
  • Automation: Automating cluster creation across multiple environments and regions.
  • Version Management: Managing Kubernetes versions across clusters.
  • Configuration Drift: Preventing configuration differences between clusters.

Day-2 Operations

  • Centralized Management: Managing add-ons, policies, and configurations from a single point.
  • Observability: Unified monitoring and logging across all clusters.
  • Security: Consistent security policies and compliance across clusters.
  • Updates: Coordinating upgrades across clusters with minimal disruption.

Cluster API v1beta1 for Fleet Management

Management Cluster Pattern

Cluster API uses a management cluster to create and manage workload clusters:

# Management cluster manages multiple workload clusters
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: prod-us-west-2
  namespace: production
spec:
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: AWSCluster
    name: prod-us-west-2
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: prod-us-east-1
  namespace: production
spec:
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: AWSCluster
    name: prod-us-east-1
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: staging-eu-west-1
  namespace: staging
spec:
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: AWSCluster
    name: staging-eu-west-1
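
Because each workload cluster is just a custom resource on the management cluster, the whole fleet can be inspected with ordinary kubectl and clusterctl commands:

# List all workload clusters across namespaces
kubectl get clusters --all-namespaces

# Inspect one cluster's full object tree (control plane, machines, infrastructure)
clusterctl describe cluster prod-us-west-2 --namespace production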

ClusterClass for Consistency

Cluster API v1beta1 introduced ClusterClass (initially behind the CLUSTER_TOPOLOGY feature gate) for consistent cluster definitions:

# Template names below are placeholders for templates defined elsewhere
apiVersion: cluster.x-k8s.io/v1beta1
kind: ClusterClass
metadata:
  name: production-cluster-class
spec:
  infrastructure:
    ref:
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: AWSClusterTemplate
      name: production-cluster
  controlPlane:
    ref:
      apiVersion: controlplane.cluster.x-k8s.io/v1beta1
      kind: KubeadmControlPlaneTemplate
      name: production-control-plane
  workers:
    machineDeployments:
    - class: default-worker
      template:
        metadata:
          labels:
            pool: default
        bootstrap:
          ref:
            apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
            kind: KubeadmConfigTemplate
            name: default-worker-bootstrap
        infrastructure:
          ref:
            apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
            kind: AWSMachineTemplate
            name: default-worker-machines

Creating Clusters from ClusterClass

apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: prod-cluster-1
spec:
  topology:
    class: production-cluster-class
    version: v1.24.0
    controlPlane:
      replicas: 3
    workers:
      machineDeployments:
      - class: default-worker
        name: md-0
        replicas: 5
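
Once applied, the management cluster reconciles the declared topology; progress and credentials can be retrieved with clusterctl:

# Watch the control plane and workers come up
clusterctl describe cluster prod-cluster-1

# Fetch a kubeconfig for the new cluster once it is ready
clusterctl get kubeconfig prod-cluster-1 > prod-cluster-1.kubeconfig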

Rancher for Multi-Cluster Management

Rancher Architecture

Rancher provides a centralized management platform for multiple clusters:

  • Rancher Server: Central management plane running on a management cluster.
  • Imported Clusters: Import existing clusters (EKS, AKS, GKE, or self-managed).
  • Provisioned Clusters: Rancher can provision clusters via RKE, RKE2, or k3s.
  • Fleet Management: Manage applications and configurations across clusters.
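
Fleet drives the last point with a GitRepo resource on the management cluster that maps a Git path onto labeled downstream clusters. A minimal sketch, assuming a hypothetical repository and an env=production cluster label:

apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: fleet-apps
  namespace: fleet-default
spec:
  repo: https://github.com/example/fleet-config
  branch: main
  paths:
  - apps
  targets:
  - name: production
    clusterSelector:
      matchLabels:
        env: production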

Importing Clusters

# Import existing cluster into Rancher
# 1. Create cluster in Rancher UI
# 2. Run registration command on target cluster
kubectl apply -f https://rancher.example.com/v3/import/xxx.yaml

Provisioning Clusters

Rancher can provision clusters using:

  • RKE (Rancher Kubernetes Engine): Rancher’s Kubernetes distribution.
  • RKE2: Rancher’s next-generation Kubernetes distribution.
  • k3s: Lightweight Kubernetes for edge deployments.
  • EKS/AKS/GKE: Managed services via cloud providers.

GitOps for Multi-Cluster

FluxCD Multi-Cluster

FluxCD can manage multiple clusters from a single Git repository:

cluster-config/
├── clusters/
│   ├── production/
│   │   ├── flux-system/
│   │   ├── bootstrap/
│   │   └── apps/
│   ├── staging/
│   │   ├── flux-system/
│   │   ├── bootstrap/
│   │   └── apps/
│   └── development/
│       ├── flux-system/
│       ├── bootstrap/
│       └── apps/
└── base/

Each cluster runs its own FluxCD instance, watching different paths.
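
Bootstrapping each cluster against its own path is then one command per cluster (the GitHub owner and repository names here are placeholders):

# Run once per cluster, with kubectl pointed at that cluster
flux bootstrap github \
  --owner=example-org \
  --repository=cluster-config \
  --branch=main \
  --path=clusters/production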

ArgoCD Multi-Cluster

ArgoCD supports multi-cluster management:

# Register remote clusters with the ArgoCD CLI
argocd cluster add prod-cluster-context
argocd cluster add staging-cluster-context

# Create an Application per cluster (repo URL and paths are illustrative)
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: prod-apps
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/cluster-config
    targetRevision: main
    path: clusters/production/apps
  destination:
    server: https://prod-cluster:6443
    namespace: default
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: staging-apps
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/cluster-config
    targetRevision: main
    path: clusters/staging/apps
  destination:
    server: https://staging-cluster:6443
    namespace: default
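
For larger fleets, ArgoCD's ApplicationSet controller can stamp out one Application per registered cluster instead of hand-writing each. A sketch using the cluster generator (the repo URL and path are illustrative):

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: fleet-apps
  namespace: argocd
spec:
  generators:
  - clusters: {}        # one entry per cluster registered with ArgoCD
  template:
    metadata:
      name: '{{name}}-apps'
    spec:
      project: default
      source:
        repoURL: https://github.com/example/cluster-config
        targetRevision: main
        path: apps
      destination:
        server: '{{server}}'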

Comparison: Cluster API vs Rancher vs GitOps

Capability             | Cluster API               | Rancher                | GitOps (FluxCD/ArgoCD)
Cluster Creation       | Declarative (CRDs)        | UI + CLI               | Manual + GitOps sync
Multi-Cloud            | Excellent                 | Excellent              | Excellent
Centralized Management | Management cluster        | Rancher server         | Git repository
UI                     | Limited                   | Rich web UI            | Limited (ArgoCD has UI)
GitOps                 | Can integrate             | Fleet supports GitOps  | Native GitOps
Learning Curve         | Moderate                  | Moderate               | Moderate
Best For               | Infrastructure automation | Operational management | Configuration management

Fleet Management Patterns

Centralized Configuration

Manage configurations from a single source:

# Base configuration applied to all clusters
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-config
data:
  monitoring-enabled: "true"
  logging-enabled: "true"
  network-policy-enabled: "true"

Environment-Specific Overlays

Override base configuration per environment:

# Production overlay
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-config
data:
  monitoring-enabled: "true"
  logging-enabled: "true"
  network-policy-enabled: "true"
  resource-limits: "strict"
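
In practice such overlays are usually expressed with kustomize rather than by duplicating whole manifests. A minimal sketch, assuming the base ConfigMap above lives in a base/ directory:

# overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
patches:
- patch: |-
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-config
    data:
      resource-limits: "strict"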

Automated Updates

Coordinate updates across clusters:

# Update all clusters to a new Kubernetes version (Cluster API,
# topology-managed clusters; custom resources need a merge patch)
kubectl patch cluster prod-cluster-1 --type merge -p '{"spec":{"topology":{"version":"v1.25.0"}}}'
kubectl patch cluster prod-cluster-2 --type merge -p '{"spec":{"topology":{"version":"v1.25.0"}}}'
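
Patching every cluster at once defeats the purpose of a maintenance window. A safer pattern is a staged rollout that gates on each cluster reporting Ready before moving on; a minimal sketch, assuming topology-managed Cluster API clusters:

# Upgrade one cluster at a time, waiting for the Ready condition
for c in prod-cluster-1 prod-cluster-2; do
  kubectl patch cluster "$c" --type merge \
    -p '{"spec":{"topology":{"version":"v1.25.0"}}}'
  kubectl wait --for=condition=Ready "cluster/$c" --timeout=30m
done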

Practical Considerations

Management Cluster Requirements

  • High Availability: The management cluster must be highly available; if it fails, cluster lifecycle operations stop (existing workloads keep running).
  • Resource Capacity: Management cluster needs resources to manage all workload clusters.
  • Network Access: Management cluster needs access to cloud provider APIs and workload clusters.

Cluster Lifecycle Management

  • Creation: Automate cluster creation using Cluster API or Rancher.
  • Configuration: Use GitOps to manage configurations across clusters.
  • Updates: Coordinate upgrades with maintenance windows and testing.
  • Deletion: Plan for cluster decommissioning and resource cleanup.
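
For the last point, deleting the Cluster object on the management cluster lets Cluster API tear down the machines and cloud resources it created:

# Decommission a cluster; the infrastructure provider cleans up after it
kubectl delete cluster staging-eu-west-1 --namespace staging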

Observability

  • Centralized Monitoring: Aggregate metrics from all clusters (see the sketch after this list).
  • Unified Logging: Centralize logs from all clusters.
  • Alerting: Set up alerts for cluster health and configuration drift.
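
A common shape for the monitoring point is a small Prometheus per cluster shipping samples to one central store; a sketch of the per-cluster configuration (the endpoint URL and label value are assumptions):

# prometheus.yml fragment on each workload cluster
global:
  external_labels:
    cluster: prod-us-west-2   # identifies this cluster in the central store
remote_write:
- url: https://metrics.example.com/api/v1/write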

Getting Started

Cluster API Multi-Cluster

# Configure credentials for the AWS provider (CAPA)
export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm bootstrap credentials encode-as-profile)

# Initialize Cluster API on the management cluster
clusterctl init --infrastructure aws

# Create workload clusters
kubectl apply -f clusters/production/
kubectl apply -f clusters/staging/

Rancher Multi-Cluster

# Add the Rancher Helm repository and install Rancher
# (cert-manager must be installed in the cluster first)
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --create-namespace \
  --set hostname=rancher.example.com

# Import or provision clusters via Rancher UI

Caveats & Lessons Learned

  • Management Cluster Dependency: Fleet management depends on the management cluster; keep it highly available and backed up.
  • Configuration Drift: Prevent manual changes to clusters; enforce Git as source of truth.
  • Network Complexity: Multi-cluster networking can be complex; plan network architecture carefully.
  • Cost Management: Multiple clusters increase costs; monitor and optimize resource usage.

Common Failure Modes

  • “Management cluster down”: If the management cluster fails, lifecycle operations stop (running workloads are unaffected); ensure HA.
  • “Configuration drift”: Manual changes cause configuration differences; enforce GitOps.
  • “Network connectivity”: Clusters need network access for management; verify connectivity.

Conclusion

Multi-cluster bootstrapping at scale in 2022 had become a largely solved problem with mature tools and patterns. Cluster API v1beta1, Rancher, and GitOps tools provided different approaches to fleet management, each with distinct strengths: Cluster API for infrastructure automation, Rancher for operational management, and GitOps for configuration management.

For organizations managing multiple clusters, these tools enabled centralized management, consistent bootstrapping, and automated operations at scale. They demonstrated that Kubernetes cluster management didn’t have to be manual or per-cluster—it could be automated, declarative, and scalable.

The patterns and tools that emerged in 2022 would become the foundation for platform engineering and internal developer platforms, where teams would manage hundreds of clusters with minimal operational overhead. Multi-cluster management had evolved from a challenge to a capability.