Federation v2 Working Group: Redesigning Multi-Cluster Kubernetes

K8s Guru
2 min read
Introduction

On December 5, 2017, SIG-Multicluster announced the formation of the Federation v2 Working Group, tasked with reimagining Kubernetes multi-cluster management. After a year of learning from Federation v1’s centralized control plane, the community shifted to a declarative, CRD-based approach—nicknamed Kubefed—that aligns with Kubernetes extension patterns.


Why Federation Needed a Rethink

  • Operational Complexity: Federation v1 required a dedicated API server and controller manager, adding another control plane to operate.
  • Limited Resource Coverage: Only a handful of objects (Services, Deployments, Ingress) were federated, leaving gaps for ConfigMaps, CRDs and StatefulSets.
  • Opinionated Workflows: DNS integration and join mechanisms imposed infrastructure choices that didn’t fit every environment.
  • Ecosystem Shift: CRDs and API aggregation matured in 2017, enabling federation features to live inside clusters instead of bespoke apiservers.

Federation v2 Vision

  • Custom Resource Definitions: Desired state is expressed through CRDs like FederatedDeployment and FederatedNamespace.
  • Template + Placement + Overrides: Users define base specs, target cluster placements, and per-cluster overrides in a single manifest.
  • Namespace Scoped Control: Controllers run using Kubernetes-native patterns, respecting RBAC and admission hooks.
  • Pluggable Scheduling Policies: Weighting and failover decisions are handled by schedulers that can be extended or replaced.
  • GitOps Friendly: Declarative manifests store federation intent in Git, supporting familiar review workflows.
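To make the pluggable-scheduling idea concrete, the working group discussed expressing replica weighting as its own resource. A sketch of the shape this later took in the Kubefed prototype (the ReplicaSchedulingPreference API; the group and version shown here settled after this announcement) looks like:

```yaml
apiVersion: scheduling.kubefed.io/v1alpha1
kind: ReplicaSchedulingPreference
metadata:
  name: payments
  namespace: payments
spec:
  # Which federated resource this preference schedules
  targetKind: FederatedDeployment
  # Total replicas to distribute across member clusters
  totalReplicas: 9
  # Weighted split: us-east1 receives roughly twice the replicas of us-west1
  clusters:
    us-east1:
      weight: 2
    us-west1:
      weight: 1
```

Because the scheduler is just another controller reconciling a CRD, it can be replaced or extended without touching the core sync machinery.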

Early Architecture Sketch

apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: payments
spec:
  template:
    metadata:
      labels:
        app: payments
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: payments
      template:
        metadata:
          labels:
            app: payments
        spec:
          containers:
          - name: api
            image: gcr.io/example/payments:v1.4.0
  placement:
    clusterSelector:
      matchLabels:
        region: us
  overrides:
  - clusterName: us-east1
    clusterOverrides:
    - path: "/spec/replicas"
      value: 5

Controllers reconcile these resources into member clusters using standard kubeconfigs and RBAC credentials.
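Registering a member cluster amounts to handing the host control plane a kubeconfig context plus service-account credentials. The CLI for this settled after the announcement as kubefedctl; an illustrative invocation (cluster and context names are placeholders):

```shell
# Join a member cluster to the host control plane.
# "us-east1" is the name the federation will use for the cluster;
# --cluster-context and --host-cluster-context refer to kubeconfig contexts.
kubefedctl join us-east1 \
  --cluster-context us-east1 \
  --host-cluster-context host
```

Under the hood this creates a service account in the member cluster and records its credentials in a KubeFedCluster resource on the host, which the sync controllers then use for reconciliation.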


Roadmap

  • Release Kubefed v0.1 with Deployments, ConfigMaps, Secrets and Namespaces in early 2018.
  • Integrate with Service Discovery projects (e.g., CoreDNS) for multi-cluster failover records.
  • Explore Multi-Cluster Ingress with external DNS controllers.
  • Document best practices for cluster lifecycle, health checks and disaster recovery.

Get Involved

  • Join the #sig-multicluster channel on Kubernetes Slack.
  • Participate in weekly working group calls and review design docs in the Kubernetes community repo.
  • Experiment with the prototype by deploying the Kubefed controller manager into dedicated namespaces across clusters.
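One way to deploy the controller manager for experimentation is via the Helm chart the project later published (chart name, repo URL, and namespace below reflect that later packaging and are illustrative):

```shell
# Add the kubefed chart repository and install the control plane
# into its own namespace on the host cluster.
helm repo add kubefed-charts \
  https://raw.githubusercontent.com/kubernetes-sigs/kubefed/master/charts
helm install kubefed kubefed-charts/kubefed \
  --namespace kube-federation-system --create-namespace
```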

Summary

  • Announcement Date: December 5, 2017
  • Key Innovations: CRD-driven federation, template/placement model, GitOps alignment
  • Significance: Laid the foundation for Federation v2 (Kubefed), addressing operational pain points from the first-generation design