Container Storage Interface 0.1: A Common Language for Volumes

Introduction
On December 7, 2017, the Kubernetes, Mesos, Docker, and Cloud Foundry communities jointly announced Container Storage Interface (CSI) 0.1, a vendor-neutral specification for exposing block and file storage to container orchestrators. CSI aims to replace bespoke, per-orchestrator volume plugins with a single portable gRPC API that storage vendors can implement once.
Why CSI Exists
- Plugin Explosion: In-tree Kubernetes volume drivers require code vendoring, recompiles and kubelet restarts.
- Cross-Orchestrator Demand: Vendors were maintaining separate integrations for Kubernetes, Mesos, Swarm and Cloud Foundry.
- Security & Stability: Moving drivers out-of-tree decouples storage releases from Kubernetes core upgrades.
- Innovation Velocity: Independent lifecycle allows faster iteration, certification and distribution of storage plugins.
Core API Concepts
| Concept | Description |
|---|---|
| Identity | Reports plugin name and capabilities for orchestrator feature detection. |
| Controller Service | Handles CreateVolume, DeleteVolume, snapshotting, and controller-side publish/unpublish. |
| Node Service | Mounts/attaches volumes to nodes with NodePublishVolume and NodeStageVolume. |
| Volume Lifecycle | Explicit separation of provisioning, staging, and publishing phases so orchestrators can drive dynamic provisioning and clean teardown. |
CSI defines its requests and responses with protobuf and serves them over gRPC. Plugins run out of tree with minimal host requirements (typically access to /var/lib/kubelet/plugins when deployed on Kubernetes).
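As a concrete, heavily simplified sketch of that contract, the Go program below opens the Unix socket on which a driver would serve its gRPC services. The socket path and driver name are illustrative assumptions, and the actual CSI service registrations are left as comments because they depend on the bindings generated for a given spec revision.

```go
package main

import (
	"log"
	"net"
	"os"

	"google.golang.org/grpc"
)

func main() {
	// CSI plugins serve gRPC on a Unix domain socket; on Kubernetes this
	// usually lives under /var/lib/kubelet/plugins/<driver-name>/.
	const sock = "/var/lib/kubelet/plugins/example.csi.vendor.com/csi.sock"
	_ = os.Remove(sock) // clear a stale socket left by a previous run

	lis, err := net.Listen("unix", sock)
	if err != nil {
		log.Fatalf("listen on %s: %v", sock, err)
	}

	srv := grpc.NewServer()
	// A real driver registers the services generated from the CSI protobuf
	// definitions here, for example:
	//   csi.RegisterIdentityServer(srv, &identityServer{})
	//   csi.RegisterControllerServer(srv, &controllerServer{})
	//   csi.RegisterNodeServer(srv, &nodeServer{})

	log.Printf("CSI endpoint listening on %s", sock)
	if err := srv.Serve(lis); err != nil {
		log.Fatalf("serve: %v", err)
	}
}
```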
Implications for Kubernetes
- Kubernetes introduces sidecar controllers (an external-attacher, an external-provisioner, and a driver registrar) to bridge CSI drivers with the existing PersistentVolume and PersistentVolumeClaim machinery.
- The kubelet exposes a CSI plugin registration socket so drivers can advertise capabilities and receive lifecycle callbacks (see the sketch after this list).
- In-tree drivers remain supported in the short term, but new storage features (snapshots, volume expansion) target CSI first.
- Cluster operators can ship drivers as DaemonSets/Deployments without touching the core kubelet binary.
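To make the discovery path concrete, the sketch below dials a driver's Unix socket and asks its Identity service for the plugin name, roughly what a registrar sidecar does before registering the driver with the kubelet. The socket path is an illustrative assumption, and the csi package refers to the Go bindings generated from the CSI protobuf (the request and response shapes shown follow later spec revisions; 0.1 also carried an explicit version field on requests).

```go
package main

import (
	"context"
	"log"
	"time"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
)

func main() {
	// Dial the driver's Unix socket; gRPC resolves the unix:// scheme natively.
	conn, err := grpc.Dial(
		"unix:///var/lib/kubelet/plugins/example.csi.vendor.com/csi.sock",
		grpc.WithInsecure(),
	)
	if err != nil {
		log.Fatalf("dial CSI endpoint: %v", err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// GetPluginInfo returns the driver name that provisioners match against
	// and that a registrar reports to the kubelet.
	identity := csi.NewIdentityClient(conn)
	info, err := identity.GetPluginInfo(ctx, &csi.GetPluginInfoRequest{})
	if err != nil {
		log.Fatalf("GetPluginInfo: %v", err)
	}
	log.Printf("found CSI plugin %q (vendor version %q)", info.GetName(), info.GetVendorVersion())
}
```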
Getting Started with CSI 0.1
- Deploy the sidecar controllers alongside a CSI driver container (often packaged together).
- Grant appropriate RBAC roles so the controller can watch PersistentVolumeClaims and create PersistentVolumes.
- Define a StorageClass referencing the CSI driver name and parameters exposed by the vendor (e.g., csi.storage.k8s.io/provisioner-secret-name); a sketch follows this list.
- Schedule workloads with standard PVCs; the CSI controller provisions volumes dynamically via the driver.
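For the StorageClass step above, here is a minimal sketch that builds the object with the client-go API types and renders it as the manifest an operator would apply. The class name, driver name, and secret values are hypothetical placeholders; the csi.storage.k8s.io parameter keys follow the external-provisioner convention cited above, and the vendor's documentation dictates which keys a given driver actually accepts.

```go
package main

import (
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	sc := storagev1.StorageClass{
		TypeMeta:   metav1.TypeMeta{APIVersion: "storage.k8s.io/v1", Kind: "StorageClass"},
		ObjectMeta: metav1.ObjectMeta{Name: "csi-example-fast"}, // hypothetical class name
		// Provisioner must match the plugin name the driver reports via
		// GetPluginInfo; PVCs that reference this class get volumes from it.
		Provisioner: "example.csi.vendor.com",
		Parameters: map[string]string{
			// Vendor-specific options plus the secret hooks read by the
			// external-provisioner sidecar (placeholder values).
			"csi.storage.k8s.io/provisioner-secret-name":      "example-csi-secrets",
			"csi.storage.k8s.io/provisioner-secret-namespace": "kube-system",
		},
	}

	// Render as the YAML manifest an operator would apply with kubectl.
	out, err := yaml.Marshal(sc)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```

A PVC then simply names this class in spec.storageClassName; the external-provisioner watches for the claim and calls CreateVolume on the driver.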
Early adopters focused on proof-of-concept drivers (e.g., NetApp, Portworx, Ceph RBD) to validate the workflow ahead of the 0.2/0.3 spec refreshes in 2018.
Known Gaps in 0.1
- Snapshot and clone APIs were still under active design; vendors used out-of-band tooling.
- Node-driver communication relied on Unix sockets; Windows support would solidify in later revisions.
- Rolling upgrades required careful coordination between controllers and node plugins to prevent orphaned mounts.
- The spec emphasized block volumes; richer file semantics (quota, NFS exports) remained loosely defined.
What Comes Next
The CSI working group roadmap for 2018 focused on:
- Spec stabilization leading to 1.0.
- Snapshot/restore primitives and data protection workflows.
- Volume expansion for both file and block devices.
- Topology awareness so schedulers understand zone/rack constraints supplied by drivers.
CSI 0.1 laid the groundwork for universal storage integrations across orchestrators, clearing the path for Kubernetes to eventually remove most in-tree volume plugins.
Summary
| Aspect | Details |
|---|---|
| Release Date | December 7, 2017 |
| Key Innovations | Unified gRPC API, out-of-tree volume lifecycle, controller/node separation |
| Significance | Paved the way for portable, rapidly evolving storage integrations across Kubernetes and other platforms |