Kubernetes 1.26: Electrifying the Core – Security, Scalability, and Modern APIs

Introduction
On December 9, 2022, the Kubernetes project released version 1.26, codenamed "Electrifying."
This version emphasized API maturity, scalability, and security — delivering features like Storage Capacity Tracking GA, Ephemeral Containers GA, and key deprecations that cleaned up the core API surface.
Official Highlights
1. Storage Capacity Tracking (GA)
Kubernetes 1.26 graduated Storage Capacity Tracking to General Availability, allowing the scheduler to make smarter decisions based on storage availability in the cluster.
This greatly improved volume provisioning efficiency and reliability, especially in multi-zone and hybrid environments.
Benefits:
- Smarter scheduling: Pods are scheduled to nodes with available storage
- Better resource utilization: Prevents scheduling failures due to storage unavailability
- Multi-zone awareness: Works across availability zones
- Improved reliability: Reduces pod scheduling failures
How it works:
- CSI drivers report storage capacity via CSIStorageCapacity objects
- Scheduler uses this information when scheduling pods with PVCs
- Pods are placed on nodes/zones with sufficient storage
Example:
# CSI driver creates CSIStorageCapacity objects
# Note: CSIStorageCapacity has no spec; its fields are top-level
apiVersion: storage.k8s.io/v1
kind: CSIStorageCapacity
metadata:
  name: fast-ssd-zone-a
storageClassName: fast-ssd
capacity: 1000Gi
maximumVolumeSize: 100Gi
nodeTopology:
  matchLabels:
    topology.kubernetes.io/zone: zone-a
---
# Pod with PVC - scheduler uses capacity info
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: app
    image: my-app:latest
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  storageClassName: fast-ssd
Requirements:
- CSI driver must support capacity tracking
- Storage class must be configured correctly
- Nodes must have proper topology labels
Verification:
# Check storage capacity objects
kubectl get csistoragecapacity
kubectl describe csistoragecapacity fast-ssd-zone-a
2. Ephemeral Containers (GA)
Ephemeral Containers, first introduced for debugging in earlier releases, finally reached General Availability in 1.26.
They enable administrators to attach a temporary container to a running pod for live troubleshooting without restarting workloads.
Use cases:
- Debugging: Attach debug tools to running pods
- Troubleshooting: Inspect pod state without restarting
- Security: Minimal attack surface (no pod restart)
- Production safety: Debug without affecting running workloads
Key features:
- ✅ No pod restart: Attach to running pods
- ✅ Temporary: Removed when pod terminates
- ✅ Limited: Can’t modify pod spec
- ✅ Secure: Minimal permissions
Example - Debugging a pod:
# Create an ephemeral container for debugging
# (--target names the container whose namespaces to share, here "app")
kubectl debug my-pod -it --image=busybox:latest --target=app -- sh
# Note: the older `kubectl alpha debug` form was removed in earlier
# releases; use `kubectl debug` directly
Example - Using EphemeralContainer API:
# Ephemeral containers appear in the pod spec once added via the pod's
# ephemeralcontainers subresource (which is what kubectl debug uses);
# they cannot be set directly when creating the pod
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: app
    image: my-app:latest
  ephemeralContainers:
  - name: debugger
    image: busybox:latest
    command: ["sh"]
    stdin: true
    tty: true
Common debugging scenarios:
Network debugging:
kubectl debug my-pod -it --image=nicolaka/netshoot -- sh
# Now you can use network tools: curl, dig, tcpdump, etc.
File system inspection:
kubectl debug my-pod -it --image=busybox:latest --target=app -- sh
# Access the target container's filesystem at /proc/1/root
Process inspection:
kubectl debug my-pod -it --image=busybox:latest --target=app -- sh
# Use ps, top, strace, etc. to inspect processes
Limitations:
- Ephemeral containers can’t modify the pod’s main containers
- Resource limits apply to ephemeral containers
- Some security contexts may restrict capabilities
- Not all container images work (the debug image must match the node's CPU architecture)
Best practices:
- Use minimal debug images (busybox, distroless debug variants)
- Remove ephemeral containers after debugging
- Use appropriate security contexts
- Document debugging procedures
“Ephemeral containers mark a major leap in debugging production workloads safely.”
— Kubernetes SIG Node Team
3. CronJobs Stability Improvements
CronJobs became even more robust, with improved job scheduling reliability and better handling of missed start times.
The internal controller logic was optimized for high-scale clusters, reducing API server load during scheduling bursts.
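The reliability improvements above interact with two CronJob fields in particular: startingDeadlineSeconds (how long a missed start time remains eligible) and concurrencyPolicy. A minimal sketch; names and values here are illustrative, not taken from the release notes:
```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report        # hypothetical example name
spec:
  schedule: "0 2 * * *"
  startingDeadlineSeconds: 300  # tolerate up to 5 minutes of missed start time
  concurrencyPolicy: Forbid     # skip a run if the previous one is still active
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: report
            image: my-app:latest
```
If startingDeadlineSeconds is unset, the controller counts missed schedules over a much longer window, which is where most "too many missed start times" failures come from.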
4. API Evolution and Cleanups
Kubernetes 1.26 continues API modernization with important cleanups:
Removed v1beta1 APIs
The flowcontrol.apiserver.k8s.io/v1beta1 versions of FlowSchema and PriorityLevelConfiguration were removed; migrate to v1beta2 or the new v1beta3 API introduced in this release.
Migration example:
# Old (v1beta1) - No longer served in 1.26+
apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
kind: FlowSchema
metadata:
  name: my-flowschema
---
# New (v1beta3, introduced in 1.26) - Use this instead
apiVersion: flowcontrol.apiserver.k8s.io/v1beta3
kind: FlowSchema
metadata:
  name: my-flowschema
CRD Validation Improvements
CustomResourceDefinitions gained better schema validation, improved error messages, and stricter type checking.
Example:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: myresources.example.com
spec:
  group: example.com
  # names and scope are required fields, added here for a valid CRD
  names:
    kind: MyResource
    plural: myresources
    singular: myresource
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              name:
                type: string
                minLength: 1
                maxLength: 63
              replicas:
                type: integer
                minimum: 1
                maximum: 100
            required:
            - name
            - replicas
Additional API Updates
- CSI Migration fully completed for legacy in-tree drivers
- PodDisruptionBudget (PDB) enhancements for accurate disruption tracking
These changes simplified API management and reduced upgrade friction.
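For context on the PDB bullet above, a minimal PodDisruptionBudget using the stable policy/v1 API; the selector and threshold are hypothetical:
```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb          # hypothetical name
spec:
  minAvailable: 2           # keep at least 2 pods up during voluntary disruptions
  selector:
    matchLabels:
      app: my-app
```
The disruption-tracking improvements determine how accurately the controller counts pods against this budget during evictions.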
5. Security and Runtime Updates
SeccompDefault (GA)
SeccompDefault graduated to General Availability, applying secure default seccomp profiles to all pods that don’t explicitly specify one.
Benefits:
- Better security by default: All pods get seccomp protection
- Reduced attack surface: Limits available system calls
- Compliance: Meets security hardening requirements
- No breaking changes: Pods can still opt-out if needed
How it works:
- Kubelet applies the RuntimeDefault seccomp profile to pods without explicit seccomp configuration
- Pods can still specify custom seccomp profiles
- Opt-out available for legacy workloads if needed
Example:
# Pod with default seccomp (applied automatically)
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
spec:
  containers:
  - name: app
    image: my-app:latest
  # SeccompDefault applies the RuntimeDefault profile automatically
---
# Pod with custom seccomp profile
apiVersion: v1
kind: Pod
metadata:
  name: custom-seccomp-pod
spec:
  securityContext:
    seccompProfile:
      type: Localhost
      localhostProfile: profiles/my-profile.json
  containers:
  - name: app
    image: my-app:latest
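The opt-out mentioned earlier can be expressed with an Unconfined profile; use this only for legacy workloads that break under RuntimeDefault, since it disables seccomp filtering entirely:
```yaml
# Pod that opts out of the default seccomp profile
apiVersion: v1
kind: Pod
metadata:
  name: legacy-pod          # hypothetical legacy workload
spec:
  securityContext:
    seccompProfile:
      type: Unconfined      # no syscall filtering for this pod
  containers:
  - name: app
    image: my-app:latest
```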
Enable SeccompDefault:
# Kubelet configuration
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
seccompDefault: true
Verification:
# Check if seccomp is applied
kubectl get pod my-pod -o jsonpath='{.spec.securityContext.seccompProfile}'
Additional Runtime Updates
- Kubelet Credential Providers stabilized for multi-cloud compatibility
- CRI API v1 adoption completed, unifying runtime interfaces
This cemented Kubernetes’ fully CRI-based runtime model, improving maintainability and compliance.
Breaking Changes and Migration
What You Need to Know Before Upgrading
Critical changes requiring attention:
v1beta1 API Removals
- ⚠️ Action Required: Migrate FlowSchema and PriorityLevelConfiguration off v1beta1 (to v1beta2 or v1beta3)
- v1beta1 APIs removed with no backward compatibility
- Review all API versions in use across the cluster
CSI Migration Completion
- CSI migration is complete; workloads that used in-tree storage plugins are now served by CSI drivers
- Ensure CSI drivers are properly installed and configured
- Verify storage class compatibility
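Verifying storage class compatibility largely comes down to confirming each StorageClass references a CSI provisioner rather than a legacy in-tree plugin name. A sketch; the provisioner shown is the AWS EBS CSI driver, so adjust for your environment:
```yaml
# Storage classes should reference a CSI driver,
# not an in-tree name like kubernetes.io/aws-ebs
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: ebs.csi.aws.com     # CSI driver name
volumeBindingMode: WaitForFirstConsumer
```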
SeccompDefault Behavior
- Seccomp profiles applied by default
- May affect legacy workloads
- Test applications for compatibility
Upgrade checklist:
- Migrate FlowSchema and PriorityLevelConfiguration to a served API version (v1beta2 or v1beta3)
- Verify CSI driver compatibility and configuration
- Test applications with SeccompDefault enabled
- Review CRD validation schemas
- Test ephemeral container usage if needed
- Verify storage capacity tracking if using custom storage
- Test in non-production environment first
Milestones Timeline
| Date | Event |
|---|---|
| Dec 9, 2022 | Kubernetes 1.26 officially released |
| Jan–Feb 2023 | Ephemeral Containers adopted across cloud environments |
| Mid 2023 | Full CSI Migration and SeccompDefault adoption in managed Kubernetes platforms |
Patch Releases for 1.26
Patch releases (1.26.x) focused on storage stability, runtime security, and API compatibility.
| Patch Version | Release Date | Notes |
|---|---|---|
| 1.26.0 | 2022-12-09 | Initial release |
| 1.26.1+ | various dates | Maintenance and security patches |
Legacy and Impact
Kubernetes 1.26 symbolized the completion of Kubernetes’ API and runtime modernization journey.
By finalizing CSI migration, stabilizing ephemeral containers, and improving API maturity, this release made Kubernetes more secure, debuggable, and future-ready.
Summary
| Aspect | Description |
|---|---|
| Release Date | December 9, 2022 |
| Key Innovations | Storage Capacity Tracking GA, Ephemeral Containers GA, SeccompDefault GA |
| Significance | Strengthened runtime debugging, scalability, and API maturity |
Getting Started with Kubernetes 1.26
Quick Verification
Check cluster version:
kubectl version
kubectl get nodes
Verify Storage Capacity Tracking:
kubectl get csistoragecapacity
kubectl describe csistoragecapacity
Test Ephemeral Containers:
# Debug a running pod
kubectl debug my-pod -it --image=busybox:latest --target=my-pod -- sh
Check SeccompDefault:
# Check if seccomp is applied to pods
kubectl get pod my-pod -o jsonpath='{.spec.securityContext.seccompProfile}'
Verify API versions:
# Check for deprecated API usage
kubectl get flowschema -o jsonpath='{.items[*].apiVersion}'
# Should no longer show v1beta1
Next in the Series
Next up: Kubernetes 1.27 (April 2023) — focusing on stability, developer experience, and improved resource efficiency across the control plane.