Fibre Channel Storage Integration
Fibre Channel (FC) is a high-speed network technology primarily used for storage area networks (SANs). In Kubernetes, Fibre Channel provides high-performance block storage connectivity, typically used in enterprise environments where maximum performance and reliability are critical requirements.
What is Fibre Channel?
Fibre Channel is a high-speed network technology (1Gbps to 128Gbps) designed for storage area networks. It uses a dedicated network fabric (separate from IP networks) to connect servers to storage systems, providing low latency and high throughput for block storage access.
Key characteristics:
- Dedicated network - Separate FC fabric (not IP network)
- High performance - Very low latency, high throughput
- Enterprise-grade - Designed for mission-critical applications
- Block storage - Provides block-level access (like local disks)
- Cost - More expensive than iSCSI or NFS
Fibre Channel in Kubernetes
Kubernetes supports Fibre Channel volumes through:
- In-tree FC volume plugin - Built-in support (deprecated but functional)
- CSI FC drivers - Vendor-specific CSI drivers (recommended)
In-Tree Fibre Channel Plugin
The in-tree plugin requires manual configuration of FC volumes:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fc-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  fc:
    targetWWNs: ["500a048200000000", "500a048300000000"]  # World Wide Names
    lun: 0
    fsType: ext4
    readOnly: false
```
Key fields:
- `targetWWNs` - Array of World Wide Names (WWNs) of FC target ports
- `lun` - Logical Unit Number (LUN) identifier
- `wwids` - Alternative to `targetWWNs`/`lun` (World Wide Identifiers)
- `fsType` - Filesystem type (ext4, xfs, etc.)
CSI Fibre Channel Drivers
Vendor-specific CSI drivers provide better integration. Examples include:
- Storage vendor CSI drivers (Dell EMC, NetApp, IBM, HPE, etc.)
- These drivers handle FC connectivity and volume provisioning
Example using vendor CSI driver:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fc-storage
provisioner: vendor.csi.storage.k8s.io  # Vendor-specific
parameters:
  storagePool: "production-pool"
  # Vendor-specific parameters
```
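With a StorageClass in place, a claim is all that is needed to trigger dynamic provisioning. A minimal sketch, assuming the `fc-storage` class above and a vendor CSI driver that provisions FC LUNs (the claim name is illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fc-dynamic-pvc  # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fc-storage   # the StorageClass defined above
  resources:
    requests:
      storage: 100Gi
```

The vendor driver creates the LUN, configures access on the array, and binds a PersistentVolume to this claim automatically.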
Fibre Channel Components
Host Bus Adapter (HBA)
Kubernetes nodes need Fibre Channel HBAs installed:
- Physical FC HBA cards in nodes
- FC HBA drivers installed on nodes
- FC connectivity to storage fabric
World Wide Names (WWNs)
FC uses WWNs to identify devices:
- Port WWN (pWWN) - Unique identifier for each FC port
- Node WWN (nWWN) - Unique identifier for the device/node
- WWNs are like MAC addresses for FC devices
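To find the WWPNs of a node's HBAs (needed for switch zoning), the standard Linux `fc_host` sysfs class can be read directly. A minimal sketch, assuming that sysfs layout; it prints a notice on machines without FC hardware:

```shell
# Print the port WWN (pWWN) of each FC HBA on this node.
# Assumes the Linux fc_host sysfs layout; falls back to a notice if absent.
list_fc_wwpns() {
  found=0
  for host in /sys/class/fc_host/host*; do
    if [ -r "$host/port_name" ]; then
      printf '%s: %s\n' "$(basename "$host")" "$(cat "$host/port_name")"
      found=1
    fi
  done
  [ "$found" -eq 1 ] || echo "no FC HBAs detected"
}

list_fc_wwpns
```

These are the values your storage team will ask for when zoning the node into the fabric.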
Logical Unit Number (LUN)
A LUN represents a logical block storage device presented by the storage array:
- Storage arrays can present multiple LUNs
- Each LUN appears as a block device to the host
- LUNs are identified by LUN numbers
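On a node, each discovered LUN shows up under sysfs with a `host:channel:target:lun` address. A minimal sketch that lists those tuples without requiring `lsscsi` to be installed:

```shell
# List SCSI devices as host:channel:target:lun tuples; the last field is the LUN.
# Reads sysfs directly, so it works without lsscsi installed.
list_luns() {
  found=0
  for dev in /sys/class/scsi_device/*; do
    if [ -d "$dev/device" ]; then
      hctl=$(basename "$dev")
      echo "H:C:T:L = $hctl (LUN ${hctl##*:})"
      found=1
    fi
  done
  [ "$found" -eq 1 ] || echo "no SCSI devices visible"
}

list_luns
```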
Static Provisioning Example
Here’s a complete example using static provisioning:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fc-pv
spec:
  capacity:
    storage: 200Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  fc:
    targetWWNs:
      - "500a048200000000"
      - "500a048300000000"
    lun: 1
    fsType: ext4
    readOnly: false
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fc-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: database
spec:
  serviceName: database
  replicas: 1
  selector:
    matchLabels:
      app: database
  template:
    metadata:
      labels:
        app: database
    spec:
      containers:
        - name: db
          image: postgres:14
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: fc-pvc
```

The StatefulSet mounts the statically bound `fc-pvc` directly. With `volumeClaimTemplates`, claim names are generated per replica (e.g. `data-database-0`), so a pre-created PVC would go unused.
Using WWIDs
Instead of specifying targetWWNs and LUN, you can use WWIDs (World Wide Identifiers):
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fc-pv-wwid
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  fc:
    wwids:
      - "3600508b400105e210000900000490000"
    fsType: ext4
```
WWIDs provide better portability and are preferred when available.
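To discover a LUN's WWID on a node, the persistent udev symlinks under `/dev/disk/by-id` can be inspected; links named `scsi-3<wwid>` encode the device's SCSI page-0x83 identifier. A minimal sketch, assuming standard udev rules; it prints a notice when no such links exist:

```shell
# Discover WWIDs of attached SCSI disks via persistent udev symlinks.
# scsi-3<wwid> links encode the page-0x83 identifier used in the PV's wwids field.
list_wwids() {
  found=0
  for link in /dev/disk/by-id/scsi-3* /dev/disk/by-id/wwn-*; do
    if [ -e "$link" ]; then
      printf '%s -> %s\n' "$(basename "$link")" "$(readlink -f "$link")"
      found=1
    fi
  done
  [ "$found" -eq 1 ] || echo "no SCSI WWID symlinks found"
}

list_wwids
```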
Zoning and Security
FC uses zoning to control which initiators (hosts) can access which targets (storage):
- Zoning - Logical grouping of FC devices for access control
- WWPN zoning - Zone based on port WWNs
- LUN masking - Further restrict which LUNs initiators can access
Zoning is configured on FC switches and storage arrays, not in Kubernetes.
Use Cases
Fibre Channel is ideal for:
- High-performance databases - Maximum I/O performance requirements
- Mission-critical applications - Applications requiring highest reliability
- Enterprise SAN environments - Existing FC infrastructure
- Low-latency requirements - Applications sensitive to storage latency
- Large-scale deployments - Enterprise environments with FC investments
Advantages
- High performance - Lowest latency, highest throughput
- Dedicated network - Separate from IP network, no interference
- Enterprise reliability - Designed for mission-critical applications
- Mature technology - Well-established, proven technology
- Block storage - Direct block device access
Limitations
- Cost - Expensive infrastructure (HBAs, switches, storage arrays)
- Complexity - Requires FC expertise and infrastructure
- Limited to block storage - Only supports block volumes (ReadWriteOnce)
- Vendor-specific - Often requires vendor-specific CSI drivers
- Infrastructure requirements - Needs dedicated FC fabric
Best Practices
- Use CSI drivers - Prefer vendor CSI drivers over in-tree plugin
- Proper zoning - Configure FC zoning correctly for security and isolation
- LUN masking - Use LUN masking for additional access control
- Multi-path - Configure multi-path I/O for high availability
- Document WWNs - Document WWNs and LUN assignments
- Test failover - Test FC path failover scenarios
- Monitor performance - Monitor FC fabric performance and utilization
- Work with storage team - Coordinate with storage administrators
Multi-Path I/O
For high availability, configure multi-path I/O (MPIO) so nodes can access storage through multiple FC paths:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fc-pv-multipath
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  fc:
    wwids:
      - "3600508b400105e210000900000490000"
    fsType: ext4
```
MPIO is typically configured at the OS level on nodes, not in Kubernetes manifests.
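To confirm that MPIO on a node actually sees more than one path per LUN, the multipath topology can be queried. A minimal sketch, assuming `multipath-tools` is installed (typically requires root); it prints a notice when the tooling or maps are absent:

```shell
# Verify device-mapper multipath sees the FC LUNs through multiple paths.
# Assumes multipath-tools; prints a notice when unavailable or empty.
check_multipath() {
  if ! command -v multipath >/dev/null 2>&1; then
    echo "multipath-tools not installed"
    return 0
  fi
  maps=$(multipath -ll 2>/dev/null)
  if [ -n "$maps" ]; then
    printf '%s\n' "$maps"
  else
    echo "no multipath maps found (check multipathd, zoning, and cabling)"
  fi
}

check_multipath
```

Each healthy map should list one path per zoned HBA/target-port pair; a single path means failover will not work.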
Troubleshooting
Volume Not Discovered
If FC volumes are not discovered:
- Verify the HBA is installed and recognized: `lspci | grep Fibre`
- Check FC link status: `cat /sys/class/fc_host/host*/port_state`
- Verify zoning configuration on FC switches
- Check LUN masking on the storage array
- Review node system logs: `dmesg | grep -i fibre`
Performance Issues
If experiencing performance issues:
- Check FC link speed: `cat /sys/class/fc_host/host*/speed`
- Verify multi-path configuration
- Review storage array performance metrics
- Check FC fabric utilization
- Verify proper zoning (avoid oversubscription)
Connection Failures
If FC connections fail:
- Verify physical FC cable connections
- Check FC switch port status
- Verify the HBA driver is loaded: `lsmod | grep -i fc`
- Review FC fabric zoning configuration
- Check storage array port status
See Also
- Storage Integration - Storage integration overview
- CSI Persistent Volumes - Container Storage Interface
- iSCSI - iSCSI block storage integration
- Block vs Filesystem - Block storage details