iSCSI Storage Integration

iSCSI (Internet Small Computer Systems Interface) allows Kubernetes to use block storage devices over a TCP/IP network. Because it runs over existing network infrastructure, it is a cost-effective way to provide block storage to Kubernetes clusters, which makes it popular for on-premises deployments and hybrid cloud environments.

What is iSCSI?

iSCSI enables block-level storage access over IP networks by encapsulating SCSI commands in TCP/IP packets. This allows remote block storage devices to appear as local disks to Kubernetes nodes.

graph LR
    A[Kubernetes Node] --> B[iSCSI Initiator]
    B --> C[IP Network]
    C --> D[iSCSI Target]
    D --> E[Block Storage Device]

    style A fill:#e1f5ff
    style B fill:#fff4e1
    style C fill:#e8f5e9
    style D fill:#f3e5f5
    style E fill:#ffe1e1

Key concepts:

  • iSCSI Initiator - Client on Kubernetes nodes that connects to storage
  • iSCSI Target - Server that provides block storage devices
  • LUN (Logical Unit Number) - A logical block storage device provided by the target
  • IQN (iSCSI Qualified Name) - Unique identifier for initiators and targets
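
An IQN follows a fixed structure; the value below is purely illustrative:

# IQN structure: iqn.<year-month>.<reversed domain of the naming authority>:<freely chosen identifier>
iqn.2024-01.com.example:storage.disk1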

iSCSI in Kubernetes

Kubernetes supports iSCSI volumes through:

  1. In-tree iSCSI volume plugin - Built-in support (deprecated but still functional)
  2. CSI iSCSI drivers - Modern CSI-based drivers (recommended)

In-Tree iSCSI Plugin

The in-tree plugin allows you to directly reference iSCSI targets in volume specifications:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: iscsi-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  iscsi:
    targetPortal: 192.168.1.100:3260
    iqn: iqn.2010-10.org.openstack:volume-12345
    lun: 0
    fsType: ext4
    readOnly: false

Limitations:

  • Deprecated (maintenance mode only)
  • Limited features (no snapshots, cloning, expansion)
  • Manual PV creation required

CSI iSCSI Drivers

Modern CSI drivers provide better integration and features. Popular options include:

  • csi-driver-iscsi - Generic iSCSI CSI driver
  • Vendor-specific iSCSI CSI drivers

Example using CSI:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: iscsi-storage
provisioner: iscsi.csi.k8s.io
parameters:
  targetPortal: 192.168.1.100:3260
  iqn: iqn.2010-10.org.openstack:iscsi-pool
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: iscsi-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: iscsi-storage
  resources:
    requests:
      storage: 50Gi

iSCSI Components

iSCSI Target Setup

The iSCSI target (storage server) needs to be configured to provide LUNs:

Using tgtadm (Linux):

# Create a target
tgtadm --lld iscsi --op new --mode target --tid 1 --targetname iqn.2024-01.com.example:storage

# Create a LUN
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 --backing-store /dev/sdb

# Bind to network interface
tgtadm --lld iscsi --op bind --mode target --tid 1 --initiator-address ALL
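
To verify the target and LUN, dump the current configuration (the exact output format varies by tgt version):

# Show all configured targets, their LUNs, and bound initiator addresses
tgtadm --lld iscsi --op show --mode target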

Using targetcli (more user-friendly):

# Start targetcli
targetcli

# Create a block backstore backed by the disk to export
/> cd /backstores/block
/backstores/block> create block1 /dev/sdb

# Create target
/backstores/block> cd /iscsi
/iscsi> create iqn.2024-01.com.example:storage

# Create LUN
/iscsi> cd iqn.2024-01.com.example:storage/tpg1/luns
/iscsi/.../luns> create /backstores/block/block1
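
By default no initiator may log in, so each node's IQN needs an ACL entry, and the configuration should be saved. A minimal sketch, assuming the node IQN used in the CHAP example below:

# Allow a specific initiator (take each node's IQN from /etc/iscsi/initiatorname.iscsi)
/iscsi/.../luns> cd ../acls
/iscsi/.../acls> create iqn.1993-08.org.debian:01:node1

# Persist the configuration across reboots
/iscsi/.../acls> cd /
/> saveconfig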

iSCSI Initiator on Nodes

Every Kubernetes node that may mount iSCSI volumes needs the initiator tools installed:

# Install on Ubuntu/Debian
apt-get install open-iscsi

# Install on RHEL/CentOS
yum install iscsi-initiator-utils

# Start and enable service
systemctl enable iscsid
systemctl start iscsid
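
Before handing volumes to Kubernetes, it is worth testing discovery and login manually from a node; the portal and IQN below are the values used in the earlier examples:

# Discover targets exposed by the portal
iscsiadm -m discovery -t sendtargets -p 192.168.1.100:3260

# Log in, confirm the LUN appears as a block device, then log out
iscsiadm -m node -T iqn.2024-01.com.example:storage -p 192.168.1.100:3260 --login
lsblk
iscsiadm -m node -T iqn.2024-01.com.example:storage -p 192.168.1.100:3260 --logout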

Authentication (CHAP)

iSCSI supports CHAP (Challenge-Handshake Authentication Protocol) for security:

In the PV specification, enable CHAP with the chapAuthDiscovery and chapAuthSession fields and reference the credentials Secret:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: iscsi-pv-chap
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  iscsi:
    targetPortal: 192.168.1.100:3260
    iqn: iqn.2010-10.org.openstack:volume-12345
    lun: 0
    fsType: ext4
    chapAuthDiscovery: true
    chapAuthSession: true
    secretRef:
      name: iscsi-secret
    initiatorName: iqn.1993-08.org.debian:01:node1

Secret with CHAP credentials:

apiVersion: v1
kind: Secret
metadata:
  name: iscsi-secret
type: "kubernetes.io/iscsi-chap"
stringData:
  discovery.sendtargets.auth.username: username
  discovery.sendtargets.auth.password: password
  node.session.auth.username: username
  node.session.auth.password: password
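
The same credentials must also be configured on the target. A sketch using targetcli, run on the storage server; the ACL path and credentials are assumptions matching the examples above:

# Set CHAP credentials on the initiator's ACL entry
targetcli /iscsi/iqn.2024-01.com.example:storage/tpg1/acls/iqn.1993-08.org.debian:01:node1 set auth userid=username password=password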

Static Provisioning Example

Here’s a complete example using static provisioning:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: iscsi-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  iscsi:
    targetPortal: 192.168.1.100:3260
    iqn: iqn.2010-10.org.openstack:volume-abc123
    lun: 0
    fsType: ext4
    readOnly: false
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: iscsi-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""     # empty string prevents the default StorageClass from intercepting this claim
  volumeName: iscsi-pv     # bind explicitly to the PV defined above
  resources:
    requests:
      storage: 100Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app-with-iscsi
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: iscsi-pvc
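
After applying the manifests, the claim should bind to the pre-created volume:

kubectl get pv iscsi-pv    # STATUS should be Bound
kubectl get pvc iscsi-pvc  # STATUS should be Bound, VOLUME should show iscsi-pv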

Dynamic Provisioning with CSI

For dynamic provisioning, use a CSI iSCSI driver:

1. Install a CSI iSCSI driver:

# Example installation (check driver docs for actual steps)
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/csi-driver-iscsi/master/deploy/install.yaml

2. Create StorageClass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: iscsi-csi
provisioner: iscsi.csi.k8s.io
parameters:
  targetPortal: 192.168.1.100:3260
  iqn: iqn.2010-10.org.openstack:iscsi-pool
reclaimPolicy: Delete
volumeBindingMode: Immediate

3. Create PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: iscsi-pvc-dynamic
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: iscsi-csi
  resources:
    requests:
      storage: 50Gi
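
If the installed driver supports dynamic provisioning (check its documentation), a PV is created automatically once the claim is submitted:

kubectl get pvc iscsi-pvc-dynamic   # STATUS becomes Bound when provisioning succeeds
kubectl get pv                      # lists the automatically created PV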

Use Cases

iSCSI is well-suited for:

  • On-premises Kubernetes - Using existing SAN infrastructure
  • Hybrid cloud - Connecting cloud clusters to on-premises storage
  • Block storage needs - Applications requiring block devices
  • Cost-effective storage - Using existing network infrastructure
  • Performance-sensitive workloads - Strong performance relative to other network storage options

Advantages

  • Cost-effective - Uses existing IP network infrastructure
  • Standard protocol - Widely supported across vendors
  • Block storage - Direct block device access
  • Good performance - Low latency compared with other network storage protocols
  • Flexible - Works with various storage systems

Limitations

  • Network dependency - Requires stable network connection
  • Single-node access - Typically ReadWriteOnce (mounted by one node at a time; multiple pods only if co-located on that node)
  • Network latency - Higher latency than local storage
  • Configuration complexity - Requires target and initiator setup
  • Security - Should use CHAP authentication

Best Practices

  1. Use CHAP authentication - Secure iSCSI connections
  2. Dedicated network - Use dedicated network/VLAN for storage traffic
  3. Monitor network - Monitor iSCSI network performance and availability
  4. Use CSI drivers - Prefer CSI drivers over in-tree plugin
  5. Test failover - Test storage and network failover scenarios
  6. Document configuration - Document target and initiator configuration
  7. Regular backups - Implement backup strategies for iSCSI volumes
  8. Capacity planning - Monitor storage capacity on iSCSI targets

Troubleshooting

Volume Mount Fails

If iSCSI volumes fail to mount:

  1. Check initiator is installed: systemctl status iscsid
  2. Verify target is reachable: iscsiadm -m discovery -t sendtargets -p <target-ip>
  3. Check initiator name: cat /etc/iscsi/initiatorname.iscsi
  4. Review pod events: kubectl describe pod <pod-name>
  5. Check iSCSI sessions: iscsiadm -m session -P 3

Authentication Failures

If CHAP authentication fails:

  1. Verify secret exists: kubectl get secret <secret-name>
  2. Check secret type: kubernetes.io/iscsi-chap
  3. Verify credentials match target configuration
  4. Check initiator name matches target configuration

Performance Issues

If experiencing performance issues:

  1. Check network latency: ping <target-ip>
  2. Verify dedicated storage network
  3. Check target performance metrics
  4. Consider jumbo frames (MTU 9000) if supported (see the sketch after this list)
  5. Review storage system performance
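
A sketch for enabling jumbo frames on a dedicated storage NIC; the interface name is an assumption, and the MTU must match end to end (node, switches, and target):

# Set MTU 9000 on the storage interface (hypothetical name eth1)
ip link set dev eth1 mtu 9000

# Verify, then send a maximum-size unfragmented ping to the target
ip link show eth1
ping -M do -s 8972 192.168.1.100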

See Also