Exercise 4: Storage with Persistent Volumes
Task 1 - Understanding Kubernetes Storage Concepts
Kubernetes provides several abstractions for persistent storage:
- Persistent Volumes (PV): Cluster-level storage resources
- Persistent Volume Claims (PVC): Requests for storage by users
- Storage Classes: Define different tiers of storage with specific properties
In AKS, several storage classes are available by default:
kubectl get storageclass
You should see several storage classes, including:
- managed-premium: Premium SSD storage
- managed-csi: Standard SSD storage
- azurefile-csi: Azure Files storage for ReadWriteMany scenarios
- default: the storage class used when a PVC does not specify one; it provisions Azure Disk
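Each of these is simply a StorageClass object that points at one of the Azure CSI provisioners. As a rough sketch (the class below is hypothetical and is not created in this exercise), a custom Premium SSD class would look like:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-premium-disk          # hypothetical name
provisioner: disk.csi.azure.com  # Azure Disk CSI driver
parameters:
  skuName: Premium_LRS           # Azure Disk SKU to provision
reclaimPolicy: Delete            # remove the disk when the PVC is deleted
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true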
Task 2 - Create a Persistent Volume Claim
Examine the default storage class:
kubectl get storageclass default -o yaml
Note that it dynamically provisions Azure Disks and has volumeBindingMode set to WaitForFirstConsumer, which means no volume is created until a pod actually uses the claim.
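For a quicker check of just those two fields, a jsonpath query works:
kubectl get storageclass default -o jsonpath='{.provisioner}{"\n"}{.volumeBindingMode}{"\n"}'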
Create a Persistent Volume Claim (PVC) that uses the default storage class:
PowerShell:
$pvClaim = @"
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-disk-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: default
"@
$pvClaim | kubectl apply -f -
Bash:
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-disk-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: default
EOF
Check the status of the PVC:
kubectl get pvc
Note that the PVC status should be Pending because the default storage class uses the WaitForFirstConsumer binding mode, which means the PV will only be provisioned when a pod uses the claim.
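To see this from the claim itself, describe it; the Events section should mention that it is waiting for the first consumer before binding:
kubectl describe pvc azure-disk-pvc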
Task 3 - Deploy a Pod with Persistent Storage
Create a pod that uses the PVC:
PowerShell:
$storagePod = @"
apiVersion: v1
kind: Pod
metadata:
  name: storage-pod
spec:
  containers:
    - name: nginx
      image: k8sonazureworkshoppublic.azurecr.io/nginx
      volumeMounts:
        - mountPath: "/mnt/azure"
          name: azure-disk-volume
  volumes:
    - name: azure-disk-volume
      persistentVolumeClaim:
        claimName: azure-disk-pvc
"@
$storagePod | kubectl apply -f -
Bash:
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: storage-pod
spec:
  containers:
    - name: nginx
      image: k8sonazureworkshoppublic.azurecr.io/nginx
      volumeMounts:
        - mountPath: "/mnt/azure"
          name: azure-disk-volume
  volumes:
    - name: azure-disk-volume
      persistentVolumeClaim:
        claimName: azure-disk-pvc
EOF
Wait for the pod to be running:
kubectl wait --for=condition=Ready pod/storage-pod --timeout=60s
Check that the PVC is now bound:
kubectl get pvc
The status should now be Bound, and a Persistent Volume (PV) has been dynamically created.
Examine the created Persistent Volume:
kubectl get pv
Note how AKS has automatically provisioned an Azure Disk for your claim.
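If you have the Azure CLI available, you can also see the managed disk itself, which lives in the cluster's node resource group (the resource group and cluster names below are placeholders; substitute your own):
# Look up the node resource group, then list the managed disks in it
NODE_RG=$(az aks show --resource-group <your-resource-group> --name <your-cluster> --query nodeResourceGroup -o tsv)
az disk list --resource-group "$NODE_RG" -o table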
Task 4 - Use the Persistent Storage
Create a file in the persistent volume:
kubectl exec -it storage-pod -- /bin/bash -c "echo 'This data will persist' > /mnt/azure/test-file.txt"
Verify the file was created:
kubectl exec -it storage-pod -- cat /mnt/azure/test-file.txt
Delete the pod:
kubectl delete pod storage-pod
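The claim and its volume are independent of the pod's lifecycle; even with the pod gone, both should still show as Bound:
kubectl get pvc azure-disk-pvc
kubectl get pv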
Create a new pod that uses the same PVC:
PowerShell:
$newStoragePod = @"
apiVersion: v1
kind: Pod
metadata:
  name: new-storage-pod
spec:
  containers:
    - name: nginx
      image: k8sonazureworkshoppublic.azurecr.io/nginx
      volumeMounts:
        - mountPath: "/mnt/azure"
          name: azure-disk-volume
  volumes:
    - name: azure-disk-volume
      persistentVolumeClaim:
        claimName: azure-disk-pvc
"@
$newStoragePod | kubectl apply -f -
Bash:
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: new-storage-pod
spec:
  containers:
    - name: nginx
      image: k8sonazureworkshoppublic.azurecr.io/nginx
      volumeMounts:
        - mountPath: "/mnt/azure"
          name: azure-disk-volume
  volumes:
    - name: azure-disk-volume
      persistentVolumeClaim:
        claimName: azure-disk-pvc
EOF
Wait for the new pod to be running:
kubectl wait --for=condition=Ready pod/new-storage-pod --timeout=60s
Verify that the data persisted:
kubectl exec -it new-storage-pod -- cat /mnt/azure/test-file.txt
You should see This data will persist because the data is stored on the Azure Disk, which persists beyond the lifecycle of any individual pod.
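The disk is released only when the claim itself is deleted; for dynamically provisioned volumes the reclaim policy is typically Delete, which you can confirm with:
kubectl get pv -o custom-columns=NAME:.metadata.name,CLAIM:.spec.claimRef.name,RECLAIM:.spec.persistentVolumeReclaimPolicy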
Task 5 - Using Azure Files for ReadWriteMany
AKS also supports Azure Files, which allows for the ReadWriteMany access mode (multiple pods can read from and write to the same volume simultaneously).
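You can confirm the class exists and is backed by the Azure Files CSI driver before using it (the provisioner should be file.csi.azure.com):
kubectl get storageclass azurefile-csi -o jsonpath='{.provisioner}{"\n"}'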
Create a PVC that uses Azure Files:
PowerShell:
$azureFilesPVC = @"
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azurefile-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: azurefile-csi
"@
$azureFilesPVC | kubectl apply -f -
Bash:
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azurefile-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: azurefile-csi
EOF
Deploy multiple pods that share the same Azure File share:
PowerShell:
$sharedStoragePods = @"
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shared-storage-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: shared-storage
  template:
    metadata:
      labels:
        app: shared-storage
    spec:
      containers:
        - name: nginx
          image: k8sonazureworkshoppublic.azurecr.io/nginx
          volumeMounts:
            - mountPath: "/mnt/shared"
              name: azurefile-volume
      volumes:
        - name: azurefile-volume
          persistentVolumeClaim:
            claimName: azurefile-pvc
"@
$sharedStoragePods | kubectl apply -f -
Bash:
cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shared-storage-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: shared-storage
  template:
    metadata:
      labels:
        app: shared-storage
    spec:
      containers:
        - name: nginx
          image: k8sonazureworkshoppublic.azurecr.io/nginx
          volumeMounts:
            - mountPath: "/mnt/shared"
              name: azurefile-volume
      volumes:
        - name: azurefile-volume
          persistentVolumeClaim:
            claimName: azurefile-pvc
EOF
Wait for the Deployment to be ready:
kubectl rollout status deployment/shared-storage-deployment
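Optionally, look at which nodes the replicas landed on; unlike an Azure Disk volume, an Azure Files share can be mounted by pods on different nodes at the same time:
kubectl get pods -l app=shared-storage -o wide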
Write data from one pod:
PowerShell:
# Get the name of the first pod
$firstPod = (kubectl get pods -l app=shared-storage -o jsonpath='{.items[0].metadata.name}')
# Write data to the shared volume from the first pod
kubectl exec -it $firstPod -- /bin/bash -c "echo 'Shared data across multiple pods' > /mnt/shared/shared-file.txt"
Bash:
# Get the name of the first pod
firstPod=$(kubectl get pods -l app=shared-storage -o jsonpath='{.items[0].metadata.name}')
# Write data to the shared volume from the first pod
kubectl exec -it $firstPod -- /bin/bash -c "echo 'Shared data across multiple pods' > /mnt/shared/shared-file.txt"
Read the data from another pod:
PowerShell:
# Get the name of the second pod
$secondPod = (kubectl get pods -l app=shared-storage -o jsonpath='{.items[1].metadata.name}')
# Read the data from the second pod
kubectl exec -it $secondPod -- cat /mnt/shared/shared-file.txt
Bash:
# Get the name of the second pod
secondPod=$(kubectl get pods -l app=shared-storage -o jsonpath='{.items[1].metadata.name}')
# Read the data from the second pod
kubectl exec -it $secondPod -- cat /mnt/shared/shared-file.txt
You should see Shared data across multiple pods from the second pod, demonstrating that Azure Files allows simultaneous access from multiple pods.
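Writes work in both directions; as a quick check, append from the second pod and read it back from the first (reusing the Bash variables defined above):
kubectl exec -it $secondPod -- /bin/bash -c "echo 'Written from the second pod' >> /mnt/shared/shared-file.txt"
kubectl exec -it $firstPod -- cat /mnt/shared/shared-file.txt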
Task 6 - Clean Up
Delete the resources created in this exercise:
kubectl delete deployment shared-storage-deployment
kubectl delete pod new-storage-pod
kubectl delete pvc azure-disk-pvc azurefile-pvc
Note that deleting the PVCs will also delete the dynamically provisioned PVs and the underlying Azure storage resources.
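Removing the PVs can take a minute or two while the backing Azure resources are detached and deleted; to confirm the cleanup completed, list the claims and volumes again:
kubectl get pvc,pv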