Exercise 3: Advanced Pod Scheduling
Task 1 - Pod Affinity and Anti-Affinity
Create the affinity example manifest:
$affinityYaml = @" apiVersion: apps/v1 kind: Deployment metadata: name: web-app spec: replicas: 3 selector: matchLabels: app: web-app template: metadata: labels: app: web-app spec: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - web-app topologyKey: "kubernetes.io/hostname" podAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: app operator: In values: - mongodb topologyKey: "kubernetes.io/hostname" containers: - name: web-app image: k8sonazureworkshoppublic.azurecr.io/nginx:latest "@
Bash:
cat << EOF > web-app-affinity.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - web-app
            topologyKey: "kubernetes.io/hostname"
        podAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - mongodb
              topologyKey: "kubernetes.io/hostname"
      containers:
      - name: web-app
        image: k8sonazureworkshoppublic.azurecr.io/nginx:latest
EOF
Review the manifest before applying it:
PowerShell:
$affinityYaml
Bash:
cat web-app-affinity.yaml
This manifest includes:
podAntiAffinity: ensures that pods of the same app do not run on the same node.
podAffinity: prefers to schedule near pods with the label app: mongodb.
Deploy the web application with affinity rules:
PowerShell:
$affinityYaml | kubectl apply -f -
Bash:
kubectl apply -f web-app-affinity.yaml
Verify the deployment:
kubectl get pods -l app=web-app -o wide
Examine where pods were scheduled:
PowerShell:
kubectl describe pods -l app=web-app | Select-String Node:
Bash:
kubectl describe pods -l app=web-app | grep Node:
Note which nodes the pods were scheduled to and how the anti-affinity rules were applied. One pod cannot be scheduled and remains in the Pending state, because the cluster has only two nodes and the required anti-affinity rule forces every web-app pod onto a different node.
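You can confirm why the scheduler rejected the Pending pod by looking at its events (replace POD_NAME with the name of the Pending pod from the previous output):
kubectl describe pod POD_NAME
The Events section typically contains a FailedScheduling entry indicating that no node satisfied the pod anti-affinity rules.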
Task 2 - Create a deployment with node affinity
Get a list of node names:
kubectl get nodes
Label one of your nodes (replace NODE_NAME with the actual node name):
kubectl label nodes NODE_NAME workload=frontend
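If you want to confirm the label is in place, the -L flag adds the value of a label as an extra column:
kubectl get nodes -L workload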
Create the deployment with node affinity:
$frontendDeploy = @" apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: frontend template: metadata: labels: app: frontend spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: workload operator: In values: - frontend containers: - name: nginx image: k8sonazureworkshoppublic.azurecr.io/nginx:latest resources: requests: memory: "64Mi" cpu: "100m" limits: memory: "128Mi" cpu: "200m" "@ $frontendDeploy | kubectl apply -f -
Bash:
cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: workload
                operator: In
                values:
                - frontend
      containers:
      - name: nginx
        image: k8sonazureworkshoppublic.azurecr.io/nginx:latest
        resources:
          requests:
            memory: "64Mi"
            cpu: "100m"
          limits:
            memory: "128Mi"
            cpu: "200m"
EOF
Verify the deployment is running on the labelled node:
kubectl get pods -l app=frontend -o wide
All pods should be running on the node you labelled.
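To list just the pod-to-node mapping, you can also use kubectl's custom-columns output:
kubectl get pods -l app=frontend -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName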
Task 3 - Taints and Tolerations
Get a list of node names:
kubectl get nodes
Apply a taint to a node (replace NODE_NAME with the actual node name):
kubectl taint nodes NODE_NAME dedicated=analytics:NoSchedule
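As an optional check, the node's Taints field should now show dedicated=analytics:NoSchedule:
PowerShell:
kubectl describe node NODE_NAME | Select-String Taints
Bash:
kubectl describe node NODE_NAME | grep Taints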
Deploy the application with tolerations:
$tolerationsYaml = @" apiVersion: apps/v1 kind: Deployment metadata: name: analytics-app spec: replicas: 2 selector: matchLabels: app: analytics template: metadata: labels: app: analytics spec: tolerations: - key: "dedicated" operator: "Equal" value: "analytics" effect: "NoSchedule" containers: - name: analytics-app image: k8sonazureworkshoppublic.azurecr.io/busybox command: ["sh", "-c", "while true; do echo analytics processing; sleep 10; done"] resources: requests: memory: "64Mi" cpu: "100m" limits: memory: "128Mi" cpu: "200m" "@ $tolerationsYaml | kubectl apply -f -
Bash:
cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: analytics-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: analytics
  template:
    metadata:
      labels:
        app: analytics
    spec:
      tolerations:
      - key: "dedicated"
        operator: "Equal"
        value: "analytics"
        effect: "NoSchedule"
      containers:
      - name: analytics-app
        image: k8sonazureworkshoppublic.azurecr.io/busybox
        command: ["sh", "-c", "while true; do echo analytics processing; sleep 10; done"]
        resources:
          requests:
            memory: "64Mi"
            cpu: "100m"
          limits:
            memory: "128Mi"
            cpu: "200m"
EOF
Verify pod placement:
kubectl get pods -l app=analytics -o wide
The pods should be scheduled across both nodes, including the tainted node, because they have the matching toleration.
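If you want to see the toleration on a running pod, you can read it from the pod spec (replace POD_NAME with one of the analytics pod names; the output typically also includes the default tolerations Kubernetes adds to every pod):
kubectl get pod POD_NAME -o jsonpath='{.spec.tolerations}'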
Test the effect of the taint by creating a pod without tolerations:
$noTolerationPod = @" apiVersion: v1 kind: Pod metadata: name: no-toleration-pod spec: containers: - name: nginx image: k8sonazureworkshoppublic.azurecr.io/nginx "@ $noTolerationPod | kubectl apply -f -
Bash:
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: no-toleration-pod
spec:
  containers:
  - name: nginx
    image: k8sonazureworkshoppublic.azurecr.io/nginx
EOF
Check where this pod was scheduled:
kubectl get pod no-toleration-pod -o wide
The pod should land on the untainted node, since it has no toleration for the taint.
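The scheduling decision is also recorded in the pod's events; this is one way to see which node it was assigned to:
kubectl get events --field-selector involvedObject.name=no-toleration-pod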
Remove the taint from the node:
kubectl taint nodes NODE_NAME dedicated=analytics:NoSchedule-
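To confirm the taint is gone, list taints per node; the TAINTS column should now show &lt;none&gt; for every node:
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints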
Task 4 - Pod Topology Spread Constraints
Create a deployment with topology spread constraints:
$spreadDemo = @" apiVersion: apps/v1 kind: Deployment metadata: name: spread-demo spec: replicas: 6 selector: matchLabels: app: spread-demo template: metadata: labels: app: spread-demo spec: topologySpreadConstraints: - maxSkew: 1 topologyKey: kubernetes.io/hostname whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: app: spread-demo containers: - name: nginx image: k8sonazureworkshoppublic.azurecr.io/nginx "@ $spreadDemo | kubectl apply -f -
Bash:
cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spread-demo
spec:
  replicas: 6
  selector:
    matchLabels:
      app: spread-demo
  template:
    metadata:
      labels:
        app: spread-demo
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: spread-demo
      containers:
      - name: nginx
        image: k8sonazureworkshoppublic.azurecr.io/nginx
EOF
Verify the pod distribution:
kubectl get pods -l app=spread-demo -o wide
The pods should be evenly spread across your nodes due to the topology spread constraint.
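To count pods per node instead of reading the NODE column by eye, one option is:
Bash:
kubectl get pods -l app=spread-demo -o custom-columns=NODE:.spec.nodeName --no-headers | sort | uniq -c
PowerShell:
kubectl get pods -l app=spread-demo -o custom-columns=NODE:.spec.nodeName --no-headers | Group-Object | Select-Object Count,Name
With maxSkew: 1, the per-node counts should differ by at most one.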
Task 5 - Cleanup
Clean up the resources created in this exercise:
kubectl delete deployment web-app frontend spread-demo analytics-app
kubectl delete pod no-toleration-pod
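Deleting the workloads does not remove the workload=frontend label added in Task 2; to return the node to its original state, remove it as well (replace NODE_NAME with the node you labelled):
kubectl label nodes NODE_NAME workload-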