Exercise 2: Resource Management
Task 1 - Understanding Resource Requests and Limits
Properly setting resource requests and limits is essential for optimal cluster performance. Let’s see how to optimize resource usage:
Create a namespace for a QoS (Quality of Service) demonstration:
kubectl create namespace qos-demo
Create a Guaranteed QoS pod (equal requests and limits):

PowerShell:

$guaranteedPodYaml = @"
apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-pod
  namespace: qos-demo
spec:
  containers:
  - name: nginx
    image: k8sonazureworkshoppublic.azurecr.io/nginx
    resources:
      requests:
        memory: "100Mi"
        cpu: "100m"
      limits:
        memory: "100Mi"
        cpu: "100m"
"@
$guaranteedPodYaml | kubectl apply -f -

Bash:

cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-pod
  namespace: qos-demo
spec:
  containers:
  - name: nginx
    image: k8sonazureworkshoppublic.azurecr.io/nginx
    resources:
      requests:
        memory: "100Mi"
        cpu: "100m"
      limits:
        memory: "100Mi"
        cpu: "100m"
EOF
Create a Burstable QoS pod (requests less than limits):

PowerShell:

$burstablePodYaml = @"
apiVersion: v1
kind: Pod
metadata:
  name: burstable-pod
  namespace: qos-demo
spec:
  containers:
  - name: nginx
    image: k8sonazureworkshoppublic.azurecr.io/nginx
    resources:
      requests:
        memory: "50Mi"
        cpu: "50m"
      limits:
        memory: "100Mi"
        cpu: "100m"
"@
$burstablePodYaml | kubectl apply -f -

Bash:

cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: burstable-pod
  namespace: qos-demo
spec:
  containers:
  - name: nginx
    image: k8sonazureworkshoppublic.azurecr.io/nginx
    resources:
      requests:
        memory: "50Mi"
        cpu: "50m"
      limits:
        memory: "100Mi"
        cpu: "100m"
EOF
Create a BestEffort QoS pod (no requests or limits):

PowerShell:

$bestEffortPodYaml = @"
apiVersion: v1
kind: Pod
metadata:
  name: besteffort-pod
  namespace: qos-demo
spec:
  containers:
  - name: nginx
    image: k8sonazureworkshoppublic.azurecr.io/nginx
"@
$bestEffortPodYaml | kubectl apply -f -

Bash:

cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: besteffort-pod
  namespace: qos-demo
spec:
  containers:
  - name: nginx
    image: k8sonazureworkshoppublic.azurecr.io/nginx
EOF
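If you want to make sure all three pods are up before continuing, you can wait for them to become Ready (an optional extra step, not required by the rest of the exercise):
kubectl wait --for=condition=Ready pod --all -n qos-demo --timeout=60s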
Check the QoS class of each pod:
kubectl get pods -n qos-demo -o custom-columns="NAME:.metadata.name,QOS:.status.qosClass"
You should see Guaranteed, Burstable, and BestEffort respectively.
When the cluster is under resource pressure, Kubernetes will evict pods in this order: BestEffort first, then Burstable, and finally Guaranteed. This is important to understand when planning workload priorities.
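If you prefer to inspect a single pod, its QoS class is also available directly from the pod’s status (an optional check; the same command works in both PowerShell and Bash):
kubectl get pod besteffort-pod -n qos-demo -o jsonpath='{.status.qosClass}'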
Task 2 - Monitor Resource Usage
Check current resource usage for your pods:
kubectl top pods -n qos-demo
Tip: If you receive a message stating error: metrics not available yet, you may need to wait a few moments for metrics to be collected.
The top command shows the actual resources in use by each pod, rather than the requested or limited resources.
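You can also break the usage down per container, which helps when a pod runs more than one container (an optional variation of the same command):
kubectl top pods -n qos-demo --containers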
Check node resource usage:
kubectl top nodes
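To see how much of each node’s capacity is already committed through requests and limits (as opposed to actual usage), you can describe a node and review its Allocated resources section; replace <node-name> with one of the names returned by kubectl get nodes:
kubectl get nodes
kubectl describe node <node-name>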
Task 3 - Create Namespace with Resource Quotas
Create a new namespace for testing resource quotas:
kubectl create namespace limited-resources
Create a ResourceQuota in this namespace:
PowerShell:

$quota = @"
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-quota
  namespace: limited-resources
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
    pods: "5"
"@
$quota | kubectl apply -f -

Bash:

cat << EOF | kubectl apply -f -
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-quota
  namespace: limited-resources
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
    pods: "5"
EOF
Verify the ResourceQuota:
kubectl describe resourcequota -n limited-resources
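If you prefer the raw object, the same information is available in YAML form, where spec.hard shows the configured quota and status.used shows current consumption (an optional alternative to describe):
kubectl get resourcequota mem-cpu-quota -n limited-resources -o yaml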
Task 4 - Test Resource Quota Enforcement
Create a deployment that consumes most of the memory quota:
PowerShell:

$deployment1 = @"
apiVersion: apps/v1
kind: Deployment
metadata:
  name: memory-demo
  namespace: limited-resources
spec:
  replicas: 1
  selector:
    matchLabels:
      app: memory-demo
  template:
    metadata:
      labels:
        app: memory-demo
    spec:
      containers:
      - name: memory-demo
        image: k8sonazureworkshoppublic.azurecr.io/nginx
        resources:
          requests:
            memory: "800Mi"
            cpu: "500m"
          limits:
            memory: "1.5Gi"
            cpu: "1"
"@
$deployment1 | kubectl apply -f -

Bash:

cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: memory-demo
  namespace: limited-resources
spec:
  replicas: 1
  selector:
    matchLabels:
      app: memory-demo
  template:
    metadata:
      labels:
        app: memory-demo
    spec:
      containers:
      - name: memory-demo
        image: k8sonazureworkshoppublic.azurecr.io/nginx
        resources:
          requests:
            memory: "800Mi"
            cpu: "500m"
          limits:
            memory: "1.5Gi"
            cpu: "1"
EOF
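Before attempting the second deployment, you can optionally confirm that the first one is running and has consumed part of the quota:
kubectl get pods -n limited-resources
kubectl describe resourcequota mem-cpu-quota -n limited-resources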
Try to create a second deployment that would exceed the quota:
PowerShell:

$deployment2 = @"
apiVersion: apps/v1
kind: Deployment
metadata:
  name: memory-demo2
  namespace: limited-resources
spec:
  replicas: 1
  selector:
    matchLabels:
      app: memory-demo2
  template:
    metadata:
      labels:
        app: memory-demo2
    spec:
      containers:
      - name: memory-demo2
        image: k8sonazureworkshoppublic.azurecr.io/nginx
        resources:
          requests:
            memory: "800Mi"
            cpu: "500m"
          limits:
            memory: "1.5Gi"
            cpu: "1"
"@
$deployment2 | kubectl apply -f -

Bash:

cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: memory-demo2
  namespace: limited-resources
spec:
  replicas: 1
  selector:
    matchLabels:
      app: memory-demo2
  template:
    metadata:
      labels:
        app: memory-demo2
    spec:
      containers:
      - name: memory-demo2
        image: k8sonazureworkshoppublic.azurecr.io/nginx
        resources:
          requests:
            memory: "800Mi"
            cpu: "500m"
          limits:
            memory: "1.5Gi"
            cpu: "1"
EOF
Review the ReplicaSet to observe the error message indicating that the quota has been exceeded:
kubectl describe rs -l "app=memory-demo2" -n limited-resources
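The quota violation is also recorded as events in the namespace. Listing all recent events works in any case; the reason=FailedCreate filter is an assumption about how the ReplicaSet controller reports this particular error:
kubectl get events -n limited-resources --sort-by=.lastTimestamp
kubectl get events -n limited-resources --field-selector reason=FailedCreate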
Check the current quota usage:
kubectl describe resourcequota -n limited-resources
Task 5 - LimitRange for Default Values
Create a LimitRange to set default resource limits and requests:

PowerShell:

$limitRange = @"
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: limited-resources
spec:
  limits:
  - default:
      cpu: "500m"
      memory: "256Mi"
    defaultRequest:
      cpu: "100m"
      memory: "128Mi"
    type: Container
"@
$limitRange | kubectl apply -f -

Bash:

cat << EOF | kubectl apply -f -
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: limited-resources
spec:
  limits:
  - default:
      cpu: "500m"
      memory: "256Mi"
    defaultRequest:
      cpu: "100m"
      memory: "128Mi"
    type: Container
EOF
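You can verify the LimitRange and its default values before creating any pods:
kubectl describe limitrange default-limits -n limited-resources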
Create a pod without specifying resources:
PowerShell:

$defaultPod = @"
apiVersion: v1
kind: Pod
metadata:
  name: default-resources-pod
  namespace: limited-resources
spec:
  containers:
  - name: default-resources
    image: k8sonazureworkshoppublic.azurecr.io/nginx
"@
$defaultPod | kubectl apply -f -

Bash:

cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: default-resources-pod
  namespace: limited-resources
spec:
  containers:
  - name: default-resources
    image: k8sonazureworkshoppublic.azurecr.io/nginx
EOF
Check that default resources were applied:
PowerShell:

kubectl get pod default-resources-pod -n limited-resources -o jsonpath='{.spec.containers[0].resources}' | ConvertFrom-Json

Bash:

kubectl get pod default-resources-pod -n limited-resources \
  -o jsonpath='Requests: cpu={.spec.containers[0].resources.requests.cpu}, memory={.spec.containers[0].resources.requests.memory}{"\n"}Limits: cpu={.spec.containers[0].resources.limits.cpu}, memory={.spec.containers[0].resources.limits.memory}{"\n"}'
Look for the “Requests” and “Limits” sections to confirm the LimitRange defaults were applied.
Task 6 - Cleanup
Delete the namespaces and all resources created in this exercise (deleting the namespaces also removes any remaining resources inside them):
kubectl delete pod default-resources-pod -n limited-resources
kubectl delete limitrange default-limits -n limited-resources
kubectl delete resourcequota mem-cpu-quota -n limited-resources
kubectl delete namespace limited-resources
kubectl delete ns qos-demo
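Optionally, confirm that both namespaces are gone. Once deletion finishes, these commands should return NotFound errors:
kubectl get namespace qos-demo
kubectl get namespace limited-resources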