Node and Pod Affinity/Anti-Affinity
Affinity rules allow you to control pod placement based on node labels and other pods’ labels. This gives you more flexibility than node selectors, enabling complex scheduling scenarios.
Node Affinity
Node affinity is a more expressive way to constrain which nodes your pod can be scheduled on, based on node labels. It provides:
- More complex matching expressions
- Soft preferences (preferred vs. required rules)
- Both positive (affinity) and negative (anti-affinity) matching
Types of Node Affinity
Required Node Affinity (requiredDuringSchedulingIgnoredDuringExecution):
- The pod will not be scheduled unless the rule is satisfied
- Existing pods continue running even if node labels change
Preferred Node Affinity (preferredDuringSchedulingIgnoredDuringExecution):
- The scheduler tries to find a node that satisfies the rule
- If no matching node is found, the pod is still scheduled on a non-matching node
- Each preference has a weight to prioritize some preferences over others
Example: Node Affinity
apiVersion: v1
kind: Pod
metadata:
  name: nginx-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/os
            operator: In
            values:
            - linux
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: disk-type
            operator: In
            values:
            - ssd
  containers:
  - name: nginx
    image: nginx
In this example:
- The pod must run on Linux nodes (required rule)
- The scheduler will prefer nodes with SSD disks (preferred rule)
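Negative (anti-affinity) matching against nodes uses the same mechanism with the NotIn or DoesNotExist operators. As a minimal sketch, this rule keeps a pod off nodes labeled disk-type=hdd (the disk-type label is carried over from the example above as an assumption about how your nodes are labeled):

```yaml
# Node anti-affinity via NotIn: never schedule onto nodes whose
# disk-type label is hdd. Nodes without the label also match,
# because NotIn is satisfied when the key is absent.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: disk-type
          operator: NotIn
          values:
          - hdd
```

Because NotIn also matches nodes that lack the key entirely, pair it with an Exists expression if you want to require the label to be present as well.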
Pod Affinity and Anti-Affinity
Pod affinity and anti-affinity rules constrain pod placement based on the labels of pods that are already running in the cluster:
- Pod Affinity: Attracts pods to each other (co-location)
- Pod Anti-Affinity: Repels pods from each other (separation)
Use Cases for Pod Affinity/Anti-Affinity
- Co-location: Place frontend and backend pods on the same node to reduce latency
- Spreading: Ensure high availability by distributing replicas across different nodes
- Exclusive nodes: Prevent certain workloads from sharing nodes
Example: Pod Affinity and Anti-Affinity
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-server
  template:
    metadata:
      labels:
        app: web-server
    spec:
      affinity:
        podAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - cache
              topologyKey: kubernetes.io/hostname
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - web-server
            topologyKey: kubernetes.io/hostname
      containers:
      - name: web-app
        image: nginx
In this web-server example:
- Pods prefer to be scheduled on the same node as pods with the label app=cache (pod affinity)
- Pods must not be scheduled on nodes that already have a pod with the label app=web-server (pod anti-affinity)
The topologyKey field defines the scope of the rule:
- kubernetes.io/hostname: Node level
- topology.kubernetes.io/zone: Availability zone level
- topology.kubernetes.io/region: Region level
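To widen the scope of the anti-affinity rule above, only the topologyKey needs to change. A sketch of zone-level spreading, assuming your nodes carry the standard topology.kubernetes.io/zone label (cloud providers typically set it automatically):

```yaml
# Zone-level spreading: at most one web-server pod per availability zone.
podAntiAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
  - labelSelector:
      matchExpressions:
      - key: app
        operator: In
        values:
        - web-server
    topologyKey: topology.kubernetes.io/zone
```

Note that with a required rule, the Deployment's three replicas will only all schedule if at least three zones are available; use preferredDuringSchedulingIgnoredDuringExecution if best-effort spreading is enough.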