Introduction
By default, OpenShift’s scheduler places pods based on available resources such as CPU and memory. While this works well for simple workloads, production-grade applications often require more control.
Affinity and anti-affinity rules allow you to influence where pods run—helping you reduce latency, improve fault tolerance, and optimize infrastructure utilization.
Node Affinity
Node affinity ensures that pods are scheduled only on nodes that match specific labels.
Use Cases
- Run workloads on SSD-backed nodes
- Isolate GPU or high-memory workloads
- Separate production and non-production workloads
Example
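A minimal sketch of such a manifest, assuming a standalone Pod with a required (hard) node affinity rule; the pod name and container image are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ssd-pod            # illustrative name
spec:
  affinity:
    nodeAffinity:
      # Hard requirement: the scheduler will not place this pod
      # on any node that lacks the disktype=ssd label.
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi   # illustrative image
```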
This ensures pods are scheduled only on nodes labeled with disktype=ssd.
Pod Affinity
Pod affinity places pods close to other related pods based on labels.
Use Cases
- Co-locate frontend and backend services
- Improve performance for tightly coupled components
- Reduce network latency
Example
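A minimal sketch, assuming the frontend pods carry the label `app=frontend`; the pod name and image are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: backend-pod        # illustrative name
spec:
  affinity:
    podAffinity:
      # Hard requirement: place this pod on a node (topologyKey
      # kubernetes.io/hostname) already running an app=frontend pod.
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: frontend
        topologyKey: kubernetes.io/hostname
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi   # illustrative image
```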
This schedules pods on the same node as other frontend pods.
Pod Anti-Affinity
Pod anti-affinity ensures pods are spread across nodes to avoid single points of failure.
Use Cases
- High availability deployments
- Zone-aware scheduling
- Disaster recovery planning
Example
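A minimal sketch, assuming a Deployment whose replicas are labeled `app=frontend`; names and image are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend           # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      affinity:
        podAntiAffinity:
          # Hard requirement: no two app=frontend pods may share
          # the same node (topologyKey kubernetes.io/hostname).
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: frontend
            topologyKey: kubernetes.io/hostname
      containers:
      - name: frontend
        image: registry.access.redhat.com/ubi9/ubi   # illustrative image
```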
This prevents multiple frontend pods from running on the same node.
Best Practices
- Use `topologyKey` wisely (`kubernetes.io/hostname`, `topology.kubernetes.io/zone`, `topology.kubernetes.io/region`); a zone-level sketch follows this list
- Combine affinity rules with taints and tolerations
- Avoid overly strict rules that block scheduling
- Monitor pod placement regularly
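Where a hard guarantee is not needed, a preferred (soft) rule spreads pods without blocking scheduling when it cannot be satisfied. A sketch of a zone-level pod-spec fragment, reusing the illustrative `app=frontend` label from the examples above:

```yaml
affinity:
  podAntiAffinity:
    # Soft rule: the scheduler tries to spread app=frontend pods
    # across zones, but still places a pod if it cannot.
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchLabels:
            app: frontend
        topologyKey: topology.kubernetes.io/zone
```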
Troubleshooting Tips
- Inspect scheduling decisions: `oc describe pod <pod-name>`
- Verify node labels: `oc get nodes --show-labels`
- Check for scheduling conflicts in events and logs: `oc get events`
Conclusion
Affinity and anti-affinity rules give OpenShift administrators powerful control over pod placement. When used correctly, they improve performance, enhance fault tolerance, and support enterprise-grade reliability. For mission-critical workloads, mastering these scheduling strategies is essential.