I have a Kubernetes deployment running. Let's call it "my-spboot-deployment".
This deployment has 2 replicas running on two different worker nodes (I used node affinity to place them). These two worker nodes are in two different geographical locations.
```
pod 1 ---> node 1
pod 2 ---> node 2
```
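For context, the deployment spec looks roughly like this (the label key, region values, and image are illustrative placeholders, not my exact manifest):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-spboot-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-spboot
  template:
    metadata:
      labels:
        app: my-spboot
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              # illustrative: each node carries a different region label
              - key: topology.kubernetes.io/region
                operator: In
                values: ["region-a", "region-b"]
      containers:
      - name: spboot
        image: my-spboot-image:latest   # placeholder image
```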
Now I need to scale this deployment down to one pod, and my expectation is that pod2, which is running on node 2, gets terminated. But when I execute `kubectl scale deployment my-spboot-deployment --replicas=1`, it always terminates pod1, which is running on node 1.
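This is how I verify which pod survives the scale-down (assuming the pods carry an `app=my-spboot` label, as in the sketch above):

```sh
# Show the pod-to-node mapping; -o wide includes the NODE column
kubectl get pods -l app=my-spboot -o wide

# Scale down, then re-check which pod is left
kubectl scale deployment my-spboot-deployment --replicas=1
kubectl get pods -l app=my-spboot -o wide
```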
I just need to know how Kubernetes decides which pod to terminate when we scale down. Is it random, or is there an algorithm behind it?
My second question: in this kind of scenario, how can I purposely bring down pod2 on node 2? Is there any way to do that? (I can't stop the docker or kubelet services on those worker nodes; that would impact other services running on them.)
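For what it's worth, simply deleting pod2 doesn't achieve this on its own: the pod is managed by the Deployment's ReplicaSet, so the controller immediately creates a replacement to restore the replica count (pod name below is a placeholder):

```sh
# The ReplicaSet notices the missing replica and recreates the pod right away
kubectl delete pod my-spboot-deployment-7d4b9c8f5d-xxxxx
```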
**I don't want to re-deploy by adding a new node label.**