Recreate or recover a StatefulSet pod when its node fails
Summary:
Recover a StatefulSet pod after a node failure
Content:
Hello Team,
A StatefulSet pod goes into the Terminating state when the node on which the pod is running fails (due to a hardware issue or manual testing) and the node controller marks the node status as NotReady.
I understand this is the expected upstream behaviour as per the documentation -
Now I want to recreate or recover the pod and start it on another available (Ready) node. How should I handle this? I don't want to force-delete the pod to recreate it.
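For reference, the force-delete workaround I want to avoid looks like this (pod name and namespace are placeholders):

```sh
# Standard force-delete workaround (the one I want to avoid): it removes
# the stuck pod object from the API server so the StatefulSet controller
# can recreate the pod on a Ready node.
kubectl delete pod <pod-name> -n <namespace> --grace-period=0 --force
```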
I have added the following flag to /etc/kubernetes/manifests/kube-controller-manager.yaml. After that, within a minute, if the failed node does not respond, the node controller marks the node state as NotReady.
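The exact flag did not survive in the post above. As a sketch, assuming the flag in question was --node-monitor-grace-period (the kube-controller-manager flag that sets how long the node controller waits for an unresponsive node before marking it NotReady), the relevant part of the manifest would look like this; the 40s value is illustrative, not taken from the original post:

```yaml
# /etc/kubernetes/manifests/kube-controller-manager.yaml (excerpt)
apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - name: kube-controller-manager
    command:
    - kube-controller-manager
    # Assumption: the original post does not show which flag was added;
    # --node-monitor-grace-period controls how long the node controller
    # tolerates an unresponsive node before marking it NotReady.
    # 40s is shown only as an illustration.
    - --node-monitor-grace-period=40s
    # ... image, volumes, and remaining flags unchanged ...
```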