
Recreate or recover a StatefulSet pod when the node fails

Summary:

Recover the StatefulSet pod

Content:

Hello Team,

A StatefulSet pod goes into the Terminating state when the node it is running on fails (due to a hardware issue or manual testing) and the node controller marks the node status as NotReady.
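
For reference, this is a minimal way to observe that state; the label selector and names below are placeholders, not values from this environment:

    # Show node conditions; the failed node should appear as NotReady
    kubectl get nodes

    # List the StatefulSet pods with the nodes they are scheduled on;
    # the pod on the failed node typically shows STATUS "Terminating"
    kubectl get pods -o wide -l app=<your-statefulset-label>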

I understand this is the expected upstream behaviour as per the documentation - 

Now I want to recreate or recover the pod and start it on another available (Ready) node. How should I handle this? I don't want to force-delete the pod to recreate it (see the sketch below for what that would involve).
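
For context, the force-deletion being avoided here would look like the first command below. The second command is a hedged sketch of one alternative that newer Kubernetes releases provide (the non-graceful node shutdown handling via the out-of-service taint); whether it applies depends on the cluster version, and the node and pod names are placeholders:

    # Force delete (the step the question wants to avoid): removes the pod
    # object immediately without waiting for confirmation from the failed node's kubelet
    kubectl delete pod <pod-name> --grace-period=0 --force

    # Alternative sketch: marking the failed node as out-of-service lets the
    # control plane clean up its pods and detach volumes so the StatefulSet
    # controller can recreate the pod on a Ready node
    kubectl taint nodes <failed-node-name> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute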

I have added the following flag to /etc/kubernetes/manifests/kube-controller-manager.yaml. After that, if the failed node does not respond within a minute, the node controller marks the node state as NotReady.
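
The post does not show which flag was added, so purely as an illustration: a node-detection tuning flag such as --node-monitor-grace-period would be set in the static pod manifest roughly as below. The 60s value is an assumption chosen to match the "within a minute" behaviour described, not the actual value used:

    # /etc/kubernetes/manifests/kube-controller-manager.yaml (excerpt)
    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-controller-manager
      namespace: kube-system
    spec:
      containers:
      - name: kube-controller-manager
        command:
        - kube-controller-manager
        # hypothetical example flag; the original post does not show the actual flag added
        - --node-monitor-grace-period=60s
        # ... remaining flags unchanged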
