What happens when one of your Kubernetes nodes fails?
We already know that Kubernetes is the leading orchestration platform for container-based applications: it automates the deployment and scaling of these apps and streamlines maintenance operations. It coordinates a highly available cluster of computers that are connected to work as a single unit, and its abstractions allow you to deploy containerized applications to the cluster without tying them to individual machines. A Kubernetes cluster usually comprises multiple nodes, so what happens when one of those nodes fails? In this post, we will look at how Kubernetes handles this scenario.
A quick recap on the Architecture
Any Kubernetes cluster (example below) has two types of resources:
The Master, which controls the cluster
The Nodes, the worker machines that run applications
Image – Kubernetes cluster
The Master coordinates all activities in your cluster, such as scheduling applications, maintaining applications’ desired state, scaling applications, and rolling out new updates.
Each Node can be a VM or a physical computer that serves as a worker machine in a cluster. Each node has a Kubelet, which is an agent for managing the node and communicating with the Kubernetes master. The node should also have tools for handling container operations, such as Docker.
When an application needs to be deployed on Kubernetes, the master issues commands to start the application containers and schedules them to run on the cluster's nodes.
The nodes communicate with the master using the Kubernetes API, which the master exposes. End-users can also use the Kubernetes API directly to interact with the cluster.
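Since kubectl is itself a client of that same API, you can see the raw API data behind any kubectl command. A small sketch, assuming kubectl is configured against a cluster (the API server address and token in the comment are placeholders):

```shell
# kubectl is a thin client over the Kubernetes API; the node list that
# "kubectl get nodes" renders comes from this raw endpoint:
kubectl get --raw /api/v1/nodes

# The same endpoint can be queried directly over HTTPS
# (server address and $TOKEN are hypothetical here):
# curl -k -H "Authorization: Bearer $TOKEN" https://<apiserver>:6443/api/v1/nodes
```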
Master components make global decisions about the cluster (for example, scheduling applications) and detect and respond to cluster events.
Now that we have a good understanding of the Kubernetes architecture, let us look at what happens when one of the nodes fails.
What happens when one of your Kubernetes nodes fails?
This section details what happens during a node failure and what is expected during the recovery.
About one minute after a node fails, kubectl get nodes will report the node in the NotReady state.
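Assuming kubectl access to the cluster, spotting the failed node might look like the following sketch (node names and versions are illustrative, not from a real cluster):

```shell
# List nodes; a failed node shows STATUS "NotReady" once the node
# controller's grace period for missed heartbeats has elapsed.
kubectl get nodes

# Illustrative output:
# NAME       STATUS     ROLES    AGE   VERSION
# worker-1   Ready      <none>   12d   v1.18.2
# worker-2   NotReady   <none>   12d   v1.18.2

# Describe the node to see the conditions and heartbeat timestamps
# that explain why it was marked NotReady:
kubectl describe node worker-2
```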
In about 5 minutes, the states of all the pods running on the NotReady node will change to either Unknown or NodeLost. This is governed by the pod eviction timeout setting, whose default duration is five minutes.
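This five-minute window can also be tuned per pod via taint-based eviction tolerations, so latency-sensitive workloads can be evicted sooner. A minimal sketch of such a pod spec, with a hypothetical pod name and illustrative timings:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod        # hypothetical name
spec:
  containers:
  - name: app
    image: nginx           # illustrative image
  tolerations:
  # Evict this pod 60 seconds (instead of the default 300) after the
  # node is tainted as not-ready or unreachable.
  - key: node.kubernetes.io/not-ready
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 60
  - key: node.kubernetes.io/unreachable
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 60
```

Without an explicit toleration, Kubernetes adds these two tolerations automatically with tolerationSeconds of 300, which is where the default five-minute eviction delay comes from.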
Irrespective of the workload type (StatefulSet or Deployment), Kubernetes will automatically evict the pods on the failed node and then try to recreate new ones with the old volumes.
If the node comes back online within 5 to 6 minutes of the failure, Kubernetes will restart the pods and unmount and re-mount the volumes.
If an evicted pod gets stuck in the Terminating state and its attached volumes cannot be released or reused, the newly created pod(s) will get stuck in the ContainerCreating state. There are two options now:
Forcefully delete the stuck pods manually, or
Wait roughly another 6 minutes for Kubernetes to delete the VolumeAttachment objects associated with the pod, detach the volume from the lost node, and allow it to be used by the new pod(s).
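The manual route above can be sketched with standard kubectl commands (the pod name is a placeholder you would substitute):

```shell
# Find pods stuck in Terminating or ContainerCreating
kubectl get pods -o wide

# Option 1: forcefully delete the stuck pod. This skips graceful
# shutdown and removes the pod object immediately, letting the
# controller create a replacement.
kubectl delete pod <pod-name> --grace-period=0 --force

# Option 2: wait it out; meanwhile you can watch the stale
# VolumeAttachment objects that Kubernetes will eventually clean up
# before re-attaching the volume to the new pod.
kubectl get volumeattachments
```

Force deletion should be a last resort: if the node is merely partitioned rather than dead, the old container may still be running and writing to the volume.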
In summary, if the failed node recovers later, Kubernetes will restart the terminating pods, detach the volumes, wait for the old VolumeAttachment cleanup, and then reuse (re-attach and re-mount) the volumes. Typically, these steps take about 1 to 7 minutes.
Allo! My name is Karthik, and I am an experienced IT professional. Upnxtblog covers key technology trends that impact the technology industry, including cloud computing, blockchain, machine learning and AI, the best mobile apps, and the best tools and open-source libraries. I hope you will love it, and you can be sure that each post is worth your time.