
What happens when one of your Kubernetes nodes fails?

We already know that Kubernetes is the leading orchestration platform for container-based applications: it automates the deployment and scaling of these apps and streamlines maintenance operations. It coordinates a highly available cluster of computers that are connected to work as a single unit. The abstractions in Kubernetes allow you to deploy containerized applications to a cluster without tying them to individual machines. A Kubernetes cluster usually comprises multiple nodes, so what happens when one of those nodes fails? In this post, we look at how Kubernetes handles this scenario.

A quick recap of the architecture

  1. Any Kubernetes cluster (example below) has two types of resources:
    1. The Master, which controls the cluster
    2. The Nodes, which are the workers that run applications

      Image – Kubernetes cluster

  2. The Master coordinates all activities in your cluster, such as scheduling applications, maintaining applications’ desired state, scaling applications, and rolling out new updates.
  3. Each Node can be a VM or a physical computer that serves as a worker machine in a cluster. Each node has a Kubelet, which is an agent for managing the node and communicating with the Kubernetes master. The node should also have tools for handling container operations, such as Docker.
  4. When any applications need to be deployed on Kubernetes, the master issues a command to start the application containers. The master schedules the containers to run on the cluster’s nodes.
  5. The nodes communicate with the master using the Kubernetes API, which the master exposes. End-users can also use the Kubernetes API directly to interact with the cluster.
Image – Kubernetes Architecture (Source: Kubernetes.io)
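As a quick illustration of interacting with the cluster through the API, `kubectl get nodes` lists the Master and worker nodes. The block below is self-contained: it parses a sample of typical output (node names and versions are made up) so the classification can be shown without a live cluster.

```shell
# Sample mimicking `kubectl get nodes` output on a real cluster (illustrative only).
sample='NAME       STATUS   ROLES           AGE   VERSION
master-1   Ready    control-plane   10d   v1.28.2
worker-1   Ready    <none>          10d   v1.28.2'

# Classify each node: the ROLES column is "<none>" for plain worker nodes.
roles=$(echo "$sample" | awk 'NR>1 { r = ($3 == "<none>") ? "worker" : $3; print $1 "=" r }')
echo "$roles"
```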

Master components provide the cluster’s control plane. The Kubernetes control plane consists of the following processes running on your cluster:

  • The Kubernetes Master is a collection of three processes:
    • API Server – exposes the Kubernetes API; it is the front end of the Kubernetes control plane.
    • Controller Manager – runs controllers, which handle routine tasks in the cluster.
    • Scheduler – watches for newly created pods that have no node assigned and selects a node for them to run on.
  • Each individual non-master node runs:
    • Kubelet – the agent that communicates with the Kubernetes Master
    • kube-proxy – a network proxy that reflects Kubernetes networking services on each node
    • A container runtime, such as Docker

Master components make global decisions about the cluster (for example, scheduling applications) and detect and respond to cluster events.
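The Scheduler’s job described above can be sketched as a toy loop: for each pending pod, pick a node that is Ready. This is a deliberately simplified illustration, not real scheduler code (the real scheduler filters and scores nodes on resources, affinity, taints, and more); the node and pod names are made up.

```shell
# Toy sketch of the scheduling decision: bind each pending pod to a Ready node.
nodes='worker-1 Ready
worker-2 NotReady'
pending='web-1 web-2'

bindings=""
for pod in $pending; do
  # Pick the first node whose status column is "Ready"
  target=$(echo "$nodes" | awk '$2 == "Ready" { print $1; exit }')
  bindings="$bindings$pod->$target "
  echo "bind $pod -> $target"
done
```

Note how worker-2 is never selected while it reports NotReady; this same filtering is what keeps new pods off a failed node.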

Now that we have a good understanding of the Kubernetes architecture, let us look at what happens when one of the nodes fails.

What happens when one of your Kubernetes nodes fails?

This section details what happens during a node failure and what to expect during recovery.

  1. About 1 minute after the node fails, kubectl get nodes reports the node in the NotReady state.
  2. After about 5 minutes, the states of all pods running on the NotReady node change to either Unknown or NodeLost. This is governed by the pod eviction timeout setting, which defaults to five minutes.
  3. Regardless of how the pods were deployed (StatefulSet or Deployment), Kubernetes automatically evicts the pods on the failed node and then tries to recreate new ones with the old volumes.
  4. If the node comes back online within 5–6 minutes of the failure, Kubernetes restarts the pods and unmounts and re-mounts the volumes.
  5. If an evicted pod gets stuck in the Terminating state and the attached volumes cannot be released or reused, the newly created pod(s) get stuck in the ContainerCreating state. There are two options at this point:
    1. Forcefully delete the stuck pods manually, or
    2. Wait: Kubernetes takes roughly another 6 minutes to delete the VolumeAttachment objects associated with the pod, then finally detaches the volume from the lost node so the new pod(s) can use it.
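The steps above can be checked from the command line. The block below parses a sample of `kubectl get pods -o wide` output as it might look mid-failure (pod and node names are made up); the real-cluster commands for the two recovery options are shown as comments.

```shell
# Sample mimicking `kubectl get pods -o wide` during a node failure (illustrative only).
pods='NAME       STATUS              NODE
web-1      Terminating         worker-2
web-1-new  ContainerCreating   worker-1'

# Find pods stuck in Terminating on the lost node (candidates for force deletion).
stuck=$(echo "$pods" | awk 'NR>1 && $2 == "Terminating" { print $1 }')
echo "stuck: $stuck"

# On a real cluster, option 1 (manual force delete) would be:
#   kubectl delete pod web-1 --grace-period=0 --force
# The VolumeAttachment objects from option 2 can be inspected with:
#   kubectl get volumeattachments
```

Force deletion frees the pod name and its volumes immediately, at the cost of losing any graceful shutdown; prefer waiting for the automatic VolumeAttachment cleanup when the workload tolerates the extra minutes.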

In summary, if the failed node recovers later, Kubernetes restarts the terminating pods, detaches the volumes, waits for the old VolumeAttachment cleanup, and then reuses (re-attaches and re-mounts) the volumes. Typically these steps take about 1–7 minutes.
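The five-minute eviction delay comes from the default not-ready/unreachable tolerations that Kubernetes adds to pods (tolerationSeconds: 300). A sketch of shortening it to 60 seconds for a specific workload is below; the deployment name is hypothetical, and cutting this value too low risks needless evictions during brief network blips.

```shell
# Write a patch overriding the default eviction tolerations (sketch, not a
# definitive recommendation; 60s is an example value).
cat <<'EOF' > tolerations-patch.yaml
spec:
  template:
    spec:
      tolerations:
      - key: node.kubernetes.io/not-ready
        operator: Exists
        effect: NoExecute
        tolerationSeconds: 60
      - key: node.kubernetes.io/unreachable
        operator: Exists
        effect: NoExecute
        tolerationSeconds: 60
EOF
# On a real cluster (hypothetical deployment name):
#   kubectl patch deployment my-app --patch-file tolerations-patch.yaml
```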

Published by
Karthik

Recent Posts

Developing a Strong Disaster Recovery Plan for Your Business

Operating a business often entails balancing tight schedules, evolving market dynamics, and shifting consumer requirements.…

5 days ago

How to Secure Your WordPress Hosting by Upgrading Your Login URL

Of course, every site has different needs. In the end, however, there is one aspect…

7 days ago

Social Media Marketing: A Key to Business Success with Easy Digital Life

In today's digital-first world, businesses must adopt effective strategies to stay competitive. Social media marketing…

1 week ago

Best 7 AI Tools Every UI/UX Designer Should Know About

62% of UX designers now use AI to enhance their workflows. Artificial intelligence (AI) rapidly…

2 weeks ago

How AI Enhances Photoshop Workflow: A Beginner’s Guide

The integration of artificial intelligence into graphic design through tools like Adobe Photoshop can save…

3 weeks ago

The Rise Of Crypto Trading Bots: A New Era In Digital Trading

The cryptocurrency trading world has grown significantly in recent years, with automation playing a key…

1 month ago

This website uses cookies.