In the last post, we looked at how to create a local cluster, deploy an app, and check the status of the deployments. Continuing the series, in this post we will look at how to scale and perform updates to applications running on a Kubernetes cluster. Earlier posts of this series were featured in KubeWeekly.
Quick Snapshot
If you remember, in the last post we deployed our Nginx application using the run command. So let us check the list of application deployments using the get deployments command.
The run command created only one Pod for running our application. In a real-life scenario, when traffic increases, we will need to scale the application to keep up with user demand. Running multiple instances of an application requires a way to distribute the traffic to all of them. Services have an integrated load-balancer that distributes network traffic to all Pods of an exposed Deployment. Services continuously monitor the running Pods using endpoints, to ensure traffic is sent only to available Pods.
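You can see the endpoints a Service is tracking with the get endpoints command (assuming the Service is named my-nginx, as in the cleanup step at the end of this post):

kubectl get endpoints my-nginx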
To list your deployments use the get deployments command:
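In our case that is simply:

kubectl get deployments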
We should have 1 Pod. If not, run the command again. This shows:
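The output should look something like this (the exact columns and age depend on your kubectl version and setup; my-nginx is the Deployment name we use throughout this post):

NAME       READY   UP-TO-DATE   AVAILABLE   AGE
my-nginx   1/1     1            1           3m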
Now let’s scale the Deployment to 4 replicas. We are going to use the kubectl scale command, followed by the deployment type, name, and desired number of instances:
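Assuming our Deployment is named my-nginx, the command would be:

kubectl scale deployments/my-nginx --replicas=4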
The change was applied, and we have 4 instances of the application available. Next, let’s check if the number of Pods changed:
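The -o wide flag also shows each Pod's IP address and Node:

kubectl get pods -o wide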
There should now be 4 Pods running in the cluster.
There are 4 Pods now, with different IP addresses. The change was registered in the Deployment events log. To check that, use the describe command:
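Again assuming the my-nginx Deployment:

kubectl describe deployments/my-nginx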
You can also see in the output of this command that there are 4 replicas now.
To scale the Deployment down to 2 replicas, run the scale command again:
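kubectl scale deployments/my-nginx --replicas=2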
If you have multiple instances of an application running, there could be scenarios where old instances clash with new ones, and shutting down the cluster to perform updates would mean downtime, which is not acceptable. Users expect applications to be available all the time, and developers are expected to deploy new versions of them several times a day.
In Kubernetes, this is done with rolling updates. Rolling updates allow a Deployment update to take place with zero downtime by incrementally replacing Pod instances with new ones. The new Pods are scheduled on Nodes with available resources.
Rolling updates allow the following actions:
- Promote an application from one environment to another (via container image updates)
- Roll back to previous versions
- Continuous Integration and Continuous Delivery of applications with zero downtime
To view the current image version of the app, run a describe command against the Pods (look at the Image field):
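Running describe against all Pods is enough here:

kubectl describe pods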
To update the image of the application to the new version, use the set image command, followed by the deployment name and the new image version:
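As an illustration, assuming the container inside the my-nginx Deployment is also named nginx (your container name may differ; check it in the describe output above) and we want to move to a newer Nginx image tag:

kubectl set image deployments/my-nginx nginx=nginx:1.17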
The command notified the Deployment to use a different image for the app and initiated a rolling update. Check the status of the new Pods, and view the old ones terminating, with the get pods command:
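kubectl get pods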
Suppose you want to roll back the update we just made. We'll use the rollout undo command:
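kubectl rollout undo deployments/my-nginx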
The rollout undo command reverted the Deployment to the previous known state. Updates are versioned, and you can revert to any previously known state of a Deployment. List the Pods again:
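kubectl get pods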
After the rollout succeeds, you may want to get the Deployment.
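You can confirm the rollout has finished and then get the Deployment:

kubectl rollout status deployments/my-nginx
kubectl get deployments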
Finally, you can clean up the resources you created in your cluster:
kubectl delete service my-nginx
kubectl delete deployment my-nginx
Like this post? Don’t forget to share it!