In this post, we are going to take a look at k3s, a lightweight Kubernetes distribution that can run on edge devices, IoT hardware, and appliances. Rancher's k3s is great for offline development, prototyping, and testing purposes. You can also use it on a VM as a small, cheap, reliable Kubernetes cluster for CI/CD.
k3s is a fully compliant Kubernetes distribution, repackaged for a small footprint: it ships as a single binary, uses sqlite3 as the default storage backend, and keeps external dependencies to a minimum.
In the next section, we will look at how to install k3s and deploy a sample application onto the cluster.
In the steps below, we will install a single-node k3s cluster, which brings up a limited set of components: the api-server, controller-manager, scheduler, kubelet, CNI, and kube-proxy.
k3s can be deployed via the installation script located at https://get.k3s.io. Use the following command to install it:
curl -sfL https://get.k3s.io | sh -
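On systemd-based distributions, the installer registers k3s as a service. You can confirm that the server is running with:
sudo systemctl status k3s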
To avoid colliding with an already installed kubectl, and to avoid overwriting any existing Kubernetes configuration file, k3s ships its own embedded kubectl, invoked as k3s kubectl. If you are only using k3s, consider adding an alias.
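A minimal alias, assuming a bash or zsh shell, could look like this:
alias kubectl='k3s kubectl'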
At this point you have installed k3s. Check whether the newly deployed node is in the Ready state using the following command.
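k3s kubectl get nodes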
At this point, you have a fully functional Kubernetes cluster. The following command will deploy the Nginx web application.
k3s kubectl create deployment nginx --image nginx:alpine
Once Nginx has been deployed, the application can be exposed with the following command.
k3s kubectl expose deployment nginx \
--port 80 \
--target-port 80 \
--type ClusterIP \
--selector=app=nginx \
--name nginx
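You can verify the service and the ClusterIP assigned to it with:
k3s kubectl get svc nginx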
Now that the service is exposed, we can launch lynx, a terminal-based web browser, to access the Nginx application using the following commands.
export CLUSTER_IP=$(k3s kubectl get svc/nginx -o go-template='{{.spec.clusterIP}}')
echo CLUSTER_IP=$CLUSTER_IP
lynx $CLUSTER_IP:80
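If lynx is not installed, curl works just as well:
curl -s http://$CLUSTER_IP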
Congrats! You have now deployed an Nginx application to a fully functional Kubernetes cluster using k3s.
In the next section, we will look at how to install two worker nodes and add them to this k3s cluster.
To install k3s on the worker nodes and add them to the cluster, run the installation script with the K3S_URL and K3S_TOKEN environment variables.
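The value for K3S_TOKEN is generated during the server installation and can be read on the master node with:
sudo cat /var/lib/rancher/k3s/server/node-token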
curl -sfL https://get.k3s.io | K3S_URL=https://<server>:6443 K3S_TOKEN=<token> sh -
Now go back to the master node and check whether the newly added worker node is in the Ready state using the following command.
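k3s kubectl get nodes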
Install the next worker node and add it to the cluster using the following command:
curl -sfL https://get.k3s.io | K3S_URL=https://<server>:6443 K3S_TOKEN=<token> sh -
Go back to the master node once more and confirm that both worker nodes are now in the Ready state, using the same command as before.
Congrats! We now have a fully functional multi-node Kubernetes cluster, with one server and two worker nodes, running on k3s.
Like this post? Don’t forget to share it!