In this post, we are going to take a look at k3s, a lightweight Kubernetes distribution that can run on edge devices, IoT hardware, and appliances. Rancher's k3s is great for offline development, prototyping, and testing. You can also use it on a VM as a small, cheap, and reliable Kubernetes cluster for CI/CD.
k3s is a fully compliant Kubernetes distribution packaged as a single small binary, with a lightweight SQLite-backed datastore as the default instead of etcd, minimal OS dependencies, and external dependencies such as containerd, Flannel, and CoreDNS bundled in.
In the next section, we will look at how to install k3s and deploy a sample application onto the cluster.
In the steps below, we will install a single-node k3s cluster; the installation brings up a lean set of components: api-server, controller-manager, scheduler, kubelet, CNI, and kube-proxy.
k3s can be deployed via the installation script located at https://get.k3s.io. Use the following command to install it:
curl -sfL https://get.k3s.io | sh -
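The installation script registers k3s as a service on the host, so you can confirm it came up before continuing; the check below assumes a systemd-based system.
sudo systemctl status k3s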
To avoid colliding with a kubectl that is already installed and to avoid overwriting any existing Kubernetes configuration file, k3s exposes its bundled client as the k3s kubectl subcommand. If you are only using k3s, consider adding an alias, as shown below.
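For example, the following alias (a minimal sketch, assuming a Bash-compatible shell) points kubectl at the bundled client:
alias kubectl='k3s kubectl'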
At this point you have installed k3s. Check whether the newly deployed node is in the Ready state using the following command.
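# the node should report a STATUS of Ready
k3s kubectl get nodes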
At this point, you have a fully functional Kubernetes cluster. The following command will deploy the Nginx web application.
k3s kubectl create deployment nginx --image nginx:alpine
Once Nginx has been deployed, the application can be exposed with the following command.
k3s kubectl expose deployment nginx \
--port 80 \
--target-port 80 \
--type ClusterIP \
--selector=app=nginx \
--name nginx
Now that the service is exposed inside the cluster, we can launch lynx, a terminal-based web browser, on the node to access the Nginx application using the following commands.
export CLUSTER_IP=$(k3s kubectl get svc/nginx -o go-template='{{.spec.clusterIP}}')
echo CLUSTER_IP=$CLUSTER_IP
lynx $CLUSTER_IP:80
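If lynx is not available on the node, curl can be used instead to verify the same endpoint:
curl http://$CLUSTER_IP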
Congrats! You have now deployed an Nginx application to a fully functional Kubernetes cluster using k3s.
In the next section, we will look at how to install k3s on two worker nodes and join them to this cluster.
To install k3s on the worker nodes and add them to the cluster, run the installation script with the K3S_URL and K3S_TOKEN environment variables. The value for K3S_TOKEN can be found at /var/lib/rancher/k3s/server/node-token on the server node.
curl -sfL https://get.k3s.io | K3S_URL=https://<server>:6443 K3S_TOKEN=<token> sh -
Now go back to the server (master) node and check whether the newly added worker node is in the Ready state using the following command.
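# the new worker node should appear with a STATUS of Ready
k3s kubectl get nodes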
Install k3s on the second worker node and add it to the cluster using the same command:
curl -sfL https://get.k3s.io | K3S_URL=https://<server>:6443 K3S_TOKEN=<token> sh -
Go back to the server node once more and check whether both worker nodes are in the Ready state using the following command.
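# both worker nodes should now report a STATUS of Ready
k3s kubectl get nodes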
Congrats! We now have a fully functional k3s cluster with one server node and two worker nodes.