Kubernetes Guides

What is Service Mesh & why do we need it? + Linkerd Tutorial

In the microservices ecosystem, cross-cutting concerns such as service discovery, service-to-service and origin-to-service security, observability, and resiliency are usually deployed via shared assets such as an API gateway or ESB. As a microservices landscape grows in size and complexity, these concerns become harder to understand and manage.

The service mesh technique addresses these challenges by implementing these cross-cutting capabilities as configuration, i.e., as code. A service mesh provides an array of network proxies that run alongside the containers. Each proxy serves as a gateway for every interaction that occurs, both between containers and between servers: it accepts the connection and spreads the load across the mesh. The service mesh thus serves as a dedicated infrastructure layer for handling service-to-service communication.

A service mesh offers consistent discovery, security, tracing, monitoring, and failure handling without the need for a shared asset such as an API gateway or ESB. With a service mesh on your cluster, you can achieve all of the following without making changes to your application code.

  • Automatic load balancing
  • Fine-grained control of traffic behavior with routing rules, retries, failovers, etc.
  • Pluggable policy layer
  • Configuration API supporting access controls, rate limits, and quotas
  • Service discovery
  • Service monitoring with automatic metrics, logs, and traces for all traffic
  • Secure service-to-service communication

In the service mesh model, each microservice has a companion proxy sidecar. The sidecar is attached to the parent application and provides supporting features for it. It also shares the same life cycle as the parent application: it is created and retired alongside the parent.

Image – Side Car Pattern

Key Use Cases for Service Mesh

  • Service discovery: A service mesh provides service-level visibility and telemetry, which helps enterprises with service inventory information and dependency analysis.
  • Operational reliability: Metrics data from the service mesh lets you see how your services are performing, for example, how long they take to respond to requests and how many resources they are consuming. This data is useful for detecting issues and correcting them.
  • Traffic governance: With a service mesh, you can configure the mesh network to apply fine-grained traffic management policies without going back and changing the application. This covers all ingress and egress traffic to and from the mesh.
  • Access control: With a service mesh, you can assign policies so that a service request is granted only based on where the request came from, or succeeds only if the requester passes a health check.
  • Secure service-to-service communications: You can enforce mutual TLS for service-to-service communications for all of your services in the mesh. You can also enforce service-level authentication using either TLS or JSON Web Tokens.

Currently, service mesh implementations are offered by providers such as Linkerd, Istio, and Conduit. A service mesh is ideal for multi-cloud scenarios since it offers a single abstraction layer that hides the specifics of the underlying cloud. Enterprises can set policies with the service mesh and have them enforced across different cloud providers.

In the next section, we will look at how to implement the Linkerd service mesh for the sample application we have used before, i.e., an Nginx deployment.

Linkerd Service Mesh

Linkerd is a service sidecar and service mesh for Kubernetes and other frameworks. The Linkerd sidecar is attached to the parent application and provides supporting features for it. It also shares the same life cycle as the parent application: it is created and retired alongside the parent.

Applications and services often require related functionality, such as monitoring, logging, configuration, and networking services. Linkerd makes running your service easier and safer by giving you runtime debugging, observability, reliability, and security–all without requiring any changes to your code.

Architecture

Linkerd has three basic components: (1) User Interface (both command-line and web-based options are available), (2) data plane, and (3) control plane.

Image – Linkerd Architecture

Key Components

  • User interface: comprises a CLI (linkerd) and a web UI. The CLI runs on your local machine; the web UI is hosted by the control plane.
  • Control plane: a set of services that run on your cluster and drive the behavior of the data plane. It is also responsible for aggregating telemetry data from the data-plane proxies.
  • Data plane: ultralight, transparent proxies deployed in front of each service. These proxies automatically handle all traffic to and from the service.

Next, we will download and install Linkerd and deploy the sample app.

If you’re looking for a quick start on basic Kubernetes concepts, please refer to the earlier posts on Kubernetes and how to create, deploy, and roll out updates to a cluster.

Step #1: Validate Kubernetes Version

Check that you’re running a Kubernetes cluster at version 1.9 or later by using the kubectl version command.
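As a quick sketch (the exact versions shown will depend on your setup):

```shell
# Print the client and server versions; the "Server Version" line confirms
# the cluster is reachable and should report v1.9 or later.
kubectl version
```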

Image – Validate Kubernetes version

Step #2: Install Linkerd CLI

We will be using the CLI to interact with the Linkerd control plane. Download the CLI onto your local machine using the curl command.

You can also download the CLI directly via the Linkerd releases page.
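A minimal sketch of the download, using the install script from the Linkerd site (the install path below is the script's default):

```shell
# Download and run the Linkerd CLI install script
curl -sL https://run.linkerd.io/install | sh

# The CLI lands in ~/.linkerd2/bin; add it to PATH for this session
export PATH=$PATH:$HOME/.linkerd2/bin
```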

Image – Linkerd CLI Installation

Verify that the CLI is installed and running correctly using the linkerd version command.
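For example:

```shell
# Print the CLI version; the server version will show as "unavailable"
# until the control plane is installed in Step #4.
linkerd version
```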

Image – Verify Linkerd Installation

Step #3: Validate the Kubernetes cluster

To ensure that the Linkerd control plane will install correctly, we are going to run a pre-check to validate that everything is configured correctly.
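The pre-check is a single command, run against your current kubectl context:

```shell
# Validate that the cluster is configured correctly for Linkerd
linkerd check --pre
```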

Image – Pre check Kubernetes cluster

Step #4: Install Linkerd on the Kubernetes cluster

We are going to install the Linkerd control plane into its own namespace using the linkerd install command. Post-installation, the Linkerd control plane resources will be added to your cluster and start running immediately.
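As a sketch: linkerd install generates the control-plane manifests, which are then piped to kubectl apply (the linkerd namespace is the default):

```shell
# Generate the control-plane manifests and apply them to the cluster
linkerd install | kubectl apply -f -
```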

Image – Linkerd Installation

Post-installation, run linkerd check to verify that everything is OK.
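For example:

```shell
# Wait for the control plane to become healthy; each check should report [ok]
linkerd check
```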

Image – Linkerd Validation (1)

Post-validation, you should see an [ok] status for all the items.

Image – Linkerd Validation (2)

Step #5: View Control plane components

The control plane is now installed and running. To view the components of the control plane, use the kubectl command.
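For example, assuming the default linkerd namespace:

```shell
# List the control-plane deployments and their pods
kubectl -n linkerd get deploy
kubectl -n linkerd get pods
```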

Image – Control Plane components

You can also view the Linkerd dashboard by running the linkerd dashboard command.

Image – Launch Linkerd Dashboard

To view traffic, use linkerd -n linkerd top deploy/web command.
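Both commands as a sketch (deploy/web here is the dashboard's own web deployment in the linkerd namespace):

```shell
# Launch the dashboard; this runs a local proxy and opens your browser
linkerd dashboard &

# Watch live traffic for the dashboard's web deployment
linkerd -n linkerd top deploy/web
```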

Image – Linkerd Traffic

Congrats! We have successfully installed and configured the Linkerd components.

The next step is to set up a sample application and check its metrics.

Step #6: Deploy the sample app

We are going to use the Nginx web app as a sample; to install it, run the kubectl apply command.
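The post does not show the manifest itself; a minimal equivalent Nginx deployment (names and replica count are illustrative) could be applied from an inline manifest like this:

```shell
# Apply a minimal Nginx deployment from an inline manifest
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF
```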

Image – Deploy Sample application

Now that the application is installed, the next step is to inject Linkerd into the app by piping the output of linkerd inject into the kubectl apply command. Kubernetes will execute a rolling deployment and update each pod with the data plane’s proxies, all without any downtime.
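A sketch of the pipeline, assuming the deployment is named nginx-deployment in the current namespace:

```shell
# Fetch the live manifest, add the Linkerd proxy sidecar via linkerd inject,
# and re-apply; Kubernetes performs a rolling update with no downtime.
kubectl get deploy nginx-deployment -o yaml | linkerd inject - | kubectl apply -f -
```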

Image – Inject Linkerd to application

Notice that we have added Linkerd to an existing service without touching the original YAML.

To view high-level stats about the app, you can run the linkerd -n nginx-deployment stat deploy command.

The Linkerd dashboard provides a high-level view of what is happening with your services in real time. It can be used to view the “golden” metrics (success rate, requests/second, and latency), visualize service dependencies, and understand the health of specific service routes. To view detailed metrics, you can use Grafana, which is part of the Linkerd control plane and provides actionable dashboards for your services out of the box. It is possible to see high-level metrics and drill down into the details, even at the pod level.

Image – Sample Grafana Dashboard – Top Line Metrics

Today, we have learned how to install Linkerd and its components. We have also deployed a sample service and were able to view its traffic and metrics.
