
Tutorial: Prometheus, the open-source systems monitoring and alerting toolkit

Prometheus is an open-source systems monitoring and alerting toolkit originally built at SoundCloud. It primarily collects metrics over a pull-based HTTP model, and because it also supports alerting, it is a good fit for your operational toolset. Prometheus works well for recording any purely numeric time series, and it suits both machine-centric monitoring and monitoring of highly dynamic service-oriented architectures.

In a world of microservices, its support for multi-dimensional data collection and querying is a particular strength. Grafana has become the dashboard visualization tool of choice for Prometheus users, and it supports Prometheus as a data source out of the box.

This quickstart assumes a basic understanding of Docker concepts; please refer to the earlier posts on Docker for how to install it and containerize applications.

In this post, we are going to learn about Prometheus concepts, configure it, and view metrics.

Key Features

Some of the key features of Prometheus are:

  • Multi-dimensional data model with time series data identified by metric name and key/value pairs
  • Flexible query language (PromQL) to leverage this dimensionality (see the example query after this list)
  • No reliance on distributed storage; single server nodes are autonomous
  • Time series collection happens via a pull model over HTTP
  • Pushing time series is supported via an intermediary gateway
  • Targets are discovered via service discovery or static configuration
  • Multiple modes of graphing and dashboarding support (e.g., Grafana)
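
As a quick taste of the data model and query language, every time series is identified by a metric name plus key/value labels, and queries can filter on those labels. Here is a minimal sketch, assuming a Prometheus server already running on localhost:9090 (as set up later in this post), using the built-in up metric:

# Query the built-in 'up' metric, filtered to series whose 'job' label is 'prometheus'.
# (Assumes a Prometheus server listening on localhost:9090, as configured later in this post.)
curl -s 'http://localhost:9090/api/v1/query' --data-urlencode 'query=up{job="prometheus"}'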

A Bit About the Architecture

Prometheus is designed for reliability. Each Prometheus server is standalone, not depending on network storage or other remote services. You can rely on it when other parts of your infrastructure are broken, and you do not need to set up extensive infrastructure to use it.

The Prometheus ecosystem consists of multiple components, many of which are optional:

Image – Prometheus architecture / prometheus.io

Prometheus scrapes metrics from instrumented jobs, either directly or via an intermediary push gateway for short-lived jobs. It stores all scraped samples locally and runs rules over this data to either aggregate and record new time series from existing data or generate alerts. Grafana or other API consumers can be used to visualize the collected data.
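
To make the "rules" part concrete, here is a minimal sketch of a recording rule file. The file name rules.yml and the rule name are illustrative assumptions rather than part of the download; such a file would be referenced from prometheus.yml under rule_files.

# Illustrative sketch only: a recording rule that pre-computes per-instance idle CPU
# rate from the Node Exporter's node_cpu metric (file and rule names are assumptions).
cat > rules.yml <<'EOF'
groups:
  - name: example
    rules:
      - record: instance:node_cpu_idle:rate5m
        expr: avg by (instance) (rate(node_cpu{mode="idle"}[5m]))
EOF
# It would be referenced from prometheus.yml via:
#   rule_files:
#     - "rules.yml"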

Step#1: Download Prometheus

Download the latest release of Prometheus for your platform, then extract it:

tar xvfz prometheus-*.tar.gz
cd prometheus-*

Post extraction, run the binary and see help on its options by passing the --help flag.

./prometheus --help
usage: prometheus [<flags>]

The Prometheus monitoring server

. . .

Prometheus configuration is YAML. The Prometheus download comes with a sample configuration in a file called prometheus.yml. We are going to customize it for our needs.

The Prometheus server requires a configuration file that defines which endpoints to scrape, how frequently the metrics should be collected, and the servers and ports Prometheus should scrape data from. In the example below, we have defined two targets running on different ports.

global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090', 'localhost:9100']
        labels:
          group: 'prometheus'

 

Port 9090 is Prometheus itself: it exposes information about its own internal metrics and performance, which allows Prometheus to monitor itself. Port 9100 is the Node Exporter process, which exposes information about the node, such as disk space, memory, and CPU usage. Prometheus expects metrics to be available on targets at the /metrics path.

The Prometheus dashboard will be available at http://localhost:9090, and the server's own metrics at http://localhost:9090/metrics.

For a complete specification of configuration options, see the configuration documentation.
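
Before starting the server, you can sanity-check the configuration with promtool, which ships in the same tarball (on some older releases the subcommand is check-config instead):

# Validate the configuration file before starting Prometheus.
./promtool check config prometheus.yml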

Step#2: Start Prometheus

For this example, we are going to use the pre-built Prometheus Docker container (prom/prometheus on Docker Hub). Prometheus uses the configuration to scrape the targets, then collects and stores the metrics before making them available via an API that allows dashboards, graphing, and alerting.
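
If the image is not already present locally, you can pull it from Docker Hub first; this is the same prom/prometheus image used in the run command below:

# Pull the official Prometheus image from Docker Hub.
docker pull prom/prometheus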

To launch the container, pass prometheus.yml to it as a bind-mounted configuration file. Prometheus stores the data it creates inside the container under /prometheus; to keep that data when the container is replaced, you also need to mount a host directory or volume at that path (see the volume-mount variant after the dashboard note below).

docker run -d --net=host \
  -v /root/prometheus.yml:/etc/prometheus/prometheus.yml \
  --name prometheus-server \
  prom/prometheus

Image – Launch Prometheus Docker Container

You can view the dashboard on port 9090, i.e., http://localhost:9090.
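
Note that the command above only mounts the configuration file, so the time series data lives inside the container at /prometheus. Here is a sketch of an alternative launch, assuming a Docker named volume is acceptable in your setup, that lets the data survive container recreation:

# Sketch: same launch as above, but with a named volume (the name 'prometheus-data' is
# an arbitrary choice) mounted at /prometheus so the TSDB survives container recreation.
docker run -d --net=host \
  -v /root/prometheus.yml:/etc/prometheus/prometheus.yml \
  -v prometheus-data:/prometheus \
  --name prometheus-server \
  prom/prometheus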

Now that we have launched the Prometheus container, the next step is to configure the Node Exporter on the node where we want to collect metrics.

Step#3: Configure Prometheus Node Exporter

For this example, we are going to launch the pre-compiled Node Exporter Docker container.

If you would rather run it locally instead of in Docker, here are the steps:

Download the latest release of the Node Exporter of Prometheus for your platform, then extract it:

tar xvfz node_exporter-*.tar.gz
cd node_exporter-*

You can start the Node Exporter as shown below:

./node_exporter
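
Once it is running, you can confirm that the Node Exporter is serving metrics on its default port 9100:

# Verify the exporter is up and exposing metrics on its default port.
curl -s http://localhost:9100/metrics | head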

In the Docker example below, you have to mount the host's /proc and /sys directories so that the container has access to the information it needs to report on.

docker run -d -p 9100:9100 \
  -v "/proc:/host/proc" \
  -v "/sys:/host/sys" \
  -v "/:/rootfs" \
  --net="host" \
  --name=prometheus \
  quay.io/prometheus/node-exporter:v0.13.0 \
  -collector.procfs /host/proc \
  -collector.sysfs /host/sys \
  -collector.filesystem.ignored-mount-points "^/(sys|proc|dev|host|etc)($|/)"

Image – Configure Prometheus Node Exporter

As you can see, the Node Exporter on this node is listening on port 9100; you can view its metrics locally at http://localhost:9100/metrics.

Congrats! We have configured the Prometheus container and the Node Exporter on one of the nodes.

Step#4: View Metrics

Prometheus will scrape and store the data based on the intervals in the configuration. Go to the dashboard and verify that Prometheus now has information about the time series that this endpoint exposes on the node.
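
If you prefer the command line to the UI, the same target health information is available from the HTTP API (assuming Prometheus is reachable on localhost:9090):

# List the scrape targets Prometheus knows about, including their health ("up"/"down").
curl -s http://localhost:9090/api/v1/targets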

Use the dropdown next to the “Execute” button to see a list of metrics this server is collecting. In the list, you’ll see a number of metrics prefixed with node_ that have been collected by the Node Exporter. For example, you can see the node’s CPU usage via the node_cpu metric.
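
As a slightly more useful sketch, assuming the node_cpu metric exposed by Node Exporter v0.13.0 used in this post, you can turn the raw counters into a per-instance CPU busy percentage; the same expression can also be pasted into the dashboard’s query box:

# Per-instance CPU busy percentage over the last 5 minutes, derived from the idle counter.
curl -s 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=100 - avg by (instance) (irate(node_cpu{mode="idle"}[5m])) * 100'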

Image – Prometheus Dashboard

Step#5: Snapshot of current data

Prometheus stores all time series data in a local time series database with a custom on-disk format. There are scenarios where you want to create a snapshot of all current data. In this section, we will walk through the steps to do it.

  • Make sure Prometheus was started with the --web.enable-admin-api flag
  • Make an HTTP POST request to create the snapshot: curl -XPOST http://localhost:9090/api/v1/admin/tsdb/snapshot
  • This saves the snapshot in snapshots/<datetime>-<rand> under the TSDB’s data directory and returns the directory name in the response.
  • Move the snapshot to the /tmp folder, then create a new Prometheus container with the command below, pointing it at the snapshot directory

docker run --rm -p 9090:9090 -uroot \
  -v /tmp/snapshots/20180611T130634Z-69ffcdcc60b89e54/:/prometheus \
  prom/prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --storage.tsdb.path=/prometheus

In this post, we got introduced to Prometheus, installed it, and configured it to monitor our first resources.

