Kubernetes Logging with Grafana Loki & Promtail in under 10 minutes

Akriotis Kyriakos
8 min read · Feb 20, 2023


Consolidate all your Kubernetes logs in an intuitive Grafana dashboard.

What is the goal?

After completing this lab, we will have consolidated all the logs generated in our Kubernetes cluster in a tidy, neat, real-time dashboard in Grafana.

What are we going to need?

We are going to need a:

  1. Kubernetes cluster.
  2. Grafana installation.
  3. Grafana Loki installation.
  4. Promtail agent on every node of the Kubernetes cluster.

If you don’t have a Kubernetes cluster already in place, you can get started quickly with a containerized variant based either on K3D/K3S or on KinD. If, on the other hand, you want to invest in a full-blown environment based on virtual machines in the cloud or on premises, you can find a very simple solution in the following article:
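If you go the quick containerized route with KinD, a minimal cluster definition is enough for this lab. A hedged sketch only; the node layout of one control plane and two workers is an arbitrary choice:

# kind-cluster.yaml (illustrative sketch)
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker

Create the cluster with kind create cluster --config kind-cluster.yaml and you are good to go.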

What is Grafana?

Grafana is an analytics and interactive visualization platform. It provides a rich variety of charts, graphs, and alerts, and connects to a plethora of supported data sources such as Prometheus, time-series databases, or the well-known RDBMSs. It allows you to query and visualize your metrics and create alerts on them, regardless of where they are stored.

You have to think of it as the equivalent of Kibana in the ELK stack.

The installation is fairly simple and we are going to perform it via Helm. If you don’t already have Helm installed on your workstation, you can install it either with brew if you are working on macOS:

brew install helm

or with the following bash commands if you are working on Debian/Ubuntu Linux:

curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null

sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list

sudo apt-get update
sudo apt-get install helm --yes

With that behind us, we can now install the Helm chart for Grafana:

helm repo add grafana https://grafana.github.io/helm-charts
helm repo update

helm install grafana grafana/grafana --namespace grafana --create-namespace

The service/grafana service is of type ClusterIP in a vanilla installation. In my case, I am already using MetalLB as a network load balancer in my cluster, and I have patched the service to type LoadBalancer. Feel free to ignore this; we are going to port-forward this service later, but if you do want a LoadBalancer, a small values override like the sketch below does the trick.
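A hedged sketch of that override for the Grafana chart (assuming MetalLB, or your cloud provider, hands out the external IP):

# grafana-overrides.yaml (optional sketch; skip it if you are happy with port-forwarding)
service:
  type: LoadBalancer    # MetalLB or your cloud provider assigns an external IP

Apply it by re-running the installation command above with an extra --values grafana-overrides.yaml flag.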

What is Grafana Loki & Promtail?

Grafana Loki is a log aggregation system; more specifically, as stated on their website, it "is a horizontally scalable, highly available, multi-tenant log aggregation system inspired by Prometheus. It is designed to be very cost effective and easy to operate. It does not index the contents of the logs, but rather a set of labels for each log stream." It is a fairly new open source project that was started in 2018 at Grafana Labs.

Loki uses Promtail to aggregate logs. Promtail is a log collector agent that collects, (re)labels, and ships logs to Loki. It is built specifically for Loki; an instance of Promtail will run on each Kubernetes node. It uses the exact same service discovery as Prometheus and supports similar methods for labeling, transforming, and filtering logs before their ingestion to Loki.

Loki doesn’t index the actual text of the logs. The log entries are grouped into streams and then indexed with labels. In that way, Loki not only reduces the overall costs but additionally reduces the time between ingestion of log entries and their availability in queries.

It comes with its own query language, LogQL, which can be used from its own command-line interface or directly from Grafana. Last but not least, it can tightly integrate with Prometheus Alertmanager, though the last two are out of the scope of this article.

You have to think of it as the equivalent (not 1–1, but in a bigger context) of Elasticsearch in the ELK stack.

Loki consists of multiple components/microservices (distributor, ingester, querier, query frontend, compactor, gateway, and so on) that can be deployed in 3 different modes:

  1. Monolithic mode, where all of Loki’s microservice components run inside a single process as a single binary.
  2. Simple Scalable mode, if you want to separate the read and write paths.
  3. Microservices mode, where every Loki component runs as a distinct process.

The scalable installation requires an S3-compatible object store such as AWS S3, Google Cloud Storage, Open Telekom Cloud OBS, or a self-hosted store such as MinIO. In the monolithic deployment mode, only the filesystem can be used for storage.

In this lab, we are going to use the microservices deployment mode with Open Telekom Cloud OBS as Loki’s storage. The installation (and essentially the configuration) of Loki and Promtail is performed by two distinct and independent charts.

First, let’s download the default chart values for every chart and make the necessary changes. For Loki (given that you also chose to go with the loki-distributed chart):

helm show values grafana/loki-distributed > loki-distributed-overrides.yaml

If you are planning to go with an S3-compatible storage and not with the filesystem, make the following changes to your chart values (a sketch of the resulting overrides follows the list):

  1. Change the object and shared store targets to s3.
  2. Add the configuration of your storage, pointing to the designated S3 bucket. The format of the S3 endpoint is s3://{AK}:{SK}@{endpoint}/{region}/{bucket}.
  3. Enable the compactor.
  4. Configure the compactor.
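Pulled together, the relevant pieces of loki-distributed-overrides.yaml could look roughly like the sketch below. Treat it as an illustration only: the exact key paths (for instance loki.structuredConfig) vary between chart versions, and the schema date, directories, and of course the {AK}/{SK}/{endpoint}/{region}/{bucket} placeholders are assumptions you need to adapt.

# loki-distributed-overrides.yaml (illustrative sketch, not a drop-in file)
compactor:
  enabled: true                       # chart-level toggle that deploys the compactor component
loki:
  structuredConfig:                   # assumption: your chart version supports structuredConfig overrides
    storage_config:
      boltdb_shipper:
        shared_store: s3              # shared store target set to s3
      aws:                            # the aws block works for any S3-compatible store
        s3: s3://{AK}:{SK}@{endpoint}/{region}/{bucket}
        s3forcepathstyle: true
    schema_config:
      configs:
        - from: "2023-01-01"          # arbitrary example date
          store: boltdb-shipper
          object_store: s3            # object store target set to s3
          schema: v12
          index:
            prefix: loki_index_
            period: 24h
    compactor:
      working_directory: /var/loki/compactor
      shared_store: s3                # the compactor writes to the same object store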

Loki values are now set; let’s install it and move on to Promtail:

helm upgrade --install --values loki-distributed-overrides.yaml loki grafana/loki-distributed -n grafana-loki --create-namespace
helm show values grafana/promtail > promtail-overrides.yaml

Get all the components that we installed from the Loki chart:

kubectl get all -n grafana-loki

We are going to need the endpoint of Loki’s gateway; it is the endpoint Promtail will use to push logs to Loki. In our case that would be loki-loki-distributed-gateway.grafana-loki.svc.cluster.local, so let’s add it to the Promtail chart values, as sketched below:
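The change itself is essentially a one-liner in promtail-overrides.yaml. A hedged sketch (depending on the chart version, the client list may live under a different key, e.g. an older lokiAddress-style field):

# promtail-overrides.yaml (illustrative sketch)
config:
  clients:
    - url: http://loki-loki-distributed-gateway.grafana-loki.svc.cluster.local/loki/api/v1/push
      # /loki/api/v1/push is Loki's push endpoint, reached here through the gateway service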

We are now ready to deploy Promtail. Run the command and wait a bit until all pods reach the Ready state.

helm upgrade --install --values promtail-overrides.yaml promtail grafana/promtail -n grafana-loki

Configure Grafana Data Sources & Dashboard

All the deployments are now completed. It is time to set up our Grafana. As we saw before, Grafana has a simple service; let’s port-forward it and access Grafana directly at http://localhost:8080/:

kubectl port-forward service/grafana 8080:80 -n grafana

Of course, you are free to expose this service in a different way, either by assigning it an external IP through a load balancer or as an ingress route via the Ingress solution of your choice; a minimal Ingress manifest is sketched below.
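A hedged sketch of such an Ingress; the hostname and ingress class are assumptions you need to replace with your own.

# grafana-ingress.yaml (illustrative sketch)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana
  namespace: grafana
spec:
  ingressClassName: nginx             # assumption: an NGINX ingress controller is installed
  rules:
    - host: grafana.example.com       # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: grafana         # the service created by the Grafana chart
                port:
                  number: 80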

You are going to need credentials in order to log in. The default user is admin, but the password will need a bit of work to be retrieved. Get all the Secrets in the grafana namespace:

kubectl get secrets -n grafana

This is where our password lives. Let’s extract it and decode it:

kubectl get secret grafana -n grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

We are now in. Next, we need to add Grafana Loki as a data source:

As the URL, use the endpoint of the Grafana Loki gateway service: http://loki-loki-distributed-gateway.grafana-loki.svc.cluster.local; test, save and exit. Alternatively, you can provision the data source from the chart values, as sketched below.
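If you prefer a declarative setup over clicking through the UI, the Grafana chart accepts data sources in its values, following Grafana’s data source provisioning format. A hedged sketch (the key layout may differ slightly between chart versions):

# grafana-overrides.yaml (illustrative sketch; can be combined with the earlier service override)
datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
      - name: Loki
        type: loki
        access: proxy
        url: http://loki-loki-distributed-gateway.grafana-loki.svc.cluster.local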

Last step: we now need to add a dashboard in order to finally see our logs. At the very beginning, you can stand on the shoulders of existing dashboards and then tailor them to your needs. A good stepping stone is:

Copy the dashboard template ID from its web page; we are going to need it right away.

Then, in your Grafana environment, choose to import a new dashboard:

Paste the template ID we just acquired and load the dashboard:

Now all the puzzle pieces should come together, and you should be able to see logs from your Kubernetes workloads directly in your Grafana interface, in an almost real-time experience:

Summary

Admittedly, when it comes to Kubernetes monitoring and observability this is only scratching the surface; nevertheless, it is a robust first step that you can complete with minimal effort and in less than 10 minutes.

However, in most cases, workloads neither start with nor are limited to Kubernetes pods and containers. Many solutions still depend on various constellations of virtual machines, which inevitably generate a huge amount of logs as well, so a unified logging mechanism based on the same tools is definitely needed. You can check the article below to learn how to use exactly the same tooling (Grafana/Loki/Promtail) to aggregate all your Linux server logs in Loki and easily navigate them with a Grafana dashboard:

If you found this information useful, don’t forget to 👏 under this article and follow my account for more content on Kubernetes. Stay tuned…
