Centralized Linux log monitoring with ELK and rsyslog in Open Telekom Cloud

Akriotis Kyriakos
9 min read · Apr 27, 2022
Open Telekom Cloud “offers Infrastructure as a Service (IaaS) from the public cloud. Companies of all sizes and in all industries can obtain computing resources flexibly at the push of a button and benefit from all the advantages of a public cloud environment,” according to the website of T-Systems International GmbH.

What are we trying to achieve

On a daily basis, sysadmins and application developers face the same issue: servers and services go down, applications raise exceptions, and every single one of them logs its traces primarily on the box where it is hosted if no other cloud or on-premises log consolidation and aggregation pipeline is in place. Checking every single log, besides being an asynchronous exercise, requires massive manual effort to locate the traces that are relevant to our debugging drill along with their correlated events.

This guide will show you how to build a real-time log aggregation pipeline with the help of the ELK stack.

For this lab we are going to need 2 to 3 different virtual machines (depending on the service density you can tolerate). You can either use your favorite virtualisation software to create them locally, or, if you already have a subscription to Open Telekom Cloud (OTC), you can create the necessary instances there with the Elastic Cloud Server (ECS) service; we will not pursue that scenario in this article.

We are going to build the following lab:

  1. a rsyslog Server — to collect the logs from rsyslog clients
  2. a rsyslog Client — to forward logs to rsyslog server (this virtual machine will emulate the rest of the boxes in your environment that you want eventually to monitor)
  3. a logstash Server — to receive the log traces as JSON messages from rsyslog server (we can safely combine this box with the rsyslog server)
  4. a Cloud Search Service (CSS) Cluster in Open Telekom Cloud — to receive data from the logstash instance.
Our log monitor aggregation and consolidation pipeline.

In this lab I am going to use CentOS 7 boxes, hosted locally and created via VirtualBox. Provisioning those VMs is out of the scope of this article.

“Cloud Search Service in short CSS, is a fully managed, distributed search service that enables you to perform quick, real-time search. It is fully compatible with open-source Elasticsearch and provides users with structured and unstructured data search, statistical analysis, and reporting capabilities.” as the official documentation states.

Provision a Cloud Search Service Cluster in OTC

Let’s create a new Cloud Search Service Cluster. In the Open Telekom Cloud console we can find the service under the Data Analysis category.

The dashboard will present us with a list of the already existing clusters (we are going to need this view later) and the option to create a new cluster. Let’s click Create Cluster.

In the first step of the wizard, we have to choose the billing model, the availability zone where our new cluster will reside and the version of Elasticsearch that we want to use. At the time of writing, Open Telekom Cloud supported only version 7.6.2, which is based on Open Distro for Elasticsearch: https://opendistro.github.io/for-elasticsearch/

The next step is to choose the number of required cluster nodes (for this lab you can dial the number down to 1) and the flavor of every node (keep the default values or choose the one that fits your budget constraints better).

The next is the most important step of the configuration. Define, or create if none are available, the following resources:

  • a VPC and a Subnet that your cluster will be provisioned in
  • a Security Group (don’t forget to allow ports 9200 & 9300 in the inbound and outbound rules)

Enable Security Mode and provide an administrator password. It will be used both in Elasticsearch and Kibana.

Assign VPC, Subnet, Security Group, Credential and an Elastic IPv4 address for external access of the cluster.

Deactivate Cluster Snapshot; we are not going to need it in this PoC.

Inspect your costs (always! never forget that) and click Next to initiate the creation of the cluster. This will take some minutes; that’s a nice chance for a break.

Install logstash

Perform an update in all your local boxes before starting any other installation and configuration steps:

sudo yum update -y

Once the update is complete, log on to the logstash server and prepare the installation of the logstash service.

Get sudo access:

sudo su

What we are going to need first, is to import the elasticsearch PGP signing key:

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Next, we need to add the elasticsearch 7.x repo to our yum repos. Create and open for editing the file /etc/yum.repos.d/elastic-7.x.repo and paste in it the content shown below.

You can first install nano, if vi is not your cup of tea:

yum install nano -y

and then continue editing the file:

[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
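
If you want to double-check that yum picked up the new repository before moving on, listing the enabled repos should show it:

yum repolist enabled | grep elasticsearch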

After that we will continue with the Java installation (minimum v8), which is a prerequisite of logstash. Let’s install it (at the time of writing this article I chose Java 11):

yum install java-11-openjdk.x86_64 -y

and update the system once again:

yum update -y

Now we are ready to install logstash:

yum install logstash-7.6.2 -y

Be sure to install a version of logstash that matches the version of Elasticsearch otherwise logstash will not be a happy camper.
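
If you are not sure which versions the repo offers, or want to confirm what actually got installed, something along these lines will tell you:

yum --showduplicates list logstash
/usr/share/logstash/bin/logstash --version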

Now we have to configure the logstash server. We need to create a configuration file in /etc/logstash/conf.d/; let’s call it cloudsearchservice.conf (the name doesn’t really matter) and paste in it the following contents:

input {
  beats {
    port => 5044
  }
  tcp {
    host => "192.168.1.66"
    port => 10514
    codec => "json"
  }
}
output {
  elasticsearch {
    hosts => ["https://[EIP of the CSS cluster]:9200"]
    user => "admin"
    password => "[password for the admin user]"
    cacert => "/usr/share/logstash/CloudSearchService.cer"
    ssl => true
    ssl_certificate_verification => false
    ilm_enabled => false
  }
}

You can split every segment of the file [input, filter, output] in separate config files for better clarity.

  • Input: We declare that logstash will use tcp to listen on port 10514 on a specific IP address, namely the logstash instance’s own IPv4 address (replace it with yours; 192.168.1.66 is the one my DHCP server arbitrarily assigned to this box), and will expect to receive messages in JSON format.
  • Output: We define elasticsearch as the output target of logstash. As hosts we will use the Elastic IP of our CSS cluster. The parameters user and password are the credentials we provided during the creation of the Cloud Search Service cluster, and cacert is the cluster certificate that we can download from the Open Telekom Cloud console for the specific cluster. Save it somewhere on your logstash instance and provide its path as the value for cacert. The default index is "logstash-%{+YYYY.MM.dd}", but if you want to provide your own, add an index parameter to the output section, e.g. index => "rsyslog-%{+YYYY.MM.dd}".
ilm_enabled => false

The ilm_enabled option is necessary only if you are connecting your logstash instance with Cloud Search Service. Make sure to disable it! If you are integrating with an instance of Elasticsearch hosted on your premises you should omit this line — the variable’s default value is true.

At the time of writing this article, the auto-generated certificate points only to Private IPv4 Address and not to the Elastic IPv4 Address (the technical reason behind that needs to be investigated). For that reason, we need to disable the certificate validation:

ssl_certificate_verification => false

IMPORTANT: This has to be avoided at all costs for production workloads, logstash will give you a loud warning as well.

Download the auto-generated certificate from the Console
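
Before moving on to rsyslog, you can optionally let logstash validate the configuration file without starting the pipeline (assuming you saved it as cloudsearchservice.conf as above):

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/cloudsearchservice.conf --config.test_and_exit --path.settings /etc/logstash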

Configure rsyslog on the server

Let’s start configuring the rsyslog server. We need to create two additional configuration files in /etc/rsyslog.d/:

  • Call the first file 10-json-template.conf and provide the following content:
JSON template to format incoming rsyslog traces in a structure that Logstash and Elasticsearch are happy with.

You can grab the contents of this file, from this github repo: https://github.com/akyriako/opentelekomcloud-rsyslog-css-integration
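
For reference, the template in that repo is along the lines of the sketch below; the exact field list may differ slightly, but the important parts are that the template is named json-template (it is referenced by that name in the next file) and that it emits one JSON object per syslog message:

# /etc/rsyslog.d/10-json-template.conf (sketch; see the repo for the exact version)
template(name="json-template" type="list") {
    constant(value="{")
    constant(value="\"@timestamp\":\"")     property(name="timereported" dateFormat="rfc3339")
    constant(value="\",\"@version\":\"1")
    constant(value="\",\"message\":\"")     property(name="msg" format="json")
    constant(value="\",\"sysloghost\":\"")  property(name="hostname")
    constant(value="\",\"severity\":\"")    property(name="syslogseverity-text")
    constant(value="\",\"facility\":\"")    property(name="syslogfacility-text")
    constant(value="\",\"programname\":\"") property(name="programname")
    constant(value="\",\"procid\":\"")      property(name="procid")
    constant(value="\"}\n")
}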

  • Call the second file 20-cloudsearch-service-output.conf and provide the following content:

*.* @@192.168.1.66:10514;json-template

Replace the respective values with your own logstash server IPv4 address and port. Prefix the IPv4 address with a single @ if you want to use UDP or @@ if TCP is the chosen protocol.

You can name the files how you see fit, as long as you stick to a .conf extension.

Next on the list is to change the configuration of rsyslog itself. Edit /etc/rsyslog.conf and uncomment the following 4 lines. Here we configure rsyslog to listen on port 514 over both UDP and TCP.
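
On CentOS 7 these are the standard module and listener directives near the top of /etc/rsyslog.conf; once uncommented they should read:

# Provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514

# Provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 514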


Save changes and restart the rsyslog service:

systemctl restart rsyslog
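
You can quickly confirm that rsyslog is now listening on port 514 for both protocols:

ss -tulnp | grep 514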

Configure rsyslog on the client(s)

Edit the /etc/rsyslog.conf and add the following rule in the forwarding rules region:

*.* @@192.168.1.153:514

Replace the respective values with your own rsyslog server IPv4 address and port. Prefix the IPv4 address with a single @ if you want to use UDP or @@ if TCP is the chosen protocol.
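
Optionally, if you want the client to buffer messages while the rsyslog server is unreachable, you can also uncomment the disk-assisted queue directives that ship (commented out) in the same forwarding rules region of the stock rsyslog.conf; they belong directly above the forwarding rule:

$ActionQueueFileName fwdRule1   # unique name prefix for spool files
$ActionQueueMaxDiskSpace 1g     # 1gb space limit (use as much as possible)
$ActionQueueSaveOnShutdown on   # save messages to disk on shutdown
$ActionQueueType LinkedList     # run asynchronously
$ActionResumeRetryCount -1      # infinite retries if host is down
*.* @@192.168.1.153:514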

Save changes and restart the rsyslog service:

systemctl restart rsyslog

Take it for a test drive

Log in to your logstash server, get sudo access, and change to the logstash bin directory:

cd /usr/share/logstash/bin/

Start logstash with the following command — it will take a minute or two

./logstash -f /etc/logstash/conf.d/cloudsearchservice.conf --verbose --path.settings /etc/logstash

If logstash starts without errors and reports that the pipeline is running, your configuration worked out fine and logstash is now up (temporarily; we will set up the daemon later).

Now log in to one of your boxes, either the rsyslog server or, preferably, the rsyslog client, and issue the following command:

logger This is a test message
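
If you want the test message to be easier to spot later in Kibana, you can also tag it (elk-lab below is an arbitrary tag); with a template like the sketch above, the tag ends up in the programname field:

logger -t elk-lab "This is a tagged test message"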

Configure a Kibana Dashboard

Now log in to the Open Telekom Cloud Console, go to the CSS dashboard and click the option Access Kibana:

Access Kibana from the Console

Login using your admin credentials:

Login using the credentials you provided during the provisioning of the CSS cluster.

First let’s configure the necessary index. On the landing page of Kibana click Index Patterns:

Without registering a new index the data sent from logstash will not be visible in Kibana.

and click the button Create Index Pattern:

Type logstash-* (if you haven’t configured your own index) as the index pattern name and complete the remaining steps of the wizard by clicking Next.

If all went well, you are ready to go. Click the very first option in the vertical sidebar, “Discover”, and you will start seeing a large amount of incoming log traces from your boxes. Try to find the one you sent manually with logger!

You should be able now to see traces in Kibana under “Discover”

Kibana has many capabilities that go beyond the purpose of this article. One of them is the real-time visualization of your data and metadata, packing them into beautiful reusable dashboards:

A dashboard with real-time metrics that give us valuable information of the system activities of our lab.

I am not going to explain step by step how to bring those visualizations and dashboards to life, but you can find all the necessary artifacts in the github repo and import them into your Kibana instance.

Finalize logstash Configuration

Go back to your logstash box. Terminate the logstash service we manually started in the previous steps. Let’s set up the daemon now:

sudo systemctl start logstash
sudo systemctl enable logstash
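
You can verify that the daemon came up cleanly and follow its logs with:

sudo systemctl status logstash
sudo journalctl -u logstash -f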

Summary

This PoC uses Open Telekom Cloud’s Cloud Search Service, but nothing prevents you from running the latest ELK stack on your own servers or containers, taking advantage of all the latest features and building a completely localized scenario, disconnected from the internet or any public cloud provider. The reproduction and configuration steps are identical!
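
As a rough sketch of that self-hosted variant, a single-node Elasticsearch plus Kibana can be brought up with docker; the image tags below are illustrative, so pick the versions you need and keep logstash on a matching version:

docker network create elk
docker run -d --name elasticsearch --net elk -p 9200:9200 -p 9300:9300 \
  -e "discovery.type=single-node" \
  docker.elastic.co/elasticsearch/elasticsearch:7.6.2
docker run -d --name kibana --net elk -p 5601:5601 \
  -e "ELASTICSEARCH_HOSTS=http://elasticsearch:9200" \
  docker.elastic.co/kibana/kibana:7.6.2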

All configuration files can be found in the following repo: https://github.com/akyriako/opentelekomcloud-rsyslog-css-integration

Hope you found this article useful, have fun with your centralized logs solution!
