Install Kubernetes on CentOS 8
Step-by-step installation of a 3-node Kubernetes cluster on CentOS 8
What is the goal?
- After completing all the steps below, you will have a 3-node Kubernetes Cluster running on CentOS 8.
What do we need?
- CentOS 8 images. You can find free images here https://www.linuxvmimages.com/images/centos-8/#centos-822004. Be sure to download the minimal version, as a desktop GUI will not be needed.
- 3 virtual machines on the virtualization software of your preference. Set up one VM for the master node (4 GB RAM, 2 CPUs) and two as workers (2 GB RAM, 1 CPU each).
- You are going to need root privileges (obviously), but the aforementioned images already take care of that, so read their release notes (you can
sudo su -
and off you go).
So let's get started…
Configure host names:
Configure the hostname of every box with the following command:
hostnamectl set-hostname master-01.k8s.rhynosaur.home
Insert the hostname of each node in /etc/hosts (run this command on every server as well):
cat <<EOF>> /etc/hosts
192.168.1.133 master-01.k8s.rhynosaur.home
192.168.1.134 worker-01.k8s.rhynosaur.home
192.168.1.135 worker-02.k8s.rhynosaur.home
EOF
Obviously you have to replace the hostnames and IP addresses with your own. Remember that the nodes need static IP addresses, so if they acquire addresses from a DHCP server on your network, configure it accordingly.
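As a sketch of how to pin a static address with NetworkManager: the connection name "ens192" and the addresses below are placeholders, not taken from this tutorial, so look up your own values first with nmcli connection show.

```shell
# Hypothetical example: pin a static IPv4 address on the master node.
# "ens192", the gateway and the DNS server are placeholders; substitute
# the connection name and addresses that match your own network.
nmcli connection modify ens192 \
  ipv4.method manual \
  ipv4.addresses 192.168.1.133/24 \
  ipv4.gateway 192.168.1.1 \
  ipv4.dns 192.168.1.1
nmcli connection up ens192
```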
Disable SELinux:
setenforce 0
sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
Configure Firewall:
On master node open the following ports:
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=10251/tcp
firewall-cmd --permanent --add-port=10252/tcp
firewall-cmd --permanent --add-port=10255/tcp
firewall-cmd --reload
On worker nodes open the following ports:
firewall-cmd --permanent --add-port=6783/tcp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=10255/tcp
firewall-cmd --permanent --add-port=30000-32767/tcp
firewall-cmd --reload
Enable br_netfilter kernel module:
The br_netfilter kernel module is needed by Kubernetes so that pods can communicate with each other across the cluster.
modprobe br_netfilter
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
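Note that these two commands only last until the next reboot. If you want the setting to survive reboots, you can drop two small config files into the standard systemd locations; a minimal sketch (the k8s.conf file names are arbitrary choices, not mandated by anything):

```shell
# Load br_netfilter on every boot (file name is arbitrary):
cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF
# Re-apply the bridge sysctl on every boot:
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
EOF
# Apply all sysctl config files right now:
sysctl --system
```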
Disable swap:
Execute the following command:
swapoff -a
To make this permanent, open the file /etc/fstab and comment out the line for the swap partition.
The OS images come with only vi pre-installed. If you would rather work with an alternative editor, like vim or nano, you can install it manually with yum.
vi /etc/fstab
Don't freak out: press i to start editing the text. When you are done, press ESC to leave insert mode, then type :wq to save and quit (or :q! to discard your changes).
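If you prefer not to edit the file by hand, the same change can be scripted with sed. A minimal sketch, demonstrated here on a throwaway copy so nothing real gets touched; the sample swap line assumes the default CentOS LVM layout, and on the real box you would point the same sed at /etc/fstab:

```shell
# Work on a throwaway copy of fstab, purely for demonstration.
fstab=$(mktemp)
printf '%s\n' \
  'UUID=1111-2222 /     xfs  defaults 0 0' \
  '/dev/mapper/cl-swap swap swap defaults 0 0' > "$fstab"
# Comment out every not-yet-commented line that has a swap field:
sed -i '/\sswap\s/ s/^[^#]/#&/' "$fstab"
cat "$fstab"
```

The root filesystem line is left alone; only the swap entry gains a leading #.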
Install Docker:
First install a bunch of prerequisites:
yum install -y yum-utils device-mapper-persistent-data lvm2
and then add the docker-ce repository to the system:
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Now you can install docker-ce with the following command via yum and wait for it to finish:
yum install -y docker-ce
Install Kubernetes:
Add the Kubernetes repository to the system:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
Install the prerequisite packages kubelet, kubeadm and kubectl via yum:
yum install -y kubelet kubeadm kubectl
Wait for the installation to finish and then reboot the machine:
shutdown -r now
Start the necessary services, docker and kubelet:
systemctl start docker && systemctl enable docker
systemctl start kubelet && systemctl enable kubelet
Kubernetes and Docker need to use the same cgroup driver. First make sure that Docker is using cgroupfs as its cgroup driver:
docker info | grep -i cgroup
and then reconfigure the kubelet's cgroup driver to cgroupfs:
sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Reload the changed configuration and restart the kubelet service:
systemctl daemon-reload
systemctl restart kubelet
Initialize Kubernetes Cluster:
kubeadm init --apiserver-advertise-address=192.168.1.133 --pod-network-cidr=10.244.0.0/16
IMPORTANT Use your own IP addresses and CIDRs. The advertised API server IPv4 address should point to the IP of your master node. The pod network CIDR shown here is the default one used by flannel, which we are going to install later on as our network component. If you wish to use another one, don't forget to amend the flannel configuration file with the new CIDR value of your choice as well.
A follow-up article, coming soon, will illustrate how to deploy other networking solutions such as Calico and Weave.
Wait until kubeadm init is over and, if it ended successfully, scroll through the output and look for a line that should look like this:
kubeadm join 192.168.1.133:6443 --token v8b3p1.xdwz9zafwkf4oc1w --discovery-token-ca-cert-hash sha256:aa557ed289f0db77dc2e80b764e23a37f0a02a06790873b5e42a823188876eb4
IMPORTANT Copy this line and save it somewhere, as we are going to need it later while configuring our workers.
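In case you are curious, the sha256 value in that line is simply the digest of the cluster CA's DER-encoded public key, and you can recompute it yourself. A sketch, demonstrated on a throwaway self-signed certificate; on the master you would feed /etc/kubernetes/pki/ca.crt into the same pipeline instead:

```shell
# Generate a throwaway CA certificate, purely for demonstration.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj '/CN=demo-ca' \
  -keyout "$dir/ca.key" -out "$dir/ca.crt" 2>/dev/null
# Hash the DER-encoded public key of the certificate:
hash=$(openssl x509 -pubkey -in "$dir/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | awk '{print $NF}')
echo "sha256:$hash"
```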
Let's tidy up and bring admin.conf to a more cozy place:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Install a network:
For this example we are going to use flannel virtual networking. It is the simplest one to start your journey to Kubernetes with.
In order to deploy a flannel network run the following command:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Wait for it to finish; it will take some time, so either be patient or take a small break. You can periodically check the status of the cluster and the pods with the following commands:
kubectl get nodes
kubectl get pods --all-namespaces
When your master node eventually shows up as Ready, check the pods' status to make sure all pod instances are Running and that all the necessary pods have been spawned, including one whose name contains kube-flannel-ds, which indicates that our flannel virtual network is successfully deployed and running.
Add Worker Nodes:
Run the command we copied and saved earlier in this tutorial:
kubeadm join 192.168.1.133:6443 --token v8b3p1.xdwz9zafwkf4oc1w --discovery-token-ca-cert-hash sha256:aa557ed289f0db77dc2e80b764e23a37f0a02a06790873b5e42a823188876eb4
and run it on every box that will host a worker node. If you want more information about the process, or you need to debug because something went wrong, add the following flag to the command above and run it again:
--v=5
You can run it as many times as you want in case of failure, without needing to clean anything up before executing it again.
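If you didn't save the join command earlier, you don't have to dig through the kubeadm init output again; the master can print a fresh one for you (note that this issues a brand new token):

```shell
# Run on the master node: prints a ready-to-use kubeadm join command
# with a newly generated bootstrap token.
kubeadm token create --print-join-command
```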
Wait for it to finish; it will take some time, so be patient (another chance for a quick break). You can periodically check the status of the cluster and the pods, from your master node's terminal, with the following commands:
kubectl get nodes
kubectl get pods --all-namespaces
When all nodes appear in the list and their status settles to Ready, you are good to go. You now have a 3-node Kubernetes cluster; go and have some fun.
In the next article we will see how to deploy, step by step, a load balancer for a bare-metal Kubernetes cluster.
Make sure to follow me for more articles on the Kubernetes ecosystem!