Create a multi-node Kubernetes 1.26 Cluster with Vagrant

Akriotis Kyriakos
5 min read · Aug 31, 2022


Install an Ubuntu-based multi-node Kubernetes 1.26 Cluster with Vagrant, VirtualBox & Kubeadm

❗️ Updated for Kubernetes v1.26 — without dockershim💥

What is the goal?

  • After completing this lab, we will have a 3-node Kubernetes 1.26 cluster running on Ubuntu 20.04, created automatically with Vagrant and kubeadm.

What are we going to need?

  1. Vagrant https://www.vagrantup.com/docs/installation, VirtualBox https://www.virtualbox.org and Vagrant Manager https://www.vagrantmanager.com (the latter is not mandatory, but nice to have).
  2. Three (3) virtual machines: one for the master node (3072 MB RAM, 1 vCPU) and two for the workers (3072 MB RAM, 1 vCPU each), which Vagrant will provision automatically.
  3. An additional Host Network — I will be using VirtualBox for this lab.

As a first step, install all three apps mentioned in the first bullet, following the instructions for your host OS. Next, we need to provision a new Host Network in VirtualBox:
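If you prefer the command line over the VirtualBox GUI, a host-only network can be created roughly like this. This is a minimal sketch: the interface name vboxnet0 and the 192.168.56.0/24 range are assumptions, so adjust them to whatever your VirtualBox installation assigns.

# Create a new host-only interface (VirtualBox names them vboxnet0, vboxnet1, ...)
VBoxManage hostonlyif create

# Give the host side of the network an address and netmask
VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.56.1 --netmask 255.255.255.0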

By default, Vagrant automatically binds a NAT network (caution: not a named NAT Network!) to NIC 1 of every VM it provisions. We are going to instruct Vagrant to bind this Host Network to NIC 2; we will see how later.

In a VirtualBox Host Network the first IP address is assigned to the host computer (our laptop in this case) and the second to the DHCP server of the network.

The last preparation step is to clone the repository holding the lab files:

git clone https://github.com/akyriako/kubernetes-vagrant-ubuntu.git

Analyzing the Vagrantfile

The steps for provisioning a Kubernetes cluster on Ubuntu boxes are analyzed thoroughly in the article below; let's see how we can incorporate them into a Vagrantfile:

First we want to choose the sizing of our nodes and assign new hostnames to them:
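A minimal sketch of that part of the Vagrantfile follows. The box name, hostnames and sizing are taken from the lab description above, not copied from the repository, so treat the exact identifiers as assumptions.

Vagrant.configure("2") do |config|
  # Ubuntu 20.04 base box for every node
  config.vm.box = "ubuntu/focal64"

  config.vm.define "master-node" do |master|
    master.vm.hostname = "master-node"
    master.vm.provider "virtualbox" do |vb|
      vb.memory = 3072   # 3072 MB RAM
      vb.cpus   = 1      # 1 vCPU
    end
  end

  (1..2).each do |i|
    config.vm.define "worker-node0#{i}" do |worker|
      worker.vm.hostname = "worker-node0#{i}"
      worker.vm.provider "virtualbox" do |vb|
        vb.memory = 3072
        vb.cpus   = 1
      end
    end
  end
end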

We don’t need to amend the machine-id in this case, because Vagrant makes sure every box is unique.

We make use of global or local variables to parameterize our script:
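For example, the sizing and addressing values can live at the top of the Vagrantfile as plain Ruby variables. The names and values below are illustrative only:

# Plain Ruby variables at the top of the Vagrantfile
IMAGE_NAME     = "ubuntu/focal64"
WORKER_NODES   = 2
MASTER_IP      = "192.168.56.10"   # host-only network address of the master
NODE_IP_PREFIX = "192.168.56"      # workers get .11, .12, ...
MEMORY_MB      = 3072
CPUS           = 1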

And then we assign an IP address (from the Host Network we created before) to each node:
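With Vagrant this boils down to a private_network entry bound to the host-only network; NIC 1 stays on NAT and this interface becomes NIC 2. The addresses reuse the illustrative variables from the sketch above:

  # Inside the master definition: second NIC on the host-only network
  master.vm.network "private_network", ip: MASTER_IP

  # Inside the worker loop: .11, .12, ...
  worker.vm.network "private_network", ip: "#{NODE_IP_PREFIX}.#{10 + i}"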

Then we need to add those hostname/IP pairs to /etc/hosts. We are going to run one inline script that injects the master's IP and hostname, and another that injects the worker nodes' entries via a loop, as sketched below.

In order to use the values of our global variables inside those scripts, we pass them in as environment variables:
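A sketch of how that can look with Vagrant's shell provisioner, which accepts an env hash and exposes the values as environment variables inside the inline script. Hostnames and variables are the illustrative ones used above, not the repository's:

  # Master entry, added on every node
  node.vm.provision "master-hosts", type: "shell",
    env:    { "MASTER_IP" => MASTER_IP },
    inline: 'echo "$MASTER_IP master-node" >> /etc/hosts'

  # Worker entries, injected in a loop
  (1..WORKER_NODES).each do |i|
    node.vm.provision "worker-hosts-#{i}", type: "shell",
      env:    { "NODE_IP" => "#{NODE_IP_PREFIX}.#{10 + i}" },
      inline: "echo \"$NODE_IP worker-node0#{i}\" >> /etc/hosts"
  end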

Every node now has to be prepared with the Kubernetes prerequisites:
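The prerequisites script typically disables swap, loads the required kernel modules, installs containerd as the runtime (no dockershim in 1.26) and installs kubeadm, kubelet and kubectl. The following is a condensed sketch, not the repository's script; the apt repository shown was the official one at the time of writing (it has since been superseded by pkgs.k8s.io):

#!/bin/bash
# Disable swap -- kubelet refuses to start with swap enabled
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab

# Kernel modules and sysctls required by container networking
modprobe overlay
modprobe br_netfilter
cat <<EOF >/etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sysctl --system

# Container runtime: containerd with the systemd cgroup driver
apt-get update && apt-get install -y containerd
mkdir -p /etc/containerd
containerd config default | sed 's/SystemdCgroup = false/SystemdCgroup = true/' >/etc/containerd/config.toml
systemctl restart containerd

# kubeadm, kubelet, kubectl from the (then current) Kubernetes apt repository
apt-get install -y apt-transport-https ca-certificates curl
curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" >/etc/apt/sources.list.d/kubernetes.list
apt-get update && apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl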

Our nodes are now ready and we can init the cluster with kubeadm:
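In the Vagrantfile this is just another shell provisioner pointing at the init script; the scripts/ directory is an assumption, so check the repository for the actual path:

  # Runs only on the master; MASTER_IP is the illustrative variable from above
  master.vm.provision "init-master", type: "shell",
    path: "scripts/init-master.sh",
    env:  { "MASTER_IP" => MASTER_IP }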

Let's have a look at init-master.sh, because there are some Vagrant-specific hacks going on here! After the cluster is initialized, we need to fix the kubelet daemon and explicitly instruct it to use IP addresses from our Host Network and not from our NAT network.

The reason is twofold: Vagrant uses the NAT network for its own purposes (port forwarding, etc.), and that network assigns the same IP address (10.0.2.15) to every VM, while kubelet always picks the default interface. This combination leaves your kube-proxy and network pods (which will be installed in a later step) in a mess, eternally stuck in a CrashLoopBackOff state. So we first instruct kubeadm to work with NIC 2, and then kubelet:
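A sketch of what init-master.sh does, following the description above. The flag values and file paths follow the common kubeadm/kubelet conventions and are not copied from the repository; MASTER_IP is the environment variable passed in by the provisioner sketched earlier, and the pod CIDR is flannel's default:

#!/bin/bash
# Advertise the API server on the host-only address (NIC 2), not the NAT one
kubeadm init \
  --apiserver-advertise-address="$MASTER_IP" \
  --pod-network-cidr=10.244.0.0/16

# Make kubelet report the host-only address as well,
# otherwise it picks the default (NAT) interface shared by all VMs
echo "KUBELET_EXTRA_ARGS=--node-ip=$MASTER_IP" >/etc/default/kubelet
systemctl restart kubelet

# Make kubectl usable for the vagrant user
mkdir -p /home/vagrant/.kube
cp -i /etc/kubernetes/admin.conf /home/vagrant/.kube/config
chown -R vagrant:vagrant /home/vagrant/.kube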

Then we deploy our pod network (either calico or flannel):
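For example, with flannel (the manifest URL is the upstream one; check the flannel project for the currently recommended version):

# Deploy the flannel CNI plugin; calico works equally well
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml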

Vagrant mounts the current working folder on the host as a shared folder in all our VMs (as /vagrant on every node). This will come in handy during our next step, where we create the join command for our worker nodes: instead of copying the command back and forth from the master to the workers, we will save it in the shared folder and let the worker nodes pick it up from there!
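A sketch of that trick at the end of init-master.sh; the file name join.sh under /vagrant is an assumption:

# /vagrant is the project folder shared with every VM, so a script written
# here by the master is immediately visible to the workers
kubeadm token create --print-join-command > /vagrant/join.sh
chmod +x /vagrant/join.sh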

The control plane is now ready, so let's add some worker nodes. Back in the Vagrant script, we run the join command we got from the master and then apply the same hack to the kubelet daemon:
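The worker script is then little more than the following sketch, with the same assumed names; NODE_IP is passed in via the provisioner's env hash, just like MASTER_IP on the master:

#!/bin/bash
# Join the cluster with the command the master left in the shared folder
bash /vagrant/join.sh

# Same kubelet fix as on the master: report the host-only address, not the NAT one
echo "KUBELET_EXTRA_ARGS=--node-ip=$NODE_IP" >/etc/default/kubelet
systemctl restart kubelet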

All you need to do now is wait until all the pods settle into a Ready state. You can observe the progress by connecting to your master node with ssh; this is where Vagrant Manager makes things a bit more comfortable:
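For instance, from the command line (the node name matches the Vagrantfile sketch above):

vagrant ssh master-node
kubectl get nodes
kubectl get pods --all-namespaces --watch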

Your cluster will be ready within 3–5 minutes, depending on the performance of your host, just by running a single command (I nearly forgot this part :)):

vagrant up

It cannot get easier than that!

I hope you found this article interesting and helpful. Follow me for more Kubernetes-related content; more is coming very soon.
