Provision a containerized K3S Cluster with K3D
Create a multi-node K3S Cluster on Docker containers using K3D
Introduction
Rancher has created its own Kubernetes distributions, RKE and K3S. RKE is a CNCF-certified K8S distribution that runs on any host, with the sole requirement of a running Docker engine instance. K3S, on the other hand, is the lightweight alternative: it consolidates everything that Kubernetes needs into a small binary with a footprint of no more than 40MB, making it possible to host a Kubernetes cluster on a whole new range of devices, from a Raspberry Pi to an electric wind turbine or a fighter jet.
The miniaturisation of the binary is an enormous leap, but it leaves us with another burning issue: density. K3S, like any other Kubernetes installation, needs to be installed on separate nodes, and as we all know, extra nodes translate to additional costs and operational overhead. This is where k3d comes into the picture. K3D is a lightweight wrapper that runs k3s in Docker containers; it is not an official Rancher product but a community-driven project.
Install K3D
The installation on Linux, for the current latest release, is fairly simple:
curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash
and if you are working on a Mac, it is even easier, as you can install it directly with brew:
brew install k3d
Let’s see now how simple it is to spin up single- or multi-node containerized K3S clusters for our local development needs with K3D. A simple single-node cluster first:
Provision a single-node cluster
k3d cluster create
and in less than 10 seconds we get a cluster up and running with DNS, Traefik, a load balancer and metrics. And all that with zero configuration, just using the default values.
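To confirm that everything is indeed up, a quick check could look like this (assuming kubectl already points at the new context; the node name follows k3d's default naming):

```shell
# Verify the freshly created single-node cluster
kubectl get nodes -o wide            # expect one Ready server node, e.g. k3d-k3s-default-server-0
kubectl get pods -n kube-system      # traefik, coredns, metrics-server, local-path-provisioner
```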
But this is not what this article is about; we want an HA cluster. Let’s bring this one down:
k3d cluster delete
If no cluster name is provided, the default name k3s-default will be used automatically.
Provision a HA cluster
Now let’s continue by provisioning a new cluster with 3 control-plane nodes, fulfilling in that way a basic requirement of high availability:
k3d cluster create --servers 3
That will take 30–45 seconds (depending on your machine) and your highly available cluster is alive and kicking.
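A quick sanity check can count the control-plane nodes (node names below assume the default cluster name):

```shell
# List all nodes and count the server (control-plane) ones
kubectl get nodes --no-headers
kubectl get nodes --no-headers | grep -c 'server-'   # should print 3
```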
If you want to start a cluster with extra worker nodes, then extend the creation command like:
k3d cluster create --servers 3 --agents 5
and in case you want to explicitly define the listening ports of your Traefik instance, add the following arguments (adjusting the ports to your liking):
k3d cluster create --servers 3 --agents 5 -p "80:80@loadbalancer" -p "443:443@loadbalancer"
We got a containerized Kubernetes cluster with 8 nodes! Let’s see how it looks at the Docker level:
docker ps --format "table {{.Image}}\t{{.Names}}\t{{.Ports}}\t{{.Status}}" | grep k3s
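k3d itself also offers commands to inspect what it created, which is often more convenient than filtering docker ps output:

```shell
k3d cluster list    # clusters with their server/agent counts
k3d node list       # every container node and its role
```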
The new cluster’s connection details are automatically merged into your existing ~/.kube/config file, and the installation switches to the new context by itself when it finishes. Very convenient! Alternatively, you can use the command below:
k3d kubeconfig merge {CLUSTER_NAME} --kubeconfig-switch-context
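To double-check which context kubectl is now using, or to print a cluster's kubeconfig without merging it, something like this works (the cluster name k3s-default is the default one):

```shell
kubectl config current-context      # e.g. k3d-k3s-default
kubectl config get-contexts         # all contexts in your kubeconfig
k3d kubeconfig get k3s-default      # print the cluster's kubeconfig to stdout
```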
If you want to delve deeper into the configuration options of K3D, please refer to the official documentation, which is surprisingly extensive and rich.
Scale the cluster
It couldn’t be easier to scale a cluster. Just issue the following command, switching the role argument value from server to agent if you need more worker nodes instead of control-plane ones:
k3d node create {NODE_NAME} --role=server
Mount a volume
k3d cluster create --servers 3 --agents 5 -p "80:80@loadbalancer" -p "443:443@loadbalancer" --volume '/tmp/data:/data@agent[*]'
This will bind mount your local directory /tmp/data to the path /data on all your agent nodes ([*]). If you replace * with a node index, you can specify explicitly to which node to mount your local folder. Make sure you mount your volumes while creating the cluster: as of k3d version v5.4.4 you can edit a cluster, as an experimental feature, but editing is restricted to publishing ports only.
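To verify the mount, you can write a file on the host and read it back from inside one of the agent node containers (the container name below assumes the default cluster name):

```shell
# Write on the host side of the bind mount
mkdir -p /tmp/data
echo "hello from the host" > /tmp/data/test.txt

# Read it back from inside an agent node container
docker exec k3d-k3s-default-agent-0 cat /data/test.txt
```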
Enable Traefik dashboard
When we created the cluster we used the following command:
k3d cluster create --servers 3 --agents 5 -p "80:80@loadbalancer" -p "443:443@loadbalancer"
The port mappings in the command forward ports 80 and 443 and enable Traefik to handle HTTP/S requests directed to these ports.
In order to figure out how to enable the Traefik dashboard, let’s have a look at its deployment:
kubectl get all -n kube-system
kubectl describe deploy traefik -n kube-system
If we have a look at the description of the resource, we can see that the dashboard is already enabled by the default configuration (not bad at all!), so the only thing left for us now is to expose it via port forwarding to our local machine.
All we need to do is establish a port forward to the Traefik dashboard, which, as we can see, listens on port 9000. To do this, issue the command:
kubectl port-forward -n kube-system "$(kubectl get pods -n kube-system| grep '^traefik-' | awk '{print $1}')" 9000:9000
and now you can open the Traefik dashboard from your browser of choice at the following address:
http://localhost:9000/dashboard/
Don’t forget the trailing slash at the end of the URL; it’s a lesson I learned the hard way!
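A quick check from the terminal works too (this assumes the port-forward from above is still running):

```shell
# Fetch only the response status line of the dashboard page
curl -sI http://localhost:9000/dashboard/ | head -n 1
```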
Enable *a* Kubernetes dashboard
I am not a big fan of the Kubernetes Dashboard in general, and since we are discussing a development environment here, I root for robust, uncomplicated solutions that let me do my job faster and easily provide me with detailed information whenever I need it. This is why I recommend K9S to everyone who doesn’t always want to resort to kubectl and is searching for something really convenient and simple, yet powerful enough.
Alternatively, you can import the cluster into Rancher and take advantage of the nice web interface of its Cluster Explorer. Go to Cluster Management, click the Import Existing button and then choose to import a Generic cluster:
Follow the registration instructions by executing the commands and wait until your newly imported cluster’s state transitions from Pending to Waiting and finally to Active:
You can now control and manage almost every aspect of the lifecycle of your infrastructure and your apps directly from the Rancher web interface!
Summary
And that was more or less it. You can now jumpstart development clusters in no time, emulating as many nodes as you want (within the physical capacity of your hardware, of course) and scale their nodes with zero administration effort!
If you are a fan of those, don’t forget to have a look at an interesting Visual Studio Code extension (they are not really my cup of tea; I prefer the terminal or the Rancher UI) that displays your K3D clusters under the Kubernetes extension’s Cloud Explorer. You can use it to create and delete clusters, and to merge them into your kubeconfig.