Configure path-based routing with Nginx Ingress Controller

Akriotis Kyriakos
11 min read · Jun 6, 2023

Expose your homelab’s Kubernetes services to a public-facing domain with Nginx Ingress Controller and Nginx Proxy Manager as a reverse proxy.

TL;DR: By using Nginx Ingress Controller and Nginx Proxy Manager, you can make the Kubernetes services in your homelab accessible through a public-facing domain. Nginx Ingress Controller acts as a traffic manager, directing incoming requests to the appropriate services. Nginx Proxy Manager serves as a reverse proxy, handling SSL certificates and providing a user-friendly interface for managing domain routing. Together, they allow you to securely expose your homelab services to the internet with ease.

What is Nginx Ingress Controller?

Nginx Ingress Controller is an open-source project that provides an Ingress controller for Kubernetes, allowing you to manage external access to services within your Kubernetes cluster. In Kubernetes, an Ingress is an API object that defines rules for routing external HTTP and HTTPS traffic to internal services.

The Nginx Ingress Controller uses the Nginx web server as a reverse proxy and load balancer to route incoming traffic to the appropriate backend services based on the rules specified in the Ingress resource. It acts as the entry point for external traffic into your Kubernetes cluster.

Some key features of the Nginx Ingress Controller include:

1. Load balancing: Nginx can distribute incoming traffic across multiple backend services based on various load balancing algorithms.
2. SSL/TLS termination: It can handle SSL/TLS encryption and decryption, offloading the SSL/TLS processing from the backend services.
3. Path-based routing: You can define rules in the Ingress resource to route traffic based on the requested URL paths.
4. Name-based virtual hosting: Nginx can route traffic based on the requested hostname, allowing you to host multiple websites or services on the same IP address.
5. Dynamic configuration: The Nginx Ingress Controller monitors changes to the Ingress resources and automatically updates its configuration accordingly.

By using the Nginx Ingress Controller, you can easily expose your Kubernetes services to the external world, manage traffic routing, and apply SSL/TLS encryption without the need for manual configuration of individual Nginx instances. It provides a scalable and flexible solution for managing inbound traffic in Kubernetes clusters.

What is Nginx Proxy Manager?

Nginx Proxy Manager is a web-based management interface and reverse proxy server that simplifies the configuration and management of Nginx as a reverse proxy. It provides a user-friendly interface for setting up and managing reverse proxy, SSL/TLS termination, and load balancing for multiple backend services.

The main purpose of Nginx Proxy Manager is to make it easier for users with little or no experience in Nginx configuration to set up and manage Nginx as a reverse proxy. It abstracts away the complexities of Nginx configuration files and provides a graphical interface for managing the reverse proxy functionality.

Key features of Nginx Proxy Manager include:

1. Web-based interface: Nginx Proxy Manager provides a web-based management interface where you can configure and manage reverse proxy settings.
2. SSL/TLS termination: It supports easy configuration of SSL/TLS certificates for secure connections and offloads the SSL/TLS encryption and decryption from the backend services.
3. Domain and subdomain management: You can easily configure domain names and subdomains and map them to different backend services.
4. Load balancing: Nginx Proxy Manager allows you to configure load balancing for distributing incoming traffic across multiple backend servers.
5. Let’s Encrypt integration: It integrates with Let’s Encrypt, enabling automatic issuance and renewal of SSL/TLS certificates for your domains and subdomains.
6. Access control and authentication: You can set up access control rules to restrict access to your services based on IP addresses or use basic authentication.
7. Logging and monitoring: Nginx Proxy Manager provides logs and monitoring features to track the incoming traffic and monitor the performance of the proxy server.

Overall, Nginx Proxy Manager simplifies the management of Nginx as a reverse proxy by providing a user-friendly interface, making it accessible to users without extensive Nginx expertise. It is particularly useful for managing multiple websites or services running on the same server and simplifying the process of setting up SSL/TLS encryption and load balancing.

Deploy Nginx Proxy Manager

I strongly recommend installing Nginx Proxy Manager as a standalone Docker stack, either on a separate box or, if you are short on resources, even on the master node of your Kubernetes cluster, but not as a Kubernetes workload.

Create a docker-compose.yml and copy the following content. Don’t forget to replace the password values for DB_MYSQL_PASSWORD, MYSQL_ROOT_PASSWORD and MYSQL_PASSWORD:

version: "3.3"
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      # These ports are in format <host-port>:<container-port>
      - '80:80'    # Public HTTP Port
      - '443:443'  # Public HTTPS Port
      - '81:81'    # Admin Web Port
      # Add any other Stream port you want to expose
      # - '21:21'  # FTP
    environment:
      DB_MYSQL_HOST: "db"
      DB_MYSQL_PORT: 3306
      DB_MYSQL_USER: "npm"
      DB_MYSQL_PASSWORD: "{{PASSWORD}}"
      DB_MYSQL_NAME: "npm"
      # Uncomment this if IPv6 is not enabled on your host
      # DISABLE_IPV6: 'true'
    volumes:
      - /var/lib/docker/volumes/nginx-proxy-manager/data:/data
      - /var/lib/docker/volumes/nginx-proxy-manager/letsencrypt:/etc/letsencrypt
    depends_on:
      - db

  db:
    image: 'jc21/mariadb-aria:latest'
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: '{{PASSWORD}}'
      MYSQL_DATABASE: 'npm'
      MYSQL_USER: 'npm'
      MYSQL_PASSWORD: '{{PASSWORD}}'
    volumes:
      - /var/lib/docker/volumes/nginx-proxy-manager/data/mysql:/var/lib/mysql

networks:
  default:
    external:
      name: npm-external
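Note that the stack joins an external Docker network called npm-external (see the networks section above). Docker Compose will not create external networks for you, so if it does not already exist on your host, create it first and then bring the stack up:

docker network create npm-external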
docker-compose up -d

As soon as the containers are provisioned, you can visit the Admin Web Port (81) of Nginx Proxy Manager to access its Web UI (at the time of writing, the default credentials are admin@example.com / changeme, and you will be asked to change them on first login):

Don’t forget, at this point, to permit port-forwarding for ports 80 and 443 in your router, only for the IP address of the box that hosts Nginx Proxy Manager. That will be essential for reaching your Ingress from the outside world later.

You could forward port 81 for the Web UI as well, but I would by no means recommend doing so. It is better not to expose administrative endpoints to the public internet; access them only from your internal network.

Provision a Kubernetes cluster

We are going to provision a Kubernetes cluster using Vagrant and Kubeadm. Clone the following repo:

git clone https://github.com/akyriako/kubernetes-vagrant-ubuntu.git
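Then change into the directory you just cloned:

cd kubernetes-vagrant-ubuntu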

This repo assumes you will provision your VMs (using VirtualBox as the hypervisor) in a Host-Only network, but in this lab we want to use a Bridged network instead, so we need to make the following changes to our Vagrantfile:

  1. Replace all instances of 192.168.57.10x with whatever IP address range reflects your internal homelab network. In my case that is 192.168.1.21x, as the CIDR of my homelab is 192.168.1.0/24; adjust according to your setup.
  2. Change the master.vm.network and worker.vm.network values from private_network to public_network.
  3. Add an extra bridge parameter to master.vm.network and worker.vm.network, e.g. bridge: 'enp0s31f6', and replace the value enp0s31f6 with the name of your NIC. You can find this out by running the command:

ip a

Get information for every network interface on your host.

If you are working on a Mac, you are going to have issues at this point: macOS does not play nice (or at all!) when it comes to VirtualBox and bridged network interfaces. The whole lab is using Ubuntu Desktop 22.04 as the host machine.

After these changes, your Vagrantfile should look similar to this:
domain = "kubernetes.lab"
control_plane_endpoint = "k8s-master." + domain + ":6443"
pod_network_cidr = "10.244.0.0/16"
pod_network_type = "calico" # choose between calico and flannel
master_node_ip = "192.168.1.210"
version = "1.26.0-00"

Vagrant.configure("2") do |config|
  config.ssh.insert_key = false
  config.vm.provision :shell, path: "kubeadm/bootstrap.sh", env: { "VERSION" => version }

  config.vm.define "master" do |master|
    master.vm.box = "ubuntu/focal64"
    master.vm.hostname = "k8s-master.#{domain}"
    master.vm.network "public_network", bridge: 'enp0s31f6', ip: "#{master_node_ip}"
    master.vm.provision "shell", env: { "DOMAIN" => domain, "MASTER_NODE_IP" => master_node_ip }, inline: <<-SHELL
      echo "$MASTER_NODE_IP k8s-master.$DOMAIN k8s-master" >> /etc/hosts
    SHELL
    (1..3).each do |nodeIndex|
      master.vm.provision "shell", env: { "DOMAIN" => domain, "NODE_INDEX" => nodeIndex }, inline: <<-SHELL
        echo "192.168.1.21$NODE_INDEX k8s-worker-$NODE_INDEX.$DOMAIN k8s-worker-$NODE_INDEX" >> /etc/hosts
      SHELL
    end
    master.vm.provision "shell", path: "kubeadm/init-master.sh", env: { "K8S_CONTROL_PLANE_ENDPOINT" => control_plane_endpoint, "K8S_POD_NETWORK_CIDR" => pod_network_cidr, "K8S_POD_NETWORK_TYPE" => pod_network_type, "MASTER_NODE_IP" => master_node_ip }
  end

  (1..3).each do |nodeIndex|
    config.vm.define "worker-#{nodeIndex}" do |worker|
      worker.vm.box = "ubuntu/focal64"
      worker.vm.hostname = "k8s-worker-#{nodeIndex}.#{domain}"
      worker.vm.network "public_network", bridge: 'enp0s31f6', ip: "192.168.1.21#{nodeIndex}"
      worker.vm.provision "shell", env: { "DOMAIN" => domain, "MASTER_NODE_IP" => master_node_ip }, inline: <<-SHELL
        echo "$MASTER_NODE_IP k8s-master.$DOMAIN k8s-master" >> /etc/hosts
      SHELL
      (1..3).each do |hostIndex|
        worker.vm.provision "shell", env: { "DOMAIN" => domain, "NODE_INDEX" => hostIndex }, inline: <<-SHELL
          echo "192.168.1.21$NODE_INDEX k8s-worker-$NODE_INDEX.$DOMAIN k8s-worker-$NODE_INDEX" >> /etc/hosts
        SHELL
      end
      worker.vm.provision "shell", path: "kubeadm/init-worker.sh"
      worker.vm.provision "shell", env: { "NODE_INDEX" => nodeIndex }, inline: <<-SHELL
        echo ">>> FIX KUBELET NODE IP"
        echo "Environment=\"KUBELET_EXTRA_ARGS=--node-ip=192.168.1.21$NODE_INDEX\"" | sudo tee -a /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
        sudo systemctl daemon-reload
        sudo systemctl restart kubelet
      SHELL
    end
  end

  config.vm.provider "virtualbox" do |vb|
    vb.memory = "3072"
    vb.cpus = "1"
    vb.customize ["modifyvm", :id, "--nic1", "nat"]
  end
end
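Once the Vagrantfile reflects your network, you can bring the whole cluster up from the root of the repository (this downloads the Ubuntu box and runs the kubeadm bootstrap scripts, so it takes a while):

vagrant up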

At the link below you can find a detailed description of all the moving parts of provisioning a Kubernetes cluster with Kubeadm and Vagrant using the aforementioned repo:

To finish up our Kubernetes installation, we need to provision a Load Balancer for our cluster. The address pool we are going to use, so that our exposed Kubernetes services are reachable within our local home network, will be in the range 192.168.1.220–192.168.1.225. Follow the instructions in the article below to deploy and configure MetalLB:
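For reference, and assuming a recent MetalLB release (v0.13 or later, configured through CRDs) installed in the default metallb-system namespace and running in Layer 2 mode, the address pool for this lab would look roughly like the sketch below. The resource names homelab-pool and homelab-l2 are placeholders of my choosing; the linked article remains the authoritative walkthrough:

kubectl apply -f - <<EOF
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: homelab-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.220-192.168.1.225
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: homelab-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - homelab-pool
EOF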

Deploy an Nginx Ingress Controller

At this point, we should have in place a working Nginx Proxy Manager installation, a Kubernetes cluster with one master and three worker nodes, and a Load Balancer with an address pool pointing to a segment of our home network. Let’s go and install the Nginx Ingress Controller with Helm:

helm install nginx-ingress oci://ghcr.io/nginxinc/charts/nginx-ingress --namespace nginx-system --create-namespace

As you’ll notice, as soon as the installation is complete, along with your ingress controller an additional service of type LoadBalancer is provisioned, which has automatically been assigned one of the available IPs from our address pool.
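You can verify which address from the pool was assigned by listing the services in the controller’s namespace:

kubectl get services --namespace nginx-system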

If you now try to access this endpoint from a browser, you should expect to see something like this:

Don’t get alarmed, that’s perfectly fine at this point. We have a service assigned an external (sort of, in the eyes of the cluster) IP address, and we can reach it from a browser. Technically we can, even though the response was a 404; an error is still a valid response in this case.

Deploy and expose a demo workload

As a workload, I am going to use an instance of traefik/whoami (the irony, I know!), a simple Golang web server that prints OS and HTTP request details to the response output. Let’s go and deploy our workload in the simplest way possible:

kubectl create deployment whoami --image=traefik/whoami

Next, we need to expose this workload by creating a service of type ClusterIP (that way it will be reachable only from within the cluster):

kubectl create service clusterip whoami --tcp=8080:80
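As a quick sanity check, confirm that both objects exist (the service should list 8080/TCP as its port):

kubectl get deployment/whoami service/whoami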

We now have an application and its service deployed in our cluster, but we cannot access it from outside of the cluster! Let’s fix that, assuming that:

  • we don’t want to burn another IP address from the pool (especially in the cloud that’s a pretty expensive no-go) by changing the type of the whoami service from ClusterIP to LoadBalancer, and
  • we want to access all of our (future) workloads via the same domain URL but under different paths.

Remember, that’s the whole point of this article: You want to implement path-based routing by defining rules in the Ingress resource to route traffic based on the requested URL paths.

Configure an Ingress for our demo workload

The next thing we are going to need is an Ingress resource that will manage external access to the services in our cluster (so far only the whoami service will be configured). Go on and create a file named ingress-whoami.yaml and copy the YAML below into it. Then replace www.example.com with your own domain or subdomain:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-whoami
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  ingressClassName: nginx
  rules:
    - host: www.example.com
      http:
        paths:
          - path: /whoami
            pathType: Prefix
            backend:
              service:
                name: whoami
                port:
                  number: 8080

Remember, we mapped port 80 of the whoami web server to port 8080 when we created the service for this deployment.

Now let’s create this resource in Kubernetes:

kubectl apply -f ingress-whoami.yaml
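You can confirm that the resource was created and that it serves the expected host and path:

kubectl get ingress ingress-whoami
kubectl describe ingress ingress-whoami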

We are not done yet! If you try to access your service at http://www.example.com/whoami, nothing is going to happen. We need to redirect external requests for this URL back to the appropriate backend endpoint, and this is where Nginx Proxy Manager comes into the picture, playing the role of reverse proxy. Open the Web UI of Nginx Proxy Manager and add a new Proxy Host:

On the first tab (Details): as Domain Name use www.example.com (obviously replace it with your domain or subdomain), as Scheme choose http, as Forward Hostname/IP the external IP address of your Ingress Controller’s service (in my case 192.168.1.220), and as Forward Port use port 80.

On the third tab (SSL): Request a new SSL Certificate from Let’s Encrypt for your domain — or subdomain.

Wait for the whole approval dance to complete, then activate Force SSL and click Save:

Now you can finally try your service again. Browse to www.example.com/whoami (http or https, it just doesn’t matter, because if you remember we are forcing SSL and all http requests will be redirected automatically to https). You should see a response similar to this:

Keep in mind that we do have a valid SSL certificate! Check the small padlock icon next to the URL address and inspect the details of the SSL Certificate we obtained via Let’s Encrypt.
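If you prefer the terminal over the browser, a quick smoke test works too. The hostname below is the placeholder from the Ingress example, so substitute your own domain; the -L flag follows the automatic redirect from http to https, and the whoami response should list the pod’s hostname together with the request headers:

curl -L http://www.example.com/whoami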

Closing notes

If you reached this far, you might be thinking now: why the heck should I not use cert-manager for issuing and renewing my certificates directly inside the ingress resource, or why should I not use ExternalDNS for Kubernetes?

Well, you could, but that is a completely different setup. In my case I host many services in my homelab, spread across more than one Kubernetes cluster, as plain docker containers or stacks, and/or in old-school virtual machines. Reverse proxying to all these heterogeneous services without Nginx Proxy Manager would be a pain (additionally, it makes it easier to issue and update my SSL certificates from there).

Concerning the second point, the usage of ExternalDNS: if you are living in the DACH region (aka the German-speaking region of Europe that includes Germany, Austria and Switzerland) and STRATO is where you buy and register your domains, then you are a bit unlucky, because ExternalDNS does not support STRATO’s DynDNS yet (or ever, who knows!).

So, in order to keep my domains and subdomains in sync with the periodic changes of my ISP’s dynamic IP address, I wrote my own Kubernetes custom controller that updates my domains’ DNS records on STRATO’s servers. You can have a look here if you are looking for a similar solution:

If you found this information useful, don’t forget to 👏 under this article and follow me for more Kubernetes content. Stay tuned.
