While my NUC homelab handled what I considered “production” workloads using Docker and Compose, I wanted to play around with and learn Kubernetes. I understood most of the bird's-eye-view concepts like pods, deployments, and ingress, but without much hands-on experience, most of it was fleeting. So I decided to put the spare RAM and CPU on my desktop to good use by installing a complete K3s cluster. A good part of my research was covered concisely in an excellent guide by Tom.

Multipass

Installing a cluster needs a platform, either bare metal or virtualized. I use Windows Subsystem for Linux (WSL), so I already had Hyper-V enabled; if you do not, follow the official instructions to do that. The task now was to install multiple VMs with some OS to use as a platform for the cluster. Ubuntu Server provides a clean minimal base, so I could do it the traditional way by downloading the ISO, installing n VMs, and being done. But that can get tedious when I have to start clean after messing things up while learning. Here is where Multipass by Canonical comes in.
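
If you need to enable Hyper-V first, the official instructions boil down to something like the below, run from an elevated PowerShell prompt (a reboot is required afterwards).

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All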

Multipass provides a command-line interface to launch, manage, and generally fiddle about with instances of Linux. It can spin up cloud instances of Ubuntu taking full advantage of cloud-init, not unlike how you spin up containers. The result: no more manually loading an ISO into the hypervisor, selecting parameters, sitting through the install process, and answering questions about what your name is and where you are. Multipass supports Hyper-V and VirtualBox on Windows; I will be using Hyper-V because of WSL.
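
As a quick taste of the workflow (the demo name here is just a throwaway example):

multipass find                  # list the Ubuntu images available to launch
multipass launch --name demo    # launch a VM from the default (latest LTS) image
multipass shell demo            # open a shell inside it
multipass delete --purge demo   # tear it down again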

There are a lot of pieces that need to work well together for Multipass to function correctly, and Hyper-V is a bit of a hassle in this regard. It took me multiple installation attempts and information from GitHub issues like this one to fix some of the Windows issues preventing the Multipass daemon from starting up.

Cloud-Init

Cloud-init is the industry-standard multi-distribution method for cross-platform cloud instance initialization. Cloud instances initialize from a disk image and instance data containing cloud metadata, optional user data, and vendor data. Multipass includes enough data to let you exec into the instance from the command prompt using the multipass CLI. But if I want to SSH in directly, I need to add my public key through a cloud-init configuration.
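
To make the distinction concrete, here is what the two access paths look like once an instance named master exists (the direct SSH route only works after the key is added via the cloud-init below):

multipass exec master -- whoami   # goes through the Multipass daemon, no SSH key needed
ssh adyanth@master.mshome.net     # direct SSH, needs the public key from cloud-init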

I created a file called multipass.yml with the below contents to add the SSH key. On Windows, the key would be in C:\Users\<user>\.ssh\id_[rsa/ed25519].pub. You can find instructions online on enabling the SSH client in Windows if yours does not have it already. Feel free to replace my username and key, and add multiple keys if you have them, but note that the VM IPs are only accessible from your machine and not from the rest of the network.

#cloud-config
users:
- name: adyanth
  groups: sudo
  sudo: ALL=(ALL) NOPASSWD:ALL
  ssh_authorized_keys: 
  - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN3xmSbkQodRqlil2D09/J89Mqz4JNGxXIsyY3y/rs70 adyanth
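
If you do not have a key pair yet, one can be generated with ssh-keygen; the contents of the resulting .pub file are what goes into the configuration above.

ssh-keygen -t ed25519                   # accept the defaults
type %USERPROFILE%\.ssh\id_ed25519.pub  # print the public key to paste into multipass.yml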

Launch the VMs

With the cloud-init file handy, I proceeded to spin up the VMs using the below commands. You can mess around with the CPU, memory, and disk allocated, and choose to deploy more VMs or multiple nodes for the control plane as needed.

multipass launch --cpus 1 --mem 1G --disk 2G --name master  --cloud-init multipass.yml
multipass launch --cpus 1 --mem 1G --disk 2G --name worker1 --cloud-init multipass.yml
multipass launch --cpus 1 --mem 1G --disk 2G --name worker2 --cloud-init multipass.yml

Running multipass list then shows the status of each instance as below.

> multipass list
Name                    State             IPv4             Release
master                  Running           172.26.126.48    Ubuntu 20.04 LTS
worker1                 Running           172.26.126.55    Ubuntu 20.04 LTS
worker2                 Running           172.26.126.72    Ubuntu 20.04 LTS

At this point, I can SSH into the VMs using ssh adyanth@master.mshome.net (Hyper-V's default switch resolves instance hostnames under the mshome.net domain), but I do not need to do that.

k3sup

k3sup is a lightweight utility to get from zero to KUBECONFIG with k3s on any local or remote VM. All you need is SSH access and the k3sup binary to get kubectl access immediately. This SSH requirement is why we added the cloud-init configuration while deploying our VMs.

k3sup is distributed as a single binary, so installing it was just a matter of dropping it into my scripts folder, which is in my PATH.
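
On Windows, that amounts to fetching the k3sup.exe asset from the GitHub releases page, something like the below (curl ships with recent Windows 10 builds).

curl.exe -Lo k3sup.exe https://github.com/alexellis/k3sup/releases/latest/download/k3sup.exe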

I loved the simplicity and speed with which I was able to get the cluster up and running, without the need for Ansible, hosts files, and such. I ran the below commands, which took five minutes at most. The first command installs and initializes the k3s control plane on the master, whereas the second and third join the workers to the cluster.

k3sup install --context k3s-cluster --user adyanth --host master.mshome.net

k3sup join --user adyanth --server-host master.mshome.net --host worker1.mshome.net
k3sup join --user adyanth --server-host master.mshome.net --host worker2.mshome.net

If you want a multi-master cluster with embedded etcd, it is as simple as running the below commands for the cluster masters. Note the extra --cluster parameter provided for the first master node and the extra --server for joining additional master nodes. The GitHub README has detailed command-line options for k3sup.

k3sup install --cluster --context k3s-cluster --user adyanth --host master1.mshome.net
k3sup join --server --user adyanth --server-host master1.mshome.net --host master2.mshome.net

Once the commands finish, k3sup generates a kubeconfig file in the current directory. I copied that file over to C:\Users\<user>\.kube\config and proceeded to use the usual kubectl commands to talk to the cluster.
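
From the directory where k3sup ran, that copy looks something like this in PowerShell (the generated file is named kubeconfig):

mkdir $env:USERPROFILE\.kube -Force | Out-Null   # make sure the .kube directory exists
copy .\kubeconfig $env:USERPROFILE\.kube\config  # copy the generated kubeconfig into place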

At this point, I downloaded kubectl from here and proceeded to run commands against my brand new Kubernetes cluster! 🎉

> kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
master    Ready    master   1h    v1.19.13+k3s1
worker1   Ready    <none>   1h    v1.19.13+k3s1
worker2   Ready    <none>   1h    v1.19.13+k3s1

Rancher

Rancher is a lot of things: they have their own Kubernetes engine called RKE, and the lightweight k3s distribution we used for the cluster above is also from them. They also have a cluster management platform that can be deployed on an existing Kubernetes cluster as shown in their documentation, which I will summarize below for the sake of completeness.

Installing Helm

Helm charts help us define, install, and upgrade even the most complex Kubernetes applications with ease. Rancher provides a Helm chart for the installation as well, and many other Kubernetes applications are packaged as Helm charts too, so I proceeded to install Helm to make use of them.

Download the latest release from GitHub, extract it, and drop it into a directory that is in your PATH, and we are off to the races.
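
To confirm the binary is picked up from PATH:

helm version --short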

Installing Rancher

Let us proceed to install Rancher using the official Helm charts. Rancher also needs cert-manager, a Kubernetes add-on that automates the management and issuance of TLS certificates from various issuing sources. I installed both of them as shown below.

Install cert-manager

helm repo add jetstack https://charts.jetstack.io
helm repo update
kubectl create namespace cert-manager
helm install cert-manager jetstack/cert-manager --namespace cert-manager --version v1.0.4 --set installCRDs=true --wait
kubectl -n cert-manager rollout status deploy/cert-manager
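
Optionally, the individual cert-manager pods (cert-manager, cainjector, and webhook) can be checked as well:

kubectl get pods --namespace cert-manager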

Install Rancher

Substitute the hostname you need in the fourth command.

helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm repo update
kubectl create namespace cattle-system
helm install rancher rancher-latest/rancher --namespace cattle-system --set hostname=rancher.adyanth.lan --wait
kubectl -n cattle-system rollout status deploy/rancher

Once both of them show the rollout status as complete, Rancher is installed in our cluster. To access it, we need to use the hostname we specified. Since this is a learning environment, I added the below entry to my hosts file at C:\Windows\System32\drivers\etc\hosts, where the IP is the address of the master VM, as shown in multipass list.

172.26.126.48 rancher.adyanth.lan
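
A quick ping from a new terminal confirms the hosts entry resolves to the master VM:

ping rancher.adyanth.lan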

I opened a browser, pointed it to rancher.adyanth.lan, and voilà: I was asked to create an admin account and then dropped into the Rancher UI showing all three nodes!

Rancher UI

In my next post, I discuss how I moved a Docker application to run on a multi-node Kubernetes cluster. 🚀