Now that I have a Kubernetes cluster handy, I wanted to get my hands dirty by deploying my first application on it. What better way to double up on the investment than running an application I wrote myself to learn something new? If you want to read up on Short{Paste}, the application I wrote, you can read my blog post on it.
App deployment
I started by switching from Global to the Local cluster's Default namespace/project. From the Workload tab, clicking on Deploy takes me to the deploy workload page. Here, I start by naming the workload shortpaste and selecting the Docker image adyanth/shortpaste. I leave the port mapping empty for now since I plan to add a layer 7 Ingress rather than a layer 4 NodePort or a layer 4 load balancer. Under environment variables, I add what my app expects. Finally, for now, I leave the scale at the default of 1 and hit Launch.
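For reference, the Rancher form boils down to roughly this Deployment (a minimal sketch; the environment variable name is a placeholder for whatever the app actually expects, and the namespace is assumed to be default):

```bash
# Roughly what the Rancher "Deploy" form creates under the hood.
# SHORTPASTE_DATA_DIR is a placeholder name, not necessarily the real one.
kubectl apply -n default -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shortpaste
spec:
  replicas: 1
  selector:
    matchLabels:
      app: shortpaste
  template:
    metadata:
      labels:
        app: shortpaste
    spec:
      containers:
        - name: shortpaste
          image: adyanth/shortpaste
          env:
            - name: SHORTPASTE_DATA_DIR   # placeholder variable name
              value: /shortpaste
EOF
```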
After a minute or so for the image to download, I can see my app show up as active! Note that there is no way for me to access it right now, so I proceed to make it accessible by adding an Ingress rule.
Adding Ingress
I head over to the Load Balancing tab and proceed to Add Ingress. The configuration here is quite simple for our needs. I provide a name, set a hostname to use, specify the workload to target for this Ingress and the port on which my app listens. Remember that K3s uses Traefik for routing, so it can do Let’s Encrypt SSL/TLS certificates and everything, but for now, we don’t need anything fancy; the default self-signed certificate is good enough. After saving, the Ingress is ready!
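For the curious, the resulting Ingress object looks roughly like this (the hostname, service name and port below are assumptions; Rancher generates the actual backend service for the targeted workload):

```bash
kubectl apply -n default -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shortpaste
spec:
  rules:
    - host: shortpaste.example.com   # assumed hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: shortpaste     # assumed service name
                port:
                  number: 8080       # the port my app listens on
EOF
```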
With this, the app should be working. I add an entry to my hosts file (for now; you can configure a DNS server to CNAME this to any of your cluster nodes), similar to what I did for Rancher itself in the previous post. That finishes deploying my application on Kubernetes!
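The hosts entry is a single line; the IP and hostname below are placeholders for one of my node IPs and the hostname I set in the Ingress:

```
# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on Windows)
192.168.1.50  shortpaste.example.com   # placeholder node IP and Ingress hostname
```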
Storage
Now that I have it working, I can go deeper into the configuration of other things like storage. My application saves its SQLite database and the actual data in the directory specified by the environment variable. But this directory inside the Pod/container is ephemeral, meaning it only lasts as long as the Pod is running. If the app restarts or the Pod moves to another node, it loses all its data. So, we need a shared storage location in the cluster. There are multiple options available here, such as AWS S3 buckets and others, but I want everything on my server, mainly to save costs and bandwidth. I chose NFS, the Network File System. You can set up the cluster master as the NFS server or use another dedicated file server for it. The prerequisite to using NFS is that all nodes should have the nfs-common package installed to mount NFS shares, so I SSHed into each node and ran the apt command to install it.
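On Debian/Ubuntu-based nodes, that comes down to something like:

```bash
# Run on every node so it can mount NFS shares
sudo apt update
sudo apt install -y nfs-common
```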
Background on PV, PVC and Storage Classes
A Persistent Volume (PV) is a storage medium that can save the application's data even if the Pod is destroyed and recreated. The simplest form of PV is a local volume, which is local to a node. The issue with using a local PV is that if (more like when) the Pods are migrated to other nodes, that node's filesystem would not have the application's data unless we manually mount shared storage at the same location on all nodes or use something like Gluster for replicated/distributed storage.
A Persistent Volume Claim (PVC) is how an application requests the storage it needs. It contains the size the application needs and other customization parameters, and it can also specify a Storage Class to use.
A Storage Class is what takes a PVC and dynamically provisions a PV. It creates a PV of the requested size and binds it to the PVC. The application that created the PVC can then mount it into the Pod and use it for storage.
The official docs over at kubernetes.io have more details if needed.
OpenMediaVault for NFS
If you read my previous post here, you know I have a file server in my ESXi setup: a VM running OpenMediaVault (OMV). I won't go in-depth on how to set up an NFS share on OMV here.
I created a new shared folder dedicated to K8s, and a crucial step here is to select Read/Write/Execute permissions for Owner, Group and Others. The reason we need Others is that NFS (v3) does not have a concept of users; authentication is at the host level, based on IP addresses.
Then, I went to the NFS service, enabled it, and added the newly created shared folder to be accessible from my desktop PC's IP address with read/write privileges. The K3s cluster is running on Hyper-V's default vSwitch using NAT, so the IP the NFS server sees is the same as the host's.
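Under the hood, OMV's NFS service writes an entry to /etc/exports roughly along these lines (the export path, client IP and option set below are placeholders; the exact options depend on the OMV settings):

```bash
# /etc/exports on the OMV VM (illustrative placeholders)
/export/k8s  192.168.1.100(rw,subtree_check,insecure)
```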
Statically creating the PV
Using NFS for storage in Rancher requires me to manually create the Persistent Volume (PV) since there is no built-in Storage Class. I'll come back to auto-provisioning for NFS later, but for now I go to Cluster Explorer (top right in Rancher), then Persistent Volumes under Storage, and create a new one. After filling in a name, type and capacity, I configure the NFS section by providing the NFS server IP and the path where I would like to save the files. OMV creates the NFS share under /export/, so I specify a directory inside my K8s share that I set up before. Under Customize, I select all three Access Modes since NFS allows multiple devices to access the export.
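The resulting PV is roughly the following (the capacity, server IP and path are placeholders for my actual values):

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shortpaste-pv
spec:
  capacity:
    storage: 1Gi                    # placeholder capacity
  accessModes:
    - ReadWriteOnce
    - ReadOnlyMany
    - ReadWriteMany
  nfs:
    server: 192.168.1.10            # placeholder OMV server IP
    path: /export/k8s/shortpaste    # directory inside my K8s share
EOF
```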
Editing the workload to use a PVC
I now head back to the Cluster Manager in Rancher and switch from Global to the default namespace in the local cluster. I select the workload I created before and set the config scale to 0 to stop all running pods. There is no need to do this since any config changes apply in a rolling fashion, but I wanted to avoid any errors and be on the safer side.
I add a volume using "Add a new persistent volume (claim)", or PVC. Here, I give it a name, select Use an existing persistent volume as the source and select the correct PV. Under Customize, I choose all three access modes and hit Define.
Then, I select the mount point for the app, /shortpaste, as defined in the environment variable for my app, and click Save. You can set a sub-path inside the PV to use, but I plan to use the root of the PV for this, so I leave it empty.
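In plain Kubernetes terms, this step amounts to a PVC bound to the existing PV plus a volume mount on the container; a sketch (the claim name and requested size are assumptions):

```bash
kubectl apply -n default -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shortpaste-data
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi              # assumed size
  volumeName: shortpaste-pv     # bind to the PV created above
  storageClassName: ""          # skip dynamic provisioning for this claim
EOF
# The Deployment then gets a volume referencing this claim and a
# volumeMount of that volume at /shortpaste inside the container.
```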
After this, I increase the Config Scale from 0 to 2. With this, there are now two pods, one on worker1 and one on worker2. I can validate that it works by checking that files show up on my NFS share, and by confirming that the app now preserves its state and data when I scale it to 0 and back to 2, which did not happen before.
Dynamic provisioning using Storage Classes for NFS
NFS SubDir External Provisioner is an automatic provisioner that uses your existing, already configured NFS server to support dynamic provisioning of Kubernetes Persistent Volumes via Persistent Volume Claims. Installing it is very simple since a Helm chart is provided. Use the correct NFS server and path and run the Helm commands below to complete the process.
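These are the commands from the provisioner's chart documentation; the NFS server IP and export path below are placeholders for your own values:

```bash
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner \
    nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=192.168.1.10 \
    --set nfs.path=/export/k8s
```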
With that done, going to Cluster Explorer > Storage > Storage Classes shows me the newly created nfs-client storage class. This storage class can automatically provision PVs when applications need them, and all the data ends up under the NFS path specified during the Helm chart installation.
I first detach the current PVC and PV from my app by scaling the service to 0, editing the workload to remove the volume, and saving. Now, I can go back to the Cluster Explorer, delete the PVC and then delete the released PV.
Now, I go back to editing my app, add a volume, and choose Add a new persistent volume claim. Here, I give it a name, select Use a storage class to provision a new PV as the source and select nfs-client as the storage class. Further, I fill in the storage size and choose all access modes. Once saved, I can go to the Cluster Explorer as before to see the newly created PVC and PV. Scaling the service back to 2, I verify that everything still works fine. No more manual creation of PVs for me!
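The dynamically provisioned claim is the same idea, minus the hand-made PV; a sketch with an assumed name and size:

```bash
kubectl apply -n default -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shortpaste-data-nfs
spec:
  storageClassName: nfs-client   # the class created by the provisioner
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi               # assumed size
EOF
```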
Health Checks
The last topic I touch upon here is health checks. Having a health check is a great way to let Kubernetes monitor the application's health and automatically remedy problems if something goes wrong. In my case, all I need to validate is whether the web server is up. I can do that by defining an HTTP request that should return a 2xx status. I configure the correct port (8080 in my case) and leave the rest at their defaults. Any additional HTTP headers needed can be added here.
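Outside the Rancher UI, this corresponds to HTTP readiness/liveness probes on the container; a sketch using kubectl patch (the probe path "/" is an assumption about what returns a 2xx for my app):

```bash
# Add HTTP probes to the first container of the shortpaste Deployment
kubectl -n default patch deployment shortpaste --type=json -p '[
  {"op": "add", "path": "/spec/template/spec/containers/0/readinessProbe",
   "value": {"httpGet": {"path": "/", "port": 8080}}},
  {"op": "add", "path": "/spec/template/spec/containers/0/livenessProbe",
   "value": {"httpGet": {"path": "/", "port": 8080}, "periodSeconds": 10}}
]'
```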
Closing thoughts
I can now say I successfully deployed an application that I wrote in a (for me) new language on Kubernetes, something relatively new to me as well. The whole process was a remarkable experience and gave me insight into how production workloads are deployed and managed at scale. Knowing this helps in more ways than one might think: one can use it to structure a new application so that it is easy to run and manage using containers and a microservices architecture.