Building the Software Stack for My Kubernetes Home Lab

Apr 6, 2023 by Sameer Kulkarni

Overview

This is the third post in the series about my home lab setup. If you haven’t already, be sure to check out the first two posts. The first post, Building a Kubernetes Home Lab from the Ground Up, covers the initial steps in setting up a home lab and provides some helpful tips for anyone looking to do the same. The second post, Hardware and Networking Setup for My Home Lab, as the name suggests, focuses on the hardware and networking aspects of my setup.

In this post, I will talk about the OS and software setup I did so that I can run some applications on the machines. As I mentioned in the first post, I intend to run my applications on Kubernetes. Hence, in this post we will briefly touch on what Kubernetes is, look at some of the popular Kubernetes distributions, and see how to install the Kubernetes distribution of my choice on bare-metal servers.

Choosing the OS

Your choice of OS largely depends on two factors:

  1. Your home lab hardware
  2. Your use case

If you’ve gone with ARM-based SBCs like the Raspberry Pi, you would probably want a compatible OS like Raspberry Pi OS. You can also choose other Linux distros such as Ubuntu, Fedora, or CentOS, which offer a compatible version, installation instructions, or both. You also have the option of usage-specific OSes, such as LibreELEC to run Kodi, RetroPie to run retro games, Kano OS for kids, etc.

If you’ve chosen a mini PC or a server for your home lab machines, then you can choose any Linux or Windows-based OS, depending on your use case and preference. Both come in desktop and server versions. The main differences between desktop and server versions are the GUI, or lack thereof, and the software packages included to suit desktop or server use cases. Linux server versions typically do not come with a GUI.

I chose to go with Ubuntu desktop for my setup, which is arguably the most popular Linux-based OS. While I’m comfortable with shell commands on Linux, I do like to have a GUI available if and when required. As for the difference in the included packages, that wasn’t relevant for my use case, since I plan to install Kubernetes on the machines and will install the relevant packages as part of that setup.

Installing Ubuntu

Installing Ubuntu is pretty straightforward. You can refer to the easy-to-follow tutorial on the Ubuntu website.

One thing I did differently while installing Ubuntu, though, was to choose manual partitioning instead of the default. I did this for two reasons. One, it lets me set aside dedicated space in case I want a dual-boot setup in the future. Two, since my boot disk is rather large for a Linux install, I can repurpose the extra space for storage.

I also made sure to connect my machines to the “Internal Router” of my two-router networking setup while installing the OS.

Enabling SSH

Once I finished installing Ubuntu, the next step was to enable SSH access on the machines. This lets me work on them remotely from my daily-use laptop, instead of having to physically connect to them. Note that the steps below have to be performed on each machine directly, one after the other, so it’s easiest to do this while you’re still physically connected to each machine for the OS installation.

To enable SSH, you first need to install an SSH server on the machine. Note that the procedure below is for an Ubuntu machine; you may need to adapt it to suit your OS.

Run the following commands by opening the terminal on each machine.

$ sudo apt-get update                  # Update available packages
$ sudo apt-get upgrade                 # Upgrade the packages
$ sudo apt-get install openssh-server  # Install the SSH server

Enable the SSH service to make it start automatically when the machine boots up and to keep it running in the background.

$ sudo systemctl enable ssh   # Enable the ssh service
$ sudo systemctl status ssh   # Check the ssh service status

The Ubuntu firewall is off by default. If you’ve enabled it, you would also need to open the SSH port in the firewall, since it would otherwise block all incoming SSH connections. In that case, run the following commands to open the SSH port.

$ sudo ufw status      # Check your firewall status
$ sudo ufw allow ssh   # Open the SSH port
$ sudo ufw reload      # Reload the firewall
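With the SSH server running, you can log in from your daily-use laptop. Here’s a minimal sketch assuming key-based login; the user name and IP address are placeholders for your own values, not from my setup:

```shell
# On the laptop: create a key pair (skip if you already have one),
# copy the public key to a home lab machine, then log in.
# "ubuntu" and 192.168.0.101 are placeholder values.
$ ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519
$ ssh-copy-id ubuntu@192.168.0.101
$ ssh ubuntu@192.168.0.101
```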

Installing Kubernetes

Kubernetes is the most popular container orchestration solution right now. It is widely used to run enterprise applications on-premises as well as in the cloud. Some of the important advantages of hosting applications on Kubernetes include high availability, auto-scaling, rollout management, and service discovery.

As mentioned in the first post of the series, writing containerized applications and running them on Kubernetes is a big part of my daily work. Hence I wanted to deploy a multi-node Kubernetes cluster locally on my home lab and run applications on it. Thus, the next step in setting up my home lab was to install a Kubernetes cluster on it.

Choosing the Kubernetes distribution

If you’re familiar with Kubernetes, you know that there are many Kubernetes distributions to choose from in addition to vanilla Kubernetes. A few of the popular ones are below:

Red Hat OpenShift

Red Hat OpenShift is a Kubernetes-based platform that adds supplementary services for CI/CD, observability, and a container runtime, among others. The community distribution of OpenShift is called OKD. Red Hat also offers an OpenShift distribution for local development called OpenShift Local, formerly known as CodeReady Containers.

AWS EKS

Although AWS EKS is a popular managed Kubernetes distribution on the AWS cloud platform, it also has a free and open source version called EKS Anywhere. You can use it to install EKS on bare-metal or virtual machines on-premises. You only have to pay if you want AWS support for it or any additional features available from AWS.

minikube

minikube is a local Kubernetes cluster that focuses on making it easy to learn and develop for Kubernetes. It’s quite common to use minikube to create a single node cluster on your local machine. However, you can also use it to create a multi-node cluster.

Rancher

Rancher is a Kubernetes management tool used to deploy, run, and manage Kubernetes clusters anywhere, including on-premises and in the cloud. Rancher also offers a Kubernetes distribution called RKE (Rancher Kubernetes Engine), which can be installed on bare-metal or virtualized servers. The complexity of Kubernetes installation is one of the main problems RKE solves.

Rancher also has a stripped-down Kubernetes offering called K3s. It’s a CNCF-certified, lightweight Kubernetes distribution that is great for IoT, ARM, edge, and similar use cases where compute and storage capacity is limited. It’s also great for development purposes, since it’s a fully compliant Kubernetes distribution.

Although my home lab setup is more than capable of running a full Kubernetes distribution, running a fully compliant Kubernetes cluster at a fraction of the compute and memory footprint is even better. Hence I decided to proceed with K3s as my Kubernetes distribution of choice.

Installing K3s on my home lab

You can install K3s in a number of configurations. One of the major choices is between a single server with an embedded DB and a high-availability setup with multiple server nodes. With an embedded DB, you can run either the default SQLite or an embedded etcd database. Alternatively, you can connect the K3s servers to an external datastore such as etcd, MySQL, MariaDB, or PostgreSQL.
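For illustration, here is roughly what an external-datastore install looks like, per the K3s documentation. This is not part of my setup; the MySQL connection string is a placeholder you would replace with your own credentials and host:

```shell
# Hypothetical example: a K3s server pointed at an external MySQL
# datastore via --datastore-endpoint instead of the embedded DB.
curl -sfL https://get.k3s.io | sh -s - server \
  --datastore-endpoint="mysql://username:password@tcp(hostname:3306)/k3s"
```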

I’m going to install K3s on a single server with an embedded etcd DB. K3s recommends using three server nodes in HA mode, but since I have a total of four machines in my home lab, that would leave only one node to use as a worker.

First, I opened SSH connections to all four of my machines. I then ran the following command on one of them to install the K3s server. The same command also installs all the dependencies, such as the container runtime (containerd) and supporting utilities such as kubectl.

curl -sfL https://get.k3s.io | K3S_TOKEN=SECRET sh -s - server --cluster-init

In the above command, K3S_TOKEN is a shared secret that subsequent servers/agents use to join the cluster, server is the command to run a management server, and --cluster-init is a flag to start a new cluster using the embedded etcd database. Once the server is up, it creates a kubeconfig file at /etc/rancher/k3s/k3s.yaml, which you can start using to query Kubernetes as below.
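For example, you can list the nodes right on the server node. K3s bundles a kubectl wrapper, and the embedded kubeconfig is owned by root, hence the sudo:

```shell
# Run on the server node; at this point it should list a single node
# with control-plane and etcd roles.
$ sudo k3s kubectl get nodes
```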

[Screenshot: K3s server node ready]

Run the following command on the remaining three nodes to make them join the cluster as Agents.

curl -sfL https://get.k3s.io | K3S_URL=https://<Server_IP>:6443 K3S_TOKEN=SECRET sh -s -

Replace <Server_IP> in the above command with the actual IP address of the server node. Once the command finishes executing, run the same kubectl command as before to make sure all nodes have joined the cluster successfully with the right roles.

[Screenshot: all K3s nodes ready with the expected roles assigned]

After verifying that all the nodes have joined the cluster and have the right roles assigned, my K3s cluster is ready. The only thing remaining is to copy the kubeconfig file over to my daily-use laptop so that I can access the cluster without connecting to the server node.

First, copy the k3s.yaml file, which is owned by root, to the regular user’s home directory and change the file’s owner to that user.

k3sServer$ mkdir -p ~/.kube/
k3sServer$ sudo cp /etc/rancher/k3s/k3s.yaml /home/<k3s_server_user>/.kube/config
k3sServer$ sudo chown <k3s_server_user>:<k3s_server_user> /home/<k3s_server_user>/.kube/config

Replace the <k3s_server_user> with the actual user name for your K3s server. Now copy the file over to the daily-use laptop.

sameer@laptop$ scp <k3s_server_user>@<Server_IP>:~/.kube/config /tmp/config
sameer@laptop$ sed -e 's/127\.0\.0\.1/<Server_IP>/g' /tmp/config > ~/.kube/config
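To see what that sed substitution does, here it is applied to a sample kubeconfig line, with the placeholder IP 192.168.0.101 standing in for the server IP:

```shell
# The kubeconfig copied from the server points at the loopback address;
# the substitution rewrites it to the server's LAN IP.
line='    server: https://127.0.0.1:6443'
echo "$line" | sed -e 's/127\.0\.0\.1/192.168.0.101/g'
# prints:     server: https://192.168.0.101:6443
```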

This completes the Kubernetes setup on the home lab, and it’s now accessible from my laptop as well. You can verify that you’re able to access the cluster from the laptop by running the same kubectl command as before.

[Screenshot: K3s cluster accessible from laptop]

Conclusion

In this post we covered installing Ubuntu on the home lab machines, took a brief look at what Kubernetes is, and went over some of the popular Kubernetes distributions. We also saw why K3s made sense for my home lab and how to set it up on your own machines.

The next post will talk about deploying applications to the home lab and the Kubernetes cluster using Acorn. We’ll be installing a media backup and streaming application on it.

  1. Building a Kubernetes Home Lab from the Ground Up
  2. Hardware and Networking Setup for My Home Lab
  3. Software Setup for My Home Lab
  4. Hosting Plex Media Server On My Home Lab
  5. Hosting Pi-hole on My Home Lab
  6. Creating an Application Dashboard for your Home Lab
  7. DIY Cloud Storage: Running Nextcloud on a Kubernetes Home Lab

Sameer Kulkarni is a software engineer with 14+ years of experience and is working as a Principal Engineer at InfraCloud. You can chat with him on Twitter and read more of his work on Medium.

Header Photo by Farzad on Unsplash