Acorn can be installed in environments such as Docker Desktop, Minikube, or MicroK8s running on a developer workstation. There are instances where you may want to emulate a multi-node cluster that mimics the production environment to deploy and test Acorn apps.
This tutorial walks you through the steps involved in creating a multi-node Kubernetes cluster based on Multipass and K3s that exposes services through MetalLB and the NGINX ingress controller.
You can configure the cluster on macOS with either an Intel chip or Apple Silicon; the steps are identical on both architectures.
Step 1 – Installing Multipass and Launching Ubuntu VMs
Multipass is a tool from Canonical to launch Ubuntu virtual machines on Windows, macOS, and Linux. It abstracts the steps involved in downloading the images, configuring the VM, and launching it.
Multipass uses Hyper-V on Windows, QEMU and HyperKit on macOS, and LXD on Linux for minimal overhead and the fastest possible start time.
Assuming you have Homebrew installed on your Mac, run the below command to install Multipass:
brew install --cask multipass
Once Multipass is installed, it’s time to launch three Ubuntu 20.04 VMs, each with 4 CPUs, 4GB of RAM, and 20GB of storage.
multipass launch 20.04 --name node-1 --cpus 4 --disk 20G --mem 4G
multipass launch 20.04 --name node-2 --cpus 4 --disk 20G --mem 4G
multipass launch 20.04 --name node-3 --cpus 4 --disk 20G --mem 4G
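Since the three commands differ only in the node name, they can also be written as a loop (a convenience, not a requirement):

```shell
# Launch node-1 through node-3 with identical resources
for n in 1 2 3; do
  multipass launch 20.04 --name "node-$n" --cpus 4 --disk 20G --mem 4G
done
```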
Let’s verify that the VMs are ready.
multipass list

Step 2 – Installing and Configuring K3s Cluster
With the VMs already launched, let’s install K3s. Node-1 will act as the server, while the remaining two run the agent.
To SSH into the first node, run the below command:
multipass shell node-1
Let’s install the K3s server in this node. Since we plan to use Nginx as Ingress and MetalLB as the load balancer, we will disable Traefik and ServiceLB.
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik --disable servicelb" K3S_TOKEN=acorn-dev-env sh -s -
SSH into the remaining two nodes and run the following command:
curl -sfL https://get.k3s.io | K3S_URL=https://192.168.64.7:6443 K3S_TOKEN=acorn-dev-env sh -
The IP address 192.168.64.7 corresponds to node-1, which is running the server. Replace it with the IP address of node-1 in your environment.
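Rather than copying the server’s IP by hand, you can look it up from the host and run the join command on both agents in one go. A sketch, which assumes `multipass info` prints an `IPv4:` line as current releases do:

```shell
# Grab node-1's first IPv4 address from multipass info output
NODE1_IP=$(multipass info node-1 | awk '/IPv4/ {print $2; exit}')

# Join node-2 and node-3 as agents using the discovered address
for n in 2 3; do
  multipass exec "node-$n" -- bash -c \
    "curl -sfL https://get.k3s.io | K3S_URL=https://$NODE1_IP:6443 K3S_TOKEN=acorn-dev-env sh -"
done
```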
On node-1, run the following command to verify that the K3s cluster is up and running:
sudo kubectl get nodes

While you are still within the SSH session on node-1, copy the contents of the K3s configuration file, /etc/rancher/k3s/k3s.yaml, so that you can access the cluster from your workstation through kubectl.
Create the kubeconfig file on your Mac by replacing 127.0.0.1 with the IP address of node-1.
Set the KUBECONFIG environment variable to the file you updated. On my machine, I copied k3s.yaml to k3s-kubeconfig.
export KUBECONFIG=$PWD/k3s-kubeconfig
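The copy-and-edit step can also be scripted from the host in one pipeline; a sketch, assuming node-1’s IP is 192.168.64.7 (substitute yours):

```shell
# Pull the kubeconfig out of node-1 and point it at the node's IP
multipass exec node-1 -- sudo cat /etc/rancher/k3s/k3s.yaml \
  | sed 's/127.0.0.1/192.168.64.7/' > k3s-kubeconfig
export KUBECONFIG=$PWD/k3s-kubeconfig
```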
Verify that you can access the K3s cluster from Mac.
kubectl cluster-info

Let’s install MetalLB on our cluster.
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml
We need to configure and advertise the IP pool in the L2 network. Create and apply the below YAML file:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.64.200-192.168.64.240
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: first-pool-adv
  namespace: metallb-system
We created a pool of IP addresses ranging from 192.168.64.200 to 192.168.64.240. Any service of type LoadBalancer will get an IP address from this range.
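Assuming you saved the manifest above as metallb-pool.yaml (the file name is arbitrary), apply it and run a quick sanity check that MetalLB accepted both resources:

```shell
# Apply the pool and advertisement, then confirm MetalLB accepted them
kubectl apply -f metallb-pool.yaml
kubectl get ipaddresspool,l2advertisement -n metallb-system
```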
It’s time to install the NGINX ingress controller with Helm.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --create-namespace \
  --namespace ingress-nginx
The NGINX ingress controller service, exposed as type LoadBalancer, should get an IP address from the MetalLB pool we created earlier. Let’s verify it.
kubectl get svc -n ingress-nginx

Our ingress is available at 192.168.64.200, which is the first IP address in the pool.
With the Kubernetes environment and the prerequisites in place, let’s go ahead and install Acorn.
Step 3 – Installing Acorn and Running Apps
Assuming you have the latest version of the Acorn CLI on your workstation, run the below command to install Acorn on the cluster.
acorn install --ingress-class-name nginx

Let’s run a simple web server based on the below Acornfile to check the installation:
containers: {
  "default": {
    image: "nginx"
    ports: publish: "80/http"
    files: {
      // Simple index.html file
      "/usr/share/nginx/html/index.html": "<h1>My First Acorn!</h1>"
    }
  }
}
acorn run --name myweb .
Wait for the service to get associated with the ingress.
acorn apps

Hit the endpoint with curl to test the service.
curl -H "host: default-myweb-c58103fc.wrnjwh.alpha.on-acorn.io" http://192.168.64.200
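The same check can be scripted so it fails loudly when the response is wrong. The hostname below is the one from my `acorn apps` output; yours will differ:

```shell
# Request the app through the ingress and assert on the page content
HOST=default-myweb-c58103fc.wrnjwh.alpha.on-acorn.io
curl -s -H "Host: $HOST" http://192.168.64.200 | grep -q 'My First Acorn!' \
  && echo 'app reachable'
```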
We have successfully configured a multi-node development and test environment to deploy Acorn apps. To learn more about using Acorn, visit our getting started guide, or join us for an upcoming meetup or training.
Janakiram is a practicing architect, analyst, and advisor focusing on emerging infrastructure technologies. He provides strategic advisory to hyperscalers, technology platform companies, startups, ISVs, and enterprises. As a practitioner working with a diverse Enterprise customer base across cloud native, machine learning, IoT, and edge domains, Janakiram gains insight into the enterprise challenges, pitfalls, and opportunities involved in emerging technology adoption. Janakiram is an Amazon, Microsoft, and Google certified cloud architect, as well as a CNCF Ambassador and Microsoft Regional Director. He is an active contributor at Gigaom Research, Forbes, The New Stack, and InfoWorld.