Acorn is a lightweight deployment framework that makes developers productive and efficient. When organizations adopt Acorn as the standard platform to run microservices, they have a consistent mechanism for packaging and deploying workloads.
With edge computing becoming mainstream, many enterprises are running Kubernetes at the edge. Application lifecycle management becomes much easier when Acorn is deployed across all the edge clusters. By combining the power of Acorn with GitOps, DevOps teams can build highly optimized and efficient CI/CD pipelines to roll out new versions of microservices effortlessly.
This tutorial will explore how to leverage GitOps to deploy the Acorn framework in multiple Kubernetes clusters running at the edge. This mechanism can scale to tens of thousands of clusters with minimal changes to the configuration. In part two of the tutorial, we explore how to deploy applications on the infrastructure we’ve created.
Prerequisites
- Access to one or more Kubernetes clusters
- Latest version of Acorn CLI
- Flux CLI
- GitHub account with a personal access token
Step 1 – Exploring and verifying the environment
If you want to spin up multiple clusters on your development workstation for this tutorial, consider Multipass. You can refer to one of my previous tutorials to set up Multipass on macOS.
Once you install Multipass, the script below provisions three VMs and installs K3s on each host to create three independent clusters.
for i in {1..3}; do
  multipass launch focal --name cluster-$i --cpus 2 --disk 10G --memory 4G && \
  multipass exec cluster-$i -- sh -c 'curl -sfL -o /home/ubuntu/setup.sh https://get.k3s.io' && \
  multipass exec cluster-$i -- sh -c 'chmod +x /home/ubuntu/setup.sh' && \
  multipass exec cluster-$i -- sh -c 'mkdir -p /home/ubuntu/.kube' && \
  multipass exec cluster-$i -- sh -c 'INSTALL_K3S_EXEC="server --disable traefik --disable servicelb" K3S_KUBECONFIG_MODE="644" K3S_KUBECONFIG_OUTPUT="/home/ubuntu/.kube/config" /home/ubuntu/setup.sh' && \
  multipass transfer cluster-$i:/home/ubuntu/.kube/config cluster-$i-kubeconfig
done
Update the downloaded kubeconfig files for each cluster with the corresponding IP addresses of the hosts.
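For example, a quick way to do this (a sketch; substitute the actual address reported for each VM):

# Find the IPv4 address of each VM
multipass info cluster-1 | grep IPv4
# The K3s-generated kubeconfig points at 127.0.0.1; replace it with the VM's address
sed -i 's/127.0.0.1/192.168.64.51/' cluster-1-kubeconfig
# (on macOS, use: sed -i '' 's/127.0.0.1/192.168.64.51/' cluster-1-kubeconfig)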

Make sure you can access the clusters by switching to an appropriate kubeconfig file.
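For example, assuming the kubeconfig files sit in the current directory:

export KUBECONFIG=$PWD/cluster-1-kubeconfig
kubectl get nodes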

Finally, verify you have the latest version of Acorn and Flux.
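A quick check (assuming the usual version subcommands; flux check --pre additionally validates that the cluster meets Flux’s prerequisites):

acorn version
flux --version
flux check --pre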

Step 2 – Bootstrapping the first cluster with Flux
In this step, we will initialize the first cluster with Flux by pointing it to a new GitHub repository. Once we configure the first cluster, it’s easy for us to repeat that across all the clusters.
Create a personal access token at GitHub and use it to set the environment variable below:
export GITHUB_TOKEN=YOUR_GITHUB_TOKEN
Then, create a blank repository that Flux will use to commit the GitOps toolkit artifacts. Run the below command to bootstrap the cluster.
flux bootstrap github \
  --owner=YOUR_GITHUB_USERNAME \
  --repository=YOUR_GITHUB_REPO \
  --path=clusters/cluster-1 \
  --personal

Clone the GitOps repo locally with the git clone command. The repository contains the essential artifacts needed by Flux.
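For example, with the placeholders used above:

git clone https://github.com/YOUR_GITHUB_USERNAME/YOUR_GITHUB_REPO.git
cd YOUR_GITHUB_REPO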
Step 3 – Configuring and deploying Acorn prerequisites
Acorn requires an ingress controller as a prerequisite. Since edge clusters typically don’t have access to a cloud load balancer, installing a software load balancer is a good idea. For this tutorial, let’s configure and install MetalLB.
Create a new directory called infrastructure, which will contain the artifacts for MetalLB and the NGINX ingress.
The first step is to install the MetalLB Helm chart. Under the infrastructure/metallb directory, create the below files – helmrepo.yaml and helmrelease.yaml:
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: metallb
  namespace: flux-system
spec:
  interval: 1m0s
  url: https://metallb.github.io/metallb
apiVersion: v1
kind: Namespace
metadata:
  name: metallb-system
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: metallb
  namespace: metallb-system
spec:
  interval: 3m
  targetNamespace: metallb-system
  chart:
    spec:
      chart: metallb
      sourceRef:
        kind: HelmRepository
        name: metallb
        namespace: flux-system
      interval: 1m
These two definitions are responsible for installing the Helm chart in the target cluster. For MetalLB, we also need to define the IP pool from which addresses are assigned to LoadBalancer services. Since each cluster needs a separate pool, we will customize it per cluster at a later stage.
Create another directory under infrastructure called ip-pool and create the below YAML file, ip-pool.yaml:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.64.221-192.168.64.230
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: first-pool-adv
  namespace: metallb-system
To wrap up the MetalLB configuration, let’s also create a kustomization.yaml pointing to the above definition. This will help us create a per-cluster configuration later.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ip-pool.yaml
At this point, your local repo directory should look like this:
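Something like the below (the flux-system directory is generated by the bootstrap):

.
├── clusters
│   └── cluster-1
│       └── flux-system
└── infrastructure
    ├── ip-pool
    │   ├── ip-pool.yaml
    │   └── kustomization.yaml
    └── metallb
        ├── helmrelease.yaml
        └── helmrepo.yaml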

To apply this to the target cluster, we need to add a YAML file called infrastructure.yaml under the clusters/cluster-1 directory.
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: metallb
  namespace: flux-system
spec:
  interval: 1h
  retryInterval: 1m
  timeout: 5m
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./infrastructure/metallb
  prune: true
  wait: true
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: metallb-ip-pool
  namespace: flux-system
spec:
  dependsOn:
  - name: metallb
  patches:
  - patch: |-
      apiVersion: metallb.io/v1beta1
      kind: IPAddressPool
      metadata:
        name: first-pool
        namespace: metallb-system
      spec:
        addresses:
        - 192.168.64.201-192.168.64.210
    target:
      kind: IPAddressPool
  interval: 1h
  retryInterval: 1m
  timeout: 5m
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./infrastructure/ip-pool
  prune: true
  wait: true
Notice how we leverage a Kustomization patch to override the MetalLB IP pool. The first edge cluster gets the range 192.168.64.201 to 192.168.64.210. This technique lets us keep a common base configuration while overriding values per cluster.
Commit and push the changes to the remote GitHub repository.
git add .
git commit -m "Added MetalLB"
git push
To force Flux to synchronize, run the following command:
flux reconcile source git flux-system
In a few minutes, you should see the metallb-system namespace and the pods running within it.
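You can confirm this with kubectl:

kubectl get pods -n metallb-system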

We have successfully deployed the MetalLB Helm chart through GitOps. It’s time to do the same with the NGINX ingress controller.
Create a new directory under infrastructure called ingress-nginx and create the below YAML files:
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: ingress-nginx
  namespace: flux-system
spec:
  interval: 1m0s
  url: https://kubernetes.github.io/ingress-nginx

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  interval: 3m
  targetNamespace: ingress-nginx
  chart:
    spec:
      chart: ingress-nginx
      sourceRef:
        kind: HelmRepository
        name: ingress-nginx
        namespace: flux-system
      interval: 1m
As we did for MetalLB, let’s add an entry for the ingress controller for cluster-1. Append the below lines to infrastructure.yaml, as a new YAML document, to point Flux to the ingress Helm chart:
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: ingress-nginx
  namespace: flux-system
spec:
  interval: 1h
  retryInterval: 1m
  timeout: 5m
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./infrastructure/ingress-nginx
  prune: true
  wait: true
Commit and push the changes to GitHub. In a few minutes, the NGINX ingress controller is running and exposed as a LoadBalancer service. Notice that the external IP address of the ingress controller comes from the MetalLB IP pool we defined for cluster-1.
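You can inspect the service with kubectl; the EXTERNAL-IP column should show an address from the pool:

kubectl get svc -n ingress-nginx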

The below image shows the current directory structure with the prerequisites configured:

Step 4 – Deploying Acorn through GitOps with Flux
With the prerequisites in place, let’s go ahead and install Acorn on the target cluster. For this, let’s create a directory called platform/acorn under the root of the repo to store the Acorn artifacts.
mkdir -p platform/acorn
cd platform/acorn
The Acorn CLI lets us generate the static YAML through the -o switch. Let’s use that to populate the platform directory.
acorn install -o yaml > acorn.yaml
We also need a kustomization.yaml file that each cluster can point to. This is a simple entry with a reference to acorn.yaml.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- acorn.yaml
Let’s return to the clusters/cluster-1 directory and add a YAML file, platform.yaml, pointing to the platform artifacts.
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: acorn
  namespace: flux-system
spec:
  dependsOn:
  - name: ingress-nginx
  interval: 1h
  retryInterval: 1m
  timeout: 5m
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./platform/acorn
  prune: true
  wait: true
We are now done with the complete configuration. The repo directory structure should look like the below:
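Roughly like this (assuming the ingress-nginx manifests are split into helmrepo.yaml and helmrelease.yaml, mirroring the MetalLB layout):

.
├── clusters
│   └── cluster-1
│       ├── flux-system
│       ├── infrastructure.yaml
│       └── platform.yaml
├── infrastructure
│   ├── ingress-nginx
│   │   ├── helmrelease.yaml
│   │   └── helmrepo.yaml
│   ├── ip-pool
│   │   ├── ip-pool.yaml
│   │   └── kustomization.yaml
│   └── metallb
│       ├── helmrelease.yaml
│       └── helmrepo.yaml
└── platform
    └── acorn
        ├── acorn.yaml
        └── kustomization.yaml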

Commit and push the changes to GitHub. Within a few minutes, Acorn should be up and running in cluster-1.

You can verify the status with the below command:
flux get kustomizations

Step 5 – Extending the configuration to other clusters
With the boilerplate configuration in place for cluster-1, you can easily extend it to any number of clusters.
First, bootstrap the cluster with the Flux CLI. After that, copy the infrastructure.yaml and platform.yaml files to the target cluster directory. Modify infrastructure.yaml to change the IP range to avoid any overlapping addresses.
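For example, onboarding cluster-2 might look like this (a sketch; point your kubeconfig at cluster-2 first and pick a non-overlapping IP range):

flux bootstrap github \
  --owner=YOUR_GITHUB_USERNAME \
  --repository=YOUR_GITHUB_REPO \
  --path=clusters/cluster-2 \
  --personal

git pull
cp clusters/cluster-1/infrastructure.yaml clusters/cluster-1/platform.yaml clusters/cluster-2/
# Edit clusters/cluster-2/infrastructure.yaml and change the patched pool to,
# for example, 192.168.64.211-192.168.64.220, then commit and push.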
If you want to start with a pre-configured repository, clone my demo repo from GitHub. You can learn more about Acorn in the getting started workshop.
In the next part of this tutorial, we explore deploying Acorn applications through GitOps.
Janakiram is a practicing architect, analyst, and advisor focusing on emerging infrastructure technologies. He provides strategic advisory to hyperscalers, technology platform companies, startups, ISVs, and enterprises. As a practitioner working with a diverse enterprise customer base across cloud native, machine learning, IoT, and edge domains, Janakiram gains insight into the enterprise challenges, pitfalls, and opportunities involved in emerging technology adoption. Janakiram is an Amazon, Microsoft, and Google certified cloud architect, as well as a CNCF Ambassador and Microsoft Regional Director. He is an active contributor at Gigaom Research, Forbes, The New Stack, and InfoWorld. You can follow him on Twitter.
Photo by Karsten Würth on Unsplash