Deploying Harbor as a Private Registry – Part 2

Sep 15, 2023


Setting Up and Configuring Harbor: Your Guide to Establishing a Secure Private Registry

This is the second part in a series on selecting a private container registry. In the first part of the series we looked at different options for private container registries. In this part we will discuss best practices for deploying Harbor as a private registry.

Harbor is an open-source cloud-native registry that stores, signs and scans content. This enterprise-grade artifact repository management solution, which graduated from the CNCF in 2020, provides a robust, scalable, and secure environment for managing Docker images, Helm charts, and other types of artifacts. Built with an eye towards multi-tenancy, Harbor includes robust features such as role-based access control, vulnerability scanning, and audit logging, making it an essential tool in the realm of cloud and container management.

Harbor safeguards your proprietary images and charts as a private registry, ensuring they are not exposed to public registries. Harbor’s integrated security features, including vulnerability scanning via Trivy and policy checks via Open Policy Agent, allow for constant, automated security monitoring of your images, significantly enhancing your security posture. Moreover, its support for replication allows you to synchronize selected images or Helm charts between multiple registries, enabling cross-datacenter, high-availability deployments.

Harbor also allows for extensibility and compatibility with various systems, thanks to its compliance with the OCI (Open Container Initiative) standards. Its extensive API coverage enables seamless integration with existing CI/CD systems, making it a versatile tool for any DevOps pipeline. With its rich features and robust architecture, Harbor is a comprehensive solution for the end-to-end management of private container registries.

In this tutorial, we will install Harbor on a Google Compute Engine instance whose access is restricted to the outbound IP addresses of an on-premises environment. The same steps can easily be extended to install Harbor on a bare-metal server or a virtual machine running within your data center.

Step 1 – Launch an Ubuntu Instance and Configure DNS

The first step is to create a static IP address in Google Cloud.

gcloud compute addresses create registry \
   --region=asia-south1

Let’s store the IP address value in an environment variable.

IP=$(gcloud compute addresses describe registry --region=asia-south1 --format=json | jq -r .address)

Let’s use this IP address to launch a GCE VM based on Ubuntu 20.04.

gcloud compute instances create registry \
    --image-family=ubuntu-2004-lts \
    --image-project=ubuntu-os-cloud \
    --machine-type=e2-standard-4 \
    --zone=asia-south1-a \
    --address=$IP \
    --tags=http-server,https-server,registry

Finally, let’s restrict access to this VM to the outbound IP address of your organization, so that only your network can reach the registry.

gcloud compute firewall-rules create deny-registry-access \
    --network default \
    --action deny \
    --direction ingress \
    --rules tcp \
    --source-ranges 0.0.0.0/0 \
    --priority 900 \
    --target-tags registry
gcloud compute firewall-rules create allow-registry-access \
    --network default \
    --action allow \
    --direction ingress \
    --priority=500 \
    --rules=tcp:22,tcp:80,tcp:443 \
    --source-ranges=YOUR_WAN_IP \
    --target-tags registry

We deny all ingress traffic except from your WAN IP address. Because the allow rule has a lower priority number (500) than the deny rule (900), it is evaluated first, so traffic from your WAN IP on ports 22, 80, and 443 is admitted. For more information on GCE firewall rules, refer to the Google Cloud documentation.

We now have a VM associated with a public IP address running in the Google Cloud Platform. 

Let’s add a DNS entry to point the subdomain, registry.edgenative.in, to this VM. I am using GoDaddy as the DNS provider, but you can do the same with any domain registrar.

We should be able to SSH into this instance using the DNS name.
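As a quick check (a sketch; it assumes `dig` is installed locally and that the zone matches the one used when creating the VM):

```shell
# Confirm the DNS record resolves to the static IP reserved earlier
dig +short registry.edgenative.in

# SSH into the VM through gcloud
gcloud compute ssh registry --zone=asia-south1-a
```

You can also SSH directly with `ssh <user>@registry.edgenative.in` if your key is already installed on the VM.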

Step 2 – Configure a Let’s Encrypt Certificate

To generate the Let’s Encrypt TLS certificate, we need to temporarily delete the deny firewall rule, since certbot’s standalone challenge requires inbound access on port 80. Run the below command to do this:

gcloud compute firewall-rules delete deny-registry-access

SSH into the VM and run the below commands to generate the TLS certificates:

sudo snap install certbot --classic
sudo certbot certonly --standalone -d registry.edgenative.in

Verify that the certificates are created at /etc/letsencrypt/live/registry.edgenative.in.
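A quick listing should show the certificate and key files (certbot names them fullchain.pem and privkey.pem):

```shell
# List the generated Let's Encrypt certificate files
sudo ls -l /etc/letsencrypt/live/registry.edgenative.in/
```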

Recreate the firewall rule to restrict access.

gcloud compute firewall-rules create deny-registry-access \
    --network default \
    --action deny \
    --direction ingress \
    --rules tcp \
    --source-ranges 0.0.0.0/0 \
    --priority 900 \
    --target-tags registry

We are ready to install Harbor. The certificates that we generated will be used to configure the TLS connection. 

Step 3 – Install and Configure Harbor Private Registry

Before installing Harbor, we need to install Docker Engine and Docker Compose. Execute the below commands to install both:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt update && sudo apt install -y docker-ce
sudo groupadd docker
sudo usermod -aG docker $USER
DOCKER_CONFIG=${DOCKER_CONFIG:-$HOME/.docker}
mkdir -p $DOCKER_CONFIG/cli-plugins
curl -SL https://github.com/docker/compose/releases/download/v2.18.1/docker-compose-linux-x86_64 -o $DOCKER_CONFIG/cli-plugins/docker-compose
chmod +x $DOCKER_CONFIG/cli-plugins/docker-compose

Log out and log back in (or run newgrp docker) so that the group change takes effect, then verify the version of Docker before proceeding.

Let’s also check the version of Docker Compose.
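Both can be checked with the commands below (the exact output depends on the versions installed):

```shell
docker --version
docker compose version
```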

Let’s download the latest version of Harbor and expand the tarball in the /opt/harbor directory:

sudo mkdir /opt/harbor && cd /opt/harbor

sudo wget https://github.com/goharbor/harbor/releases/download/v2.8.1/harbor-offline-installer-v2.8.1.tgz

sudo tar xzvf harbor-offline-installer-v2.8.1.tgz

Switch to the harbor directory and copy the file harbor.yml.tmpl to harbor.yml. This file contains the configuration needed by Harbor. 

Edit the file harbor.yml and configure the hostname along with the paths to the TLS certificate and private key generated in the previous step.
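For reference, the relevant portion of harbor.yml looks roughly like this (the hostname and certificate paths match the values used earlier; this is a sketch of the key settings, not the full file):

```yaml
hostname: registry.edgenative.in

https:
  port: 443
  certificate: /etc/letsencrypt/live/registry.edgenative.in/fullchain.pem
  private_key: /etc/letsencrypt/live/registry.edgenative.in/privkey.pem
```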

We haven’t changed other settings, such as the admin username and password. Feel free to modify those values, but make a note of them.

After saving the file, run the prepare script available in the same directory. It generates the configuration files that Docker Compose uses to run Harbor.

sudo ./prepare

Now it is time to run Harbor. To do this, execute the Bash script install.sh.

sudo ./install.sh

In a few minutes, Harbor should be up and running. 
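You can confirm that all of Harbor’s containers are healthy with Docker Compose (run from the directory containing the generated docker-compose.yml):

```shell
# Check the status of the Harbor containers
cd /opt/harbor/harbor
sudo docker compose ps
```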

Try accessing the Harbor UI from a browser with the username and password set in the harbor.yml file.

You should see the default project called library.

We are now ready to use Harbor as a private registry to push and pull images. In the next part of this tutorial, we will explore how to use it with K3s and Acorn. Stay tuned.
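As a quick smoke test, you can push an image to the default library project (a sketch; log in with the admin credentials from harbor.yml, and substitute any image you like for nginx):

```shell
REGISTRY=registry.edgenative.in

# Authenticate against the private registry
docker login $REGISTRY

# Tag a public image into the default 'library' project and push it
docker pull nginx:latest
docker tag nginx:latest $REGISTRY/library/nginx:latest
docker push $REGISTRY/library/nginx:latest
```

The pushed image should then appear under the library project in the Harbor UI.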

Janakiram is a practicing architect, analyst, and advisor focusing on emerging infrastructure technologies. He provides strategic advisory to hyperscalers, technology platform companies, startups, ISVs, and enterprises. As a practitioner working with a diverse enterprise customer base across cloud native, machine learning, IoT, and edge domains, Janakiram gains insight into the enterprise challenges, pitfalls, and opportunities involved in emerging technology adoption. Janakiram is an Amazon, Microsoft, and Google certified cloud architect, as well as a CNCF Ambassador and Microsoft Regional Director. He is an active contributor at Gigaom Research, Forbes, The New Stack, and InfoWorld. You can follow him on Twitter.

Header photo by 🇸🇮 Janko Ferlič on Unsplash

