
Bootstrapping a Kubernetes cluster using AI with GPTScript

March 28, 2024 by Luc Juggery

GPTScript is a new scripting language designed to facilitate interactions with Large Language Models (LLMs). Initially compatible with OpenAI models, plans are underway to extend its utility to other platforms, including local models and solutions from other vendors.

Using LLMs

Today, the primary way most users interact with LLMs is by querying them through the web interfaces of the various available solutions.

For advanced AI application development, APIs provided by these platforms enable programmatic interactions. Developers can leverage these APIs in various programming languages, including Python and Node.js, to integrate LLM functionalities into their applications.

About Function calling

OpenAI models support function calling, enabling them to recognize when a function call is necessary and provide the client application with the required function name and parameters. This capability allows client applications to execute commands or functions as determined by the LLM, enriching the interactive experience. This feature is not universal across all models; some may require additional training (fine-tuning) to acquire function-calling abilities.
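The round trip can be sketched as follows. This is a minimal, self-contained illustration of the function-calling pattern; the get_weather tool and its arguments are hypothetical, and a hard-coded mock response stands in for a live OpenAI call:

```python
import json

# Tool schema the client would send to the model (OpenAI-style "tools" payload)
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# Local implementation the client executes on the model's behalf
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

# Mocked model response: the LLM decided a function call was necessary
# and returned the function name plus JSON-encoded arguments
mock_tool_call = {"name": "get_weather", "arguments": '{"city": "Paris"}'}

# Client-side dispatch: look up the function and call it with the parsed args
handlers = {"get_weather": get_weather}
args = json.loads(mock_tool_call["arguments"])
result = handlers[mock_tool_call["name"]](**args)
print(result)  # Sunny in Paris
```

In a real application, the result would then be sent back to the model so it can produce its final answer.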

GPTScript

GPTScript enhances LLM interactions by incorporating predefined tools that leverage OpenAI models’ function-calling capabilities. These tools include:

  • sys.exec: Executes a command and returns its output
  • sys.download: Downloads content from a URL to a specified location
  • sys.getenv: Retrieves the value of an environment variable
  • sys.http.get: Downloads content from a specified HTTP or HTTPS URL
  • sys.http.post: Sends data to a specified HTTP or HTTPS URL using the POST method
  • And more, which can be discovered by running gptscript --list-tools

On top of GPTScript's native tools, users can define additional tools as Node.js, bash, or Python scripts.

GPTScript essentially serves as an intermediary, interpreting function call needs from LLM responses and executing the corresponding actions.
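This intermediary role can be sketched in a few lines of Python; a simplified illustration (GPTScript itself is not implemented this way), where a sys.exec-style handler runs the command the LLM asked for and the captured output is what would be sent back to the model:

```python
import json
import subprocess

def sys_exec(command: str) -> str:
    """Run a shell command and return its output,
    mimicking what the native sys.exec tool does."""
    proc = subprocess.run(command, shell=True, capture_output=True, text=True)
    return proc.stdout + proc.stderr

# A tool call as it might appear in an LLM response
tool_call = {"tool": "sys.exec",
             "input": json.dumps({"command": "echo hello from the cluster"})}

# GPTScript-style dispatch: parse the arguments and execute the tool
args = json.loads(tool_call["input"])
output = sys_exec(args["command"])
print(output)
```

The output of the tool is then fed back into the conversation, which is how the LLM can decide on the next step of a multi-step process like the one below.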

Bootstrapping a Kubernetes cluster

Using GPTScript, we will demonstrate the creation and bootstrapping of a Kubernetes cluster on the Civo cloud platform, employing a GitOps approach for application deployment.

The process involves:

  • Creating a Civo cluster and waiting for activation
  • Configuring kubectl to interact with the new cluster
  • Using Flux to initialize the cluster and deploy applications
  • Monitoring the deployment process through a web interface

This approach illustrates the potential of LLMs to simplify and automate cloud operations.

Notes:

  • While it’s possible to execute these actions with plain scripting, exploring how LLMs can assist us provides interesting insight
  • The cluster bootstrap step will use content from the GitLab repository https://gitlab.com/techwhale/config-fluxcd and deploy the Helm applications and infrastructure components defined there. For details on managing Helm charts through Flux, visit https://fluxcd.io/flux/use-cases/helm/.

Sending these instructions directly to an LLM, without GPTScript, would be of little use given the specificity of the commands required. GPTScript demonstrates its value here by leveraging the LLM’s function-calling capability to execute the necessary commands.

To ensure the LLM correctly understands the required actions (and which commands to execute), we add extra details to our instructions:

Perform the following actions in the following order:

1. Create a Civo cluster named demo version v1.28.2-k3s1, with 2 g4s.kube.medium nodes, disabling traefik2-nodeport application but does not wait for it to be ready
2. Wait for the Civo demo cluster to be active
3. Set the kubectl context to the demo cluster, make sure to use the --save and --switch flags
4. Create a flux-system namespace
5. Create a generic secret named sops-age in the flux-system namespace, this secret has the key age.agekey which value is the content of the file which path is specified in the SOPS_AGE_KEY_FILE environment variable
6. Use flux to bootstrap the dev environment using the repository named config-fluxcd and wait for the command to be finished
7. Wait for 60 seconds
8. Run a process that uses kubectl port-forward to expose the port 9001 of the service weave-gitops in namespace flux-system making sure it runs in the background and use >/dev/null 2>&1
9. Open a browser on the http://localhost:9001

Notes:

  • These instructions are the result of iterative testing
  • Some instructions may appear verbose; however, as LLMs evolve, their ability to understand and execute commands could reduce the need for such verbosity

We provide the LLM with four tools to facilitate these actions:

  • sys.exec: execute a command
  • wait-cluster: ensure the cluster is ready
  • flux: execute the bootstrap command with the correct parameters
  • browser: launch a web browser for real time monitoring

Note: wait-cluster, flux, and browser are three user-defined tools

The bootstrap.gpt file outlines the entire process:

Tools: sys.exec, wait-cluster, flux, browser

Perform the following actions in the following order:

1. Create a Civo cluster named demo version v1.28.2-k3s1, with 2 g4s.kube.medium nodes, disabling traefik2-nodeport application but does not wait for it to be ready
2. Wait for the Civo demo cluster to be active
3. Set the kubectl context to the demo cluster, make sure to use the --save and --switch flags
4. Create a flux-system namespace
5. Create a generic secret named sops-age in the flux-system namespace, this secret has the key age.agekey which value is the content of the file which path is specified in the SOPS_AGE_KEY_FILE environment variable
6. Use flux to bootstrap the dev environment using the repository named config-fluxcd and wait for the command to be finished
7. Wait for 60 seconds
8. Run a process that uses kubectl port-forward to expose the port 9001 of the service weave-gitops in namespace flux-system making sure it runs in the background and use >/dev/null 2>&1
9. Open a browser on the http://localhost:9001

---
name: wait-cluster
description: wait for a civo cluster to be ready
args: cluster: name of the cluster to wait for

#!/bin/bash

while true; do
    status=$(civo k3s show ${cluster} -o json | jq -r '.status')
    if [[ $status == "ACTIVE" ]]; then
        echo "cluster is ACTIVE"
        exit 0
    fi
    sleep 30
done

---
name: flux
description: bootstrap a cluster using flux
args: environment: environment to bootstrap
args: repository: name of the git repository to use

#!/bin/bash

flux bootstrap gitlab \
  --deploy-token-auth \
  --owner=techwhale \
  --repository ${repository} \
  --branch=main \
  --path clusters/${environment}

---
name: browser
description: open a web browser on a given URL
args: url: URL to open

#!python

import time
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Chrome options to bypass the SSL error for self-signed certificate
chrome_options = Options()
chrome_options.add_argument("--ignore-ssl-errors=yes")
chrome_options.add_argument("--ignore-certificate-errors")

# Initialize the WebDriver
chromedriver_path = '/opt/homebrew/bin/chromedriver'
service = Service(executable_path=chromedriver_path)
driver = webdriver.Chrome(service=service, options=chrome_options)

# GPTScript exposes tool arguments to the script as environment
# variables, as with ${cluster} in the bash tool above
import os
url = os.environ.get("url", "http://localhost:9001")

try:
    # Open the URL
    driver.get(url)
    
    # Wait for login form to be there
    WebDriverWait(driver, 120).until(
      EC.presence_of_element_located((By.ID, "email"))
    )

    # Fill credentials
    email_input = driver.find_element(By.ID, "email")
    email_input.send_keys("admin")
    password_input = driver.find_element(By.ID, "password")
    password_input.send_keys("flux")

    # Validate form
    button = driver.find_element(By.CLASS_NAME, "MuiButton-label")
    button.click()

except Exception as e:
    print("An unexpected error occurred. ", e)
finally:
    time.sleep(180)
    driver.quit()

The content can be split into:

  • Instructions for the LLM and the list of available tools
  • Definitions for each tool (wait-cluster, flux, browser)

GPTScript interacts with the LLM by sending instructions and acting on the LLM’s responses to call the appropriate tool with the necessary parameters.

Below is the output obtained when running the bootstrap.gpt script:

$ gptscript ./bootstrap.gpt
18:52:44 started  [main]
18:52:44 sent     [main]
18:52:44 started  [sys.exec(2)] [input={"command":"civo kubernetes create demo --version v1.28.2-k3s1 --nodes 2 --size g4s.kube.medium --remove-applications traefik2-nodeport"}]
18:52:44 sent     [sys.exec(2)]
18:53:28 ended    [sys.exec(2)]
18:53:28 continue [main]
18:53:28 sent     [main]
         content  [1] content | Waiting for model response...
         content  [1] content | tool call wait-cluster -> {"cluster":"demo"}
18:53:30 started  [wait-cluster(3)] [input={"cluster":"demo"}]
18:53:30 sent     [wait-cluster(3)]
18:55:00 ended    [wait-cluster(3)]
18:55:00 continue [main]
18:55:00 sent     [main]
         content  [1] content | Waiting for model response...
         content  [1] content | tool call sys -> {"command":"civo kubernetes config demo --save --switch"}
18:55:01 started  [sys.exec(4)] [input={"command":"civo kubernetes config demo --save --switch"}]
18:55:01 sent     [sys.exec(4)]
18:55:18 ended    [sys.exec(4)]
18:55:18 continue [main]
18:55:18 sent     [main]
         content  [1] content | Waiting for model response...
         content  [1] content | tool call sys -> {"command":"kubectl create namespace flux-system"}
18:55:19 started  [sys.exec(5)] [input={"command":"kubectl create namespace flux-system"}]
18:55:19 sent     [sys.exec(5)]
18:55:19 ended    [sys.exec(5)]
18:55:19 continue [main]
18:55:19 sent     [main]
         content  [1] content | Waiting for model response...
         content  [1] content | tool call sys -> {"command":"kubectl create secret generic sops-age --namespace=flux-system --from-file=age.agekey=${SOPS_AGE_KEY_FILE}"}
18:55:21 started  [sys.exec(6)] [input={"command":"kubectl create secret generic sops-age --namespace=flux-system --from-file=age.agekey=${SOPS_AGE_KEY_FILE}"}]
18:55:21 sent     [sys.exec(6)]
18:55:22 ended    [sys.exec(6)]
18:55:22 continue [main]
18:55:22 sent     [main]
         content  [1] content | Waiting for model response...
         content  [1] content | tool call flux -> {"environment":"dev","repository":"config-fluxcd"}
18:55:23 started  [flux(7)] [input={"environment":"dev","repository":"config-fluxcd"}]
18:55:23 sent     [flux(7)]
► connecting to https://gitlab.com
► cloning branch "main" from Git repository "https://gitlab.com/techwhale/config-fluxcd.git"
✔ cloned repository
► generating component manifests
✔ generated component manifests
✔ component manifests are up to date
► installing components in "flux-system" namespace
✔ installed components
✔ reconciled components
► checking to reconcile deploy token for source secret
✔ configured deploy token "flux-system-main-flux-system-./clusters/dev" for "https://gitlab.com/techwhale/config-fluxcd"
► determining if source secret "flux-system/flux-system" exists
► generating source secret
► applying source secret "flux-system/flux-system"
✔ reconciled source secret
► generating sync manifests
✔ generated sync manifests
✔ sync manifests are up to date
► applying sync manifests
✔ reconciled sync configuration
◎ waiting for GitRepository "flux-system/flux-system" to be reconciled
✔ GitRepository reconciled successfully
◎ waiting for Kustomization "flux-system/flux-system" to be reconciled
✔ Kustomization reconciled successfully
► confirming components are healthy
✔ helm-controller: deployment ready
✔ kustomize-controller: deployment ready
✔ notification-controller: deployment ready
✔ source-controller: deployment ready
✔ all components are healthy
18:55:50 ended    [flux(7)]
18:55:50 continue [main]
18:55:50 sent     [main]
         content  [1] content | Waiting for model response...
         content  [1] content | tool call sys -> {"command":"sleep 60"}
18:55:52 started  [sys.exec(8)] [input={"command":"sleep 60"}]
18:55:52 sent     [sys.exec(8)]
18:56:52 ended    [sys.exec(8)]
18:56:52 continue [main]
18:56:52 sent     [main]
         content  [1] content | Waiting for model response...
         content  [1] content | tool call sys -> {"command":"kubectl port-forward svc/weave-gitops -n flux-system 9001:9001 >/dev/null 2>&1 &"}
18:56:54 started  [sys.exec(9)] [input={"command":"kubectl port-forward svc/weave-gitops -n flux-system 9001:9001 >/dev/null 2>&1 &"}]
18:56:54 sent     [sys.exec(9)]
18:56:54 ended    [sys.exec(9)]
18:56:54 continue [main]
18:56:54 sent     [main]
         content  [1] content | Waiting for model response...
         content  [1] content | tool call browser -> {"url":"http://localhost:9001"}
18:56:55 started  [browser(10)] [input={"url":"http://localhost:9001"}]
18:56:55 sent     [browser(10)]
18:58:57 ended    [browser(10)]
18:58:57 continue [main]
18:58:57 sent     [main]
         content  [1] content | Waiting for model response...
         content  [1] content | The process has been completed
18:58:59 ended    [main]

For each step, GPTScript sends instructions to the LLM and receives a response. Based on this response, GPTScript then executes a corresponding tool. Let’s examine two steps in detail:

  • Creation of the cluster
18:52:44 started  [main]
18:52:44 sent     [main]
18:52:44 started  [sys.exec(2)] [input={"command":"civo kubernetes create demo --version v1.28.2-k3s1 --nodes 2 --size g4s.kube.medium --remove-applications traefik2-nodeport"}]
18:52:44 sent     [sys.exec(2)]
18:53:28 ended    [sys.exec(2)]

This step demonstrates that, based on our instructions, the LLM identified the need to execute the following command:

civo kubernetes create demo --version v1.28.2-k3s1 --nodes 2 --size g4s.kube.medium --remove-applications traefik2-nodeport

It then sent this command back, enabling GPTScript to execute it with the native sys.exec tool.

  • Bootstrapping the cluster
18:55:22 sent     [main]
         content  [1] content | Waiting for model response...
         content  [1] content | tool call flux -> {"environment":"dev","repository":"config-fluxcd"}
18:55:23 started  [flux(7)] [input={"environment":"dev","repository":"config-fluxcd"}]
18:55:23 sent     [flux(7)]
► connecting to https://gitlab.com
…
✔ all components are healthy
18:55:50 ended    [flux(7)]

This illustrates that, upon receiving our instructions, the LLM understood the flux tool needed to be invoked with values for the environment and repository parameters. It then provided this information back so that GPTScript could run the flux tool accordingly.
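The mechanics for a user-defined tool like flux can be sketched as follows; a simplified illustration, assuming tool arguments are exposed to the script as environment variables, the same convention the ${repository} and ${environment} references in the bash tool above rely on (the echo stand-in replaces the real flux invocation):

```python
import os
import subprocess

def run_tool(script: str, args: dict) -> str:
    """Run a tool script, exposing each argument as an environment variable."""
    env = {**os.environ, **args}
    proc = subprocess.run(["bash", "-c", script], env=env,
                          capture_output=True, text=True)
    return proc.stdout

# Stand-in for the flux tool body: it echoes the command it would run
script = ('echo "flux bootstrap gitlab '
          '--repository ${repository} --path clusters/${environment}"')

# Arguments as returned by the LLM for the flux tool call
output = run_tool(script, {"environment": "dev", "repository": "config-fluxcd"})
print(output)
```

With the arguments {"environment":"dev","repository":"config-fluxcd"} from the log above, the script resolves to a bootstrap of clusters/dev from the config-fluxcd repository.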

The last instruction of the process uses the browser tool to automatically launch a Chrome browser and navigate to http://localhost:9001, the Weave GitOps UI. Shortly after, it confirms that all applications are operational.

weave-gitops.png

Upon inspecting the Civo dashboard, we observe the creation of a cluster named "demo" featuring 2 Medium – Standard nodes, configured to Kubernetes version 1.28.2 as specified.

civo.png

The two applications defined in the GitLab repository have been successfully deployed and are now fully operational.

  • Webhooks

webhooks.png

  • Voting Application

vote.png

result.png

Key takeaways

This blog post illustrates a use case for GPTScript in automating Kubernetes cluster management on the Civo platform. Through detailed instructions and a step-by-step approach, it outlined the interactions between GPTScript and the LLM, focusing on the tools GPTScript offers to leverage the LLM’s function-calling capability.
