
Using OpenAI and GPTScript for kubectl automation

May 1, 2024 by Luc Juggery

In this guide, we’ll show how to use OpenAI via GPTScript to automate a series of commands to deploy and expose an application on a Kubernetes cluster.

High-level overview

We’re going to use OpenAI via GPTScript to ask an LLM to perform the following tasks:

  1. Deploy the VotingApp into a dedicated namespace
  2. Install the Traefik Ingress Controller
  3. Create an Ingress resource to expose the VotingApp to the internet
  4. Open a browser to access the application

We cannot provide these instructions as-is: they are too high level, and the LLM would not be able to fill in all the hidden details on its own. To accomplish these steps, we’ll create a GPTScript configuration file that provides detailed instructions for the LLM to follow, along with the tools it needs to complete the tasks.

Creating the GPTScript configuration file

We start by creating a file named kubectl.gpt. This file contains the detailed steps that GPTScript will use to run the Kubernetes-related tasks. Because the LLM can’t know all the required details (like the URL for the VotingApp’s YAML specification), we need to provide additional information.

Here’s what the enriched instruction set looks like:

1. Create a Namespace named vote but do not fail if it already exists
2. Deploy in the vote namespace the application which yaml specification is available at https://luc.run/vote.yaml
3. Use a single command to wait for all the Pods in the vote namespace to be ready
4. Install Traefik ingress controller in kube-system namespace with helm only if it is not already installed in this namespace
5. Make sure the Traefik Pod is in running status
6. Wait for the IP address of the traefik Service to be available and save it in the file ./lb-ip.txt
7. Create the file ./ingress.yaml and make sure it contains the yaml specification of an Ingress resource which exposes the vote-ui Service on vote.LBIP.nip.io and the result-ui Service on result.LBIP.nip.io, first making sure to replace the LBIP placeholders with the content of the file ./lb-ip.txt
8. Create the Ingress resource specified in ./ingress.yaml
9. Open a browser on vote.LBIP.nip.io but make sure to replace the LBIP placeholder with the content of lb-ip.txt in this URL first
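
For instance, for step 6 the LLM typically ends up generating commands along the lines of the ones below (this is only a sketch; the exact invocation is decided by the model at run time, and it assumes the Traefik release names its Service traefik in kube-system):

# poll until the LoadBalancer IP of the traefik Service is assigned
until kubectl get svc traefik -n kube-system \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}' | grep -q . ; do sleep 5; done
# save the IP address to the local file
kubectl get svc traefik -n kube-system \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}' > ./lb-ip.txt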

The steps are sequential, and each task depends on the successful completion of the previous ones. To ensure proper execution, we’ll need to specify that function calls should be made one at a time and in the order provided. We’ll also need to provide the set of tools (definitions of functions) that the model should be aware of.

Specifying tools and constraints

GPTScript allows us to define tools to perform specific tasks. In our example, we use four tools:

  • sys.write: to write to local files
  • kubectl: to run Kubernetes commands
  • helm: to manage Helm charts
  • browser: to open a web browser

The first section of kubectl.gpt specifies these tools and adds additional constraints:

tools: sys.write, kubectl, helm, browser

Do not make parallel function calls. Only call one function at a time.
Perform the following tasks in order:

Next, we add the user-defined tools at the end of the file. Below is the definition of the kubectl tool:

---
name: kubectl
tools: sys.exec
description: use kubectl command to manage k8s resources
args: command: the command kubectl needs to run

You are a kubernetes operator which can run kubectl commands to manage clusters and applications
The only reason you use sys.exec tool must be to use kubectl to run the command provided, this command must start with kubectl
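
With this definition in place, the model can call the kubectl tool whenever a cluster operation is needed. For step 1, for example, it typically produces an idempotent command similar to this one (a sketch only; the actual command is generated at run time):

# create the namespace without failing if it already exists
kubectl create namespace vote --dry-run=client -o yaml | kubectl apply -f -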

Similarly, the definition of the helm tool looks like this:

---
name: helm
tools: sys.exec
description: use helm command to manage k8s charts
args: command: the command helm needs to run

You are a kubernetes operator which can run helm commands to manage charts
The only reason you use sys.exec tool must be to use helm to run the command provided, this command must start with helm
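
For step 4, the model usually combines a check and a conditional install, for instance something along these lines (a sketch; the chart repository below is the official Traefik one and is an assumption, the model may pick a different approach):

# install Traefik only if no release named traefik exists in kube-system
helm status traefik -n kube-system || \
  (helm repo add traefik https://traefik.github.io/charts && \
   helm repo update && \
   helm install traefik traefik/traefik -n kube-system)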

Both the kubectl and helm tools expect a command argument containing the full command line to pass to the kubectl and helm binaries, respectively.

The browser tool is defined as follows:

---
name: browser
tools: sys.exec
args: url: the url to open
description: open a browser window

You are only in charge of opening a browser window on the requested url
You can only use the sys.exec tool to open a browser window
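
To satisfy this tool, a one-liner is enough; on macOS the model can use open, and on most Linux desktops xdg-open (a sketch, reusing the LBIP placeholder from the instruction set):

# open the voting UI in the default browser (replace LBIP with the content of lb-ip.txt)
open "http://vote.LBIP.nip.io"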

Running the GPTScript configuration file

First, we need to make sure the kubectl and helm binaries are installed and properly configured to communicate with our Kubernetes cluster. We also need to set the OPENAI_API_KEY environment variable to our OpenAI API key (when no model is defined in a tool, the gpt-4-turbo-preview model is used by default).
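
A quick pre-flight check could look like the following (the API key value is a placeholder):

# verify the CLIs are available and can reach the cluster
kubectl version --client
helm version
kubectl cluster-info
# give GPTScript access to the OpenAI API
export OPENAI_API_KEY=<your-api-key>

We can then launch the run: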

gptscript ./kubectl.gpt

After a few tens of seconds, GPTScript should open a browser window on the VotingApp:

(Screenshot: the VotingApp opened in the browser)

Verifying the underlying cluster

To confirm everything is running fine, we can check the status of the application’s Pods:

kubectl get po -n vote
NAME                         READY   STATUS    RESTARTS   AGE
db-647c8f548b-jwbl5          1/1     Running   0          3m52s
redis-6f95f75d56-pk8cp       1/1     Running   0          3m53s
result-658b9b48cd-v44tc      1/1     Running   0          3m52s
result-ui-6965585c9f-5kc2x   1/1     Running   0          3m52s
vote-8588dfc9dc-xvrch        1/1     Running   0          3m53s
vote-ui-6bfbd99b58-mpwfx     1/1     Running   0          3m53s
worker-7744685d89-mwxs5      1/1     Running   0          3m52s

We can confirm the Traefik Ingress Controller has been installed with Helm:

helm list -n kube-system
NAME    NAMESPACE       REVISION        UPDATED                                 STATUS          CHART            APP VERSION
traefik kube-system     1               2024-04-21 18:13:39.536214 +0200 CEST   deployed        traefik-27.0.2   v2.11.2   
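
We can also check the Traefik Pod itself (the label selector below assumes the standard labels applied by the Traefik chart):

kubectl get pods -n kube-system -l app.kubernetes.io/name=traefik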

We can also inspect the two local files created during the process:

  • lb-ip.txt contains the IP address of the LoadBalancer exposing the Traefik Ingress Controller:
% cat lb-ip.txt 
194.182.168.164
  • ingress.yaml contains the YAML specification of the Ingress resource exposing the VotingApp web UI:
% cat ingress.yaml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: vote-result-ingress
  namespace: vote
spec:
  rules:
  - host: vote.194.182.168.164.nip.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: vote-ui
            port:
              number: 80
  - host: result.194.182.168.164.nip.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: result-ui
            port:
              number: 80  
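
As a final sanity check, we can query the two hostnames directly; nip.io resolves them to the LoadBalancer IP, so no extra DNS setup is needed:

# both UIs should answer over HTTP
curl -I http://vote.194.182.168.164.nip.io
curl -I http://result.194.182.168.164.nip.io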

Key takeaways

Leveraging GPTScript and OpenAI can simplify Kubernetes deployments, making it easier to perform complex tasks with a few simple instructions. By defining custom tools and following a clear sequence of tasks, we can automate various deployment processes. The tools defined in this example are also generic: they could be shared and reused in other GPTScript applications.
