Introduction
If you have a Kubernetes cluster that uses Prometheus for monitoring, at some point you have probably felt overwhelmed by the number of alerts you receive. If you monitor multiple clusters, that feeling is likely even stronger.
In many cases, the alerts you receive are just that: notifications that don’t require you to do anything. Once in a while, though, you receive one that demands quick action. Even then, it’s rarely as simple as knowing what’s wrong right away and fixing it. You need to analyze the alert, inspect the current configuration and logs, determine a course of action, and then act accordingly. These preliminary steps can take precious minutes, if not hours, before you can address the actual issue, and in a production environment that time can be very costly.
What if you could use the power of ChatGPT not only to quickly analyze and solve such alerts but also to reduce the number of alerts you receive on a daily basis? Good news: with Robusta, you can not only search ChatGPT for a solution, but do so with the click of a button in the same Slack/Teams channel where you received the alert.
Robusta is an open source tool that extends Prometheus to improve Kubernetes observability. In addition to the OpenAI search capability mentioned above, it enriches alert data with relevant context such as pod logs, pod events, and last job run data. It also watches the cluster for events beyond Prometheus alerts, such as rollouts and configuration changes, and can notify you on multiple platforms, including Slack, MS Teams, PagerDuty, and more.
Overview
In this blog post, we will learn how to install Robusta on your cluster with or without Prometheus and configure it to send notifications to Slack. We’ll look at examples of the Slack notifications you’ll receive and what you can do with them. We’ll also go through how to configure Robusta to search for solutions to alerts using ChatGPT, with the click of a button, right in your Slack channel.
Setup
On any Kubernetes cluster, you can install Robusta in one of three modes:
- Robusta + Prometheus
- Robusta with existing Prometheus
- Barebones installation (with no Prometheus at all)
You can choose either of the first two modes, depending on whether the target cluster already has Prometheus installed. The third mode is a special case: even without Prometheus, Robusta can still notify you of pod crashes and OOM kills, although using it with Prometheus makes it much more useful.
In this demo, we’ll install Robusta with Prometheus on a Kubernetes cluster and analyze the alerts that it sends using ChatGPT.
Install Robusta CLI
Although you install Robusta using Helm, you first need to generate a values.yaml file for the Helm chart. You can easily generate the required file using the Robusta CLI. The recommended way to install the CLI is with pip. Run the following command to install the Robusta CLI:
$ pip install -U robusta-cli --no-cache
Once it is installed, use the command below to generate the values.yaml file in your current directory.
$ robusta gen-config --enable-prometheus-stack
Use the above command if you want to install Robusta and Prometheus together. If you want to use Robusta with a preinstalled Prometheus, use the below command instead.
$ robusta gen-config --no-enable-prometheus-stack
After you run the command, it’ll ask you a few questions, such as whether you want to configure Slack integration, which Slack channel to send notifications to, whether to enable the UI sink, and so on. Enable Slack integration and provide a channel name to receive notifications. Also enable the UI sink so you can view and manage alerts in a web browser, and accept the End User License Agreement.
After going through the questions, it will generate a file named generated_values.yaml in the current directory, which we will use when installing the Robusta chart.
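For reference, the generated file typically has roughly the following shape. This is an illustrative sketch only: the field names reflect a typical Robusta Slack + UI setup, the channel name is an example, and the placeholder values stand in for real keys the CLI generates (your file may differ by version).

```yaml
# Illustrative shape of generated_values.yaml -- your file will contain
# real generated keys, not these placeholders.
sinksConfig:
  - slack_sink:
      name: main_slack_sink
      slack_channel: robusta-alerts        # the channel you entered
      api_key: <GENERATED_SLACK_API_KEY>
  - robusta_sink:
      name: robusta_ui_sink
      token: <GENERATED_UI_TOKEN>
enablePrometheusStack: true
```

Treat this file like a secret, since it contains API keys for your sinks.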
Install Robusta + Prometheus
We can now use the generated_values.yaml file to install Robusta on our Kubernetes cluster. Run the commands below to do so.
$ helm repo add robusta https://robusta-charts.storage.googleapis.com && helm repo update
$ helm install robusta robusta/robusta -f ./generated_values.yaml \
--set clusterName=<YOUR_CLUSTER_NAME>
Replace the placeholder <YOUR_CLUSTER_NAME> with a logical name for your cluster. This name appears as the “Source” of alert notifications in your Slack channel, as shown in the screenshot below. If you plan to receive alerts from multiple clusters in the same channel, make sure you set an easily distinguishable cluster name for each installation.
Verify Installation
Robusta runs just two pods of its own, viz. robusta-runner and robusta-forwarder. The forwarder retrieves data from the Kubernetes API server and feeds it to the runner. The runner sends notifications using the data from the forwarder along with Prometheus alerts.
Run the below command and make sure both pods are running successfully.
$ kubectl get pods -A | grep robusta
View robusta logs with the below command.
$ robusta logs
This completes the Robusta installation, and you may already start seeing Robusta send alert notifications to the configured Slack channel, as below.

Robusta Alert in Slack Channel
As you can see above, the notification shows the alert with its source and labels, along with buttons to either investigate it further in the Robusta web UI or silence similar alerts in the future. This is already quite useful, but we can take it a step further and find solutions to alerts using OpenAI. At this point, however, there is no way to use ChatGPT to solve them; to use ChatGPT with Robusta, you need to update your installation.
Add ChatGPT Bot to the Installation
To get AI-generated solutions to our Prometheus alerts, we use the Kubernetes ChatGPT bot with Robusta. It uses a webhook receiver to receive alerts from Prometheus, forwards them to OpenAI using the API key you’ve configured, and fetches a suggested solution.
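Conceptually, the bot’s alert-to-prompt flow can be sketched in a few lines of Python. This is an illustrative sketch only, not the bot’s actual code; the `build_prompt` function and the exact prompt wording are hypothetical.

```python
# Illustrative sketch of an alert-to-ChatGPT flow -- NOT the actual
# kubernetes-chatgpt-bot implementation. Function name and prompt
# wording are hypothetical.

def build_prompt(alert: dict) -> str:
    """Turn a Prometheus alert payload into a question for the model."""
    name = alert.get("labels", {}).get("alertname", "Unknown alert")
    description = alert.get("annotations", {}).get("description", "")
    return (
        f"You are a Kubernetes expert. The Prometheus alert '{name}' fired: "
        f"{description} How can I diagnose and fix it?"
    )

# A simplified alert payload, as Prometheus would send to a webhook receiver.
alert = {
    "labels": {"alertname": "KubePodCrashLooping", "namespace": "default"},
    "annotations": {"description": "Pod default/crashpod is restarting repeatedly."},
}

prompt = build_prompt(alert)
print(prompt)
# The real bot would now send this prompt to the OpenAI API using your
# configured key and post the response back to the Slack thread.
```

The key idea is that the alert’s labels and annotations become the context the model reasons over, which is why well-written alert descriptions tend to get better answers.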
To update the Robusta installation with the Kubernetes ChatGPT bot, we need to make two changes to the generated_values.yaml file. First, add the following lines to the end of the file.
playbookRepos:
  chatgpt_robusta_actions:
    url: "https://github.com/robusta-dev/kubernetes-chatgpt-bot.git"

customPlaybooks:
  # Add the 'Ask ChatGPT' button to all Prometheus alerts
  - triggers:
    - on_prometheus_alert: {}
    actions:
    - chat_gpt_enricher: {}
Second, create a new OpenAI API key and add it to the globalConfig section of the file, as shown below.
globalConfig:
  chat_gpt_token: YOUR KEY GOES HERE
Once the file is updated, upgrade your Robusta chart installation with the new values file using the command below.
$ helm upgrade robusta robusta/robusta -f ./generated_values.yaml \
--set clusterName=<YOUR_CLUSTER_NAME>
That’s it. This completes your Robusta configuration with Prometheus and ChatGPT. You’ll now see an additional button in your Slack notification to search ChatGPT for solutions to the alerts.
ChatGPT in Action
Let’s deploy a failing pod to see the ChatGPT feature in action. Run the below command to deploy a crashing pod in your cluster.
$ kubectl apply -f https://gist.githubusercontent.com/robusta-lab/283609047306dc1f05cf59806ade30b6/raw
You can see that the pod keeps crashing instead of starting successfully.
$ kubectl get pods -A | grep crashpod
NAME                        READY   STATUS             RESTARTS   AGE
crashpod-64b874bbbc-vk4k6   0/1     CrashLoopBackOff   1          7s
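The gist applied above deploys something equivalent to the minimal manifest below. This is an illustrative sketch, not the gist’s exact contents: the image and command are assumptions chosen so the container exits immediately and triggers CrashLoopBackOff.

```yaml
# A minimal crashing Deployment (illustrative equivalent of the gist).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: crashpod
spec:
  replicas: 1
  selector:
    matchLabels: {app: crashpod}
  template:
    metadata:
      labels: {app: crashpod}
    spec:
      containers:
        - name: crashpod
          image: busybox
          # Exits with a non-zero code right away, so the kubelet
          # restarts it with increasing backoff -> CrashLoopBackOff.
          command: ["sh", "-c", "echo about to crash; exit 1"]
```

Any pod whose container keeps exiting non-zero will produce the same alert, so you can substitute your own failing workload here.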
Soon you’ll receive a notification about it in the Slack channel.

Robusta Slack Alert with Ask ChatGPT button
You can see that the alert has an additional button to ask ChatGPT for a solution. If you click it, you’ll receive a response in the same channel after a few seconds, as below.

ChatGPT Response with Possible Solutions
The results above are from GPT-3.5-turbo, because I set up an API key for that model. As of this writing, GPT-4 is already available; you can create a key for the newer model and perhaps get different results.
This is how you can get your cluster configured with Prometheus for monitoring, Robusta forwarding the monitoring alerts to your Slack/Teams, and OpenAI helping you solve them. A truly complete solution.
Cleaning up
Run the below commands to clean up everything we’ve created in this demo.
$ kubectl delete deployment crashpod
$ helm uninstall robusta
Conclusion
In this blog post, we went over what Robusta is and how it helps you analyze and solve Prometheus alerts faster. We also saw how to deploy Robusta with or without Prometheus and how to configure it with the Kubernetes ChatGPT bot to query OpenAI for solutions to specific alerts.