
Harnessing AI to Become a DevOps Pro – Part 4: Orchestration

October 24, 2024 by Stein Ove Helset

Welcome to the fourth part of the tutorial series: Harnessing AI to Become a DevOps Pro! In this series we help you master DevOps fundamentals with our AI assistant Clio as your instructor.

We will be exploring 8 key areas of DevOps:

  1. Version Control Systems (VCS)
  2. Continuous Integration/Continuous Deployment (CI/CD)
  3. Containerization
  4. Orchestration
  5. Infrastructure as Code (IaC)
  6. Cloud Providers
  7. Monitoring and Logging
  8. Security

All you need to get started is some basic programming knowledge. To start from the beginning, check out the intro guide here.

Follow along each week as we release detailed guides to each of the topics above.


In the fourth part of this Harnessing AI to Become a DevOps Pro series, I will be using Clio to learn about orchestration. Some well-known tools for this are Kubernetes, Docker Swarm, and OpenShift.

What is Clio? Learn more about Clio, our AI copilot that runs in your CLI, here.

I think I will be focusing on Kubernetes for this article. As far as I know, this is the most popular one and the only one of these I have heard of before.

What is orchestration?

>>> Can you briefly describe what orchestration is?

Orchestration in the context of DevOps refers to the automated configuration, management, and coordination of computer systems, applications, and services. It involves managing complex tasks and workflows to streamline and optimize
processes, often across multiple environments.

Key aspects of orchestration include:

1. Automation: Automating repetitive tasks to reduce manual intervention and errors.
2. Coordination: Ensuring that different components of an application or system work together seamlessly.
3. Scalability: Managing resources efficiently to handle varying loads and demands.
4. Deployment: Automating the deployment of applications and services across different environments.
5. Monitoring and Management: Continuously monitoring systems to ensure they are running smoothly and making adjustments as needed.

Tools like Kubernetes, Docker Swarm, and Apache Mesos are commonly used for orchestration in containerized environments, helping to manage the lifecycle of containers and applications.
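
The way I read this, an orchestrator like Kubernetes works from a declared desired state: you describe what you want in a manifest, and the tool continuously keeps the real system matching it. As a purely illustrative sketch (this is not from Clio's answer, just my own minimal example of such a manifest):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                # ask the orchestrator to always keep 3 copies running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx         # the container image to run on the worker nodes

If one of those three copies crashes, the orchestrator notices the difference between desired and actual state and starts a new one.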

Okay, so this sounds almost like a “layer” on top of VCS and CI/CD which we have learned about so far.

Up until now I have learned some new things during this series, but I think everything in this part will actually be new to me. Let’s try to get some hands-on experience; that often makes things easier to understand, since an explanation on its own can be a bit “dry”.

Getting some hands-on experience

To see how Clio walked me through the topic of Orchestration in real time, I’ve put together a video for this part of the series.

<iframe width="560" height="315" src="https://www.youtube.com/embed/pCPUk8dkfMs?si=YVOspD-uhFuOYSGb" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>

Okay, let’s begin: I’m just going to ask Clio to give me something to practice with so I can learn the basics first.

>>> Can you come up with a hands-on exercise so I can learn the basics of Kubernetes?

Sure! Here's a hands-on exercise to help you learn the basics of Kubernetes. This exercise will guide you through setting up a local Kubernetes cluster using Minikube, deploying a simple application, and managing it with basic Kubernetes commands.

### Exercise: Deploy a Simple Application on Kubernetes

#### Prerequisites

• Install Minikube (https://minikube.sigs.k8s.io/docs/start/) on your local machine.
• Install the kubectl (https://kubernetes.io/docs/tasks/tools/) command-line tool.

#### Step 1: Start Minikube

1. Open your terminal.
2. Start Minikube with the following command:
minikube start
This will create a local Kubernetes cluster.

#### Step 2: Create a Deployment

1. Use kubectl to create a deployment. This will deploy a simple Nginx web server.
kubectl create deployment nginx --image=nginx

2. Verify the deployment by listing the pods:
kubectl get pods


#### Step 3: Expose the Deployment

1. Expose the Nginx deployment to make it accessible outside the cluster:
kubectl expose deployment nginx --type=NodePort --port=80

2. Get the URL to access the Nginx server:
minikube service nginx --url

3. Open the URL in your web browser to see the Nginx welcome page.

#### Step 4: Scale the Deployment

1. Scale the deployment to run multiple replicas:
kubectl scale deployment nginx --replicas=3

2. Verify the scaling by listing the pods again:
kubectl get pods


#### Step 5: Clean Up

1. Delete the service:
kubectl delete service nginx

2. Delete the deployment:
kubectl delete deployment nginx

3. Stop Minikube:
minikube stop


### Summary

In this exercise, you learned how to:

• Start a local Kubernetes cluster using Minikube.
• Deploy a simple application using a Kubernetes deployment.
• Expose the application to external traffic.
• Scale the application by adjusting the number of replicas.
• Clean up resources.

Feel free to ask if you have any questions or need further assistance!

I started out by installing Minikube and the Kubernetes command-line tool (kubectl). After that, I continued with the steps and ran “minikube start” to create a local cluster. So far it’s easy to follow, but now that the steps are done, I need some more explanation of what I’ve actually done 🙂
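
If you want to see what that single command actually created before moving on, a few standard commands are useful (these are ordinary minikube/kubectl commands, not part of Clio’s exercise):

minikube status        # shows whether the local cluster (host, kubelet, apiserver) is running
kubectl get nodes      # lists the machines in the cluster; with Minikube there is only one node
kubectl cluster-info   # prints the address of the control plane running inside Minikube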

I like that in step 3.3 I could actually try out what I was going through. When I opened the URL in the browser, the Nginx welcome page showed up. That means everything is working so far.
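
If you are curious about what the expose command in step 3 actually created, you can inspect the service with standard kubectl commands (again, not part of the exercise itself):

kubectl get service nginx        # shows the service type (NodePort) and which port on the node it was given
kubectl describe service nginx   # more detail, including the pod endpoints the service routes traffic to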

Actually, after reading the summary, I understood most of this. But I have some follow-up questions.

>>> I went through the steps and everything worked great. But can you explain to me what a cluster is?

Certainly! In Kubernetes, a cluster is a set of nodes (machines) that run containerized applications managed by Kubernetes. Here's a breakdown of the key components and concepts related to a Kubernetes cluster:

### Key Components of a Kubernetes Cluster

1. Master Node (Control Plane):
• The master node is responsible for managing the cluster. It runs several key components:
• API Server: Exposes the Kubernetes API and serves as the entry point for all administrative tasks.
• Scheduler: Assigns workloads to nodes based on resource availability and other constraints.
• Controller Manager: Manages various controllers that handle routine tasks, such as replicating pods and handling node failures.
• etcd: A distributed key-value store that holds the cluster's state and configuration data.

2. Worker Nodes:
• Worker nodes are the machines where your application workloads (containers) run. Each worker node contains:
• Kubelet: An agent that ensures containers are running in a pod.
• Kube-proxy: Manages network rules and facilitates communication between services.
• Container Runtime: Software that runs containers, such as Docker or containerd.

3. Pods:
• The smallest deployable units in Kubernetes, a pod is a group of one or more containers with shared storage/network resources and a specification for how to run the containers.

4. Services:
• An abstraction that defines a logical set of pods and a policy to access them. Services enable communication between different parts of an application and external access to the application.

### How a Cluster Works

• Deployment: When you deploy an application, Kubernetes schedules the pods to run on the worker nodes based on resource availability and other constraints.
• Scaling: You can scale your application by increasing or decreasing the number of pod replicas.
• Load Balancing: Kubernetes automatically distributes network traffic to the pods in a service, ensuring even load distribution.
• Self-Healing: If a pod or node fails, Kubernetes automatically replaces or reschedules the affected pods to maintain the desired state.

### Summary

A Kubernetes cluster is a powerful system that automates the deployment, scaling, and management of containerized applications. It abstracts the underlying infrastructure, allowing you to focus on building and running applications without
worrying about the complexities of managing individual machines.

If you have more questions or need further clarification, feel free to ask!

So I think that was a great explanation of what Kubernetes does in detail, how a cluster works, and what the different parts of a Kubernetes application are.
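
To connect this explanation back to the Minikube cluster from earlier, you can actually look at these pieces yourself. On a default Minikube setup the control plane components run as pods in the kube-system namespace (exact names can vary between versions, so treat this as a rough sketch):

kubectl get nodes                 # with Minikube, a single node acts as both control plane and worker
kubectl get pods -n kube-system   # typically shows etcd, kube-apiserver, kube-scheduler, kube-controller-manager and kube-proxy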

The few commands and what I have learned so far seem easy. But I guess it gets a bit harder when you have a real web application with a PostgreSQL database and similar.

What more can we try with Kubernetes?

>>> Now that I have gone through the basics, is there anything else you think I should try out?

Great to hear that you've got the basics down! Here are some additional exercises and concepts you can explore to deepen your understanding of Kubernetes:

### Advanced Exercises and Concepts

1. ConfigMaps and Secrets:
• Learn how to manage configuration data and sensitive information using ConfigMaps and Secrets.
• Exercise: Create a ConfigMap and a Secret, and use them in a pod to configure an application.
2. Persistent Storage:
• Understand how to use Persistent Volumes (PVs) and Persistent Volume Claims (PVCs) to manage storage.
• Exercise: Deploy an application that requires persistent storage, such as a database, and configure it with a PV and PVC.
3. Namespaces:
• Explore how namespaces can be used to organize and manage resources within a cluster.
• Exercise: Create multiple namespaces and deploy applications in each to see how they are isolated.
4. Ingress:
• Learn how to use Ingress resources to manage external access to services within the cluster.
• Exercise: Set up an Ingress controller and create Ingress rules to route traffic to different services.
5. Helm:
• Get familiar with Helm, a package manager for Kubernetes, to simplify application deployment.
• Exercise: Install Helm, and use it to deploy a complex application like WordPress or Jenkins.
6. Monitoring and Logging:
• Implement monitoring and logging to gain insights into your cluster's performance and troubleshoot issues.
• Exercise: Deploy tools like Prometheus for monitoring and Fluentd for logging.
7. Custom Resource Definitions (CRDs):
• Explore how to extend Kubernetes functionality by creating custom resources.
• Exercise: Define a simple CRD and create a custom controller to manage it.
8. Network Policies:
• Learn how to secure your applications by defining network policies that control traffic flow.
• Exercise: Create network policies to restrict communication between pods.
9. Auto-scaling:
• Implement Horizontal Pod Autoscaler (HPA) to automatically scale your applications based on demand.
• Exercise: Configure HPA for an application and test its behavior under load.
10. CI/CD Integration:
• Integrate Kubernetes with a CI/CD pipeline to automate application deployment.
• Exercise: Set up a simple CI/CD pipeline using tools like Jenkins or GitHub Actions.


### Summary

These exercises will help you gain a deeper understanding of Kubernetes and its ecosystem. As you explore these topics, you'll become more proficient in managing and deploying applications in a Kubernetes environment.

Feel free to ask if you need guidance on any specific topic or exercise!
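
Item 9 on that list, auto-scaling, looked like something I could try directly against the nginx deployment from the first exercise. A minimal sketch using standard kubectl commands would be something like this (note that the autoscaler needs a metrics source to act on; in Minikube that usually means enabling the metrics-server addon first):

minikube addons enable metrics-server                                  # gives the autoscaler CPU metrics to work with
kubectl autoscale deployment nginx --cpu-percent=50 --min=1 --max=5    # create a HorizontalPodAutoscaler for the deployment
kubectl get hpa                                                        # watch the target CPU and the current replica count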

Earlier in this series, I learned about CI/CD. Maybe it would be cool to make an extra article where I combine CI/CD and Kubernetes on one of my servers. Let’s come back to that.

I also see that item 6 here is “Monitoring and Logging”, which is one of the parts of this series coming later. So maybe I should use Kubernetes there as well, just to get even more practice with it.

Let’s try one more hands-on exercise.

>>> Can you come up with a simple hands-on task to learn about "ConfigMaps and Secrets"?

Certainly! Here's a simple hands-on task to help you learn about ConfigMaps and Secrets in Kubernetes. This exercise will guide you through creating a ConfigMap and a Secret, and then using them in a pod to configure an application.

### Exercise: Using ConfigMaps and Secrets in Kubernetes

#### Prerequisites

• A running Kubernetes cluster (you can use Minikube as before).
• kubectl command-line tool installed.

#### Step 1: Create a ConfigMap

1. Create a ConfigMap YAML file:
Create a file named configmap.yaml with the following content:
apiVersion: v1
kind: ConfigMap
metadata:
    name: my-config
data:
    APP_COLOR: "blue"
    APP_MODE: "production"

2. Apply the ConfigMap:
Use kubectl to create the ConfigMap in your cluster:
kubectl apply -f configmap.yaml

3. Verify the ConfigMap:
List the ConfigMaps to ensure it was created:
kubectl get configmaps

#### Step 2: Create a Secret

1. Create a Secret YAML file:
Create a file named secret.yaml with the following content:
apiVersion: v1
kind: Secret
metadata:
    name: my-secret
type: Opaque
data:
    USERNAME: YWRtaW4=  # Base64 encoded 'admin'
    PASSWORD: cGFzc3dvcmQ=  # Base64 encoded 'password'

2. Apply the Secret:
Use kubectl to create the Secret in your cluster:
kubectl apply -f secret.yaml

3. Verify the Secret:
List the Secrets to ensure it was created:
kubectl get secrets

#### Step 3: Use ConfigMap and Secret in a Pod

1. Create a Pod YAML file:
Create a file named pod.yaml with the following content:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx
    env:
    - name: APP_COLOR
      valueFrom:
        configMapKeyRef:
          name: my-config
          key: APP_COLOR
    - name: APP_MODE
      valueFrom:
        configMapKeyRef:
          name: my-config
          key: APP_MODE
    - name: USERNAME
      valueFrom:
        secretKeyRef:
          name: my-secret
          key: USERNAME
    - name: PASSWORD
      valueFrom:
        secretKeyRef:
          name: my-secret
          key: PASSWORD

2. Apply the Pod:
Use kubectl to create the Pod in your cluster:
kubectl apply -f pod.yaml

3. Verify the Pod:
Check the status of the Pod to ensure it is running:
kubectl get pods

4. Access the Pod's Environment Variables:
Use kubectl to access the Pod and print the environment variables:
kubectl exec my-pod -- env
You should see the environment variables APP_COLOR, APP_MODE, USERNAME, and PASSWORD with their respective values.

### Summary

In this exercise, you learned how to:

• Create a ConfigMap and a Secret in Kubernetes.
• Use ConfigMap and Secret to configure environment variables in a Pod.

This exercise demonstrates how Kubernetes can manage configuration data and sensitive information securely and efficiently. If you have any questions or need further assistance, feel free to ask!

Nice: not only was this easy to follow, it was really helpful as well. Having Secrets and ConfigMaps like this helps with security and configuration. Using the environment to store things like API access keys is something you should do, and often will do.
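
One thing worth knowing (general Kubernetes behavior, not something Clio brought up here) is that the values in a Secret are only base64 encoded, not encrypted, so anyone with access to the object can decode them. You can produce and check the encoded values yourself, and for real access keys you would typically create the Secret straight from the command line instead of committing a YAML file (the api-keys name below is just a made-up example):

echo -n 'admin' | base64                       # prints YWRtaW4=, the value used in secret.yaml above
echo 'YWRtaW4=' | base64 --decode              # decodes it back to 'admin' (on older macOS the flag is -D)
kubectl create secret generic api-keys --from-literal=API_KEY=replace-me   # hypothetical example; avoids writing the value into a file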

Summary

Okay, I think that’s enough Kubernetes for today. I will definitely come back to this later in the series and use it in combination with other things we will be learning.

In this part, we learned about the basics of Kubernetes and also got some hands-on experience with a cluster as well as secrets/configmaps.

In the next part of this series, we are going to learn about Infrastructure as Code (IaC).
