Welcome to the second part of the tutorial series: Harnessing AI to Become a DevOps Pro! In this series, we help you master DevOps fundamentals with our AI assistant Clio as your instructor.
We will be exploring 8 key areas of DevOps:
- Version Control Systems (VCS)
- Continuous Integration/Continuous Deployment (CI/CD)
- Containerization
- Orchestration
- Infrastructure as Code (IaC)
- Cloud Providers
- Monitoring and Logging
- Security
All you need to get started is some basic programming knowledge. To start from the beginning, check out the intro guide here.
Follow along each week as we release detailed guides to each of the topics above.
In the second part of this series on AI for DevOps, I’m going to learn about Continuous Integration/Continuous Deployment, or CI/CD. In the previous part of this series, I learned about Version Control Systems (VCS). And if you haven’t read the introduction, you’ll find it here.
So what actually is CI/CD?
Just to make it clear what we're going to learn, I first want to know, in simple words, what this subject is all about. I already have a rough idea of what it is, but let's see what Clio tells me about it.
What is Clio? Learn more about Clio, our AI copilot that runs in your CLI, here.
>>> Can you explain in simple words what CI/CD is?
Certainly! CI/CD stands for Continuous Integration and Continuous Deployment (or Continuous Delivery). It's a set of practices used in software development to improve the process of delivering code changes more frequently and reliably. Here's a simple breakdown:
1. Continuous Integration (CI):
• Purpose: To automate the integration of code changes from multiple contributors into a single software project.
• Process: Developers frequently commit code changes to a shared repository. Each change triggers an automated build and testing process to ensure the new code integrates well with the existing codebase.
• Benefits: It helps catch bugs early, reduces integration problems, and ensures that the software is always in a working state.
2. Continuous Deployment (CD):
• Purpose: To automate the deployment of code changes to production environments.
• Process: After the code passes all automated tests in the CI process, it is automatically deployed to production. This means that new features, bug fixes, and improvements can be delivered to users quickly and efficiently.
• Benefits: It reduces the time to market for new features, ensures that the software is always up-to-date, and minimizes the risk of deployment errors.
3. Continuous Delivery (CD):
• Purpose: Similar to Continuous Deployment, but with a manual approval step before deployment to production.
• Process: The code is automatically tested and prepared for release, but a human decision is required to deploy it to production.
• Benefits: It provides more control over what gets released and when, while still maintaining the benefits of automation.
In summary, CI/CD helps teams deliver software faster and with higher quality by automating the integration, testing, and deployment processes. It's a key practice in DevOps, which aims to bridge the gap between development and operations teams.
Okay, I think this was a good explanation that was easy to understand. It doesn't make me a DevOps expert, but I'm certainly more confident about proceeding to the next step.
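To make the three terms a bit more concrete for myself before diving in, here is a rough sketch (my own, not Clio's) of how they often map onto a single GitHub Actions workflow. The job and environment names are hypothetical placeholders, not something we will actually build in this part:

```yaml
name: CI/CD sketch

on: [push]

jobs:
  ci:                        # Continuous Integration: build and test every push
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v4
    - run: echo "build and run the tests here"

  cd:                        # Continuous Deployment/Delivery: ship what passed CI
    needs: ci                # only runs if the ci job succeeded
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    environment: production  # if this environment requires a reviewer's approval,
                             # the pipeline is Continuous Delivery rather than Continuous Deployment
    steps:
    - run: echo "deploy to production here"
```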
To easily follow the steps below, I have created a video guide to go along with this tutorial:
<iframe width="560" height="315" src="https://www.youtube.com/embed/-2TODX17-cQ?si=u4qO2fAIsNHzWnkl" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
Diving in
Since Clio doesn’t have the best memory, I just need to remind her that I have already learned about VCS.
>>> I have already gone through part one, where you taught me about VCS. Now I want to dive into CI/CD.
Great! Since you've already covered version control systems, you're well-prepared to dive into CI/CD. Here's a more focused plan to get you started with CI/CD:
1. Choose a CI/CD Tool:
• Recommendation: Start with a user-friendly tool like GitHub Actions if you're already using GitHub, or GitLab CI/CD if you're on GitLab. Jenkins is also a popular choice for more complex pipelines.
• Action: Set up an account on the platform of your choice if you haven't already.
2. Set Up a Simple CI Pipeline:
• Objective: Automate the build and test process for a small project.
• Action: Create a simple project (e.g., a "Hello World" application) and write a CI configuration file to automate the build and test steps. For GitHub Actions, this would be a YAML file in the .github/workflows directory.
3. Integrate Automated Testing:
• Objective: Ensure your code is tested automatically with each change.
• Action: Write unit tests for your project and configure your CI pipeline to run these tests automatically.
4. Explore Continuous Deployment:
• Objective: Automate the deployment of your application to a staging or production environment.
• Action: Use a platform like Heroku, AWS, or a container service like Docker Hub to deploy your application. Update your CI/CD pipeline to include deployment steps.
5. Learn by Doing:
• Objective: Gain hands-on experience by iterating on your pipeline.
• Action: Add more complexity to your pipeline, such as environment variables, secrets management, or multi-stage deployments.
6. Monitor and Optimize:
• Objective: Ensure your CI/CD pipeline is efficient and reliable.
• Action: Set up notifications for build failures, monitor build times, and optimize your pipeline for speed and reliability.
7. Explore Advanced Features:
• Objective: Utilize advanced CI/CD features to enhance your workflow.
• Action: Look into parallel builds, caching, and conditional workflows to improve your pipeline.
8. Document and Share:
• Objective: Share your knowledge and learn from others.
• Action: Document your CI/CD setup and share it with the community. Contribute to forums or write blog posts about your experiences.
By following these steps, you'll gain practical experience with CI/CD and develop a deeper understanding of how it fits into the DevOps lifecycle. If you need help with any specific tool or step, feel free to ask!
Cool, it looks like we're actually going to do a real-life task together. I'm going to stick with GitHub since that's what we've already used. And since step 1 is to set up an account, I guess we can mark that as done 🙂
Set up a simple CI pipeline:
This step seems to require a little project that we can use for building and testing. I'll first ask her to create a simple Python project for this purpose.
>>> Great, I already have an account on GitHub. For step 2, can you make a very simple Python project that we can use for this purpose?
Let's create a simple Python project that we can use to set up a CI pipeline with GitHub Actions. We'll create a basic "Hello World" application with a simple test.
### Step-by-Step Guide:
1. Create a New Directory for the Project:
• Let's create a directory named hello-world-python .
2. Create a Simple Python Script:
• We'll create a Python script that prints "Hello, World!".
3. Write a Simple Test:
• We'll use the unittest framework to write a simple test for our script.
4. Set Up GitHub Actions:
• We'll create a GitHub Actions workflow to automate the testing of our script.
Well, that sounds simple enough. She actually wanted to do this for me, but I like to do things a bit more manually when I'm learning. Here is a short guide on exactly what I did:
### Step 1: Create the Project Directory
• Open your terminal and navigate to the location where you want to create the project.
• Run the following command to create a new directory:
mkdir hello-world-python
cd hello-world-python
### Step 2: Create the Python Script
• Create a file named hello.py with the following content:
def say_hello():
    return "Hello, World!"

if __name__ == "__main__":
    print(say_hello())
### Step 3: Write a Simple Test
• Create a file named test_hello.py with the following content:
import unittest
from hello import say_hello

class TestHello(unittest.TestCase):
    def test_say_hello(self):
        self.assertEqual(say_hello(), "Hello, World!")

if __name__ == '__main__':
    unittest.main()
With this in place, we can move on to setting up GitHub Actions:
>>> Let's proceed to GitHub Actions for the CI
Great! Let's set up a GitHub Actions workflow to automate the testing of your Python project. This will ensure that every time you push changes to your repository, the tests will run automatically.
### Step-by-Step Guide to Set Up GitHub Actions:
1. Create a GitHub Repository:
• Go to GitHub and create a new repository named hello-world-python .
• Clone the repository to your local machine and move your project files into it.
2. Create a GitHub Actions Workflow:
• In your project directory, create a new directory for GitHub Actions workflows:
mkdir -p .github/workflows
3. Create the Workflow File:
• Create a file named python-app.yml inside the .github/workflows directory with the following content:
name: Python application
on: [push, pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - name: Set up Python
      uses: actions/setup-python@v2
      with:
        python-version: '3.x'
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        pip install -r requirements.txt
    - name: Test with unittest
      run: |
        python -m unittest discover
4. Commit and Push Changes:
• Add, commit, and push your changes to GitHub:
git add .
git commit -m "Set up CI with GitHub Actions"
git push origin main
5. Check the Workflow:
• Go to your GitHub repository and click on the "Actions" tab. You should see your workflow running. If everything is set up correctly, it will pass the tests.
### Explanation:
• on: [push, pull_request]: This triggers the workflow on every push and pull request.
• runs-on: ubuntu-latest: The workflow runs on the latest version of Ubuntu.
• actions/checkout@v2: Checks out your repository code.
• actions/setup-python@v2: Sets up a Python environment.
• pip install -r requirements.txt: Installs dependencies (if you have any listed in requirements.txt ).
• python -m unittest discover: Runs all tests in the project.
If you encounter any issues or have questions, feel free to ask!
When I checked the Actions tab, I saw that there was an error. I couldn't really figure out on my own what the problem was, but luckily I had Clio to ask 😀
>>> I got an error on line 15 in the yml file
Let's troubleshoot the issue with the YAML file. The error on line 15 might be related to the indentation or syntax. Let's review the YAML file and ensure everything is correctly formatted.
Here's the corrected version of the python-app.yml file:
name: Python application
on: [push, pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - name: Set up Python
      uses: actions/setup-python@v2
      with:
        python-version: '3.x'
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        # pip install -r requirements.txt
    - name: Test with unittest
      run: |
        python -m unittest discover
### Key Points:
• Indentation: YAML is sensitive to indentation. Ensure that each level of indentation is consistent (usually 2 spaces).
• Commented Line: If you don't have a requirements.txt file, you can comment out the pip install -r requirements.txt line as shown above.
### Steps to Fix:
1. Edit the YAML File:
• Open the python-app.yml file in your text editor.
• Ensure the indentation is consistent and matches the example above.
2. Commit and Push the Changes:
• After making the changes, commit and push them to your GitHub repository:
git add .github/workflows/python-app.yml
git commit -m "Fix YAML indentation"
git push origin main
3. Check the Workflow Again:
• Go to the "Actions" tab in your GitHub repository to see if the workflow runs successfully.
And this time, it was successfully run 😀
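As a side note: commenting out the install line works, but another common pattern (I believe it is what GitHub's own Python starter workflow does) is to guard the step with a file check, so the same workflow keeps working once a requirements.txt shows up:

```yaml
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        # only install requirements if the file actually exists
        if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
```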
Okay, for the first time in my life, I have now successfully set up a GitHub Action. And I even understood everything, since I got great explanations from Clio. I guess this wasn't the most advanced CI, but I now understand much more. And the point of this series isn't necessarily to make me a DevOps expert; I want to understand more.
And the more I understand, the easier it will be to know where I need to read more.
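And for when I eventually get to step 7 of Clio's plan (caching, parallel builds and so on), here is a rough sketch of what that could look like for this project. The Python versions are just examples, and the pip cache only helps once there is a requirements.txt to key it on:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ['3.10', '3.11', '3.12']  # runs the job once per version, in parallel
    steps:
    - uses: actions/checkout@v4
    - name: Set up Python ${{ matrix.python-version }}
      uses: actions/setup-python@v5
      with:
        python-version: ${{ matrix.python-version }}
        cache: 'pip'                              # reuses downloaded packages between runs
    - name: Test with unittest
      run: python -m unittest discover
```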
Jumping into CD
>>> So I guess the next step now is CD. How can we do something real life to learn more about this? I want to use AWS
Using AWS for Continuous Deployment (CD) is a great way to gain real-world experience with cloud-based deployments. AWS offers a variety of services that can be used for deploying applications, such as Elastic Beanstalk, EC2, or Lambda. For simplicity and ease of use, let's focus on deploying a simple application using AWS Elastic Beanstalk, which abstracts much of the infrastructure management for you.
### Step-by-Step Guide for Continuous Deployment on AWS:
1. Set Up an AWS Account:
• If you don't have one, create an AWS account at aws.amazon.com (https://aws.amazon.com/).
2. Install the AWS CLI:
• Install the AWS Command Line Interface (CLI) on your local machine to interact with AWS services.
• Follow the instructions on the AWS CLI installation page (https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).
3. Configure the AWS CLI:
• Run aws configure in your terminal and enter your AWS Access Key ID, Secret Access Key, region, and output format.
4. Create an Elastic Beanstalk Application:
• Use the AWS Management Console or the AWS CLI to create a new Elastic Beanstalk application.
• For a simple Python application, you can choose the "Python" platform.
5. Prepare Your Application for Deployment:
• Ensure your application is ready for deployment. You may need to create a requirements.txt file for dependencies and a Procfile to specify how to run your application.
6. Set Up GitHub Actions for Deployment:
• Modify your GitHub Actions workflow to include deployment steps to AWS Elastic Beanstalk.
• Example Workflow:
name: Deploy to AWS Elastic Beanstalk
on:
  push:
    branches:
      - main
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - name: Set up Python
      uses: actions/setup-python@v2
      with:
        python-version: '3.x'
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        pip install -r requirements.txt
    - name: Deploy to Elastic Beanstalk
      env:
        AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
        AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        AWS_REGION: 'your-region'
      run: |
        zip -r application.zip .
        aws elasticbeanstalk create-application-version --application-name your-app-name --version-label v1 --source-bundle S3Bucket="your-bucket-name",S3Key="application.zip"
        aws elasticbeanstalk update-environment --application-name your-app-name --environment-name your-env-name --version-label v1
7. Set Up AWS Credentials:
• Add your AWS credentials as secrets in your GitHub repository settings. This allows GitHub Actions to authenticate with AWS.
• Go to your GitHub repository > Settings > Secrets > New repository secret. Add AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY .
8. Commit and Push Changes:
• Commit and push your changes to trigger the deployment workflow.
### Explanation:
• Elastic Beanstalk: A service that makes it easy to deploy and manage applications in the AWS Cloud.
• GitHub Secrets: Securely store AWS credentials for authentication.
• Deployment Steps: The workflow includes steps to package your application and deploy it to Elastic Beanstalk.
By following these steps, you'll have a basic CD pipeline that automatically deploys your application to AWS Elastic Beanstalk whenever you push changes. If you have any questions or need further assistance, feel free to ask!
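One detail worth flagging before we continue: the workflow above zips the project but never uploads the zip to S3, and create-application-version only points Elastic Beanstalk at an object that already exists in the bucket. If the deployment complains that it cannot find the source bundle, adding an upload line first should help (bucket, app, and environment names are still the placeholders from above):

```yaml
      run: |
        zip -r application.zip .
        # upload the bundle so Elastic Beanstalk has something to deploy
        aws s3 cp application.zip s3://your-bucket-name/application.zip
        aws elasticbeanstalk create-application-version --application-name your-app-name --version-label v1 --source-bundle S3Bucket="your-bucket-name",S3Key="application.zip"
        aws elasticbeanstalk update-environment --application-name your-app-name --environment-name your-env-name --version-label v1
```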
I have now modified my python-app.yml file to include the new job. But I feel there's something missing. The bucket name, app name, and so on can't just stay as "your-app-name". I need to ask about this:
>>> Will the S3 bucket be automatically created? Or do I need to do this first?
The S3 bucket will not be automatically created by the AWS Elastic Beanstalk deployment process. You will need to create the S3 bucket manually before deploying your application. Here's how you can do it:
### Steps to Create an S3 Bucket:
1. Log in to the AWS Management Console:
• Go to the AWS Management Console (https://aws.amazon.com/console/) and log in with your credentials.
2. Navigate to S3:
• In the AWS Management Console, search for "S3" in the services search bar and select "S3" from the results.
3. Create a New Bucket:
• Click on the "Create bucket" button.
• Enter a unique name for your bucket. Bucket names must be globally unique across all AWS accounts.
• Choose the AWS region where you want to create the bucket. It's a good practice to choose the same region where your Elastic Beanstalk environment is located.
• Configure any additional settings as needed (e.g., versioning, encryption).
• Click "Create bucket" to finalize the creation.
4. Note the Bucket Name:
• Make sure to note down the bucket name, as you'll need to use it in your GitHub Actions workflow for the deployment process.
### Update Your GitHub Actions Workflow:
• Replace your-bucket-name in the workflow file with the name of the S3 bucket you created.
### Why Use S3 for Elastic Beanstalk?
• Storage: S3 is used to store the application package (e.g., ZIP file) that Elastic Beanstalk will deploy.
• Integration: Elastic Beanstalk can easily pull the application package from S3 during the deployment process.
I had no problem following this guide, and it was easy to set up a new bucket. I updated the workflow, so now it's just the app and environment names left.
>>> What about the app name and environment name?
When deploying an application to AWS Elastic Beanstalk, you'll need to specify both an application name and an environment name. These are identifiers used by Elastic Beanstalk to manage your application and its deployment environment.
### Application Name:
• Definition: The application name is a logical container for your Elastic Beanstalk resources. It represents your overall application and can contain multiple environments (e.g., development, staging, production).
• How to Set:
• You can choose any name that makes sense for your application. For example, if your application is called "HelloWorldApp," you might use that as your application name.
• When creating your Elastic Beanstalk application via the AWS Management Console or CLI, you'll be prompted to enter this name.
### Environment Name:
• Definition: The environment name is a specific deployment of your application. Each environment runs a version of your application and can be configured independently.
• How to Set:
• Choose a name that reflects the purpose of the environment, such as "HelloWorldApp-dev" for a development environment or "HelloWorldApp-prod" for a production environment.
• When creating an environment within your Elastic Beanstalk application, you'll be prompted to enter this name.
### Setting Up in AWS:
1. Create an Elastic Beanstalk Application:
• Go to the AWS Management Console, navigate to Elastic Beanstalk, and create a new application.
• Enter your chosen application name.
2. Create an Environment:
• Within the Elastic Beanstalk application, create a new environment.
• Enter your chosen environment name and configure the environment settings (e.g., platform, instance type).
### Update Your GitHub Actions Workflow:
• Replace your-app-name and your-env-name in the workflow file with the names you chose for your Elastic Beanstalk application and environment.
By setting up these names, you ensure that your deployment process targets the correct resources in AWS Elastic Beanstalk. If you have any questions or need further assistance, feel free to ask!
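A small tip that keeps the workflow tidier: instead of repeating these names in every command, you can define them once as environment variables on the job and reference them in the shell. The values below are just hypothetical names for this walkthrough:

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      EB_APP_NAME: hello-world-python      # Elastic Beanstalk application name
      EB_ENV_NAME: hello-world-python-env  # Elastic Beanstalk environment name
      S3_BUCKET: your-bucket-name          # bucket that holds the zipped bundles
    steps:
    # ... checkout, Python setup and AWS credentials as before ...
    - name: Deploy to Elastic Beanstalk
      run: |
        aws elasticbeanstalk update-environment --application-name "$EB_APP_NAME" --environment-name "$EB_ENV_NAME" --version-label v1
```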
I went through a wizard, and I must say I've never seen so many checkboxes, select lists, and other options in my life. There were pages upon pages of different choices. I tried not to touch anything, and hopefully it will work out of the box. Learning more about all of this will come in a later part of this series.
Now I just need to update the workflow with the correct names, and we’ll try to deploy 🙂 (Just wait a few minutes for the environment to be created).
The CD pipeline failed a couple of times. First I had a typo in the name, and the second time it was because the environment hadn't started correctly on AWS. When I got confirmation that it was running, I could finally try the deployment myself.
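If you run into the same thing, a quick way to check that the environment is actually up before redeploying is to query its status, either in the console or from a workflow step like this one (the environment name is again a placeholder):

```yaml
    - name: Check environment status
      env:
        AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
        AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        AWS_REGION: 'your-region'
      run: |
        # prints something like "Ready  Green" once the environment is healthy
        aws elasticbeanstalk describe-environments \
          --environment-names your-env-name \
          --query "Environments[0].[Status,Health]" \
          --output text
```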
Summary
Okay, I have now learned a lot of the basics when it comes to CI/CD. I know a little more about where the settings go, where to monitor the process, and so on. But I still want some more.
I'm going to ask Clio if she can recommend some YouTube videos or articles I can read to learn a little more of this basic stuff. For this, she actually needs to go out on the internet and find out what's best for me.
>>> Can you recommend me some YouTube videos or articles I can read to learn even more about the basics of CI/CD?
To learn more about GitHub Actions for CI/CD, here are some excellent resources:
1. GitHub Actions Documentation:
• The official GitHub Actions documentation (https://docs.github.com/en/actions) provides comprehensive guides and examples for setting up workflows and automating tasks.
2. GitHub Learning Lab:
• The GitHub Learning Lab (https://lab.github.com/) offers interactive courses that guide you through creating and managing workflows with GitHub Actions.
3. YouTube Videos:
• GitHub Actions: Hello World: This video introduces the basics of GitHub Actions and demonstrates how to set up a simple workflow.
• GitHub Actions CI/CD Tutorial: This tutorial provides a step-by-step guide on using GitHub Actions for continuous integration and continuous deployment, covering more advanced features and best practices.
Okay, so there I have it: some more reading material and videos I can dive into 🙂
In the next part of this series, we’ll start working with Containerization (Docker and similar). That’s going to be interesting!