Using Acorn in Developer Mode to Build a Python Web App

Aug 22, 2022 by Thorsten Klein

Acorn is a great tool to package a complete application stack, including images, configuration and deployment specification into a single OCI image for easy distribution.

This also enables seamless promotion of the same image through different environments, e.g. from development through staging to production.

In this blog we want to focus on the very first element in that chain: development.

Prerequisites

All you need to follow along is

  1. Acorn CLI installed: See our installation guide
  2. Admin access to a Kubernetes cluster: Here, we’re using k3d (see the example right after this list).
  3. Acorn installed into your cluster: Run acorn install against your cluster.
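If you don’t have a cluster at hand yet, getting one with k3d and installing Acorn into it can be as simple as this (the cluster name is arbitrary):

# create a small local Kubernetes cluster
k3d cluster create acorn-dev

# install Acorn into the cluster your kubeconfig currently points to
acorn install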

Note: You do not need to have Python installed, as we’ll do all testing in the deployed Acorn app container.

Goal

In this blog, we’re going to demonstrate how you can leverage Acorn and especially its built-in development mode to speed up your development workflow.

As an example, we’re going to build a Python application using the Flask framework and integrate it with some third-party tooling, namely Redis and Postgres.

We’re building something similar to what we demonstrate in our Getting Started Guide, but making it a bit more complex and closer to a real-life use case.

Our application will show a web page where people can enter their E-Mail to subscribe to some newsletter.

Below the subscription form it shows a list of like-minded people’s addresses (Not really GDPR compliant, but well..).

Additionally, it gives operators an endpoint to see which subscriptions are still processing.

[Figure: Flask, Redis and Postgres architecture]

Setup & Preparation

Note: You can download the source code for each step defined below from our examples repository

All of our development will happen in a new directory, which we’ll call acorn-devspace.

To start, we will create a file structure as shown below:

├── Acornfile
├── Dockerfile
├── requirements.txt
└── src
    └── app
        ├── __init__.py
        ├── app.py
        └── settings.py

For the rest of this blog, we’ll assume that any command is executed from inside the new acorn-devspace directory.

Step 1: Starting Development

We’ll start with a basic Flask app, without integrating with Postgres and Redis yet:

  1. src/app/__init__.py will stay empty.

  2. src/app/settings.py will be used to configure some application settings via environment variables:

import logging as log
import os

GREETING = os.getenv("GREETING", "Hello World!")

OUTPUT_DIR = os.getenv("OUTPUT_DIR", "/app/output")
os.makedirs(OUTPUT_DIR, exist_ok=True)

LOG_LEVEL = "INFO"
log.basicConfig(level=LOG_LEVEL)

  3. src/app/app.py defines a Flask application serving two routes:

  • / serves a webpage including a subscription form, a greeting text customizable via the GREETING setting and a list of active subscriptions
  • /processing returns a JSON dict for operators to see which subscriptions are currently being processed
import logging as log
import os
import time

from flask import Flask, jsonify, render_template_string, request

from app import settings

tpl = """
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <style>
        .container {
            display: flex;
            flex-wrap: wrap;
            flex-direction: column;
            justify-content: center;
            align-items: center;
        }
    </style>
    <title>Awesome Acorn</title>
</head>
<body>
<div class="container">
    <h1>Awesome Acorn!</h1>
    <h2>{{ greeting }}</h2>
    <form action="" method="post">
        <input type="email" name="email" placeholder="Email">
        <input type="submit" value="Subscribe">
        {% if error %}
        <p style="color: red">{{ error }}</p>
        {% endif %}
    </form>
    <h2>Subscribers</h2>
    <ul>
        {% for subscriber in subscribers %}
        <li>{{ subscriber }}</li>
        {% endfor %}
    </ul>
</div>
</body>
</html>
"""


class SubscriberExistsError(Exception):
    ''' Exception to be raised when a subscriber already exists '''
    pass


processing_jobs = []

subscriber_file = os.path.join(settings.OUTPUT_DIR, 'subscribers.txt')
with open(subscriber_file, 'w') as f:
    log.info(f"Created subscribers file at {subscriber_file} if it didn't exist")

app = Flask(__name__)


@app.route('/', methods=['GET', 'POST'])
def index():
    error = None
    # subscription form submission
    if request.method == 'POST':
        email = request.form.get('email')
        if email and email.strip() != "":
            try:
                add_subscriber(email)
                process_subscription(email)
            except SubscriberExistsError:
                error = f'{email} is already subscribed'
            except Exception as e:
                log.error(e)
                error = 'Error adding subscription, please try again later!'
        else:
            error = 'Please enter an email'
    return render_template_string(
        tpl,
        greeting=settings.GREETING,
        error=error,
        subscribers=get_subscribers(),
    )


@app.route('/processing', methods=['GET'])
def processing():
    ''' Return the list of processing jobs '''
    return jsonify({"count": len(processing_jobs), "jobs": processing_jobs})


def add_subscriber(email: str):
    ''' Add a new subscriber to the list. '''
    log.info(f'Adding {email} to subscribers')
    with open(subscriber_file, 'a') as f:
        f.write(email + '\n')


def get_subscribers():
    ''' Return all emails subscribed so far '''
    with open(subscriber_file, 'r') as f:
        return f.read().splitlines()


def process_subscription(email: str):
    '''
    Some long operation to process a new subscription
    e.g. sending a welcome mail and setting up custom jobs
    '''
    log.info(f'Processing subscription for {email}')
    global processing_jobs
    processing_jobs.append(email)
    time.sleep(10)
    log.info(f'Finished processing subscription for {email}')
    processing_jobs.remove(email)
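While a subscription is being processed, the /processing endpoint will return JSON of roughly this shape (the address below is just an illustrative example):

{"count": 1, "jobs": ["jane@example.com"]}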

Now that we have the application code, we create all files required to define the dependencies of our app and how it should be built, packaged and deployed:

  1. requirements.txt defines all Python dependencies of our app. We’ll already specify all dependencies that we will need throughout the whole article, so we won’t have to touch this file again later.
flask
psycopg2-binary
redis
validate_email
rq
gunicorn

Those dependencies are required to run a Flask app that can interact with PostgreSQL and Redis. Gunicorn is a production-grade webserver used instead of Flask’s built-in webserver in production environments.

Note: Generally, it’s best practice to use pinned versions instead of opting for the latest version implicitly as we do it here.
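For illustration, a pinned requirements.txt could look roughly like this (the versions below are examples only; pick the ones you have actually tested against):

flask==2.2.2
psycopg2-binary==2.9.3
redis==4.3.4
validate_email==1.3
rq==1.11.0
gunicorn==20.1.0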

  2. We use a Dockerfile to describe how our application can be built as a container image including all runtime dependencies.
FROM python:3-alpine
WORKDIR /app
ENV FLASK_APP=/app/src/app/app.py
ENV FLASK_RUN_HOST=0.0.0.0
ENV PYTHONPATH=/app/src:$PYTHONPATH
RUN apk add --no-cache gcc musl-dev linux-headers
COPY . .
RUN pip install -r requirements.txt
EXPOSE 5000
CMD ["flask", "run"]

Note: We choose Alpine-based images throughout this blog, as they’re quite lightweight but still contain useful tools, e.g. for debugging.

  3. The following Acornfile adds configuration and deployment information to the mix:
containers: {
    app: {
        build: "."
        ports: publish: "5000/http"
        env: {
            "GREETING": "Hello World"
        }
    }
}

In this Acornfile we define an Acorn app consisting of a single container called app that is built from the current directory (.), that exposes some HTTP service on port 5000 which we want to publish externally.

In addition to that, we configure the app using the GREETING environment variable.

First Run

Let’s run our app with a single command:

acorn run --name awesome-acorn .

This will trigger a build of our Acorn image and then deploy it to your Kubernetes cluster.

Since we specified that we want to publish HTTP port 5000, Acorn will provide us with an accessible ENDPOINT domain right after the app started successfully.

You’ll see it in the output of the run command or by checking out your running app via

acorn app awesome-acorn

If you now access that endpoint using your web browser, you should see our webpage:

[Screenshot: the Awesome Acorn subscription page]

Feel free to play around with it.

Also, you can view the logs of your application container(s) using

acorn logs awesome-acorn

Easy! But now it’s obvious that this whole change->build->deploy->check-logs cycle can be quite tedious if all you want are quick development iterations.

Introducing the Development Mode

Here, the --dev flag comes into play.

It will activate Acorn’s built-in interactive development mode.

What this does is:

  • internally set the args.dev argument to true
  • build the Acorn image and deploy the app
  • stream logs and application status in the foreground
  • stop and delete everything on exit of the current shell (CTRL+C)

OK, that’s not everything. The best part of it is:

  • watch for local file changes and
  • trigger a rebuild/redeploy on e.g. Acornfile or Dockerfile changes
  • upload file changes to the running container

Live-Reloading

That very last point right there is the trick that brings us to live-reloading of code.

The goal of live-reloading is to avoid the tedious build cycle described above by having the runtime watch for file changes and then automatically trigger a reload of the application without the need for a new build or deployment.

This drastically speeds up the development iterations by reducing the wait time required until you can see your change in action.

Note: In this blog, we’re using Python and Flask, which is perfect for live-reloading of code, but there are many other combos out there that can leverage this feature.

Let’s give this a try.

To enable live-reloading in our current setup, we only need to change the Acornfile so it looks like the following:

containers: {
    app: {
        build: "."
        ports: publish: "5000/http"
        env: {
            "GREETING": "Hello World"
            "FLASK_DEBUG": "1"
        }
        dirs: {
            "/app": "./"
        }
    }
}

The new environment variable FLASK_DEBUG enables the development webserver in the Flask runtime, which triggers a reload on file changes.

With the new dirs section, Acorn will create a bind mount of the local working directory ./ to the /app directory inside the app container where the source code resides (as defined in the Dockerfile).

To see it in action, we run the Acorn app in development mode using the --dev flag:

acorn run --dev --name awesome-acorn .

In the output of this command you will see the build process happening, followed by continuous status updates on the deployment of the Acorn app and lastly a stream of container logs.

Accessing the exposed endpoint, we will still see the same webpage with the original Hello World greeting.

We can easily test both ways of synchronization:

  1. Changing the Acornfile:
  • Change the value of the GREETING environment variable to e.g. Hi Acorn and save the file
  • Watch Acorn rebuild and redeploy the app
  • On your next visit to the exposed endpoint, you will see the new greeting
  2. Changing the Python code:
  • In the app.py file, change the line

    <h2>{{ greeting }}</h2>

    in the tpl template string to

    <h2 style="color: red">{{ greeting }}</h2>

    and save the file
  • Watch Acorn synchronize the changed file with the running container
  • Watch Flask notice the file change and reload the webserver
  • On your next visit to the exposed endpoint, you will see the Hello World greeting in a shiny red color

We will make use of this way more during the rest of this article to always verify our changes on the fly.

Synchronizing the Other Direction

But there’s even more to it. If you’ve read the help text of the run command (acorn run --help), you’ve probably seen the --bidirectional-sync flag as well, which goes hand in hand with the --dev flag.

This will tell Acorn to not only synchronize file changes from your local working directory to the running container, but also the other way around.

This can come in handy if your app writes information to some file that you would otherwise have to fetch, e.g. from a webserver or file storage, later on… which takes way too long for rapid development.

As you can see in the app.py code, our app saves all new subscribers in a plain text file, separated by newlines.

Usually, that file is only used internally to the app.

You could inspect it manually by executing a command inside the running Acorn app container:

acorn exec awesome-acorn cat /app/output/subscribers.txt

But there’s an even easier way using bidirectional synchronization, so you can inspect the file right in your editor without any extra commands or redirects.

Just run the app again in development mode with the additional --bidirectional-sync flag:

acorn run --name awesome-acorn --dev --bidirectional-sync .

All the output will stay the same as before, except that you now also see (sync) messages for “Downstream” synchronization whenever a file changes in the mapped directory inside the container.

You will see a new directory output created in your local working directory, which contains the subscribers.txt file that is continuously updated as you add more subscribers using the web app.
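If you want to follow along live, you can simply tail that file locally (assuming a POSIX shell) while subscribing via the browser:

tail -f output/subscribers.txt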

Step 2: From Easy to (a little more) Sophisticated

Now that we know how quick development is enabled using Acorn, let’s build a proper app from what we have so far.

Goal: In this section, we will separate some of the code we stuffed into app.py out into dedicated files. We will not yet integrate it with external services.

Make sure that your app is still running in development mode:

acorn run --name awesome-acorn --dev --bidirectional-sync .

Now, we separate concerns:

  1. Move the HTML template out of the Python code. Move only the value of the tpl variable in app.py (without quotes) into a new file inside a new directory: src/app/web/templates/index.html. Afterwards, move everything between the <style> tags into another new file src/app/web/static/css/style.css and replace the tags with a single line that includes the new file:
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <link rel="stylesheet" href="/static/css/style.css"/>
    <title>Awesome Acorn</title>
</head>
<body>
<div class="container">
    <h1>Awesome Acorn!</h1>
    <h2 style="color: red">{{ greeting }}</h2>
    <form action="" method="post">
        <input type="email" name="email" placeholder="Email">
        <input type="submit" value="Subscribe">
        {% if error %}
        <p style="color: red">{{ error }}</p>
        {% endif %}
    </form>
    <h2>Subscribers</h2>
    <ul>
        {% for subscriber in subscribers %}
        <li>{{ subscriber }}</li>
        {% endfor %}
    </ul>
</div>
</body>
</html>
.container {
    display: flex;
    flex-wrap: wrap;
    flex-direction: column;
    justify-content: center;
    align-items: center;
}
  2. Flask has a concept called Blueprints which we will now implement in a new file src/app/web/controllers.py. It will contain most of the code that before resided in app.py. The new Blueprint bundles all defined routes:
import logging as log
import os
import time

from app import settings
from flask import Blueprint, jsonify, render_template, request

bp = Blueprint("web", __name__, url_prefix="/")


class SubscriberExistsError(Exception):
    ''' Exception to be raised when a subscriber already exists '''
    pass


processing_jobs = []

subscriber_file = os.path.join(settings.OUTPUT_DIR, 'subscribers.txt')
with open(subscriber_file, 'w') as f:
    log.info(f"Created subscribers file at {subscriber_file} if it didn't exist")


@bp.route('/', methods=['GET', 'POST'])
def index():
    error = None
    # subscription form submission
    if request.method == 'POST':
        email = request.form.get('email')
        if email and email.strip() != "":
            try:
                add_subscriber(email)
                process_subscription(email)
            except SubscriberExistsError:
                error = f'{email} is already subscribed'
            except Exception as e:
                log.error(e)
                error = 'Error adding subscription, please try again later!'
        else:
            error = 'Please enter an email'
    return render_template(
        "index.html",
        greeting=settings.GREETING,
        error=error,
        subscribers=get_subscribers(),
    )


@bp.route('/processing', methods=['GET'])
def processing():
    ''' Return the list of processing jobs '''
    return jsonify({"count": len(processing_jobs), "jobs": processing_jobs})


def add_subscriber(email: str):
    ''' Add a new subscriber to the list. '''
    log.info(f'Adding {email} to subscribers')
    with open(subscriber_file, 'a') as f:
        f.write(email + '\n')


def get_subscribers():
    ''' Return all emails subscribed so far '''
    with open(subscriber_file, 'r') as f:
        return f.read().splitlines()


def process_subscription(email: str):
    '''
    Some long operation to process a new subscription
    e.g. sending a welcome mail and setting up custom jobs
    '''
    log.info(f'Processing subscription for {email}')
    global processing_jobs
    processing_jobs.append(email)
    time.sleep(10)
    log.info(f'Finished processing subscription for {email}')
    processing_jobs.remove(email)
  3. This makes app.py appear pretty empty, as it will now only focus on registering blueprint(s). Additionally, we tell the app where it can find the templates and static files:
from flask import Flask

from app.web import bp as web

app = Flask(__name__, template_folder='web/templates', static_folder='web/static')
app.register_blueprint(web)

To be able to import the Blueprint like this, we also need to add the new file src/app/web/__init__.py:

from .controllers import bp as bp

While we changed a lot of things, Acorn synchronized everything in the running session.

Let’s have a look at the endpoint using a web browser.

[Screenshot: the Awesome Acorn subscription page, unchanged]

Still looks the same. But everything about the app’s structure changed. And yet, we didn’t need to take any manual steps to rebuild the app or deploy a new version; Acorn took care of all that automatically.

Pretty handy, isn’t it?

Step 3: Integrating with third-party services

You’re absolutely right: storing all that data somewhere in the container filesystem, and partly even in memory, is not a sustainable solution. As mentioned in the very beginning, we’re going to move that data to external services, namely to a PostgreSQL database and a Redis cache which we’ll use as a queuing system.

Note: The Acorn app should still be running in development mode

  1. We’re going to add two containers to the Acornfile:
containers: {
    app: {
        // ...
    }
    db: {
        image: "postgres:alpine"
        ports: "5432/tcp"
        env: {
            "POSTGRES_USER": "acorn-devspace"
            "POSTGRES_PASSWORD": "acorn-devspace"
            "POSTGRES_DB": "acorn-devspace"
        }
    }
    redis: {
        image: "redis:alpine"
        ports: "6379/tcp"
    }
}

Both containers run services on specific ports. Since we don’t specify any access mode for them, they’re only accessible within the Acorn app.

The containers are reachable from the app container by their simple names db and redis.

Postgres requires some additional configuration via environment variables.

  2. Replace some parts of controllers.py with separate files: src/app/sub/db.py for database code, src/app/sub/process.py for subscription processing:
import psycopg2

from app import settings


class SubscriberExistsError(Exception):
    ''' Exception to be raised when a subscriber already exists '''
    pass


def new_connection():
    conn = psycopg2.connect(
        host=settings.DB_HOST,
        database=settings.DB_DATABASE,
        user=settings.DB_USER,
        password=settings.DB_PASSWORD,
    )
    return conn


# Get subscribers from database
def get_subscribers() -> list[str]:
    conn = new_connection()
    cur = conn.cursor()
    cur.execute("SELECT email FROM subscribers")
    res = cur.fetchall()
    conn.close()
    return [e[0] for e in res]


# Add new email to database
def add_subscriber(email: str):
    if email in get_subscribers():
        raise SubscriberExistsError
    conn = new_connection()
    cur = conn.cursor()
    cur.execute("INSERT INTO subscribers (email) VALUES (%s)", (email,))
    conn.commit()
    conn.close()
import logging as log
import time

import redis
from rq import Queue, get_current_job

redis_conn = redis.StrictRedis(host="redis", port=6379, db=0)
jobqueue = Queue(connection=redis_conn)


def process_subscription(email: str) -> dict:
    job = get_current_job()
    log.info(f"Processing subscription for {email}...")
    time.sleep(10)
    log.info(f"Completed processing subscription for {email}")
    return {
        "email": email,
        "job_id": job.id,
        "status": "success",
    }


def get_processing_jobs() -> list[dict]:
    return [
        {
            "email": job.args[0],
            "job_id": job.get_id(),
            "status": job.get_status(),
        }
        for job in jobqueue.get_jobs()
    ]


# Enqueue a new subscription for asynchronous processing (called from the web controller)
def add_processing_job(email: str):
    return jobqueue.enqueue(process_subscription, email)

Additionally, we ease our imports by re-exporting everything from these modules in src/app/sub/__init__.py:

from .db import *
from .process import *
  3. After the last step, we can now clean up src/app/web/controllers.py:
import logging as log

import app.sub as subs
from app import settings
from flask import Blueprint, jsonify, render_template, request

bp = Blueprint("web", __name__, url_prefix="/")


@bp.route('/', methods=['GET', 'POST'])
def index():
    error = None
    # subscription form submission
    if request.method == 'POST':
        email = request.form.get('email')
        if email and email.strip() != "":
            try:
                subs.add_subscriber(email)
                subs.add_processing_job(email)
            except subs.SubscriberExistsError:
                error = f'{email} is already subscribed'
            except Exception as e:
                log.error(e)
                error = 'Error adding subscription, please try again later!'
        else:
            error = 'Please enter an email'
    return render_template(
        "index.html",
        greeting=settings.GREETING,
        error=error,
        subscribers=subs.get_subscribers(),
    )


@bp.route('/processing', methods=['GET'])
def processing():
    ''' Return the list of processing jobs '''
    queued_jobs = subs.get_processing_jobs()
    return jsonify({"count": subs.jobqueue.count, "jobs": queued_jobs})
  4. Let’s also add the missing settings to src/app/settings.py:
import logging as log
import os

GREETING = os.getenv("GREETING", "Hello World!")

OUTPUT_DIR = os.getenv("OUTPUT_DIR", "/app/output")
os.makedirs(OUTPUT_DIR, exist_ok=True)

LOG_LEVEL = "INFO"
log.basicConfig(level=LOG_LEVEL)

DB_HOST = os.getenv("DB_HOST", "db")
DB_DATABASE = os.getenv("DB_DATABASE", "acorn-devspace")
DB_USER = os.getenv("DB_USER", "acorn-devspace")
DB_PASSWORD = os.getenv("DB_PASSWORD", "acorn-devspace")
REDIS_HOST = os.getenv("REDIS_HOST", "redis")

Including (Init) Files

After you’ve saved all those files, try visiting the endpoint again.

You’ll face an error saying

psycopg2.errors.UndefinedTable: relation "subscribers" does not exist.

That’s because our Postgres database is not populated with any tables yet.

There are many ways to populate the database when the container starts up for the very first time.

One particularly clean way is to use an init script, which Postgres supports out of the box.

Luckily, Acorn allows you to include files into the container filesystem at runtime.

Add the following to the db container section in the Acornfile:

containers: {
    // ...
    db: {
        // ...
        files: {
            "/docker-entrypoint-initdb.d/00-init.sql": "CREATE TABLE subscribers (email text);"
        }
    }
}

With this, the Postgres container will start with an init script that initially creates the required table.

Saving this will trigger a rebuild and redeployment of our app, which will not show this error anymore.

Depending on Others

Just seeing the aforementioned error makes us realize: Our app doesn’t work at all if Postgres and Redis aren’t OK.

Acorn has us covered here as well, providing a dependsOn setting that effectively holds back the deployment of the app container until the others are up and running:

containers: {
    app: {
        // ...
        dependsOn: [ "db", "redis" ]
    }
}

Outsource the Processing

It may have gone unnoticed up to now, but with the above changes, we’re not processing new subscriptions anymore.

We’re putting them into a Redis queue and leaving them there without acting on them.

We have two options now:

  1. Implement something that acts on each job as it comes into the queue, with a continuously running service doing so (a sketch of this option follows below)
  2. Create a CronJob that periodically works on all jobs in the queue until it’s empty and then stops until its next run

To save some energy, Option 2 sounds reasonable. And actually, we can re-use our original container image for this.
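Just for comparison, Option 1 would roughly amount to an additional, continuously running worker container built from the same image. This is only a sketch and not part of our final Acornfile:

containers: {
    // ...
    // Option 1 (for comparison only): a long-running worker that processes jobs as they arrive
    worker: {
        build: "."
        command: "rq worker --url redis://redis:6379"
    }
}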

Let’s add a job to the Acornfile below the containers section:

containers: {
    // ...
}
jobs: "process-subscriptions": {
    build: "."
    command: "rq worker --burst --url redis://redis:6379"
    schedule: "@daily"
}

Et voilà, now we have a CronJob that starts a process to clean the subscription processing queue once every day.

It’s part of the Acorn app, so it can also call the Redis container by name.

Finish Development

Please give the app another try: add some emails to the list, check out the /processing endpoint and see how jobs are accumulating there.

Feel free to change the schedule of the CronJob to e.g. run once per minute and watch how it works through all the items in the queue.
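For example, switching the job to run once per minute only requires changing its schedule field to a standard cron expression:

jobs: "process-subscriptions": {
    // ...
    // run every minute instead of once per day
    schedule: "* * * * *"
}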

This is the final Acornfile after this section:

containers: {
    app: {
        build: "."
        ports: publish: "5000/http"
        dependsOn: [ "db", "redis" ]
        env: {
            "GREETING": "Hi Acorn"
        }
        if args.dev {
            env: {"FLASK_DEBUG": "1"}
            dirs: {
                "/app": "./"
            }
        }
    }
    db: {
        image: "postgres:alpine"
        ports: "5432/tcp"
        env: {
            "POSTGRES_USER": "acorn-devspace"
            "POSTGRES_PASSWORD": "acorn-devspace"
            "POSTGRES_DB": "acorn-devspace"
        }
        files: {
            "/docker-entrypoint-initdb.d/00-init.sql": "CREATE TABLE subscribers (email text);"
        }
    }
    redis: {
        image: "redis:alpine"
        ports: "6379/tcp"
    }
}
jobs: "process-subscriptions": {
    build: "."
    command: "rq worker --burst --url redis://redis:6379"
    schedule: "@daily"
}

Wait… if args.dev {...} I didn’t see that before. Correct, that’s new.

Remember that --dev internally sets the args.dev argument to true?

We can make use of that here to make sure certain options are only active in development mode.

Step 4: Moving to Production

Production environments come with different requirements than development:

  • data persistence
  • configuration without touching the source
  • proper secrets
  • production-grade webserver

Let’s get those done before packaging our app for others to use in production.

  1. Persist subscriber data by adding a volume for Postgres in the Acornfile:
containers: {
    // ...
    db: {
        // ...
        if !args.dev {
            dirs: {
                "/var/lib/postgresql/data": "volume://pgdata"
            }
        }
    }
}
if !args.dev {
    volumes: {
        "pgdata": {
            accessModes: "readWriteOnce"
        }
    }
}
  2. End users won’t want to read and modify the Acornfile to run our app with a custom greeting text, so let’s expose it as an argument:
args: {
    // Configure the greeting text in the web app
    greeting: "Hello World"
}
containers: {
    // ...
}

Now users can run acorn run --help and see a help text as per the comment in the Acornfile and set a new greeting text via acorn run --greeting "Hi Args".
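For the new argument to actually reach the application, the app container’s GREETING environment variable has to reference it. A minimal sketch of that wiring (the rest of the container definition stays as before):

containers: {
    app: {
        // ...
        env: {
            // pass the user-provided argument into the container
            "GREETING": args.greeting
        }
    }
}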

For you, developing this right now, the image argument can just be . as before.

  3. Then, let’s remove the hardcoded Postgres password and use a random auto-generated one of type "token":
containers: {
    app: {
        // ...
        env: {
            // ...
            "DB_PASSWORD": "secret://pg-creds/token"
        }
    }
    // ...
    db: {
        // ...
        env: {
            "POSTGRES_PASSWORD": "secret://pg-creds/token"
        }
    }
}
// ...
secrets: {
    "pg-creds": {
        type: "token"
    }
}
  4. Finally, let’s switch from Flask’s built-in webserver to Gunicorn as a production-grade server for our app by changing the app container’s entrypoint:
containers: {
    app: {
        // ...
        if !args.dev {
            entrypoint: #"gunicorn -w 4 -b 0.0.0.0:5000 app.app:app"#
        }
    }
}

Package and Push for Distribution

Now that’s pretty! Let’s package everything into a single Acorn image and push it to some OCI registry, e.g. DockerHub.

acorn login my.registry.com

# Build
acorn build -t my.registry.com/acorns/awesome-acorn:v0.1.0 .

# Push
acorn push my.registry.com/acorns/awesome-acorn:v0.1.0

# Run the image
acorn run --name awesome-acorn my.registry.com/acorns/awesome-acorn:v0.1.0

Wrapping Up

In this post, we built an entire Python Flask application from scratch, without even having to have Python installed locally, by synchronizing all of our file changes with containers deployed in a Kubernetes cluster via Acorn’s development mode. We also synchronized changed files back to our local filesystem for debugging and testing.

Then we integrated our homegrown app with third-party apps it now depends on.

In the end, we made our deployment package production-ready and distributed it as a single Acorn image that anyone can download and run easily, with a single customizable argument.

That was a lot. But there’s even more to it. Check out the Acorn docs.