Building an Application with a RabbitMQ Acorn Service – Part 2
Acorn is a great tool to package a complete application stack, including images, configuration and deployment specification into a single OCI image for easy distribution.
This also enables seamless promotion of the same image through different environments, e.g. from development through staging to production.
In this blog we want to focus on the very first element in that chain: development.
All you need to follow along is the Acorn CLI and access to a Kubernetes cluster.
Note: You do not need to have Python installed, as we’ll do all testing in the deployed Acorn app container.
In this blog, we’re going to demonstrate how you can leverage Acorn and especially its built-in development mode to speed up your development workflow.
Flask, Redis and Postgres Architecture

As an example, we're going to build a Python application using the Flask framework and integrate it with some third-party tooling, namely Redis and Postgres.
We're building something similar to what we demonstrate in our Getting Started Guide, but a bit more complex and closer to a real-life use case.
Our application will show a web page where people can enter their email address to subscribe to a newsletter.
Below the subscription form it shows a list of like-minded people’s addresses (Not really GDPR compliant, but well..).
Additionally, it gives operators an endpoint to see which subscriptions are still processing.
Note: You can download the source code for each step defined below from our examples repository
All of our development will happen in a new directory, which we’ll call acorn-devspace.
For a start, we will create the file structure below:
```
├── Acornfile
├── Dockerfile
├── requirements.txt
└── src
    └── app
        ├── __init__.py
        ├── app.py
        └── settings.py
```
For the rest of this blog, we’ll assume that any command is executed from inside the new acorn-devspace directory.
We’ll start with a basic Flask app, without integrating with Postgres and Redis yet:
src/app/__init__.py will stay empty.
src/app/settings.py will be used to configure some application settings via environment variables:
```python
import logging as log
import os

GREETING = os.getenv("GREETING", "Hello World!")

OUTPUT_DIR = os.getenv("OUTPUT_DIR", "/app/output")
os.makedirs(OUTPUT_DIR, exist_ok=True)

LOG_LEVEL = "INFO"
log.basicConfig(level=LOG_LEVEL)
```
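As a quick sanity check of this pattern, here is a small, hypothetical sketch you can run anywhere (no Acorn needed): os.getenv falls back to the default when the variable is unset and picks up the environment when it is set, which is exactly how GREETING will later be injected.

```python
import os

# With GREETING unset, the default from settings.py applies.
os.environ.pop("GREETING", None)
greeting = os.getenv("GREETING", "Hello World!")
print(greeting)  # Hello World!

# With GREETING set (as the Acornfile will do), the env value wins.
os.environ["GREETING"] = "Hi Acorn"
greeting = os.getenv("GREETING", "Hello World!")
print(greeting)  # Hi Acorn
```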
The src/app/app.py file defines a Flask application serving two routes:
```python
import logging as log
import os
import time

from flask import Flask, jsonify, render_template_string, request

from app import settings

tpl = """
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <style>
        .container {
            display: flex;
            flex-wrap: wrap;
            flex-direction: column;
            justify-content: center;
            align-items: center;
        }
    </style>
    <title>Awesome Acorn</title>
</head>
<body>
<div class="container">
    <h1>Awesome Acorn!</h1>
    <h2>{{ greeting }}</h2>
    <form action="" method="post">
        <input type="email" name="email" placeholder="Email">
        <input type="submit" value="Subscribe">
        {% if error %}
        <p style="color: red">{{ error }}</p>
        {% endif %}
    </form>
    <h2>Subscribers</h2>
    <ul>
        {% for subscriber in subscribers %}
        <li>{{ subscriber }}</li>
        {% endfor %}
    </ul>
</div>
</body>
</html>
"""


class SubscriberExistsError(Exception):
    ''' Exception to be raised when a subscriber already exists '''
    pass


processing_jobs = []

subscriber_file = os.path.join(settings.OUTPUT_DIR, 'subscribers.txt')
with open(subscriber_file, 'w') as f:
    log.info(f"Created subscribers file at {subscriber_file} if it didn't exist")

app = Flask(__name__)


@app.route('/', methods=['GET', 'POST'])
def index():
    error = None
    # subscription form submission
    if request.method == 'POST':
        email = request.form.get('email')
        if email and email.strip() != "":
            try:
                add_subscriber(email)
                process_subscription(email)
            except SubscriberExistsError:
                error = f'{email} is already subscribed'
            except Exception as e:
                log.error(e)
                error = 'Error adding subscription, please try again later!'
        else:
            error = 'Please enter an email'
    return render_template_string(
        tpl,
        greeting=settings.GREETING,
        error=error,
        subscribers=get_subscribers(),
    )


@app.route('/processing', methods=['GET'])
def processing():
    ''' Return the list of processing jobs '''
    return jsonify({"count": len(processing_jobs), "jobs": processing_jobs})


def add_subscriber(email: str):
    ''' Add a new subscriber to the list. '''
    log.info(f'Adding {email} to subscribers')
    with open(subscriber_file, 'a') as f:
        f.write(email + '\n')


def get_subscribers():
    ''' Return all emails subscribed so far '''
    with open(subscriber_file, 'r') as f:
        return f.read().splitlines()


def process_subscription(email: str):
    '''
    Some long operation to process a new subscription
    e.g. sending a welcome mail and setting up custom jobs
    '''
    log.info(f'Processing subscription for {email}')
    global processing_jobs
    processing_jobs.append(email)
    time.sleep(10)
    log.info(f'Finished processing subscription for {email}')
    processing_jobs.remove(email)
```
Now that we have the application code, we create all files required to define the dependencies of our app and how it should be built, packaged and deployed:
requirements.txt:

```
flask
psycopg2-binary
redis
validate_email
rq
gunicorn
```
Those dependencies are required to run a Flask app that can interact with PostgreSQL and Redis. Gunicorn is a production-grade webserver used instead of Flask’s built-in webserver in production environments.
Note: Generally, it's best practice to pin dependency versions instead of implicitly opting for the latest version as we do here.
Dockerfile:

```dockerfile
FROM python:3-alpine

WORKDIR /app

ENV FLASK_APP=/app/src/app/app.py
ENV FLASK_RUN_HOST=0.0.0.0
ENV PYTHONPATH=/app/src:$PYTHONPATH

RUN apk add --no-cache gcc musl-dev linux-headers

COPY . .

RUN pip install -r requirements.txt

EXPOSE 5000

CMD ["flask", "run"]
```
Note: We choose Alpine-based images throughout this blog, as they’re quite lightweight but still contain useful tools, e.g. for debugging.
Acornfile:

```
containers: {
  app: {
    build: "."
    ports: publish: "5000/http"
    env: {
      "GREETING": "Hello World"
    }
  }
}
```
In this Acornfile we define an Acorn app consisting of a single container called app, built from the current directory (.), which exposes an HTTP service on port 5000 that we want to publish externally.
In addition to that, we configure the app using the GREETING environment variable.
Let’s run our app with a single command:
acorn run --name awesome-acorn .
This will trigger a build of our Acorn image and then deploy it to your Kubernetes cluster.
Since we specified that we want to publish HTTP port 5000, Acorn will provide us with an accessible endpoint domain right after the app has started successfully.
You'll see it in the output of the run command or by checking on your running app via
acorn app awesome-acorn
If you now access that endpoint in your web browser, you should see our webpage:
Feel free to play around with it.
Also, you can view the logs of your application container(s) using
acorn logs awesome-acorn
Easy! But now it's obvious that this whole change->build->deploy->check_logs cycle can get quite tedious when all you want are quick development iterations.
Here, the --dev flag comes into play.
It activates Acorn's built-in interactive development mode.
What this does is: it builds the Acorn image, deploys the app and then continuously streams the deployment status and container logs to your terminal.
OK, that's not everything. The best part of it is: it also synchronizes file changes from your local working directory into the running containers.
Live-Reloading

That very last point right there is the trick that brings us to live-reloading of code.
The goal of live-reloading is to avoid the tedious build cycle described above by having the runtime watch for file changes and then automatically trigger a reload of the application without the need for a new build or deployment.
This drastically speeds up the development iterations by reducing the wait time required until you can see your change in action.
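To illustrate the mechanism, here is a minimal, hypothetical sketch of a live-reload loop in plain Python: watch a module's source file for a newer modification time and re-import it when it changes. Flask's debug server does essentially this (far more robustly) for your whole app. The greeter module and its GREETING variable are made up for this example.

```python
import importlib
import os
import sys
import tempfile

# Write a tiny "app module" to a temp directory (stand-in for src/app).
workdir = tempfile.mkdtemp()
module_path = os.path.join(workdir, "greeter.py")
with open(module_path, "w") as f:
    f.write('GREETING = "Hello World!"\n')

sys.path.insert(0, workdir)
import greeter  # noqa: E402

last_mtime = os.path.getmtime(module_path)
print(greeter.GREETING)  # Hello World!

# Simulate an edit, as your editor would do with --dev running,
# and bump the mtime explicitly so the change is always detected.
with open(module_path, "w") as f:
    f.write('GREETING = "Hi Acorn!"\n')
os.utime(module_path, (last_mtime + 1, last_mtime + 1))

# The watcher notices the newer mtime and reloads the module.
if os.path.getmtime(module_path) != last_mtime:
    importlib.reload(greeter)

print(greeter.GREETING)  # Hi Acorn!
```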
Note: In this blog, we’re using Python and Flask, which is perfect for live-reloading of code, but there are many other combos out there that can leverage this feature.
Let’s give this a try.
To enable live-reloading in our current setup, we only need to change the Acornfile so it looks like the following:
```
containers: {
  app: {
    build: "."
    ports: publish: "5000/http"
    env: {
      "GREETING": "Hello World"
      "FLASK_DEBUG": "1"
    }
    dirs: {
      "/app": "./"
    }
  }
}
```
The new environment variable FLASK_DEBUG enables the development webserver in the Flask runtime, which triggers a reload on file changes.
With the new dirs section, Acorn will create a bind mount of the local working directory ./ to the /app directory inside the app container where the source code resides (as defined in the Dockerfile).
To see it in action, we run the Acorn app in development mode using the --dev flag:
acorn run --dev --name awesome-acorn .
In the output of this command you will see the build process happening, followed by continuous status updates on the deployment of the Acorn app and lastly a stream of container logs.
Accessing the exposed endpoint, we will still see the same webpage with the original Hello World greeting.
We can easily test the synchronization: change something in the code, e.g. the greeting in src/app/settings.py, save the file and watch the Flask server reload in the log stream.
Synchronizing the other Direction

But there's even more to it. If you've read the help text of the run command (acorn run --help), you've probably seen the --bidirectional-sync flag as well, which goes hand in hand with the --dev flag.
This will tell Acorn to not only synchronize file changes from your local working directory to the running container, but also the other way around.
This can come in handy if your app writes information to some file that you would otherwise have to download from a webserver or file storage later on... which takes way too long for rapid development.
As you can see in the app.py code, our app saves all new subscribers in a plain text file, separated by newlines.
Usually, that file is only used internally by the app.
You could inspect it manually by executing a command inside the running Acorn app container:
acorn exec awesome-acorn cat /app/output/subscribers.txt
But there's an even easier way using bidirectional synchronization: you can inspect the file right in your editor, without any redirects.
Just run the app again in development mode with the additional --bidirectional-sync flag:
acorn run --name awesome-acorn --dev --bidirectional-sync .
All the output will stay the same as before, except that you now also see (sync) messages for "Downstream" synchronization whenever a file changes in the mapped directory inside the container.
You will see a new directory output created in your local working directory, containing the subscribers.txt, which is continuously updated as you add more subscribers using the web app.
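Since the synced file is just plain text with one email per line, inspecting it locally is trivial. A small sketch, using a temp file as a stand-in for the synced ./output/subscribers.txt and made-up addresses:

```python
import os
import tempfile

workdir = tempfile.mkdtemp()
subscriber_file = os.path.join(workdir, "subscribers.txt")

# The app appends one email per line (see add_subscriber in app.py).
with open(subscriber_file, "a") as f:
    f.write("alice@example.com\n")
    f.write("bob@example.com\n")

# Reading it back locally is a one-liner.
with open(subscriber_file) as f:
    subscribers = f.read().splitlines()

print(subscribers)  # ['alice@example.com', 'bob@example.com']
```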
Now that we know how quick development is enabled using Acorn, let’s build a proper app from what we have so far.
Goal: In this section, we will separate some of the code we stuffed into app.py out into specific files. We will not yet integrate it with external services.
Make sure that your app is still running in development mode:
acorn run --name awesome-acorn --dev --bidirectional-sync .
Now, we separate concerns:
The HTML template moves into src/app/web/templates/index.html:

```html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <link rel="stylesheet" href="/static/css/style.css"/>
    <title>Awesome Acorn</title>
</head>
<body>
<div class="container">
    <h1>Awesome Acorn!</h1>
    <h2 style="color: red">{{ greeting }}</h2>
    <form action="" method="post">
        <input type="email" name="email" placeholder="Email">
        <input type="submit" value="Subscribe">
        {% if error %}
        <p style="color: red">{{ error }}</p>
        {% endif %}
    </form>
    <h2>Subscribers</h2>
    <ul>
        {% for subscriber in subscribers %}
        <li>{{ subscriber }}</li>
        {% endfor %}
    </ul>
</div>
</body>
</html>
```
The styles go into src/app/web/static/css/style.css:

```css
.container {
    display: flex;
    flex-wrap: wrap;
    flex-direction: column;
    justify-content: center;
    align-items: center;
}
```
The route handlers move into src/app/web/controllers.py as a Flask Blueprint:

```python
import logging as log
import os
import time

from app import settings
from flask import Blueprint, jsonify, render_template, request

bp = Blueprint("web", __name__, url_prefix="/")


class SubscriberExistsError(Exception):
    ''' Exception to be raised when a subscriber already exists '''
    pass


processing_jobs = []

subscriber_file = os.path.join(settings.OUTPUT_DIR, 'subscribers.txt')
with open(subscriber_file, 'w') as f:
    log.info(f"Created subscribers file at {subscriber_file} if it didn't exist")


@bp.route('/', methods=['GET', 'POST'])
def index():
    error = None
    # subscription form submission
    if request.method == 'POST':
        email = request.form.get('email')
        if email and email.strip() != "":
            try:
                add_subscriber(email)
                process_subscription(email)
            except SubscriberExistsError:
                error = f'{email} is already subscribed'
            except Exception as e:
                log.error(e)
                error = 'Error adding subscription, please try again later!'
        else:
            error = 'Please enter an email'
    return render_template(
        "index.html",
        greeting=settings.GREETING,
        error=error,
        subscribers=get_subscribers(),
    )


@bp.route('/processing', methods=['GET'])
def processing():
    ''' Return the list of processing jobs '''
    return jsonify({"count": len(processing_jobs), "jobs": processing_jobs})


def add_subscriber(email: str):
    ''' Add a new subscriber to the list. '''
    log.info(f'Adding {email} to subscribers')
    with open(subscriber_file, 'a') as f:
        f.write(email + '\n')


def get_subscribers():
    ''' Return all emails subscribed so far '''
    with open(subscriber_file, 'r') as f:
        return f.read().splitlines()


def process_subscription(email: str):
    '''
    Some long operation to process a new subscription
    e.g. sending a welcome mail and setting up custom jobs
    '''
    log.info(f'Processing subscription for {email}')
    global processing_jobs
    processing_jobs.append(email)
    time.sleep(10)
    log.info(f'Finished processing subscription for {email}')
    processing_jobs.remove(email)
```
src/app/app.py shrinks down to creating the Flask app and registering the Blueprint:

```python
from flask import Flask

from app.web import bp as web

app = Flask(__name__, template_folder='web/templates', static_folder='web/static')
app.register_blueprint(web)
```
To be able to import the Blueprint like this, we also need to add the new file src/app/web/__init__.py:
```python
from .controllers import bp as bp
```
While we changed a lot of things, Acorn synchronized everything in the running session.
Let’s have a look at the endpoint using a web browser.
Still looks the same. But everything about the app changed. Yet we didn't need to take any manual steps to rebuild the app or deploy a new version; Acorn took care of all that automatically.
Pretty handy, isn’t it?
But you may have noticed a problem: we're storing all that data in the container filesystem and even in memory. As mentioned in the very beginning, we're going to move that data to external services, namely a PostgreSQL database and a Redis cache, which we'll use as a queuing system.
Note: The Acorn app should still be running in development mode
```
containers: {
  app: {
    // ...
  }
  db: {
    image: "postgres:alpine"
    ports: "5432/tcp"
    env: {
      "POSTGRES_USER": "acorn-devspace"
      "POSTGRES_PASSWORD": "acorn-devspace"
      "POSTGRES_DB": "acorn-devspace"
    }
  }
  redis: {
    image: "redis:alpine"
    ports: "6379/tcp"
  }
}
```
Both containers run services on specific ports. Since we don’t specify any access mode for them, they’re only accessible within the Acorn app.
The containers are reachable from the app container by their simple names db and redis.
Postgres requires some additional configuration via environment variables.
src/app/sub/db.py handles the Postgres interaction:

```python
import psycopg2

from app import settings


class SubscriberExistsError(Exception):
    ''' Exception to be raised when a subscriber already exists '''
    pass


def new_connection():
    conn = psycopg2.connect(
        host=settings.DB_HOST,
        database=settings.DB_DATABASE,
        user=settings.DB_USER,
        password=settings.DB_PASSWORD,
    )
    return conn


# Get subscribers from database
def get_subscribers() -> list[str]:
    conn = new_connection()
    cur = conn.cursor()
    cur.execute("SELECT email FROM subscribers")
    res = cur.fetchall()
    conn.close()
    return [e[0] for e in res]


# Add new email to database
def add_subscriber(email: str):
    if email in get_subscribers():
        raise SubscriberExistsError
    conn = new_connection()
    cur = conn.cursor()
    cur.execute("INSERT INTO subscribers (email) VALUES (%s)", (email,))
    conn.commit()
    conn.close()
```
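If you want to experiment with this logic without a running Postgres instance, the same flow can be sketched with sqlite3 as a local stand-in: the table layout and the duplicate check mirror db.py above, only the parameter placeholder changes from %s to ?.

```python
import sqlite3


class SubscriberExistsError(Exception):
    ''' Exception to be raised when a subscriber already exists '''
    pass


# In-memory database as a stand-in for the Postgres container.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE subscribers (email text)")


def get_subscribers(conn):
    return [row[0] for row in conn.execute("SELECT email FROM subscribers")]


def add_subscriber(conn, email):
    if email in get_subscribers(conn):
        raise SubscriberExistsError
    conn.execute("INSERT INTO subscribers (email) VALUES (?)", (email,))
    conn.commit()


add_subscriber(conn, "alice@example.com")

# Adding the same address again is rejected.
try:
    add_subscriber(conn, "alice@example.com")
except SubscriberExistsError:
    duplicate_rejected = True

print(get_subscribers(conn))  # ['alice@example.com']
```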
src/app/sub/process.py handles the queuing via Redis and rq. Note that controllers.py calls subs.add_processing_job, so we define it here to enqueue the work:

```python
import logging as log
import time

import redis
from rq import Queue, get_current_job

redis_conn = redis.StrictRedis(host="redis", port=6379, db=0)
jobqueue = Queue(connection=redis_conn)


def process_subscription(email: str) -> dict:
    job = get_current_job()
    log.info(f"Processing subscription for {email}...")
    time.sleep(10)
    log.info(f"Completed processing subscription for {email}")
    return {
        "email": email,
        "job_id": job.id,
        "status": "success",
    }


def add_processing_job(email: str):
    ''' Enqueue a new subscription processing job (called from controllers.py) '''
    jobqueue.enqueue(process_subscription, email)


def get_processing_jobs() -> list[dict]:
    return [
        {
            "email": job.args[0],
            "job_id": job.get_id(),
            "status": job.get_status(),
        }
        for job in jobqueue.get_jobs()
    ]
```
Additionally, we ease our imports by re-exporting everything in src/app/sub/__init__.py:
```python
from .db import *
from .process import *
```
src/app/web/controllers.py now delegates to the new sub package:

```python
import logging as log

import app.sub as subs
from app import settings
from flask import Blueprint, jsonify, render_template, request

bp = Blueprint("web", __name__, url_prefix="/")


@bp.route('/', methods=['GET', 'POST'])
def index():
    error = None
    # subscription form submission
    if request.method == 'POST':
        email = request.form.get('email')
        if email and email.strip() != "":
            try:
                subs.add_subscriber(email)
                subs.add_processing_job(email)
            except subs.SubscriberExistsError:
                error = f'{email} is already subscribed'
            except Exception as e:
                log.error(e)
                error = 'Error adding subscription, please try again later!'
        else:
            error = 'Please enter an email'
    return render_template(
        "index.html",
        greeting=settings.GREETING,
        error=error,
        subscribers=subs.get_subscribers(),
    )


@bp.route('/processing', methods=['GET'])
def processing():
    ''' Return the list of processing jobs '''
    queued_jobs = subs.get_processing_jobs()
    return jsonify({"count": subs.jobqueue.count, "jobs": queued_jobs})
```
Finally, src/app/settings.py gains the connection settings:

```python
import logging as log
import os

GREETING = os.getenv("GREETING", "Hello World!")

OUTPUT_DIR = os.getenv("OUTPUT_DIR", "/app/output")
os.makedirs(OUTPUT_DIR, exist_ok=True)

LOG_LEVEL = "INFO"
log.basicConfig(level=LOG_LEVEL)

DB_HOST = os.getenv("DB_HOST", "db")
DB_DATABASE = os.getenv("DB_DATABASE", "acorn-devspace")
DB_USER = os.getenv("DB_USER", "acorn-devspace")
DB_PASSWORD = os.getenv("DB_PASSWORD", "acorn-devspace")

REDIS_HOST = os.getenv("REDIS_HOST", "redis")
```
After you’ve saved all those files, try visiting the endpoint again.
You’ll face an error saying
psycopg2.errors.UndefinedTable: relation "subscribers" does not exist.
That’s because our Postgres database is not populated with any tables yet.
There are many ways to populate the database when the container first starts.
One particularly clean way is to use an init script, a mechanism supported by the official Postgres image.
Luckily, Acorn allows you to include files into the container filesystem at runtime.
Add the following to the db container section in the Acornfile:
```
containers: {
  // ...
  db: {
    // ...
    files: {
      "/docker-entrypoint-initdb.d/00-init.sql": "CREATE TABLE subscribers (email text);"
    }
  }
}
```
With this, the Postgres container will start with an init script that initially creates the required table.
Saving this will trigger a rebuild and redeployment of our app, which will not show this error anymore.
Just seeing the aforementioned error makes us realize: Our app doesn’t work at all if Postgres and Redis aren’t OK.
Acorn has us covered here as well by providing a dependsOn setting that effectively holds back the deployment of the app container until the others are up and running:
```
containers: {
  app: {
    // ...
    dependsOn: [
      "db",
      "redis"
    ]
  }
}
```
It may have gone unnoticed up to now, but with the above changes, we’re not processing new subscriptions anymore.
We're putting them into a Redis queue and leaving them there without acting on them.
We have two options now: a long-running worker container that continuously consumes the queue, or a scheduled job that drains it periodically. We'll go with the scheduled job.
Let’s add a job to the Acornfile below the containers section:
```
containers: {
  // ...
}

jobs: "process-subscriptions": {
  build: "."
  command: "rq worker --burst --url redis://redis:6379"
  schedule: "@daily"
}
```
Et voilà, now we have a CronJob that starts a process to clean the subscription processing queue once every day.
It’s part of the Acorn app, so it can also call the Redis container by name.
Please give the app another try: add some emails to the list, check out the /processing endpoint and see how jobs accumulate there.
Feel free to change the schedule of the CronJob to e.g. run once per minute and watch it work off all the items in the queue.
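What the --burst flag does can be illustrated with a stdlib queue as a stand-in for Redis and rq: a burst worker drains every job currently in the queue and then exits, instead of waiting around for new work. The process_subscription stub and the example addresses here are simplified stand-ins:

```python
import queue

# Stand-in for the Redis-backed rq queue.
jobqueue = queue.Queue()


def process_subscription(email):
    # Stub for the real worker function in process.py.
    return f"processed {email}"


# Jobs accumulate between scheduled runs...
for email in ["alice@example.com", "bob@example.com"]:
    jobqueue.put(email)

# ...and a burst worker drains the queue once, then stops.
results = []
while not jobqueue.empty():
    results.append(process_subscription(jobqueue.get()))

print(results)  # ['processed alice@example.com', 'processed bob@example.com']
```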
This is the final Acornfile after this section:
```
containers: {
  app: {
    build: "."
    ports: publish: "5000/http"
    dependsOn: [
      "db",
      "redis"
    ]
    env: {
      "GREETING": "Hi Acorn"
    }
    if args.dev {
      env: {"FLASK_DEBUG": "1"}
      dirs: {
        "/app": "./"
      }
    }
  }
  db: {
    image: "postgres:alpine"
    ports: "5432/tcp"
    env: {
      "POSTGRES_USER": "acorn-devspace"
      "POSTGRES_PASSWORD": "acorn-devspace"
      "POSTGRES_DB": "acorn-devspace"
    }
    files: {
      "/docker-entrypoint-initdb.d/00-init.sql": "CREATE TABLE subscribers (email text);"
    }
  }
  redis: {
    image: "redis:alpine"
    ports: "6379/tcp"
  }
}

jobs: "process-subscriptions": {
  build: "."
  command: "rq worker --burst --url redis://redis:6379"
  schedule: "@daily"
}
```
Wait… if args.dev {...}? I didn't see that before. Correct, that's new.
Remember that --dev internally sets the args.dev argument to true?
We can make use of that here to enable some options only in development mode.
Production environments come with different requirements than development: persistent data storage, configurable settings, proper secrets handling and a production-grade webserver.
Let’s get those done before packaging our app for others to use in production.
First, let's give the Postgres database a persistent volume in the Acornfile:
```
containers: {
  // ...
  db: {
    if !args.dev {
      dirs: {
        "/var/lib/postgresql/data": "volume://pgdata"
      }
    }
  }
}

if !args.dev {
  volumes: {
    "pgdata": {
      accessModes: "readWriteOnce"
    }
  }
}
```
Next, we make the greeting configurable via an Acorn argument:

```
args: {
  // Configure the greeting text in the web app
  greeting: "Hello World"
}

containers: {
  // ...
}
```
Now users can run acorn run --help and see a help text as per the comment in the Acornfile and set a new greeting text via acorn run --greeting "Hi Args".
For you, developing this right now, the app reference can just be . as before.
We also replace the hardcoded database password with an auto-generated token secret:

```
containers: {
  app: {
    // ...
    env: {
      // ...
      "DB_PASSWORD": "secret://pg-creds/token"
    }
  }
  // ...
  db: {
    // ...
    env: {
      "POSTGRES_PASSWORD": "secret://pg-creds/token"
    }
  }
}

// ...

secrets: {
  "pg-creds": {
    type: "token"
  }
}
```
Finally, in production we serve the app with Gunicorn instead of Flask's development server:

```
containers: {
  app: {
    // ...
    if !args.dev {
      entrypoint: #"gunicorn -w 4 -b 0.0.0.0:5000 app.app:app"#
    }
  }
}
```
Now that’s pretty! Let’s package everything into a single Acorn image and push it to some OCI registry, e.g. DockerHub.
```shell
acorn login my.registry.com

# Build
acorn build -t my.registry.com/acorns/awesome-acorn:v0.1.0 .

# Push
acorn push my.registry.com/acorns/awesome-acorn:v0.1.0

# Run the image
acorn run --name awesome-acorn my.registry.com/acorns/awesome-acorn:v0.1.0
```
In this post, we built an entire Python Flask application from scratch, without even having to have Python installed locally, by synchronizing all of our file changes with containers deployed in a Kubernetes cluster via Acorn’s development mode. We also synchronized changed files back to our local filesystem for debugging and testing.
Then we integrated our homegrown app with third-party apps it now depends on.
In the end, we made our deployment package production-ready and distributed it as a single Acorn image that anyone can download and run easily, with a single customizable argument.
That was a lot. But there's even more to it. Check out the Acorn docs.