Using Docker

What is Docker? Quoting their own definition:

Docker is a set of platform as a service products that use OS-level virtualization to deliver software in packages called containers.

The conventional way

When you deploy, you usually need to:

  • Decide how many environments you will deploy to (testing, staging, production...).
  • Prepare the requirements.
  • Prepare possible environment variables.
  • Prepare secrets to be passed onto the application.
  • Possibly, prepare the database accesses via those same environment variables.
  • Orchestration.
  • ...

And in the end, a lot of hope that everything works flawlessly in every single environment, which requires all of them to be exactly the same.

This works, but it is prone to human error.

The docker way

With Docker you still need to think about infrastructure and resources for your application, but you no longer need to install the same binaries in every single environment, since that is managed by the container.

Imagine a container as a zip file: you put together everything your Esmerald application needs to work in one place and "zip it", or in this case, "dockerize" it. The binaries are then exactly the same in every single environment and no longer reliant on humans, which reduces complexity.

Esmerald and docker example

Let's assume we want to deploy a simple Esmerald application using Docker, and that any external resources are already handled and managed by you.

Let's use:

  • Nginx - Web server.
  • Supervisor - Process manager.
  • A dockerized Esmerald application.

Assumptions:

  • All configurations will be placed in a folder called deployment/.
  • The application will have a simple folder structure:

    .
    ├── app
    │   ├── __init__.py
    │   └── main.py
    ├── Dockerfile
    ├── deployment/
    │   ├── nginx.conf
    │   └── supervisor.conf
    └── requirements.txt
    
  • The requirements file

    esmerald
    uvicorn
    

As mentioned in these docs, we will be using uvicorn for our examples, but you are free to use any ASGI server you want. Note that nginx and supervisor are installed as system packages in the Dockerfile below, not via pip.
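The folder structure above can be scaffolded with a few commands. This is just a convenience sketch; the file names match the assumed layout and you should adjust them to your project:

```shell
# Create the folder structure described above.
mkdir -p app deployment

# Empty Python package plus the application entry point.
touch app/__init__.py app/main.py

# Docker and dependency files at the project root.
touch Dockerfile requirements.txt

# Configuration files for nginx and supervisor.
touch deployment/nginx.conf deployment/supervisor.conf
```

You can then fill each file with the contents shown in the sections that follow.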

The application

Let's start with a simple, single-file application that just returns a hello world.

app/main.py
from typing import Union

from esmerald import Esmerald, Gateway, get


@get("/")
def home():
    return {"Hello": "World"}


@get("/users/{user_id}")
def read_user(user_id: int, q: Union[str, None] = None):
    return {"item_id": user_id, "q": q}


app = Esmerald(
    routes=[
        Gateway(handler=home),
        Gateway(handler=read_user),
    ]
)

Nginx

Nginx is a web server that can also be used as a reverse proxy, load balancer, mail proxy and HTTP cache.

You can find more details about Nginx and how to use it by exploring its documentation.

Let's start by building our simple nginx configuration.

events {
    worker_connections 1024;
}

http {
  server {
    listen 80;
    client_max_body_size 4G;

    server_name example.com;

    location / {
      proxy_set_header Host $http_host;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection $connection_upgrade;
      add_header X-XSS-Protection "1; mode=block";
      add_header X-Content-Type-Options "nosniff";
      add_header Cache-Control "public,max-age=120,must-revalidate,s-maxage=120";
      add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
      add_header Content-Security-Policy "default-src 'self';" always;
      add_header X-Frame-Options "SAMEORIGIN";
      proxy_redirect off;
      proxy_buffering off;
      proxy_pass http://uvicorn;
    }

    location /static {
      # path for static files
      root /path/to/app/static;
    }
  }

  map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
  }

  upstream uvicorn {
    # Must match the socket configured for uvicorn in supervisor below.
    server localhost:8000;
  }

}

We have created a simple nginx configuration with some security headers to help protect the application.

Supervisor

Supervisor is a simple, yet powerful, process manager that allows you to monitor and control a number of processes on UNIX-like operating systems.

Their documentation will help you better understand how to use it.

Now it is time to create a supervisor configuration.

[unix_http_server]
file = /run/supervisor.sock
chown = root:root
chmod = 0700
username = username
password = passwd

[supervisord]
nodaemon = true
nocleanup = true
logfile  =/var/log/supervisord.log
loglevel = warn
childlogdir  =/var/log
user = root

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[supervisorctl]
serverurl = unix:///run/supervisor.sock
username = username
password = passwd

[program:nginx]
command = nginx -g "daemon off;"
autostart = true
autorestart = true
priority = 200
stopwaitsecs = 60
stdout_logfile = /dev/stdout
stdout_logfile_maxbytes = 0
stderr_logfile = /dev/stderr
stderr_logfile_maxbytes = 0

[fcgi-program:uvicorn]
socket = tcp://localhost:8000
command = uvicorn --fd 0 app.main:app
numprocs = 4
priority = 14
startsecs = 10
autostart = true
autorestart = true
process_name = uvicorn-%(process_num)d
stdout_logfile = /dev/stdout
stdout_logfile_maxbytes = 0
redirect_stderr = true

It looks big and complex, but let's translate what this configuration is actually doing.

  1. Creates the initial configuration for supervisor and supervisord.
  2. Declares the instructions for starting nginx.
  3. Declares the instructions for starting uvicorn and the Esmerald application.

Dockerfile

The Dockerfile is where you place all the instructions needed to start your application once it is deployed: for example, starting the supervisor, which will then start all the processes declared inside.

# (1)
FROM python:3.9

# (2)
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        nginx nginx-extras supervisor

# (3)
WORKDIR /src

# (4)
COPY ./requirements.txt /src/requirements.txt

# (5)
RUN pip install --no-cache-dir --upgrade -r /src/requirements.txt

# (6)
COPY ./app /src/app

COPY deployment/nginx.conf /etc/nginx/
COPY deployment/nginx.conf /etc/nginx/sites-enabled/default
COPY deployment/supervisor.conf /etc/supervisord.conf

# (7)
CMD ["/usr/bin/supervisord"]
  1. Start from an official Python base image.
  2. Install the minimum requirements needed to run nginx and supervisor.
  3. Set the current working directory to /src.

    This is where you will be putting the requirements.txt and the app directory.

  4. Copy the requirements for your project.

    You should only copy the requirements and not the rest of the code and the reason for it is the cache from docker. If the file doesn't change too often, then it will cache and the next time you need to rebuild the image, it won't repeat the same steps all the time.

  5. Install the requirements.

    The --no-cache-dir is optional. You can simply add it to tell pip not to cache the packages locally.

    The --upgrade flag makes sure that pip upgrades the currently installed packages to the latest versions.

  6. Copy the ./app directory to /src.

    This step also copies the previously created nginx.conf and supervisor.conf to the corresponding system folders.

  7. Tells supervisor to start running. It will use the supervisor.conf file created earlier and trigger the declared instructions, such as starting nginx and uvicorn.

Build the docker image

With the Dockerfile created it is now time to build the image.

$ docker build -t myapp-image .

Test the image locally

You can test your image locally before deploying and see if it works as you want.

$ docker run -d --name mycontainer -p 80:80 myapp-image

Verify it

After building the image and starting it locally, you can check that it works as expected.

Example: open http://127.0.0.1/users/5?q=somequery in your browser and you should see a JSON response like {"item_id": 5, "q": "somequery"}.

Important

This section gave an example of how to build some files similar to the ones needed for a given deployment.

You should always review and adapt any of the examples to fit your needs and make sure they work for you.

OpenAPI docs

Esmerald provides the OpenAPI documentation ready to be used and always active, which you can access via its documentation URLs.

Documentation in production

By design, the docs are always available, but the majority of applications will not expose the documentation in production, for many reasons.

To stop the documentation from being generated, you can simply use the internal flag enable_openapi.

from typing import Union

from esmerald import Esmerald, Gateway, get


@get("/")
def home():
    return {"Hello": "World"}


@get("/users/{user_id}")
def read_user(user_id: int, q: Union[str, None] = None):
    return {"item_id": user_id, "q": q}


app = Esmerald(
    routes=[
        Gateway(handler=home),
        Gateway(handler=read_user),
    ],
    enable_openapi=False,
)

Or do it via your custom settings:

from esmerald import EsmeraldAPISettings


class AppSettings(EsmeraldAPISettings):
    enable_openapi: bool = False
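
To make Esmerald pick up a custom settings class like the one above, you can point the ESMERALD_SETTINGS_MODULE environment variable at it. A minimal sketch, where `myapp.settings.AppSettings` is a hypothetical module path you should replace with the location of your own class:

```shell
# Hypothetical module path to the AppSettings class defined above;
# adjust it to wherever the class lives in your project.
export ESMERALD_SETTINGS_MODULE=myapp.settings.AppSettings

# Then start the application as usual, for example:
#   uvicorn app.main:app
```

Setting the variable in the Dockerfile (via ENV) or in the supervisor program section would achieve the same effect inside the container.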