Containerization has become a game-changer in the world of software. It allows applications to be portable and reproducible across environments by bundling your application into a "container". As the name implies, containers encapsulate your application with all its dependencies into one unit, which can then be deployed while being isolated from the underlying environment.

Docker is one of the leading container technologies, which is what we'll be using in this section to containerize your application. In spite of the daunting technical jargon, containerizing your app is easier than it seems.
The three steps you need to do are:

  1. Write a Dockerfile, which is a file containing a set of commands that need to run in order to build your app.
  2. Build your image, which is essentially executing the commands from the Dockerfile.
  3. Push your image to a container registry, which is where images are stored remotely.
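Concretely, the three steps boil down to commands like the following, each covered in detail below. The names used here are placeholders: example-image and v1 are an image name and tag of your choosing, and registry.example.com stands in for your actual registry's prefix.

```shell
# 1. Write a Dockerfile (see the next section), then:

# 2. Build the image from the directory containing the Dockerfile.
docker build -t example-image:v1 .

# 3. Tag the image with your registry's prefix and push it.
docker tag example-image:v1 registry.example.com/example-image:v1
docker push registry.example.com/example-image:v1
```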

In order to follow along, you'll need to install Docker; check this link to set it up.


You can think of a docker image as a recipe for a cake and a docker container as the actual cake baked by following the recipe. In other words, a container is an instantiation of an image.

A Note on State Persistence in Containers

During the lifetime of an application, a container may start and stop multiple times due to failure and recovery or on-demand upscaling and downscaling. Containers are meant to be stateless so that when they restart, the app will continue working as expected. Any files written during the lifetime of a container will no longer exist when the container stops and another one starts.

To incorporate the notion of state into your container, you must rely on external sources for that state. For example, you can have a remote cloud-based database or storage blob that resides outside of the container and connect to it from within the application code. That way, when a new container starts, it retains the state, because the state's lifetime is decoupled from the container's lifetime.
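A common pattern for this is to pass the external source's connection details into the container as environment variables at start-up, so the application code reads state from outside rather than from the container's filesystem. A minimal sketch, where DATABASE_URL is a hypothetical variable your application code would read and the connection string is a placeholder:

```shell
# Pass the external database's connection string into the container;
# the application reads state through it instead of writing files
# inside the container.
docker run -d -p 8000:8000 \
  -e DATABASE_URL="postgres://user:pass@db.example.com:5432/appdb" \
  example-image:v1
```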

Write a Dockerfile

Once you’ve installed docker, you’ll need to create the necessary Dockerfile to be able to build your image. The Dockerfile will be beside the web server code you wrote from the previous section. Your project directory may look something like this:

Dockerfile
requirements.txt
weights.pkl         # model weights used by your web server
server.py           # your web server
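For reference, a minimal requirements.txt for this setup might look like the following, assuming the web server from the previous section was built with FastAPI and served with uvicorn (the pinned versions are illustrative, not prescriptive):

```text
fastapi==0.68.0
uvicorn==0.15.0
scikit-learn==0.24.2   # or whichever library produced weights.pkl
```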

To help you get started, here is a sample Dockerfile you can use:

# Specify base image
FROM python:3.7-slim-stretch

# Install dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential make gcc gnupg \
python3-dev unixodbc-dev

# Create /app dir and set it as the working directory
RUN mkdir /app
WORKDIR /app

# Copy file
COPY requirements.txt /app

# Install requirements
RUN pip install --upgrade pip
RUN pip install --upgrade setuptools
RUN pip install --no-cache-dir -r requirements.txt

# Copy relevant files and directories
COPY . /app

# Expose port and run web server
EXPOSE 8000
CMD ["uvicorn", "server:app", "--workers", "1", "--host", "0.0.0.0", "--port", "8000"]

Here's a simple breakdown of the above Dockerfile:

  • FROM specifies python:3.7-slim-stretch as the base image. This is the image you'll be building on top of. Any instruction executed after that adds a layer on top of the base image.
  • RUN runs Linux commands, like apt-get update && apt-get install to install all required dependencies, and mkdir /app, which creates a directory called /app.
  • WORKDIR sets /app as the working directory for the instructions that follow.
  • COPY copies requirements.txt to the working directory specified.
  • RUN again to install all required python requirements with pip install.
  • COPY . /app copies everything in the Dockerfile's directory to the /app directory, which the Dockerfile set as the working directory.
  • EXPOSE 8000 informs docker that the container will be listening on port 8000.
  • CMD provides the initial command to run when the container is starting. This command should run the web server you implemented earlier.
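One practical note on COPY . /app: it copies everything in the build directory into the image, including files the server doesn't need (caches, virtual environments, version-control data). A .dockerignore file placed beside the Dockerfile keeps those out and the image lean. A minimal sketch, with example entries to adjust for your project:

```text
__pycache__/
*.pyc
.git/
.venv/
```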

Ports 8008 and 9090 are reserved for internal use. If you use them, the deployment will fail, so make sure you change the port in the Dockerfile now before moving to the next step.


Make sure the command that starts the web server at the end of the Dockerfile starts only 1 worker. Scaling will be handled by Konan.

Now that you have the blueprint for building the image, i.e., the Dockerfile, let's actually build it.

Build the Image

To build the docker image, run the following command from the same directory the Dockerfile exists in:

docker build -t <IMAGE-NAME>:<TAG> .

The above command builds a docker image that’s called <IMAGE-NAME> and assigns it a tag <TAG>. Replace <IMAGE-NAME> and <TAG> with values of your choice. If you don't specify a tag, latest will be automatically assigned as the image tag.

Tags help you reference different builds of the same image easily. This is useful when you're tweaking the image (e.g., adding dependencies in the Dockerfile or fixing a bug in the code) and need to build again.

If you run the build command multiple times and want to maintain an identifier for each image build, you can tag each resultant image with a different tag, for example:
example-image:v1, example-image:v2, example-image:test


If you have a large image, this command will take some time.

Test the Image

To test the image, you need to run it and test whether the container is behaving as expected or not. List all your local docker images using the following command:

docker images

You should see the docker image you just built among the listed docker images. Now, run your docker image as a container to test it locally:

docker run -d -p 8000:8000 <IMAGE-NAME>:<TAG>

Now, list all your running containers to make sure that your container is running successfully:

docker container ls

You should see your container running successfully. Now you can send an API request to any of the endpoints you previously created by executing this command in a separate terminal:

curl -X POST "http://localhost:8000/predict/" -H  "accept: application/json" -H  "Content-Type: application/json" -d "{\"some_feat\":\"A\",\"other_feat\":\"1\", \"optional_field\": \"True\"}"

The above command is an example of an API request; it sends a request to an endpoint called predict that takes 3 parameters: some_feat, other_feat and optional_field.

If all goes well and you can see the expected response for this request, you're ready to push your image to a container registry.
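If instead the container is missing from docker container ls or the request fails, the container most likely exited during start-up, and its logs usually reveal why. Here, <CONTAINER-ID> is a placeholder for the ID shown in the listing:

```shell
# List all containers, including ones that have already exited.
docker container ls -a

# Inspect the web server's start-up output for errors.
docker logs <CONTAINER-ID>
```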

Push Image to Container Registry

Once you have the docker image created on your system, you’ll have to push it to a container registry. Think of a container registry as Google Drive but for docker images; it's where docker images are stored remotely. There are a multitude of container registries you can host your images on, like Docker Hub, Quay, GitLab or any of the registries from cloud providers.

Whichever container registry you choose, the steps required to push your image will be very similar:

  1. Create an account for the container registry of your choice or use the Konan Container Registry.
  2. Login to the container registry from the terminal with docker login.
  3. Append the container registry's prefix to the image name. This prefix differs with each container registry, so for an image example-image:v1, the resulting image identifiers may look something like this:
    • Docker Hub: example-image:v1 or <docker-user>/example-image:v1. Note that an image name without a registry prefix defaults to Docker Hub (docker.io).
    • Quay: quay.io/<quay-user>/example-image:v1
    • GitLab: registry.gitlab.com/<gitlab-user>/<project>/example-image:v1
      This can be achieved by running the docker tag command as follows:
    docker tag example-image:v1 <REGISTRY-PREFIX>/example-image:v1
  4. Push your docker image using the docker push command followed by the image url.
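The prefixing in steps 3 and 4 is just string composition, which can make the commands easier to reason about. A small sketch, where registry.example.com/myteam is a hypothetical registry prefix to replace with your own:

```shell
# Compose the full image identifier from its parts.
REGISTRY_PREFIX="registry.example.com/myteam"
IMAGE_NAME="example-image"
TAG="v1"
FULL_IMAGE="${REGISTRY_PREFIX}/${IMAGE_NAME}:${TAG}"
echo "${FULL_IMAGE}"

# The tag-and-push sequence then uses the composed identifier:
# docker tag "${IMAGE_NAME}:${TAG}" "${FULL_IMAGE}"
# docker push "${FULL_IMAGE}"
```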

Konan Container Registry (KCR)

If you don't already have a container registry to host your images on, you can use the container registry provided by Konan. Once you're a registered user and join an organization on Konan (details in the next section), you'll be provided with the necessary credentials to push your images on KCR, as well as the set of commands you need to execute to push the image. For now, just keep note of your locally built image name and tag because you'll need them to be able to push to KCR.

If you already have a container registry to host your images on, push your image using the aforementioned steps and keep note of the image url.