Posted in Docker, Misc, Tech

Docker Compose – Containerizing a Django Application

In the previous blog, Docker – Deep Dive, we created a Dockerfile with a set of instructions, started a service and created a container. Real projects are usually much more complex and need more than one service running at a time, across multiple containers. Docker Compose handles exactly this.

Docker Compose is a tool to configure and run multiple interdependent containers. It uses a YAML file to configure all the services, and you can start/stop all of them with these commands:

Start services: docker-compose -f <filename.yaml> up

Stop services: docker-compose -f <filename.yaml> down

Django Application on Docker

As an example, I will run my Twitter Clone app on Docker. It is built with Django and uses MySQL as the database: https://github.com/shilpavijay/TwitterClone

We will need a Dockerfile and a docker-compose.yaml file in the project directory.

Dockerfile:

FROM python:3.7-alpine
RUN apk update && apk add bash
RUN apk add gcc libc-dev linux-headers mariadb-dev
RUN pip install django djangorestframework django-rest-swagger PyJWT==1.7.1 gunicorn mysqlclient
COPY TwitterClone /app/TwitterClone
COPY TUsers /app/TUsers
COPY Tweets /app/Tweets
COPY manage.py /app
WORKDIR /app
EXPOSE 8000
CMD ["gunicorn","TwitterClone.wsgi","--bind=0.0.0.0:8000"]
  • In the above file, we start from alpine, a pre-built lightweight Python image.
  • It then installs all the required packages. Not specifying a package version installs the latest available version.
  • Next, all the project files and directories are copied to the /app folder in the image.
  • The working directory is changed to /app.
  • We also expose the port on which the application will run.
  • The last command starts the application with gunicorn.
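As noted above, unpinned installs can drift between builds. A common refinement (my own sketch, not part of the original project) is to pin versions in a requirements.txt and copy it in before installing, so Docker can cache the pip layer until the requirements actually change:

```dockerfile
# Hypothetical variant of the Dockerfile above, not a drop-in replacement.
FROM python:3.7-alpine
RUN apk update && apk add bash gcc libc-dev linux-headers mariadb-dev
# Copy only the (pinned) requirements first so this layer is cached:
COPY requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
# Then copy the rest of the project:
COPY . /app
WORKDIR /app
EXPOSE 8000
CMD ["gunicorn", "TwitterClone.wsgi", "--bind=0.0.0.0:8000"]
```

With this layout, editing application code does not re-trigger the package installation step.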

docker-compose.yaml

version: '3.4'

services:
  db:
    image: mysql:5.7
    ports:
      - '3306:3306'
    container_name: twitter_db
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_DATABASE: 'twitterclone'
      MYSQL_USER: 'root'
      MYSQL_PASSWORD: 'guessme'
      MYSQL_ROOT_PASSWORD: 'guessme'

  twitterapp:
    container_name: twitterclone
    build:
      context: ./
      dockerfile: Dockerfile
    restart: always
    ports:
      - '8080:8000'
    volumes:
      - .:/code
    depends_on:
      - db
  • We have two services: the Django app (twitterapp) and the MySQL DB (db). Ensure the names are always in lowercase.
  • The app is built from the instructions in the Dockerfile.
  • The MySQL container is based on the mysql image available on Docker Hub (refer to the previous blogpost).
  • The “depends_on” clause declares that the application depends on the database, so Compose starts db first.
  • MySQL takes a while to start, so there is usually a lag between the app starting and the database being ready. “restart: always” ensures that if the app fails because the database is not yet ready, it is restarted until the connection succeeds.
  • Port forwarding is configured in the “ports:” clause. The host port 8080 is mapped to the container port 8000 on which the application is running. Hence, to access the application from the host system, we will use port 8080.
  • We will discuss “volumes” in a later section.
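“restart: always” papers over the startup race by retrying; Compose can also be told explicitly to wait until MySQL accepts connections, using a healthcheck. A hedged sketch of the idea (the long form of depends_on with “condition: service_healthy” is honoured by the Compose specification in recent docker compose, not by the legacy v3 file format used in this post, so treat this as an illustration):

```yaml
# Sketch only: shows a healthcheck on db and a conditional depends_on.
services:
  db:
    image: mysql:5.7
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 5s
      timeout: 3s
      retries: 10

  twitterapp:
    build: .
    depends_on:
      db:
        condition: service_healthy
```

With this in place, the app container is not started until the db healthcheck passes.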

Next, update settings.py in the Django Project with the host, port, login details to match that given in the Docker Compose file:

DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.mysql',
            'NAME': 'twitterclone',
            'USER': 'root',
            'PASSWORD': 'guessme',
            'HOST': 'db',
            'PORT': '3306',
            },
        }
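The settings above hard-code the credentials. A common refinement (my own sketch, not part of the original project) is to read them from environment variables, with the Compose values as fallbacks, so the same settings.py works both inside and outside Docker. The variable names (DB_NAME, DB_HOST, etc.) are my own convention:

```python
import os

# Database settings assembled from environment variables, falling back
# to the docker-compose values used in this post. The DB_* variable
# names are illustrative, not something Django or Compose mandates.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': os.environ.get('DB_NAME', 'twitterclone'),
        'USER': os.environ.get('DB_USER', 'root'),
        'PASSWORD': os.environ.get('DB_PASSWORD', 'guessme'),
        'HOST': os.environ.get('DB_HOST', 'db'),   # "db" = service name in Compose
        'PORT': os.environ.get('DB_PORT', '3306'),
    },
}
```

The overrides could then be supplied under the service's `environment:` key in the Compose file.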

Then, execute the following command to build the images and start the containers:

docker-compose up --build

When running the application for the first time, or whenever there are changes to the Django migrations, apply them explicitly inside the running app container (twitterclone is the container_name set in the Compose file):

docker exec -it twitterclone python manage.py migrate

We are done! The application should now be running on localhost:8080.

The complete code is here: https://github.com/shilpavijay/TwitterClone

Docker Volumes

Docker volumes are used for data persistence. When a container is removed and re-created, the database starts from scratch and previously stored data is lost. Docker volumes make the data available even after the container is re-created.

Here’s how it works: a directory on the host file system is mounted into a directory of the container file system. Whatever the container writes there lands in the host directory, so when the container is re-created it automatically finds the existing data in the host directory.

Docker volumes can be created using the ‘docker run’ command. It can also be configured in the Docker Compose file.

There are 3 different types of docker volumes:

  • Host volume: the user specifies the host directory where the data is to be stored.
docker run -v /home/user/dockerdata:/var/lib/mysql/data <image>
  • Anonymous volume: the user does not specify a host directory; Docker takes care of it automatically. Only the directory in the container is specified.
docker run -v /var/lib/mysql/data <image>
  • Named volume: the user specifies a name but not the path; the path is handled by Docker, and the volume can be referenced by the given name. This type is the most frequently used.
docker run -v name:/var/lib/mysql/data <image>
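As mentioned above, volumes can also be configured in the Compose file. Applied to the db service from the first section, a named volume could look like this (“dbdata” is a name of my choosing; the official mysql image stores its data under /var/lib/mysql):

```yaml
# Sketch: named volume for the MySQL data directory.
services:
  db:
    image: mysql:5.7
    volumes:
      - dbdata:/var/lib/mysql   # survives container re-creation

# Top-level key declaring the named volume:
volumes:
  dbdata:
```

With this in place, `docker-compose down` followed by `docker-compose up` no longer wipes the database.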

Conclusion

In this blogpost series, we have learnt the basics required to understand and use Docker. We have seen the evolution of Docker, the need for it, and how to set up and run an application on Docker. In this post we also learnt about Docker Compose and Docker volumes. Here are the links to the previous posts:

Docker – Introduction

Docker – Deep Dive

Github link to the project discussed in this blog: https://github.com/shilpavijay/TwitterClone

You can also refer to the official documentation for more information and updates: https://docs.docker.com/get-started/


Docker – Deep Dive

Docker – Introduction covered the evolution of Docker, the need for it, and the reasons it is widely used. This post is a deep dive into Docker’s concepts and its practical use cases.

This post covers everything required to create an image and run an application using Docker. I have chosen Windows for all the examples, mainly because it is quite challenging: working with Docker on Windows is not as seamless as on Linux/Unix. Hence I have included a lot of workarounds, hoping they will help those using Windows.

Installation:

https://docs.docker.com/docker-for-windows/install/

You can install Docker from the above link by following a few simple instructions. Once done, add the following to the “PATH” variable. You can verify that it works with the “docker --version” command.

C:\Program Files\Docker\Docker\resources\bin
C:\ProgramData\DockerDesktop\version-bin

Let us begin

Now, we will create a simple Python program that prints “Hello World” and name it helloWorld.py.

print('Hello World!!!')

Dockerfile:

A Dockerfile is a text document that contains all the instructions required to build an image. It should be named exactly “Dockerfile” and placed inside the application folder. The most commonly used commands in a Dockerfile are:

FROM python
ENV <> 
RUN <>
COPY <>
CMD <>
WORKDIR <>
  • A Dockerfile always starts with a FROM command, which specifies the pre-existing image (on Docker Hub) to build upon.
  • ENV sets environment variables in the container. This is optional and is often moved to the Docker Compose file.
  • RUN executes Linux commands at build time.
  • COPY executes on the host and copies files from the host into the image.
  • CMD is the entry-point command that runs the application.
  • WORKDIR points to the directory in the container where the application code lives.
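Putting these commands together, a hypothetical Dockerfile for a small Python app might read as follows (all file and package names here are illustrative, not from a real project):

```dockerfile
# Base image pulled from Docker Hub:
FROM python:3.7-alpine
# Set an environment variable inside the image:
ENV APP_ENV=dev
# Execute a Linux command at build time:
RUN pip install requests
# Copy application code from the host into the image:
COPY . /app
# Directory that subsequent commands (and the app) run from:
WORKDIR /app
# Entry-point command executed when the container starts:
CMD ["python", "main.py"]
```

We will use a trimmed-down version of this pattern in the examples below.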

Create a “Dockerfile” with these three lines:

FROM python:3.7-alpine
COPY helloWorld.py run.py
CMD ["python","run.py"]
  • The first line sets up the environment by installing Python; 3.7-alpine refers to an image already available on Docker Hub. You can find more such images on hub.docker.com. This step takes time when the image is built for the first time, as the downloads happen; subsequent builds should be faster.
  • The second line copies the contents of the local file helloWorld.py into the image as run.py. The target name can be anything.
  • The third line specifies the command the container runs to execute the program in the file just copied.

Now, run the following commands to first build the image and then create a container. Replace <sample-image> with a name of your own. You should now see “Hello World!!!” printed on the console.

docker build -t <sample-image> .
docker run <sample-image>

Working with Environment variables

We previously saw how to execute a simple print statement in a Docker container. Let’s now look at a slightly more complex example, where we pass an environment variable from the local machine as an argument while building the image. This sets the environment variable in the container image, and we will verify it by printing the variable from inside the container.

main.py

import os
def main():
    print(os.environ.get('testenv'))
main()

Dockerfile

FROM python:3.7-alpine
COPY main.py run.py
ARG x
ENV testenv=$x
CMD ["python","run.py"]

Let us set an environment variable in our local machine. Command for windows:

D:\Projects\DockerSample>set whatdoing="docker"
D:\Projects\DockerSample>echo %whatdoing%
 "docker"

Here’s how we run the build command by passing an argument:

docker build --build-arg x=%whatdoing% -t sample-image .
docker run sample-image

The above should print “docker”. %whatdoing% is an environment variable on the local machine. In the Dockerfile, we receive it as the build argument “ARG x” and set it as the value of the environment variable “testenv”. Our Python program main.py fetches “testenv” and prints its value.

Hence, we can not only install various packages in the Docker image but also set all the required environment variables beforehand, in order to run an application on the newly created container.
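One detail worth knowing: os.environ.get returns None when the variable is unset, so main.py prints “None” if the build argument was never passed. A small sketch with a fallback default (the helper name read_testenv is my own; “testenv” is the variable used above):

```python
import os

def read_testenv(default='not-set'):
    # Return the container's "testenv" variable, or a default instead
    # of None when it was never exported into the environment.
    return os.environ.get('testenv', default)

print(read_testenv())
```

This keeps the program's output predictable whether or not --build-arg was supplied.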

Frequently used

Here are some frequently used commands that can come handy while working with docker images and containers:

- Get an image from Docker hub: (Example - redis)
docker pull <name> 

- List all the images:
docker images

- List only image IDs:
docker images -q 
(q stands for quiet. Suppresses other columns)

- Create a container of a given image:
docker run <image>

- Run a container in detached mode:
docker run -d <image>

- Remove a docker image by ID
docker rmi <Image ID>

- List the docker containers in running state: (ps - process status)
docker ps

- List all containers irrespective of their state:
docker ps -a

- Restart a container:
docker stop <container-id>
docker start <container-id> 

- Bind a port of your host to a port of the container while creating a container:
docker run -p <host-port>:<container-port> <image>

- View container logs: (you can also stream the logs using -f option)
docker logs <container-id/name> -f

- Specify a name for the container:
docker run --name inamedthis <image>

- Get the terminal of the running container:
docker exec -it <container-id/name> /bin/bash

- List the current networks:
docker network ls

- Create a new docker network:
docker network create <name>

Cleaning container images:

Here’s an important workaround when working with Windows. Time and again, I ran into issues with a container not reflecting the current state. This was due to old containers not being stopped and removed correctly before new ones with the same name were created. Here’s a batch file that clears old containers:

@ECHO OFF 
FOR /f "tokens=*" %%i IN ('docker ps -aq') DO docker stop %%i
FOR /f "tokens=*" %%i IN ('docker ps -aq') DO docker rm %%i 

You can put the above in a file and add it to the PATH variable, so that you can use it whenever you are creating and debugging containers repeatedly.
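On Linux/macOS the same cleanup is much shorter, since the shell substitutes the ID list directly (a sketch; it requires a running Docker daemon, and does nothing if no containers exist):

```shell
#!/bin/sh
# Stop and remove every container, mirroring the Windows batch file above.
docker stop $(docker ps -aq)
docker rm $(docker ps -aq)
```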

In the next post we will see how we can run multiple services on docker using Docker Compose and also host an application on docker.


Docker – Introduction


Let’s start by looking back at how applications used to be hosted and how that has evolved. Initially, one application ran on a single server with its own dedicated memory, CPU and disk space. This model proved highly costly for hosting high-load applications.

[Figure: Virtual machines on a hypervisor]

Next, we moved to Virtual Machines (VM), also called as Hypervisor Virtualization. We could now run multiple applications on each virtual machine, each one being a chunk of the actual physical server and having its own memory, CPU and disk space. However, the drawback was that each Virtual Machine needed its own Operating System and this incurred additional overhead of licensing costs, security patching, driver support, admin time etc.

Docker arrived with a solution to this problem. There is just one OS installed on the server. Containers are created on the OS. Each Container is capable of running separate applications or services. Here’s a comparative illustration of Containers vs Virtual Machines:

[Figure: Containers vs Virtual Machines]

  • The name Docker is said to be derived from “dock worker”.
  • It is written in Go (Golang).
  • It is open source under the Apache 2.0 license.

Let’s look at what the following mean:

Docker Container: a unit of software that packages the code and all its dependencies so that an application runs quickly and reliably irrespective of the computing environment.

Docker Container Image: a lightweight, standalone, executable package that contains everything needed to run an application: the code, runtime, system tools, libraries and settings. Container images become containers at runtime.

Docker Hub: a public Docker registry to store and retrieve Docker images. It is provided by Docker, but there are other third-party registries as well.

Open Container Initiative (OCI): the body responsible for standardizing the container format and container runtime.

Why is Docker so popular? 

  • A Docker image runs the same way irrespective of the server or machine. It is hence very portable and consistent, eliminating the overhead of setting up and debugging environments.
  • Another reason is rapid deployment: deployment time is reduced to a few seconds.
  • Containers start and stop much faster than virtual machines.
  • Applications are easy to scale, as containers can easily be added to or removed from an environment.
  • Docker helps in converting a monolithic application into microservices. [To read more, refer Microservices]
  • Last but not least, it is open source.