I am not a Django developer, but my son asked me to help set up the deployment of his Django blog using Azure DevOps. The initial plan was to deploy the application on an existing Ubuntu VM on AWS, with a later move to PaaS on AWS or Azure. Considering this, deploying inside a container (Azure Container Instances or Amazon Elastic Container Service) was the best choice. So I divided the task into two parts:
- Containerization of the application (discussed in this blog post), including developing inside the container.
- Continuous deployment of the application into the Docker container using Azure DevOps.
Advantages of developing inside a Docker container
- As a developer, if you are developing different applications on varying platforms and versions, you may not want to install and set up all of those environments on your development machine.
- Similar to the infrastructure as code (IaC) approach, setting up a custom Docker image for your environment removes the complexity of setting up exactly the same environment on different developers' machines and across environments, e.g. staging, UAT, and production. So no more "works on my machine" scenarios. This also comes with the added benefit of a repeatable setup with easy version control.
- Developers can switch easily between different development environments for different applications just by starting and stopping containers.
- With the help of a multi-container environment (e.g. docker-compose) you can easily spin up dependent services on your machine, e.g. PostgreSQL + Solr for one application and SQL Server + RabbitMQ for another (see the sketch below).
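As a minimal sketch of this last idea (the service names and versions here are illustrative, not from my actual setup), a compose file for one project might look like:
# docker-compose.yml for one project (illustrative only)
version: '3.4'
services:
  db:
    image: postgres:12.4
    environment:
      - POSTGRES_PASSWORD=changeme  # illustration only; use a vault in practice
  search:
    image: solr:8
docker-compose up brings both dependencies up together and docker-compose down removes them, so switching between projects is just a matter of which compose file you run.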
Containerization of the DjangoCMS application
For the purpose of this post, I have kept my Django application very simple, with just two containers: 1) a custom-built DjangoCMS container and 2) a pre-built PostgreSQL container, wired together in docker-compose. For building the custom image you need to define a Dockerfile with, at a minimum, the following:
- Base image - you can start with a basic OS image (e.g. Ubuntu) or use an image with some of the environment already set up. I used the python:3.8 image.
- Move your files into the image and set up the remaining custom environment. For setting up the environment, I used the requirements.txt file and installed the required packages using pip install -r ./requirements.txt.
- Define CMD and/or ENTRYPOINT to specify what to run when your container starts. In a nutshell, CMD specifies a command which can be overwritten on the Docker CLI when starting the container, while ENTRYPOINT cannot be overwritten; however, parameters can be passed to it when running the container. ENTRYPOINT helps in configuring the container as an executable (see the sketch after this list). More details are here.
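As a quick illustration of the difference (a hypothetical image, not my actual Dockerfile):
# ENTRYPOINT fixes the executable; CMD supplies the default arguments
ENTRYPOINT ["python", "./manage.py"]
CMD ["runserver", "0.0.0.0:80"]
With this, docker run myimage runs python ./manage.py runserver 0.0.0.0:80, while docker run myimage migrate replaces only the CMD part and runs python ./manage.py migrate; the ENTRYPOINT stays fixed.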
My complete Dockerfile is here. A couple of important things to note:
1) Each RUN command creates a layer in the Docker image. Suppose you need to download a package, install it, and then delete the installation file; it is better to do all of this in one RUN command by chaining the commands. Otherwise Docker creates a layer after each RUN, i.e. deleting the installation file will not actually reduce the image size. However, during the initial coding of the Dockerfile you may sometimes want to keep the RUN commands separate; this avoids rebuilding one big layer again and again while you work on the Dockerfile. Of course, this comes at the cost of an increased image size, so I recommend that once the steps are finalized and working as expected, you go back and try to combine the RUN commands, for example as sketched below.
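A hypothetical illustration of such chaining (the package name and URL are invented for the example):
# One RUN = one layer: download, install, and clean up together,
# so the installer file never gets baked into any layer
RUN curl -sSL -o /tmp/pkg.deb https://example.com/pkg.deb \
    && dpkg -i /tmp/pkg.deb \
    && rm /tmp/pkg.deb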
2) Docker builds the image layer by layer, and by default it reuses a layer from the previous build if nothing in the Dockerfile or its dependencies has changed up to that layer. This makes it very important to order the commands appropriately. NB: ordering the layers helped me reduce the image build time from around 8 minutes to just 45 seconds in the real application. See the sample Dockerfile below, where I copy just requirements.txt first and install the packages before copying the actual application files into the container. This avoids rebuilding the package-installation layer for every change in an application file.
#Base image
FROM python:3.8
RUN mkdir /djangocms_project
WORKDIR /djangocms_project
# Make sure to copy requirements.txt and install the packages before copying anything else.
# This minimizes rebuilds of the layer that installs the packages, and hence allows faster deployments from DevOps.
COPY ./djangocms_project/requirements.txt /djangocms_project
# Installing all packages in the global environment. If you need to run more than one Python
# environment inside the docker container, then create a virtual environment first.
RUN pip install -r ./requirements.txt
# Now copy the rest of the files
COPY ./djangocms_project /djangocms_project
# Set the default command
# For container inspection:
#CMD ["sleep","3600"]
# For running the server:
CMD [ "python", "./manage.py", "runserver", "0.0.0.0:80" ]
EXPOSE 80
EXPOSE 443
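With this Dockerfile in place, you can build and smoke-test the image on its own before wiring it into docker-compose (the image tag matches the one referenced in the compose file below):
# Build the image from the directory containing the Dockerfile
docker build -t djangocms_djangoapp .
# Run it standalone, mapping container port 80 to host port 8085
docker run --rm -p 8085:80 djangocms_djangoapp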
Bringing up related services using docker-compose
As mentioned earlier, you can bring all the dependent services into docker-compose and then start them together (docker-compose up --build) to start your application with all its dependencies. Make sure that you set up the dependencies correctly in the compose file (see depends_on). For simplicity I have shown the credentials directly in the docker-compose file; in your actual environment, you should pass these in at runtime from a vault (Azure Key Vault or AWS KMS), or you can use secret variables from your Azure DevOps project.
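One way to keep credentials out of the compose file itself (a sketch using docker-compose's standard environment variable substitution; the vault and secret names are hypothetical):
# Shell: populate the variable from your vault just before starting compose,
# e.g. with the Azure CLI
export POSTGRES_PASSWORD=$(az keyvault secret show \
    --vault-name my-vault --name postgres-password --query value -o tsv)

# docker-compose.yml then references the variable instead of a hard-coded value:
#   environment:
#     - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
docker-compose up --build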
Also, note that you can choose the ports on which you expose your services on the host machine, e.g. I have exposed PostgreSQL on port 5434 on the host, while the Django application within the same docker-compose network can still connect to it on port 5432 (a sketch of this is shown after the compose file). I have given a sample cut-down version of the docker-compose file below; the real application has a few extra containers, e.g.
- Nginx with ModSecurity and Certbot (for Let's Encrypt certificates) - I will try to write an article about that later.
- Solr (connected to Django application for indexing the CMS content)
version: '3.4'
services:
  pgadmin4:
    image: dpage/pgadmin4
    ports:
      - "8082:80"
    environment:
      # Make sure to use vaults: pick the credentials and pass them at runtime
      - PGADMIN_DEFAULT_EMAIL=xxxxxxxxxxxxxxxxxxxx
      - PGADMIN_DEFAULT_PASSWORD=xxxxxxxxxxxxxxxx
    networks:
      - djangocms_projectnet
    container_name: demp_pgadmin4
    depends_on:
      - postgresql
  postgresql:
    image: postgres:12.4
    volumes:
      - type: volume
        source: postgress_djangocms_data
        target: /var/lib/postgresql/data
        volume:
          nocopy: true
      - type: volume
        source: postgress_djangocms_conf
        target: /etc/postgresql
        volume:
          nocopy: true
      - type: volume
        source: postgress_djangocms_log
        target: /var/log/postgresql
        volume:
          nocopy: true
    container_name: djangocms_postgresql
    environment:
      # Make sure to use vaults: pick the credentials and pass them at runtime
      - POSTGRES_PASSWORD=xxxxxxxxxxx
      - POSTGRES_USER=postgres
      - POSTGRES_DB=djangocms_project
    networks:
      - djangocms_projectnet
    ports:
      - "5434:5432"
    restart: unless-stopped
  djangoapp:
    image: djangocms_djangoapp
    container_name: djangocms_djangoapp
    volumes:
      - type: volume
        source: django_djangocms_media
        target: /djangocms_project/media
      - type: bind
        source: ./logs
        target: /djangocms_project/logs
      - type: bind # for development inside docker
        source: ./djangocms_project
        target: /djangocms_project
    ports:
      - "8085:80"
    networks:
      - djangocms_projectnet
    build:
      context: .
      dockerfile: ./Dockerfile
    depends_on:
      - postgresql
    restart: unless-stopped
networks:
  djangocms_projectnet:
    external: false
volumes:
  postgress_djangocms_data:
    external: false
  postgress_djangocms_conf:
    external: false
  postgress_djangocms_log:
    external: false
  django_djangocms_media:
    external: false
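To make the port mapping above concrete, here is a sketch of the Django database settings (assuming a standard settings.py layout; this is not the actual file from my project):
# settings.py (sketch)
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'djangocms_project',
        'USER': 'postgres',
        'PASSWORD': 'xxxxxxxxxxx',  # pass in at runtime from a vault, as above
        # Inside the compose network the service name resolves via DNS,
        # and the container's native port is used:
        'HOST': 'postgresql',
        'PORT': '5432',
    }
}
Tools running on the host, by contrast, reach the same database through the published port, e.g. psql -h localhost -p 5434 -U postgres.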
Please note the build context for the Django app in the docker-compose above. This tells docker-compose to build the Django app image locally using the specified Dockerfile, while for PostgreSQL it pulls the image from the registry. You will need to change this in your deployment pipeline so that the build agent builds the image and pushes it to your private Docker registry, while the deployment agent pulls the image from the registry and deploys it like any other image. More about this in the next article.
Enabling development inside the Docker container
This is achieved by bind mounting the original development files, which we copied into the Docker image (see the Dockerfile above), from the host into the container. Any change to the files is then reflected immediately inside the container, without rebuilding the image and without restarting the container. Make sure that you use this mount only for development purposes and change the docker-compose for deployment; one common way to do this is sketched below. (In the next article I will talk about how to do this automatically based on your environment in your CI/CD pipelines.)
volumes:
  - type: bind # for development inside docker
    source: ./djangocms_project
    target: /djangocms_project
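One common way to keep this development-only mount out of your deployed stack (a sketch of the override-file technique, not necessarily how your pipeline must be wired) is a second compose file applied only in development:
# docker-compose.dev.yml - layered on top of the base file for development only
version: '3.4'
services:
  djangoapp:
    volumes:
      - type: bind # for development inside docker
        source: ./djangocms_project
        target: /djangocms_project
Development then runs docker-compose -f docker-compose.yml -f docker-compose.dev.yml up, while deployment uses the base file alone, so the bind mount never reaches production.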
In the next article I will talk about the approach I followed to use Azure DevOps for continuous deployment of the same application, with a basic smoke test in the deployment pipeline.