Docker and monolithic architecture

Docker Daemon and CLI

Docker is powered by two central subsystems:

  • Docker Daemon - a server process running in the background. It listens for requests from the CLI and manages the container lifecycle.

  • Docker CLI - the Docker command-line interface. Commands to build, start, or stop containers are issued through it.

The Docker CLI passes commands to the Docker Daemon for execution. The CLI can be installed on the host system or run remotely - communication with the Daemon takes place over a REST API.
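For example, the CLI can be pointed at a daemon on another machine via the -H flag or the DOCKER_HOST environment variable (the address below is illustrative):

```shell
# Run a single command against a remote daemon
docker -H tcp://192.168.1.10:2375 ps

# Or point the whole shell session at it
export DOCKER_HOST=tcp://192.168.1.10:2375
docker ps
```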

The functionality of the Docker Daemon is not limited to starting and stopping containers: it also manages networks and ports and collects container logs. Below are the most frequently used commands.

Docker CLI Cheat Sheet

Basic commands:

# start a container based on the specified image
docker run <image_name> 

# show list of active containers
docker ps 

# show all containers, including stopped ones
docker ps -a 

# stop container
docker stop <container_identifier> 

# delete container
docker rm <container_identifier> 

# show a list of all local images
docker images 

# download image from Docker Hub
docker pull <image_name> 

# remove local image
docker rmi <image_identifier>

Creating and working with images:

# build an image based on a Dockerfile
docker build -t <image_name>:<tag> <path_to_build_context>

# tag the image with a new tag
docker tag <image_name>:<old_tag> <image_name>:<new_tag> 

# rename and tag the image for uploading to another repository
docker tag <image_name>:<old_tag> <new_repository>/<image_name>:<new_tag> 

# send the image to Docker Hub or another registry
docker push <repository_name>/<image_name>:<tag>

Networks and ports:

# show list of networks
docker network ls

# map a local port to a container port when running the container
docker run -p <local_port>:<container_port> <image_name>

Working with Docker Compose:

# start the services defined in the `docker-compose.yml` file
docker-compose up 

# stop and remove the services described in `docker-compose.yml` file
docker-compose down

Working with Docker Volumes:

# create Docker Volume
docker volume create <volume_name> 

# start the container by connecting the Volume
docker run -v <volume_name>:<path_in_container> <image_name>

Logging and monitoring:

# show container logs
docker logs <container_identifier>

# display container resource utilization statistics
docker stats <container_identifier>

Dockerfile

The image creation commands are captured in a plain-text file - a Dockerfile. Each line has the form:

INSTRUCTION argument(s)

where INSTRUCTION is an instruction for the Docker Daemon, and argument(s) are the values passed to that instruction.

Instructions are case-insensitive, but it is conventional to write them in uppercase to visually distinguish them from arguments.

The instructions explain what the Docker Daemon should do before, during, or after running the container from the image.

The basic instructions for a Dockerfile are:

FROM specifies the base image from which to create a new image. Most often FROM is used for images with an operating system and pre-installed components.

RUN specifies what commands should be executed inside the container when building the image. This is how you can install dependencies or upgrade packages to the correct version.

COPY and ADD copy files from the local file system into the image. Most often they copy the source code of an application.

WORKDIR sets the working directory for subsequent instructions. This way, files in different directories can be worked on sequentially.

CMD defines the default command (or default arguments for ENTRYPOINT) executed when the container starts; it can be overridden when running the container.

ENTRYPOINT specifies the executable that always runs when the container starts; arguments from CMD or the command line are appended to it.

An example Dockerfile for a Python application:

# Use the base image with Python
FROM python:3.8

# Install dependencies
RUN pip install flask

# Copy the source code into the image
COPY . /app

# Specify the working directory
WORKDIR /app

# Define the command to run the application
CMD ["python", "app.py"]

Docker Image

In order to create an image from a Dockerfile and run the container, you need to:

  1. Go to the directory where the Dockerfile is located.

  2. Use the docker build command to create an image from the file.

  3. If necessary, verify that the image was created with the docker images command.

  4. Run the container from the image with the docker run command.

When working with images, you can use tags to specify the image version. If no tag is specified, Docker assigns the tag latest when building.

# example of building an image with explicit tagging
docker build -t my-python-app:v1.0 .

The following commands are used to send the image to the Docker Hub registry:

docker tag my-python-app:v1.0 username/my-python-app:v1.0
docker push username/my-python-app:v1.0

To load an image from the registry, use the command:

docker pull username/my-python-app:v1.0

Docker images are static. Containers, on the other hand, are changeable. To "update" an image, you can start a container from it, make changes, and save the state to a new image. This is done with the docker commit command:

docker commit -m "Added changes" -a "Author" container_id username/my-python-app:v1.1

A Docker image is a standard format, which means that the Docker Daemon can work with it on any platform. This allows painless porting of projects from one system to another: containers are packed into images and moved. And the isolation of all dependencies and components inside the image guarantees that the project will run on the target platform with Docker without additional configuration.

Docker Container

A container is a running instance of an image in an isolated environment. One container "packs" one running server process.

You can, of course, run several processes in one container, even a whole monolith: the tool imposes no strict limitations. But in microservice architecture design this is considered a mistake. Docker allows you to configure how containers interact with the external environment and with each other, as well as regulate resource consumption. So there is no good reason to try to fit everything into one container.
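A minimal compose file illustrates the principle: the application and its cache each get their own container (the service names and images here are illustrative):

```yaml
version: '3'
services:
  web:
    image: my-python-app:v1.0   # one container - one application process
    ports:
      - 8080:80
    depends_on:
      - cache
  cache:
    image: redis:alpine         # the cache lives in its own container
```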

Additional features

If a container needs to work with data from the host system, we can mount a directory from the host into the container itself. This is done with the command:

docker run -v /path/to/host-directory:/path/in/container image_name

Docker Volumes are storage areas associated with a container but not tied to its lifecycle. This means that any data the container writes to a Volume will persist even if the container is stopped or destroyed.

# start a container with a Volume attached (the Volume is created if it does not exist)
docker run -v my_volume:/path/in/container image_name

To pass environment variables to the container, the -e flag is used with the docker run command:

docker run -e MY_VARIABLE=value image_name

A container can publish ports to communicate with the "outside world". This is especially relevant for web applications, where ports are used to access the web server.

docker run -p 8080:80 image_name

You can impose limits on the resources used by the container, such as the amount of RAM or the number of CPU cores.

docker run --memory 512m --cpus 0.5 image_name

Docker Registry

The Docker Registry is a service for storing and distributing images. It helps to:

  • centrally store images and their versions;

  • speed up deployment - images are downloaded immediately to the target system and are ready to work;

  • automate the processes of building, testing, and deploying containers.

Docker Hub is a public registry that stores publicly available images (of Linux distributions, databases, languages, etc.). Organizations can create their own private Docker registries to store sensitive data.

Creating a private Docker registry

Installing Docker Distribution

Docker Distribution is the official implementation of the Docker Registry protocol. Let's install it on the server that will serve as the private registry.

docker run -d -p 5000:5000 --restart=always --name registry registry:2

This command starts a private registry on port 5000. You can optionally configure HTTPS using an SSL certificate.
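A sketch of running the same registry with TLS enabled, assuming the certificate and key are on the host in /opt/certs (paths and file names are illustrative; the REGISTRY_HTTP_* variables come from the Docker Distribution configuration reference):

```shell
docker run -d -p 443:443 --restart=always --name registry \
  -v /opt/certs:/certs \
  -e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
  registry:2
```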

Using the Private Registry

You can now use the registry to store and distribute private Docker images.

# tag the image   
docker tag my-image localhost:5000/my-image
# send the image to the private registry
docker push localhost:5000/my-image
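If the registry is left on plain HTTP and accessed from another machine (not via localhost), the client's Docker daemon must be told to trust it in /etc/docker/daemon.json (the host name is illustrative), after which the daemon is restarted:

```json
{
  "insecure-registries": ["registry.internal:5000"]
}
```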

Running the monolith on a server without Docker

The README of an application usually contains detailed instructions for deploying it on a server. For our example, let's take the README of a monolithic application, which we have intentionally shortened.

Package Installation:

sudo apt-get install -y build-essential libssl-dev zlib1g-dev libbz2-dev \
       libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev libncursesw5-dev \
       xz-utils tk-dev libffi-dev liblzma-dev python-openssl git npm redis-server vim ffmpeg

Installing pyenv:

$ curl https://pyenv.run | bash
$ echo 'export PATH="$HOME/.pyenv/bin:$PATH"' >> ~/.bashrc && \
echo 'eval "$(pyenv init -)"' >> ~/.bashrc && \
echo 'eval "$(pyenv virtualenv-init -)"' >> ~/.bashrc && source ~/.bashrc

Installing Python 3.6.9:

pyenv install 3.6.9

If Ubuntu version 20+ is installed and an error occurs, apply the following commands:

$ sudo apt install clang -y
$ CC=clang pyenv install 3.6.9

Creating a virtual environment:

$ pyenv virtualenv 3.6.9 cpa-project

Virtual environment activation:

$ pyenv activate cpa-project

Installing NodeJS 8.11.3:

$ npm i n -g
$ sudo n install 8.11.3
$ sudo n # in the window that appears, select the version 8.11.3

Project Cloning:

$ git clone git@github.com:User/cpa-project.git

Let's go to the project:

$ cd cpa-project

Install python dependencies (make sure the virtual environment is active):

$ pip install -U pip
$ pip install -r requirements.txt

Installing nodejs dependencies:

$ npm install

Perform migrations for the database and create test data:

$ python manage.py migrate
$ python manage.py generate_test_data

Building the client side:

$ npm run watch # For development with automatic rebuilds
$ npm run build # For production

Building the same project in Docker

Let's see how the same application can be deployed in a Docker environment. First, let's define the services in the compose file and give them names:

version: '3'
services:
 {stage}-project-ex--app:
   container_name: {stage}-project-ex--app
   build:
     context: ..
     dockerfile: Dockerfile
   env_file:
     - ".env.{stage}"
   networks:
     - stage_project-ex_network
   depends_on:
     - {stage}-project-ex--redis
     - {stage}-project-ex--clickhouse
     - {stage}-project-ex--postgres
     - {stage}-project-ex--mailhog
   volumes:
     - ..:/app/
     - ./crontab.docker:/etc/cron.d/crontab.docker
   command: /start
   labels:
     - "traefik.enable=true"
     - "traefik.http.routers.{stage}_fp_app.rule=Host(`web.{stage}.project-ex.io`)"
     - "traefik.http.services.{stage}_fp_app.loadbalancer.server.port=8000"
     - "traefik.http.routers.{stage}_fp_app.entrypoints=websecure"
     - "traefik.http.routers.{stage}_fp_app.tls.certresolver=stage_project-ex_app"

 {stage}-project-ex--app-cron:
   container_name: {stage}-project-ex--app-cron
   build:
     context: ..
     dockerfile: Dockerfile
   env_file:
     - ".env.{stage}"
   networks:
     - stage_project-ex_network
   depends_on:
     - {stage}-project-ex--redis
     - {stage}-project-ex--clickhouse
     - {stage}-project-ex--postgres
     - {stage}-project-ex--mailhog
   volumes:
     - ..:/app/
     - ./crontab.docker:/etc/cron.d/crontab.docker
   command: sh -c "printenv >> /etc/environment && crontab /etc/cron.d/crontab.docker && cron -f"

 {stage}-project-ex--front:
   container_name: {stage}-project-ex--front
   build: ./frontend-builder
   env_file:
     - ".env.{stage}"
   networks:
     - stage_project-ex_network
   depends_on:
     - {stage}-project-ex--app
   volumes:
     - ..:/app/

 {stage}-project-ex--clickhouse:
   container_name: {stage}-project-ex--clickhouse
   image: yandex/clickhouse-server:20.4.6.53
   env_file:
     - ".env.{stage}"
   networks:
     - stage_project-ex_network
   volumes:
     - /home/project-ex/stands/{stage}/docker_data/clickhouse/data:/var/lib/clickhouse
     - ./docker_data/clickhouse/schema:/var/lib/clickhouse/schema
     - ./docker_data/clickhouse/users.xml:/etc/clickhouse-server/users.xml
     - ./docker_data/clickhouse/project-ex.xml:/etc/clickhouse-server/users.d/default-user.xml
   labels:
     - "traefik.enable=true"
     - "traefik.tcp.routers.{stage}_fp_clickhouse.rule=HostSNI(`*`)"
     - "traefik.tcp.routers.{stage}_fp_clickhouse.entryPoints=clickhouse"
     - "traefik.tcp.routers.{stage}_fp_clickhouse.service={stage}_fp_clickhouse"
     - "traefik.tcp.services.{stage}_fp_clickhouse.loadbalancer.server.port=8123"

 {stage}-project-ex--postgres:
   container_name: {stage}-project-ex--postgres
   image: postgres:13.11-alpine
   env_file:
     - ".env.{stage}"
   networks:
     - stage_project-ex_network
   stdin_open: true
   tty: true
   volumes:
     - {stage}-project-ex--postgres:/var/lib/postgresql
   labels:
     - "traefik.enable=true"
     - "traefik.tcp.routers.postgres.rule=HostSNI(`*`)"
     - "traefik.tcp.routers.postgres.entryPoints=postgres"
     - "traefik.tcp.routers.postgres.service=postgres"
     - "traefik.tcp.services.postgres.loadbalancer.server.port=5432"

 {stage}-project-ex--redis:
   container_name: {stage}-project-ex--redis
   image: redis:alpine
   env_file:
     - ".env.{stage}"
   networks:
     - stage_project-ex_network
   volumes:
     - {stage}-project-ex--redis:/data

 {stage}-project-ex--mailhog:
   container_name: {stage}-project-ex--mailhog
   image: mailhog/mailhog:v1.0.1
   env_file:
     - ".env.{stage}"
   networks:
     - stage_project-ex_network
   labels:
     - "traefik.enable=true"
     - "traefik.http.routers.{stage}_fp_mailhog.rule=Host(`mail.{stage}.project-ex.io`)"
     - "traefik.http.services.{stage}_fp_mailhog.loadbalancer.server.port=8025"
     - "traefik.http.routers.{stage}_fp_mailhog.entrypoints=websecure"
     - "traefik.http.routers.{stage}_fp_mailhog.tls.certresolver=stage_project-ex_app"

volumes:
 {stage}-project-ex--postgres:
   name: {stage}-project-ex--postgres
   driver: local
 {stage}-project-ex--redis:
   name: {stage}-project-ex--redis
   driver: local

networks:
 stage_project-ex_network:
   external: true
   name: stage_project-ex_network

Docker Compose is a tool for running multi-container applications in Docker. All the necessary settings and commands are specified in a .yaml file, and containers are started from it with the docker-compose up command.

In the .yaml file we can see which containers (container_name) and which image versions our previously monolithic application is launched from. The {stage} keyword is the GitLab branch from which the container will be brought up: optionally, we can run containers from different branches on the same server.

Note that breaking the application into containers does not make it microservice-based. Microservice architecture is laid down at the stage of product design and creation, when each task is allocated to a separate service.

The dockerfile: Dockerfile line in the compose file specifies how our container is built. The file contains instructions such as:

FROM python:3.6.9-buster

ENV DJANGO_SETTINGS=advgame.local_settings

ARG DEBIAN_FRONTEND=noninteractive

RUN apt-get update \
 # dependencies for building Python packages
 && apt-get install -y build-essential \
 # psycopg2 dependencies
 && apt-get install -y libpq-dev \
 # Translations dependencies
 && apt-get install -y gettext \
 # Cron
 && apt-get install -y cron \
 # Vim
 && apt-get install -y vim \
 # cleaning up unused files
 && apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false \
 && rm -rf /var/lib/apt/lists/*

# Set timezone
ENV TZ=Europe/Moscow
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone

# Have to invalidate cache here because Docker is bugged and doesn't invalidate cache
# even if requirements.txt did change

ADD requirements.txt /requirements.txt
RUN pip install -r /requirements.txt

COPY ./docker-compose/start.sh /start
RUN chmod +x /start

# Copy hello-cron file to the cron.d directory
COPY ./docker-compose/crontab.docker /etc/cron.d/crontab.docker
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/crontab.docker
# Apply cron job
RUN crontab /etc/cron.d/crontab.docker

COPY . /app

WORKDIR /app
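The app container in the compose file is started with command: /start, the script that the Dockerfile copies into the image. Its contents are not shown here; a minimal sketch of what such a launch script might look like for this Django project (the gunicorn server and the advgame.wsgi module name are assumptions):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Apply pending database migrations before starting
python manage.py migrate --noinput

# Start the application server on the port Traefik proxies to (8000)
exec gunicorn advgame.wsgi:application --bind 0.0.0.0:8000
```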

Next, we wrote a Makefile that allows us to manage the configuration of the project:

dir=${CURDIR}
project=project-ex
env=local
interactive:=$(shell [ -t 0 ] && echo 1)
ifneq ($(interactive),1)
  optionT=-T
endif

uid=$(shell id -u)
gid=$(shell id -g)
# Command for docker-compose to execute
c=
# Parameter for docker-compose exec
p=

dc:
  @docker-compose -f ./docker-compose/$(env).yml --env-file=./docker-compose/.env.$(env) $(cmd)

compose-logs:
  @make dc cmd="logs" env="$(env)"

cp-env:
  [ -f ./docker-compose/.env.$(env) ] && echo ".env.$(env) file exists" || cp ./docker-compose/.env.example ./docker-compose/.env.$(env)
  sed -i "s/{stage}/$(env)/g" ./docker-compose/.env.$(env)
  @if [ "$(env)" = "local" ] ; then \
     sed -i "s/{domain}/ma.local/g" ./docker-compose/.env.$(env) ; \
  fi;
  @if [ "$(env)" = "dev" ] ; then \
     sed -i "s/{domain}/dev.project-ex.io/g" ./docker-compose/.env.$(env) ; \
  fi;

cp-yml:
  @if [ ! "$(env)" = "local" ] ; then \
     [ -f ./docker-compose/$(env).yml ] && echo "$(env).yml file exists" || cp ./docker-compose/stage.example.yml ./docker-compose/$(env).yml ; \
     sed -i "s/{stage}/$(env)/g" ./docker-compose/$(env).yml; \
  fi;

init:
  docker network ls | grep stage_project-ex_network > /dev/null || docker network create stage_project-ex_network
  @make cp-env
  @make cp-yml
  [ -f ./docker-compose/.env.$(env) ] && echo ".env.$(env) file exists" || cp ./docker-compose/.env.$(env).example ./docker-compose/.env.$(env)
  @make dc cmd="up -d"
  @make dc cmd="start $(env)-$(project)--postgres" env="$(env)"
  sleep 5 && cat ./docker-compose/docker_data/pgsql/data/init_dump.sql | docker exec -i $(env)-$(project)--postgres psql -U project-ex
  @make dc cmd="exec $(env)-$(project)--app python ./manage.py migrate" env="$(env)"
  @make ch-restore env="$(env)"
  @make build-front env="$(env)"
  @make collect-static env="$(env)"

create_test_db:
  @make dc cmd="exec $(env)-$(project)--postgres dropdb --if-exists -U project-ex project-ex_test" env="$(env)" > /dev/null
  @make dc cmd="exec $(env)-$(project)--postgres createdb -U project-ex project-ex_test" env="$(env)"
  cat ./docker-compose/docker_data/pgsql/data/init_dump.sql | docker exec -i $(env)-$(project)--postgres psql -U project-ex project-ex_test

bash-front:
  @make dc cmd="exec $(env)-$(project)--front sh" env="$(env)"

...

Make creates short command aliases for service management: with them you can initialize the project, recreate the database, build the front end, and so on. These are the same commands we will use in the GitLab CI file.
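The cp-env and cp-yml targets rely on sed to substitute the {stage} placeholder in the copied templates. The substitution itself can be reproduced in isolation (the file paths here are illustrative):

```shell
# Create a template with a {stage} placeholder, as in .env.example
printf 'HOST=web.{stage}.project-ex.io\n' > /tmp/env.example

# Substitute the placeholder with a concrete stage name, as cp-env does
sed 's/{stage}/dev/g' /tmp/env.example > /tmp/env.dev

cat /tmp/env.dev   # HOST=web.dev.project-ex.io
```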

Next, we run the make init command to initialize the project. External traffic is routed to the containers by Traefik, which is brought up from its own compose file:

version: '3'
services:
 stage-project-ex--traefik:
   image: "traefik:v3.0.0-beta2"
   container_name: "stage-project-ex--traefik"
   command:
     - "--log.level=DEBUG"
     - "--providers.docker=true"
     - "--providers.docker.exposedbydefault=false"
     - "--entrypoints.web.address=:80"
     - "--entrypoints.web.http.redirections.entryPoint.to=websecure"
     - "--entrypoints.websecure.address=:443"
     - "--entrypoints.postgres.address=:5432"
     - "--entrypoints.clickhouse.address=:8123"
     - "--entrypoints.mongo.address=:27017"
     - "--certificatesresolvers.stage_project-ex_app.acme.httpchallenge=true"
     - "--certificatesresolvers.stage_project-ex_app.acme.httpchallenge.entrypoint=web"
     - "--certificatesresolvers.stage_project-ex_app.acme.email=it@email-ex.com"
     - "--certificatesresolvers.stage_project-ex_app.acme.storage=/letsencrypt/acme.json"
   restart: always
   ports:
     - 80:80
     - 443:443
     - 5432:5432
     - 8123:8123
     - 27017:27017
   networks:
     - stage_project-ex_network
   volumes:
     - "/opt/letsencrypt:/letsencrypt"
     - "/var/run/docker.sock:/var/run/docker.sock:ro"

networks:
 stage_project-ex_network:
   external: true
   name: stage_project-ex_network

Configuring Ports

The compose file says nothing about ports. The main reason is that we will access the project by domain name through Traefik. Traefik works independently of the compose files and project versions: it learns about new containers from the Docker Daemon, and the per-application configuration is written in the compose file under the labels keyword.

Traefik proxies traffic to a container based on hostname (not only HTTP/HTTPS), requests a Let's Encrypt certificate, and renews it itself. You do not need to specify which IP or hostname to proxy to, or change the Traefik config.

If we bring up local containers with a local domain name, we cannot request a Let's Encrypt certificate. Therefore we have to talk to the web over HTTP, and disable the redirect to HTTPS in Traefik.

The traefik:v3.0.0-beta2 image version is chosen for a reason: it supports routing to PostgreSQL containers by domain name. In the example above, beta2 is not strictly necessary, since any request on port 5432 will be proxied to the single PostgreSQL container.

When there are multiple Postgres containers

Working with multiple PostgreSQL containers and routing to them by domain name requires creating a self-signed wildcard certificate for the local domain and adding it to the Traefik config.

This is needed only so that the PostgreSQL containers can be accessed "from the outside" to work with the database directly. If the containers run on the same Docker network, Traefik is not needed.

Add to the Traefik service in the compose file:

    command:
      - "--providers.file.filename=/conf/dynamic-conf.yml"
    volumes:
      - "./tls:/tls"
      - "./conf:/conf"

In conf/dynamic-conf.yml, specify the certificate files:

tls:
  certificates:
    - certFile: /tls/something.com.pem
      keyFile: /tls/something.com.key

In the tls/ directory, put the wildcard certificate files created by the Bash script mkcert.sh:

#!/usr/bin/env bash
set -euo pipefail
IFS=$'\n\t'

DOMAIN_NAME=$1

if [ ! -f "$1.key" ]; then

  if [ -n "$1" ]; then
    echo "You supplied domain $1"
    SAN_LIST="[SAN]\nsubjectAltName=DNS:localhost, DNS:*.localhost, DNS:*.$DOMAIN_NAME, DNS:$DOMAIN_NAME"
    printf '%b\n' "$SAN_LIST"
  else
    echo "No additional domains will be added to cert"
    SAN_LIST="[SAN]\nsubjectAltName=DNS:localhost, DNS:*.localhost"
    printf '%b\n' "$SAN_LIST"
  fi

  openssl req \
    -newkey rsa:2048 \
    -x509 \
    -nodes \
    -keyout "$1.key" \
    -new \
    -out "$1.crt" \
    -subj "/CN=compose-dev-tls Self-Signed" \
    -reqexts SAN \
    -extensions SAN \
    -config <(cat /etc/ssl/openssl.cnf <(printf '%b' "$SAN_LIST")) \
    -sha256 \
    -days 3650

  echo "new TLS self-signed certificate created"

else

  echo "certificate files already exist. Skipping"

fi
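The certificate's coverage comes from the SAN_LIST string the script assembles. Its construction can be checked on its own, without generating a certificate (the domain name is illustrative):

```shell
# Assemble the SAN list the same way the script does
DOMAIN_NAME="something.com"
SAN_LIST="[SAN]\nsubjectAltName=DNS:localhost, DNS:*.localhost, DNS:*.$DOMAIN_NAME, DNS:$DOMAIN_NAME"

# The certificate will cover localhost, the bare domain, and any subdomain
printf '%b\n' "$SAN_LIST"
```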

Edit the labels of the postgres container in the compose file:

    labels:
      - "traefik.enable=true"
      - "traefik.tcp.routers.qa222_postgres.rule=HostSNI(`qa222.something.com`)"
      - "traefik.tcp.routers.qa222_postgres.entryPoints=postgres"
      - "traefik.tcp.routers.qa222_postgres.service=qa222_postgres"
      - "traefik.tcp.services.qa222_postgres.loadbalancer.server.port=5432"
      - "traefik.tcp.routers.qa222_postgres.tls=true"

CI/CD project

Below is our GitLab CI file; in it you can see the make commands mentioned earlier.

variables:
 APP4_ENV: "gitlab"

default:
 tags:
   # gitlab runner tag
   - dev-project-ex-1

stages:
 - ci
 - delivery
 - build
 - deploy

.before_script_template: &build_test-integration
 before_script:
 - echo "Prepare job"
 - sed -i "s!env=local!env=${APP4_ENV}!" ./Makefile
 - make cp-env
 - make cp-yml
 - make up

.verify-code: &config_template
 stage: ci
 <<: *build_test-integration
 only:
   refs:
     - merge_requests
     - develop
     - master

Linter:
 <<: *config_template
 script:
   - make build
   - make linter

Tests:
 <<: *config_template
 script:
   - make tests

Delivery:
 stage: delivery
 script:
   - echo "Rsync from $CI_PROJECT_DIR"
   - sudo rm -rf "/home/project-ex/stands/dev/project-ex/!\(static|node_modules\)"
   - sed -i "s!env=local!env=dev!" ./Makefile
   - rsync -av --delete-before --no-perms --no-owner --no-group
     --exclude "node_modules/"
     --exclude "__pycache__/"
     --exclude "logs/"
     --exclude "docker-compose/docker_data/clickhouse/data/"
     $CI_PROJECT_DIR/ /home/project-ex/stands/dev/project-ex
 only:
   - develop
 except:
   - master

Build:
 stage: build
 script:
   - echo "cd /home/project-ex/stands/dev/project-ex"
   - cd /home/project-ex/stands/dev/project-ex
   - echo "make cp-env"
   - make cp-env
   - echo "cp-yml"
   - make cp-yml
   - echo "build"
   - make build
 only:
   - develop
 except:
   - master

Build-front:
 stage: build
 script:
   - echo "cd /home/project-ex/stands/dev/project-ex"
   - cd /home/project-ex/stands/dev/project-ex
   - echo "build-front"
   - make build-front
 only:
   changes:
     - '*.js'
     - '*.css'
     - '*.less'
   refs:
     - develop
     - master

Deploy:
 stage: deploy
 script:
   - cd /home/project-ex/stands/dev/project-ex
   - mkdir -p logs
   - make restart
   - make migrate
   - make collect-static
 only:
   - develop
 except:
   - master

Pros and cons of Docker

Let's start with the disadvantages. Docker is designed for server applications and does not always support GUIs.

On top of that, Docker requires the developer to be precise and accurate. Incorrect container configuration or insufficient security measures can jeopardize the whole system.

As you can see, the tool has its disadvantages too. But the advantages are undoubtedly greater: Docker isolates applications using namespaces and cgroups, so there is no need to start a separate virtual machine for each task. It also optimizes resource allocation across containers and manages the application lifecycle - starting, stopping, scaling, and updating containers. And the biggest plus is the ecosystem and lively community: Docker Hub offers hundreds of ready-made images, and in the community you can ask questions or find a ready-made solution.

Most importantly, remember: a monolithic architecture does not mean a "bad", "outdated", or "unfashionable" approach. Design an application based on common sense and an assessment of your own capabilities, because implementing microservices with CI/CD requires more skills and competencies than automating a monolithic application.