Deploying Multi-Container Applications with Docker Compose: A Comprehensive Guide
This tutorial will guide you through deploying multi-container applications using Docker Compose. You'll learn to define, configure, and run interconnected services, simplifying the orchestration of your Docker projects.
Docker has revolutionized how developers package and deploy applications. However, in production or even development environments, it's rare to find an application consisting of just a single container. Most modern applications are composed of multiple interconnected services, such as a database, an API backend, a web frontend, a caching system, and so on.
This is where Docker Compose comes in: an essential tool for defining and running multi-container Docker applications. With Docker Compose, you use a single YAML file to configure all the services of your application, making orchestration, management, and deployment much easier.
In this tutorial, we will explore Docker Compose in depth, from installation to creating a docker-compose.yml file for a real web application, including managing volumes, networks, and environment variables.
🎯 What will you learn in this tutorial?
- Understand the need for and benefits of Docker Compose.
- Install Docker Compose on your system.
- Create a `docker-compose.yml` file for a multi-service application.
- Define services, images, ports, volumes, and networks.
- Run and manage your application with Docker Compose commands.
- Handle data persistence and environment variables.
- Deploy an example web application with a frontend, backend, and database.
🚀 Why is Docker Compose indispensable?
Imagine you have a web application that requires:
- A frontend server (e.g., Nginx serving React).
- A backend server (e.g., a Node.js or Python API).
- A database (e.g., PostgreSQL or MySQL).
- A caching system (e.g., Redis).
Without Docker Compose, you would have to start each container individually, manually managing their networks, volumes, and dependencies. This can be tedious, error-prone, and difficult to replicate across different environments.
Key advantages of Docker Compose:
- Repeatable environment definitions: Define your application stack once and replicate it anywhere.
- Simplified orchestration: Start, stop, and manage multiple services with a single command.
- Isolation and communication: Each service runs in its own container and can easily communicate with others via defined networks.
- Data persistence: Manage volumes to ensure your data persists beyond the container lifecycle.
- Efficient local development: Facilitates setting up development environments that mimic production.
🛠️ Installing Docker Compose
Docker Compose is distributed with Docker Desktop for Windows and macOS, so if you already have Docker Desktop installed, you likely have it! You can verify this by opening a terminal and running:
docker compose version
If you see a version number, you're good to go! If not, or if you're on Linux, follow the instructions below.
Installation on Linux (recommended method for CLI plugin):
Generally, docker compose is installed alongside Docker Engine. Ensure you have the latest version of Docker Engine. If you need to install it separately (for older versions), you can use pip or download the binary directly.
1. Update Docker Engine (recommended):
Follow the official Docker installation guide for your Linux distribution. For example, for Debian/Ubuntu-based systems:
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
2. Verification:
docker compose version
You should see output similar to:
Docker Compose version v2.x.x
📖 Anatomy of a docker-compose.yml file
The heart of Docker Compose is the docker-compose.yml file. This file defines the services that make up your application, their configurations, networks, and volumes. It is a YAML file, which means indentation is crucial.
Basic structure:
version: '3.8' # Compose file format version
services:
  web:
    # Web service configuration
  db:
    # Database service configuration
volumes:
  # Named volume definitions
networks:
  # Custom network definitions
Key sections:
- `version`: Defines the Compose file format version. Version `3.x` is recommended for the latest features (recent versions of Docker Compose treat this field as optional and may ignore it).
- `services`: The main section, where you define each container that makes up your application. Each service gets an arbitrary name (e.g., `web`, `db`, `api`) that encapsulates a container's configuration.
- `volumes`: Defines named volumes that your services use for data persistence. Volumes ensure your data is not lost when containers are removed.
- `networks`: Defines custom networks so your services can communicate securely and in isolation. By default, Compose creates a single default network for all services.
💻 Creating our first multi-container application
Let's create a simple application consisting of a web service (we'll use Nginx to serve a static HTML page) and a database service (PostgreSQL).
Step 1: Project structure
Create a folder for your project and, inside it, the following files and directories:
my-compose-app/
├── docker-compose.yml
└── web/
└── index.html
Step 2: Create the index.html file
In the web/ directory, create an index.html file with the following content:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>My Docker Compose Application</title>
<style>
body { font-family: sans-serif; text-align: center; margin-top: 50px; background-color: #f4f4f4; }
h1 { color: #333; }
p { color: #666; }
.container { background-color: white; padding: 20px; border-radius: 8px; box-shadow: 0 4px 8px rgba(0,0,0,0.1); display: inline-block; }
</style>
</head>
<body>
<div class="container">
<h1>👋 Hello from Docker Compose!</h1>
<p>This is a simple web application served by Nginx.</p>
<p>PostgreSQL database is also running.</p>
</div>
</body>
</html>
Step 3: Create the docker-compose.yml file
In the root of your project (my-compose-app/), create the docker-compose.yml file:
version: '3.8'

services:
  web:
    image: nginx:latest
    ports:
      - "80:80" # Maps host port 80 to Nginx container port 80
    volumes:
      - ./web:/usr/share/nginx/html # Mounts our 'web' directory into the directory Nginx serves static files from
    depends_on:
      - db # Starts 'db' before 'web' (startup order only, not readiness)
    networks:
      - app_network

  db:
    image: postgres:13
    environment:
      POSTGRES_DB: mydatabase
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
    volumes:
      - db_data:/var/lib/postgresql/data # Data persistence for PostgreSQL
    networks:
      - app_network

volumes:
  db_data: # Defines a named volume for the database

networks:
  app_network: # Defines a custom network for our services
    driver: bridge
Let's break down this file:
- `web` service:
  - `image: nginx:latest`: Uses the latest official Nginx image.
  - `ports: "80:80"`: Maps host port 80 to container port 80. This means you can access your application via `http://localhost:80` or `http://your_ip`.
  - `volumes: ./web:/usr/share/nginx/html`: Mounts our host's `web` directory (where `index.html` is) inside the Nginx container, at the location where Nginx serves static content.
  - `depends_on: - db`: Tells Docker Compose that the `web` service depends on the `db` service, ensuring that `db` starts before `web`. Important: `depends_on` only guarantees startup order; it does not wait for the service to be fully ready to accept connections.
  - `networks: - app_network`: Connects this service to our custom `app_network`.
- `db` service:
  - `image: postgres:13`: Uses the official PostgreSQL version 13 image.
  - `environment`: Sets the environment variables required by the PostgreSQL image to configure the database (name, user, password).
  - `volumes: db_data:/var/lib/postgresql/data`: Mounts a named volume called `db_data` at the directory where PostgreSQL stores its data. This ensures your database data persists even if you remove the `db` container.
  - `networks: - app_network`: Connects this service to the same custom network.
- `volumes` section:
  - `db_data:`: Declares the named volume `db_data` that will be used by the `db` service.
- `networks` section:
  - `app_network:`: Declares a custom network named `app_network` with the `bridge` driver (the default driver for Docker networks). All services on this network can communicate with each other by service name (e.g., `web` can connect to `db` using `db` as the hostname).
🏃 Running your application with Docker Compose
Navigate to the root of your project (my-compose-app/) in the terminal and execute the following command:
docker compose up -d
- `up`: Creates and starts the containers for all services defined in `docker-compose.yml`.
- `-d`: Runs the containers in detached mode (in the background), freeing up your terminal.
You will see output indicating the creation of the network, volumes, and the start of the containers.
[+] Running 4/4
 ✔ Network my-compose-app_app_network  Created  0.0s
 ✔ Volume "my-compose-app_db_data"     Created  0.0s
 ✔ Container my-compose-app-db-1       Started  0.8s
 ✔ Container my-compose-app-web-1      Started  0.8s
Now, open your browser and visit http://localhost (or http://your_ip). You should see the index.html page served by Nginx.
Useful Docker Compose commands:
- View service status:
docker compose ps
This will show you the running containers, their mapped ports, and their status.
- View service logs:
docker compose logs
To view logs for a specific service (e.g., `web`):
docker compose logs web
- Stop services:
docker compose stop
This will stop the containers, but not remove them. You can restart them with `docker compose start`.
- Stop and remove services (and networks):
docker compose down
This will stop and remove the containers and networks created by Compose. By default, named volumes are *not* removed to protect your data. If you also want to remove volumes:
docker compose down -v
<div class="callout warning">⚠️ <strong>Warning:</strong> `docker compose down -v` will delete persistent data from your volumes. Use with caution!</div>
- Rebuild service images:
If you have made changes to a service's `Dockerfile` or its build context, you can rebuild the image and restart the service:
docker compose up --build -d
- Execute a command inside a service:
docker compose exec web bash
This will give you a shell inside the `web` container.
💾 Data Persistence with Volumes
Data persistence is crucial for applications that handle important information, such as databases. Containers are inherently ephemeral; if a container is removed, any data stored within it will be lost. Docker Compose, along with Docker volumes, solves this problem.
In our docker-compose.yml, we defined a named volume db_data for PostgreSQL:
volumes:
  db_data:
And we mounted it in the db service:
db:
  # ...
  volumes:
    - db_data:/var/lib/postgresql/data
This means Docker will manage a volume on your host's filesystem (a Docker-specific location) that is linked to the /var/lib/postgresql/data directory inside the db container. Even if you remove the db container with docker compose down, the data in db_data will persist. When you restart the db service with docker compose up, it will reconnect to that same volume, and your data will be there.
What happens if I don't use volumes for my database?
If you didn't mount a volume for the database, every time the container was removed and recreated (e.g., with `docker compose down` followed by `docker compose up`), all your database data would be lost. Volumes are essential for critical data persistence.

🌐 Networking in Docker Compose
Docker Compose creates a default network for your application, allowing all services to communicate with each other using their service names as hostnames. In our example, the web service can "see" the db service by simply making requests to db:5432 (the default PostgreSQL port).
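To make this concrete, here is a hedged sketch of how a backend service might use the service name as a hostname. The `api` service, the `node:20` image, and the `DATABASE_URL` variable are illustrative assumptions, not part of the example stack above:

```yaml
services:
  api: # hypothetical backend service, for illustration only
    image: node:20
    environment:
      # 'db' resolves to the database container via Compose's built-in DNS
      DATABASE_URL: postgres://user:password@db:5432/mydatabase
    networks:
      - app_network
```

The application code inside `api` never needs to know a container IP address; the service name is the stable hostname.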
While a default network works well for many cases, defining custom networks (app_network in our example) offers several advantages:
- Clarity: Makes your application's network architecture explicit.
- Isolation: Allows different Compose applications to share the same Docker machine without interfering with each other's networks.
- Control: You can specify network drivers or more advanced configurations.
networks:
  app_network: # Name of our custom network
    driver: bridge # Type of network driver (bridge is the default)
And then you connect each service to this network:
web:
  # ...
  networks:
    - app_network

db:
  # ...
  networks:
    - app_network
This ensures that web and db are on the same network and can communicate. If you had another group of services that don't need to interact with this application, they could be on a different network for better isolation.
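As a sketch of that isolation idea (the network names here are hypothetical), you could split the stack across two networks so only the services that need the database can reach it:

```yaml
# 'web' sits on both networks; 'db' only on 'backend_net',
# so nothing that is only on 'frontend_net' can reach the database.
services:
  web:
    image: nginx:latest
    networks:
      - frontend_net
      - backend_net
  db:
    image: postgres:13
    networks:
      - backend_net

networks:
  frontend_net:
  backend_net:
```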
⚙️ Environment Variables and Dynamic Configuration
Environment variables are a common way to pass configurations to services without hardcoding them directly into the docker-compose.yml file or images. This is especially useful for database credentials, API keys, or other values that change between environments (development, testing, production).
In our db service, we used the environment section:
db:
  # ...
  environment:
    POSTGRES_DB: mydatabase
    POSTGRES_USER: user
    POSTGRES_PASSWORD: password
Docker Compose supports interpolating environment variables from the host. You can define a .env file in the same directory as your docker-compose.yml to automatically load variables.
Example with .env:
Create a .env file in my-compose-app/:
# .env
DB_NAME=mydatabase
DB_USER=user
DB_PASSWORD=secretpassword
Then, modify your docker-compose.yml to use these variables:
version: '3.8'

services:
  web:
    # ...
    networks:
      - app_network

  db:
    image: postgres:13
    environment:
      POSTGRES_DB: ${DB_NAME}
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - db_data:/var/lib/postgresql/data
    networks:
      - app_network

volumes:
  db_data:

networks:
  app_network:
    driver: bridge
Now, when you run docker compose up -d, Compose will read the variables from the .env file and inject them into the services. This is a recommended practice for managing secrets and configurations.
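Compose interpolation also supports POSIX-style defaults and required-variable checks. A sketch of what that could look like for the `db` service:

```yaml
services:
  db:
    image: postgres:13
    environment:
      POSTGRES_DB: ${DB_NAME:-mydatabase}   # falls back to 'mydatabase' if DB_NAME is unset
      POSTGRES_USER: ${DB_USER:-user}
      POSTGRES_PASSWORD: ${DB_PASSWORD:?DB_PASSWORD must be set}  # fails fast if missing
```

You can run `docker compose config` to preview the fully resolved file and confirm that your variables were substituted as expected.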
✨ Extending your docker-compose.yml (Other useful directives)
Docker Compose offers many other directives to configure your services. Here are some of the most common ones:
`build`: Instead of `image`, you can specify a `build` section to build an image from a `Dockerfile` in a given context.
services:
  app:
    build: ./app # Looks for a Dockerfile in the ./app directory
    # Or with a specific Dockerfile:
    # build:
    #   context: ./app
    #   dockerfile: Dockerfile.dev
`restart`: Defines the container's restart policy (e.g., `no`, `always`, `on-failure`, `unless-stopped`).
services:
  api:
    # ...
    restart: always
`healthcheck`: Defines how Docker should check whether a container is "healthy" (useful with `depends_on` conditions that wait for readiness).
services:
  db:
    # ...
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d mydatabase"]
      interval: 10s
      timeout: 5s
      retries: 5
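Combined with the long form of `depends_on`, a healthcheck lets a dependent service wait until the database is actually ready to accept connections, not just started. A sketch based on the services in this tutorial:

```yaml
services:
  web:
    image: nginx:latest
    depends_on:
      db:
        condition: service_healthy # waits for the db healthcheck to pass
```

This addresses the limitation noted earlier: plain `depends_on` only orders startup, while `condition: service_healthy` actually blocks until the dependency reports healthy.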
`expose`: Exposes ports that are accessible only from other services on the same network, not from the host.
services:
  backend:
    # ...
    expose:
      - "3000" # Internal port for inter-service communication
`links` (legacy): In older versions of Compose, `links` was used for communication between services. With defined networks, it is no longer necessary; using service names is the recommended approach.
⏩ Typical Docker Compose Workflow
1. Describe your services, networks, and volumes in `docker-compose.yml`.
2. Run `docker compose up -d` to build (if necessary) and start all services in the background.
3. Make changes to your code. Docker Compose mounts volumes so changes are visible in real time, or you can rebuild services when images change.
4. Use `docker compose logs` to diagnose issues.
5. When you're done working, use `docker compose down` to stop and clean up containers.
6. If you need to start fresh with your data, use `docker compose down -v`.

✅ Conclusion
Docker Compose is an incredibly powerful and versatile tool for managing multi-container applications. It drastically simplifies the setup and orchestration process, making the development, testing, and deployment of complex applications much more manageable.
Mastering Docker Compose is a fundamental step for any developer or DevOps engineer working with Docker. It allows you to define your infrastructure as code, ensuring consistency across all your environments and freeing you to focus on writing great code, rather than battling with infrastructure configuration.
Keep experimenting with different services, combinations, and configurations in your docker-compose.yml file. Practice is key to mastering this tool.