Docker Documentation
Comprehensive guide to Docker containerization technology
What is Containerization?
Containerization is a lightweight alternative to full machine virtualization that involves encapsulating an application in a container with its own operating environment. This provides many of the benefits of virtual machines while being more portable and resource-efficient.
What is Docker?
Docker is an open-source platform that automates the deployment, scaling, and management of applications inside lightweight, portable containers. It was first released in 2013 and has since become the industry standard for containerization.
Key Benefits of Docker
- Portability: Containers run consistently across different environments
- Isolation: Applications run in isolated environments
- Efficiency: Containers share the host OS kernel, using fewer resources
- Consistency: Eliminates "works on my machine" problems
- Scalability: Easy to scale applications horizontally
- DevOps: Streamlines development and deployment workflows
Containers vs. Virtual Machines
| Feature | Containers | Virtual Machines |
|---|---|---|
| Size | Small (MBs) | Large (GBs) |
| Startup Time | Fast (seconds) | Slow (minutes) |
| Resource Usage | Low | High |
| OS | Shares host kernel | Full OS per VM |
| Isolation | Process-level | Hardware-level |
Use Cases for Docker
- Microservices Architecture
- Continuous Integration and Deployment
- Development Environment Standardization
- Cloud Migration
- Application Isolation
- DevOps Practices
- Multi-Cloud and Hybrid Cloud Strategies
Installing Docker
To get started, download Docker for your platform from the official Docker website.
On Linux, install Docker Engine with your distribution's package manager:
# Ubuntu/Debian (assumes Docker's official apt repository is configured)
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
# CentOS/RHEL (assumes Docker's official yum repository is configured)
sudo yum install docker-ce docker-ce-cli containerd.io
# Start Docker and enable it at boot
sudo systemctl start docker
sudo systemctl enable docker
On macOS, download Docker Desktop from the Docker website and run the installer; it bundles everything you need.
Note: Docker Desktop requires macOS 10.14 or newer.
On Windows, download Docker Desktop from the Docker website and make sure the Hyper-V and Containers Windows features are enabled.
Note: Docker Desktop requires 64-bit Windows 10 Pro, Enterprise, or Education.
After installing, verify the installation from a terminal:
docker --version
docker info
Tip: On Linux, add your user to the docker group so you can run docker commands without sudo:
sudo usermod -aG docker $USER
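The group change only takes effect in a new login session. As a small illustration, the sketch below checks a group list for membership; `has_group` is a hypothetical helper written for this guide, not a Docker tool.

```shell
# has_group LIST NAME: succeed if NAME appears in the space-separated
# group LIST (e.g. the output of `id -nG`).
has_group() {
  # Intentionally unquoted: word-split the list into one group per line,
  # then require an exact line match.
  printf '%s\n' $1 | grep -qx -- "$2"
}

has_group "wheel docker users" docker && echo "docker group active"
# Against the live system (requires a fresh login after usermod):
# has_group "$(id -nG)" docker || echo "log out and back in first"
```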
What are Docker Images?
A Docker image is a read-only template that packages everything an application needs to run: code, runtime, libraries, and configuration. From one image you can create any number of containers.
What are Docker Containers?
A container is a running instance of an image. Containers are isolated from one another by default, but they can communicate over defined networks.
The easiest way to test your installation is to run the hello-world container:
docker run hello-world
Note: This command pulls the hello-world image if it is not already present, creates a container from it, prints a confirmation message, and exits.
With images and containers defined, these are the essential commands for managing them:
# List running containers
docker ps
# List all containers (including stopped ones)
docker ps -a
# List local images
docker images
# Download an image from a registry
docker pull nginx:latest
# Run a container in the background (detached mode)
docker run -d --name my-nginx -p 8080:80 nginx:latest
# Stop a running container
docker stop my-nginx
# Start a stopped container
docker start my-nginx
# Remove a container
docker rm my-nginx
# Remove an image
docker rmi nginx:latest
The following commands let you inspect and interact with running containers:
# Run a container with an interactive shell
docker run -it --name my-ubuntu ubuntu:latest /bin/bash
# Open a shell inside a running container
docker exec -it my-ubuntu /bin/bash
# View a container's logs
docker logs my-nginx
# Follow logs in real time
docker logs -f my-nginx
# Show detailed information about a container
docker inspect my-nginx
# Copy files between the host and a container
docker cp ./local-file.txt my-nginx:/path/in/container/
docker cp my-nginx:/path/in/container/remote-file.txt ./
These commands help you find, tag, and clean up images:
# Search for images on Docker Hub
docker search nginx
# Pull an image with a specific version tag
docker pull nginx:1.21-alpine
# Tag an image with a new name
docker tag nginx:latest my-nginx:1.0
# Remove dangling (untagged) images
docker image prune
# Remove ALL unused images
docker image prune -a
# Show the layer history of an image
docker history nginx:latest
# Show detailed information about an image
docker inspect nginx:latest
What is a Dockerfile?
A Dockerfile is a text file containing the instructions Docker follows to build an image: the base image, the dependencies to install, and the commands to run. With docker build, the same Dockerfile produces the same image every time, making builds automated and reproducible.
Here is a simple Dockerfile for a Python application:
# Dockerfile
FROM ubuntu:20.04
# Avoid interactive prompts during package installation
ENV DEBIAN_FRONTEND=noninteractive
# Install system dependencies
RUN apt-get update && apt-get install -y \
python3 \
python3-pip \
&& rm -rf /var/lib/apt/lists/*
# Set the working directory
WORKDIR /app
# Copy the application files
COPY . /app
# Install Python dependencies
RUN pip3 install -r requirements.txt
# Document the port the app listens on
EXPOSE 5000
# Default command to start the app
CMD ["python3", "app.py"]
Build the image from the Dockerfile:
docker build -t my-app:1.0 .
The most common Dockerfile instructions, each with a specific purpose:
# FROM - Set the base image
FROM ubuntu:20.04
# LABEL - Add metadata to the image
LABEL maintainer="your-email@example.com"
LABEL version="1.0"
LABEL description="This is a custom Ubuntu image"
# RUN - Execute commands while building the image
RUN apt-get update && apt-get install -y nginx
# CMD - Default command when a container starts
CMD ["nginx", "-g", "daemon off;"]
# ENTRYPOINT - Make the container run like an executable
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["nginx"]
# COPY - Copy files from the build context into the image
COPY ./index.html /var/www/html/
# ADD - Like COPY, but can also fetch remote URLs; remote files are
# not auto-extracted, hence the explicit tar step below
ADD https://example.com/file.tar.gz /tmp/
RUN tar -xzf /tmp/file.tar.gz -C /tmp/
# WORKDIR - Set the working directory for subsequent instructions
WORKDIR /app
# ENV - Set environment variables
ENV NODE_ENV=production
ENV APP_VERSION=1.0
# EXPOSE - Document the ports the container listens on
EXPOSE 8080
# VOLUME - Declare mount points for persistent data
VOLUME ["/data"]
# USER - Set the user that runs the container
USER node
# ARG - Define build-time variables
ARG VERSION=latest
RUN echo "Building version ${VERSION}"
# HEALTHCHECK - Define how Docker checks container health
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD curl -f http://localhost/ || exit 1
# SHELL - Set the shell used for shell-form instructions
SHELL ["/bin/bash", "-c"]
The build context is the set of files available to Docker during a build: everything under the path you pass to docker build (usually the current directory). COPY and ADD can only reference files inside the context.
To speed up builds and keep images small, use a .dockerignore file to exclude files and directories from the build context:
# .dockerignore file content:
node_modules
npm-debug.log
.git
.gitignore
README.md
.env
Dockerfile
.dockerignore
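As a rough illustration of what these entries do, the sketch below approximates the matching with shell case patterns. Real .dockerignore matching follows Go's filepath.Match rules plus `**` and `!` negation, so treat this only as an approximation of the simple entries above; `is_ignored` is a hypothetical helper.

```shell
# is_ignored PATH: succeed if PATH would be excluded by the simple
# .dockerignore entries listed above (approximation only).
is_ignored() {
  case $1 in
    node_modules|node_modules/*) return 0 ;;  # directory and its contents
    .git|.git/*)                 return 0 ;;
    npm-debug.log|.gitignore|README.md|.env|Dockerfile|.dockerignore) return 0 ;;
    *) return 1 ;;
  esac
}

is_ignored node_modules/express/index.js && echo "excluded from context"
is_ignored src/app.js || echo "sent to the build context"
```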
ARG and ENV both parameterize a build, but at different times:
# ARG - Build-time variable; not available in the running container
ARG VERSION=latest
ARG BUILD_DATE
# ENV - Environment variable persisted into the running container
ENV APP_VERSION=${VERSION}
ENV NODE_ENV=production
# Passing build arguments on the command line
docker build --build-arg VERSION=1.2.3 --build-arg BUILD_DATE=$(date -u +'%Y-%m-%dT%H:%M:%SZ') -t my-app .
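One subtlety worth noting: an ARG declared before the first FROM lives outside any build stage. It can parameterize the FROM line itself, but it must be re-declared inside a stage before it can be used there. A minimal sketch:

```dockerfile
# Build-time argument, visible to FROM lines only
ARG BASE_TAG=20.04
FROM ubuntu:${BASE_TAG}
# Re-declare to bring the value into this stage; without this line,
# ${BASE_TAG} below would expand to an empty string.
ARG BASE_TAG
RUN echo "Built from ubuntu:${BASE_TAG}"
```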
What is Docker Compose?
Docker Compose is a tool for defining and running multi-container applications. You describe your services, networks, and volumes in a single YAML file, and one command starts (or stops) the entire stack.
Here is a Compose file for a web application with a cache and a database:
# docker-compose.yml
version: '3.8'
services:
# The main web application
web:
build: .
ports:
- "5000:5000"
volumes:
- .:/code
environment:
FLASK_ENV: development
depends_on:
- redis
- db
# In-memory cache
redis:
image: "redis:alpine"
ports:
- "6379:6379"
# Relational database
db:
image: "postgres:13"
environment:
POSTGRES_DB: myapp
POSTGRES_USER: user
POSTGRES_PASSWORD: password
volumes:
- postgres_data:/var/lib/postgresql/data
ports:
- "5432:5432"
volumes:
postgres_data:
The essential Compose commands for managing a multi-container application:
# Start all services in the background
docker-compose up -d
# Start services, rebuilding images if needed
docker-compose up --build
# Stop services without removing their containers
docker-compose stop
# Stop and remove containers and networks
docker-compose down
# Remove stopped service containers
docker-compose rm
# View logs from all services
docker-compose logs
# Follow one service's logs in real time
docker-compose logs -f web
# Open a shell in a running service container
docker-compose exec web bash
# Scale a service to multiple instances
docker-compose up -d --scale web=3
# List service containers and their status
docker-compose ps
# Pull the latest images for all services
docker-compose pull
# Build (or rebuild) service images
docker-compose build
# Show the fully resolved configuration
docker-compose config
Compose also supports production-oriented options such as restart policies, resource limits, health checks, and custom networks:
# docker-compose.yml with production-oriented options
version: '3.8'
services:
web:
build:
context: .
dockerfile: Dockerfile.prod
image: myapp:prod
ports:
- "5000:5000"
environment:
- FLASK_ENV=production
- DATABASE_URL=postgresql://user:password@db:5432/myapp
env_file:
- .env
volumes:
- static_volume:/app/static
depends_on:
- db
restart: unless-stopped
deploy:
replicas: 3
resources:
limits:
cpus: '0.5'
memory: 512M
reservations:
cpus: '0.25'
memory: 256M
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:5000/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
networks:
- app-network
- monitoring
db:
image: postgres:13
environment:
POSTGRES_DB: ${POSTGRES_DB}
POSTGRES_USER: ${POSTGRES_USER}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
volumes:
- postgres_data:/var/lib/postgresql/data
- ./backups:/backups
restart: unless-stopped
networks:
- app-network
volumes:
postgres_data:
static_volume:
networks:
app-network:
driver: bridge
monitoring:
external: true
Keep separate Compose files for development, testing, and production, and combine them on the command line:
# Combine the base file with a production override
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up
# Production-specific overrides
# docker-compose.prod.yml
version: '3.8'
services:
web:
environment:
- FLASK_ENV=production
deploy:
replicas: 5
Docker Networking
Docker networking lets containers communicate with each other and with the outside world. Several network drivers are available (bridge, host, overlay, and others), each suited to different use cases.
# List networks
docker network ls
# Inspect a network
docker network inspect bridge
# Connect a container to a network
docker network connect bridge my-container
# Disconnect a container from a network
docker network disconnect bridge my-container
# Create a user-defined bridge network
docker network create my-app-network
# Create a network with a custom subnet
docker network create --subnet=192.168.0.0/16 my-custom-network
# Create an overlay network for Swarm services
docker network create --driver overlay my-overlay-network
# Attach containers to the custom network
docker run -d --name web1 --network my-app-network nginx
docker run -d --name web2 --network my-app-network nginx
# Containers on a user-defined network can reach each other by name
docker exec -it web1 ping web2
# Publish a specific port (host:container)
docker run -p 8080:80 nginx
# Publish all exposed ports to random host ports
docker run -P nginx
# Bind a published port to a specific host address
docker run -p 127.0.0.1:8080:80 nginx
# Publish multiple ports
docker run -p 8080:80 -p 8443:443 nginx
# Publish a UDP port
docker run -p 53:53/udp bind9
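To make the -p syntax concrete, the sketch below pulls apart a HOST:CONTAINER[/protocol] mapping with plain shell string operations (the three-part IP:HOST:CONTAINER form is not handled). `explain_port` is a hypothetical helper written for this guide, not a Docker command.

```shell
# explain_port SPEC: print the pieces of a "-p HOST:CONTAINER[/proto]"
# port mapping (two-part form only; protocol defaults to tcp).
explain_port() {
  spec=$1
  proto=${spec##*/}                     # text after the last "/"
  [ "$proto" = "$spec" ] && proto=tcp   # no "/proto" suffix -> default
  ports=${spec%/*}                      # strip any "/proto" suffix
  echo "host=${ports%%:*} container=${ports##*:} proto=$proto"
}

explain_port 8080:80      # host=8080 container=80 proto=tcp
explain_port 53:53/udp    # host=53 container=53 proto=udp
```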
Docker Volumes
Volumes give containers persistent storage: data written to a volume survives container restarts and removal.
# Bind-mount a host directory into a container
docker run -d -v /host/path:/container/path nginx
# Bind-mount a single file
docker run -d -v /host/file.txt:/container/file.txt nginx
# Mount read-only (the container can read but not modify the data)
docker run -d -v /host/path:/container/path:ro nginx
# Bind mounts in Docker Compose
version: '3.8'
services:
web:
image: nginx
volumes:
- ./html:/usr/share/nginx/html
- ./nginx.conf:/etc/nginx/nginx.conf:ro
# Create a named volume
docker volume create my-data-volume
# Mount a named volume into a container
docker run -d -v my-data-volume:/container/path nginx
# List volumes
docker volume ls
# Inspect a volume
docker volume inspect my-data-volume
# Remove a volume
docker volume rm my-data-volume
# Named volumes in Docker Compose
version: '3.8'
services:
db:
image: postgres
volumes:
- postgres_data:/var/lib/postgresql/data
volumes:
postgres_data:
# Back up a volume's contents to a tar archive on the host
docker run --rm -v my-data-volume:/data -v $(pwd):/backup \
alpine tar czf /backup/my-data-volume-backup.tar.gz -C /data .
# Restore a volume's contents from a tar archive
docker run --rm -v my-data-volume:/data -v $(pwd):/backup \
alpine tar xzf /backup/my-data-volume-backup.tar.gz -C /data
# Back up a PostgreSQL data volume
docker run --rm -v postgres_data:/data -v $(pwd):/backup \
alpine sh -c 'cd /data && tar czf /backup/postgres-backup.tar.gz .'
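For repeatable backups it helps to date-stamp the archive name. The sketch below composes the same alpine + tar command shown above and prints it for review instead of executing it, so no Docker daemon is needed; `backup_cmd` and the filename convention are assumptions made for this example.

```shell
# backup_cmd VOLUME [STAMP]: print a date-stamped backup command for a
# named volume, matching the alpine + tar pattern above.
backup_cmd() {
  volume=$1
  stamp=${2:-$(date +%Y%m%d)}
  echo "docker run --rm -v $volume:/data -v \$(pwd):/backup" \
       "alpine tar czf /backup/$volume-$stamp.tar.gz -C /data ."
}

backup_cmd postgres_data 20240101
```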
Docker Registries
Docker registries are storage systems for Docker images. They allow you to store and share images.
# Log in to Docker Hub
docker login
# Push an image to Docker Hub
docker tag my-app:1.0 username/my-app:1.0
docker push username/my-app:1.0
# Pull an image from Docker Hub
docker pull username/my-app:1.0
# Search for images on Docker Hub
docker search nginx
# Log out from Docker Hub
docker logout
# Run a private registry container
docker run -d -p 5000:5000 --name registry registry:2
# Push an image to the private registry
docker tag my-app:1.0 localhost:5000/my-app:1.0
docker push localhost:5000/my-app:1.0
# Pull an image from the private registry
docker pull localhost:5000/my-app:1.0
# Set up a secure registry with TLS
# First, generate certificates
mkdir -p certs
openssl req -newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key -x509 -days 365 -out certs/domain.crt
# Run the registry with TLS
docker run -d \
--restart=always \
--name registry \
-v "$(pwd)"/certs:/certs \
-e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
-p 443:443 \
registry:2
# Log in to Amazon ECR
aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-west-2.amazonaws.com
# Tag and push to ECR
docker tag my-app:1.0 123456789012.dkr.ecr.us-west-2.amazonaws.com/my-app:1.0
docker push 123456789012.dkr.ecr.us-west-2.amazonaws.com/my-app:1.0
# Log in to Google Container Registry (GCR)
gcloud auth configure-docker
# Tag and push to GCR
docker tag my-app:1.0 gcr.io/my-project/my-app:1.0
docker push gcr.io/my-project/my-app:1.0
# Log in to Azure Container Registry (ACR)
az acr login --name myregistry
# Tag and push to ACR
docker tag my-app:1.0 myregistry.azurecr.io/my-app:1.0
docker push myregistry.azurecr.io/my-app:1.0
# Use semantic versioning
docker tag my-app:1.0.0 username/my-app:1.0.0
# Use descriptive tags
docker tag my-app:latest username/my-app:latest
docker tag my-app:stable username/my-app:stable
docker tag my-app:dev username/my-app:dev
# Use environment-specific tags
docker tag my-app:1.0 username/my-app:1.0-prod
docker tag my-app:1.0 username/my-app:1.0-staging
docker tag my-app:1.0 username/my-app:1.0-dev
# Use commit SHA for traceability
docker tag my-app:$(git rev-parse --short HEAD) username/my-app:$(git rev-parse --short HEAD)
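A common release convention is to push a full semantic version alongside its shorter prefixes, so consumers can pin as tightly or loosely as they like. The sketch below expands a version into that tag set and prints the docker tag commands rather than running them; `emit_tags` is a hypothetical helper, and the tag set is a convention, not a Docker requirement.

```shell
# emit_tags IMAGE VERSION REPO: print docker tag commands for the
# conventional tag set (1.2.3 -> 1.2.3, 1.2, 1, latest).
emit_tags() {
  image=$1 version=$2 repo=$3
  major=${version%%.*}        # "1"   from "1.2.3"
  minor=${version%.*}         # "1.2" from "1.2.3"
  for tag in "$version" "$minor" "$major" latest; do
    echo "docker tag $image:$version $repo/$image:$tag"
  done
}

emit_tags my-app 1.2.3 username
```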
Docker Best Practices
Follow these best practices to create efficient, secure, and maintainable Docker containers.
# 1. Use minimal base images
FROM alpine:3.14
# 2. Use specific version tags
FROM python:3.9.5-slim
# 3. Minimize the number of layers
RUN apt-get update && apt-get install -y \
package1 \
package2 \
&& rm -rf /var/lib/apt/lists/*
# 4. Use .dockerignore
# .dockerignore file content:
node_modules
npm-debug.log
.git
.gitignore
README.md
.env
# 5. Don't store secrets in images
# Pass secrets at runtime instead (environment variables or a secrets manager):
# docker run -e DATABASE_URL="$DATABASE_URL" my-app
# 6. Use specific labels
LABEL maintainer="your-email@example.com" \
version="1.0" \
description="This is a custom image"
# 7. Leverage build cache
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
# 8. Use CMD and ENTRYPOINT correctly
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["nginx"]
# 9. Use EXPOSE for documentation
EXPOSE 8080
# 10. Use VOLUME for persistent data
VOLUME ["/data"]
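To illustrate point 8: with an exec-form ENTRYPOINT, CMD supplies default arguments, and arguments passed to docker run replace CMD while ENTRYPOINT stays fixed. A minimal sketch (the my-ping image name is illustrative):

```dockerfile
FROM alpine:3.14
# ENTRYPOINT fixes the executable; CMD provides default arguments.
ENTRYPOINT ["ping", "-c", "3"]
CMD ["localhost"]
# docker run my-ping             -> ping -c 3 localhost
# docker run my-ping example.com -> ping -c 3 example.com (CMD replaced)
```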
# 1. Use a minimal base image
FROM alpine:3.14
# 2. Use a non-root user
RUN addgroup -g 1001 -S appgroup && \
adduser -u 1001 -S appuser -G appgroup
USER appuser
# 3. Use specific version tags
FROM node:14.17.0-alpine3.14
# 4. Remove build-only packages once they are no longer needed
RUN apk add --no-cache --virtual .build-deps build-base python3 && \
npm ci && \
apk del .build-deps \
&& rm -rf /var/cache/apk/*
# 5. Use multi-stage builds to reduce attack surface
FROM node:14-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
FROM node:14-alpine AS runtime
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY . .
USER node
CMD ["node", "server.js"]
# Runtime security options
docker run --read-only --tmpfs /tmp nginx
docker run --cap-drop ALL --cap-add CHOWN nginx
docker run -u 1000:1000 nginx
docker run --memory=512m --cpus=0.5 nginx
docker run --security-opt no-new-privileges nginx
# Use Docker Scout to scan for vulnerabilities
docker scout cves my-app:1.0
# Use third-party tools like Trivy
# Install Trivy
apt-get install wget apt-transport-https gnupg lsb-release
wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | sudo apt-key add -
echo "deb https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main" | sudo tee -a /etc/apt/sources.list.d/trivy.list
sudo apt-get update
sudo apt-get install trivy
# Scan an image with Trivy
trivy image my-app:1.0
# Using Docker secrets in Swarm mode
echo "my-secret-password" | docker secret create db_password -
# Using secrets in a service
docker service create --name db \
--secret db_password \
-e POSTGRES_PASSWORD_FILE=/run/secrets/db_password \
postgres
# Using secrets in Docker Compose
version: '3.8'
services:
web:
image: my-app
secrets:
- db_password
environment:
- DB_PASSWORD_FILE=/run/secrets/db_password
secrets:
db_password:
file: ./db_password.txt
# Using environment files for secrets in development
version: '3.8'
services:
web:
image: my-app
env_file:
- .env
What are Multi-stage Builds?
Multi-stage builds are a feature in Docker that allows you to use multiple FROM statements in a single Dockerfile. Each FROM instruction begins a new build stage, and you can selectively copy artifacts from one stage to another, leaving behind everything you don't need in the final image.
Benefits of Multi-stage Builds
- Smaller Image Size: Only include necessary artifacts in the final image
- Improved Security: Reduce attack surface by excluding build tools
- Better Performance: Smaller images download and start faster
- Simplified Workflow: One Dockerfile for both build and runtime
# Basic multi-stage build example
FROM node:14-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
FROM node:14-alpine AS runtime
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY . .
EXPOSE 3000
USER node
CMD ["node", "server.js"]
# Go application multi-stage build
FROM golang:1.16-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o /app/server .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /app/server .
EXPOSE 8080
CMD ["./server"]
# Java application multi-stage build
FROM maven:3.8-openjdk-11 AS builder
WORKDIR /app
COPY pom.xml .
RUN mvn dependency:go-offline
COPY src ./src
RUN mvn package -DskipTests
FROM openjdk:11-jre-slim
WORKDIR /app
COPY --from=builder /app/target/my-app.jar .
EXPOSE 8080
CMD ["java", "-jar", "my-app.jar"]
# Using multiple build stages for different purposes
FROM node:14-alpine AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci
FROM node:14-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build
FROM node:14-alpine AS runtime
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY --from=builder /app/dist ./dist
EXPOSE 3000
USER node
CMD ["node", "dist/server.js"]
# Using named stages
FROM node:14-alpine AS base
WORKDIR /app
COPY package*.json ./
RUN npm ci
FROM base AS development
RUN npm ci
COPY . .
CMD ["npm", "run", "dev"]
FROM base AS production
COPY . .
RUN npm run build
CMD ["node", "dist/server.js"]
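Named stages become targets you can build selectively with --target. The sketch below composes those build commands as strings and prints them, so they can be inspected (or copy-pasted) without assuming a Docker daemon is available; `build_stage` is a hypothetical helper.

```shell
# build_stage STAGE TAG: print the docker build command that builds
# only the named stage from the Dockerfile above.
build_stage() {
  echo "docker build --target $1 -t $2 ."
}

build_stage development my-app:dev     # dev image with dev tooling
build_stage production my-app:prod     # production build
```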
Building a Complete Application with Docker
In this module, we'll build a complete web application using Docker, incorporating all the concepts we've learned.
# Project structure
my-app/
├── backend/
│   ├── Dockerfile
│   ├── requirements.txt
│   └── app.py
├── frontend/
│   ├── Dockerfile
│   ├── package.json
│   └── src/
├── nginx/
│   ├── Dockerfile
│   └── nginx.conf
├── docker-compose.yml
├── docker-compose.prod.yml
└── .dockerignore
# backend/Dockerfile
FROM python:3.9-slim AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /app/wheels -r requirements.txt
FROM python:3.9-slim AS runtime
WORKDIR /app
COPY --from=builder /app/wheels /wheels
RUN pip install --no-cache /wheels/*
COPY . .
EXPOSE 5000
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "app:app"]
# frontend/Dockerfile
FROM node:14-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
FROM nginx:alpine AS runtime
COPY --from=builder /app/dist /usr/share/nginx/html
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
# docker-compose.yml
version: '3.8'
services:
backend:
build: ./backend
environment:
- DATABASE_URL=postgresql://user:password@db:5432/myapp
- REDIS_URL=redis://redis:6379/0
depends_on:
- db
- redis
networks:
- app-network
frontend:
build: ./frontend
ports:
- "80:80"
depends_on:
- backend
networks:
- app-network
db:
image: postgres:13
environment:
POSTGRES_DB: myapp
POSTGRES_USER: user
POSTGRES_PASSWORD: password
volumes:
- postgres_data:/var/lib/postgresql/data
networks:
- app-network
redis:
image: redis:alpine
networks:
- app-network
volumes:
postgres_data:
networks:
app-network:
driver: bridge
# docker-compose.prod.yml
version: '3.8'
services:
backend:
restart: unless-stopped
environment:
- FLASK_ENV=production
deploy:
replicas: 3
resources:
limits:
cpus: '0.5'
memory: 512M
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:5000/health"]
interval: 30s
timeout: 10s
retries: 3
frontend:
restart: unless-stopped
deploy:
replicas: 2
db:
restart: unless-stopped
volumes:
- ./backups:/backups
redis:
restart: unless-stopped
# Development deployment
docker-compose up -d
# Production deployment
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
# Scale services
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d --scale backend=5
# Update services
docker-compose -f docker-compose.yml -f docker-compose.prod.yml pull
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
# View logs
docker-compose -f docker-compose.yml -f docker-compose.prod.yml logs -f backend