Docker solves one of the oldest problems in software: "it works on my machine." Once you understand how containers work and why they exist, you'll wonder how you shipped software without them.
This guide is for developers who've heard of Docker, maybe used it occasionally, but haven't fully integrated it into their workflow. By the end, you'll understand not just how to write Dockerfiles, but why each decision matters.
Before Docker, deploying a Node.js app meant matching Node versions, system packages, and environment configuration by hand on every server, and hoping none of it drifted between machines.
Docker packages your app and all its dependencies into a single image that runs identically everywhere — your laptop, CI, staging, production. The "it works on my machine" problem disappears because the machine is now part of the package.
Let's start with a Node.js API:
# Dockerfile
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
EXPOSE 3000
CMD ["node", "src/index.js"]
Build and run:
docker build -t myapi:v1 .
docker run -p 3000:3000 myapi:v1
This works, but it has problems. Every code change rebuilds everything, including npm ci. Let's understand layer caching first.
Every instruction in a Dockerfile creates a layer. Docker caches layers and only rebuilds from the first changed instruction downward.
# BAD: COPY . . before npm ci means code changes invalidate the npm ci cache
FROM node:20-alpine
WORKDIR /app
COPY . . # This invalidates cache on every code change
RUN npm ci # This always re-runs

# GOOD: Copy package files first, then source code
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./ # Only changes when deps change
RUN npm ci # Cached unless package.json changes
COPY . . # Code changes only affect this layer and below
This single ordering change can reduce build time from 2 minutes to 5 seconds on code-only changes.
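Ordering aside, BuildKit (the default builder in recent Docker releases) can also persist npm's download cache between builds with a cache mount. A sketch of the RUN line; /root/.npm is npm's default cache location when running as root:

```dockerfile
# Reuse npm's download cache across builds (BuildKit only).
# The mount exists during this RUN step and never ships in the image.
RUN --mount=type=cache,target=/root/.npm \
    npm ci
```

Even when package.json changes and the layer has to rebuild, previously downloaded packages come from the cache mount instead of the network.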
For production images, you want to be lean. Multi-stage builds let you use a full build environment and then copy only what's needed into the final image:
# Stage 1: Install and build
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Drop dev dependencies so only production deps get copied into the final image
RUN npm prune --omit=dev
# Stage 2: Production image
FROM node:20-alpine AS runner
WORKDIR /app
# Security: create non-root user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
# Only copy what's needed
COPY --from=builder --chown=appuser:appgroup /app/dist ./dist
COPY --from=builder --chown=appuser:appgroup /app/node_modules ./node_modules
COPY --from=builder --chown=appuser:appgroup /app/package.json ./
USER appuser
EXPOSE 3000
ENV NODE_ENV=production
CMD ["node", "dist/index.js"]
The builder stage never ships to production. The final image contains only the compiled output and production dependencies. This cuts the image from roughly 600MB to roughly 100MB and removes all dev tooling from the attack surface.
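If you want Docker itself to track whether the service inside is healthy, not just running, the image can declare a HEALTHCHECK. A sketch, assuming a hypothetical GET /health route on port 3000:

```dockerfile
# Mark the container unhealthy if /health stops responding.
# wget is available via BusyBox in Alpine images; adjust for other bases.
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD wget -qO- http://localhost:3000/health || exit 1
```

Orchestrators and `docker ps` then report health status, and Compose's `condition: service_healthy` can wait on it.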
Real applications have multiple services — an API, a database, maybe a cache. Docker Compose defines and runs them together:
# docker-compose.yml
version: '3.8'
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=development
      - DATABASE_URL=postgresql://user:password@postgres:5432/mydb
      - REDIS_URL=redis://redis:6379
    volumes:
      - ./src:/app/src # Hot reload in dev
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_started
    restart: unless-stopped
  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_DB: mydb
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d mydb"]
      interval: 10s
      timeout: 5s
      retries: 5
  redis:
    image: redis:7-alpine
    volumes:
      - redisdata:/data
volumes:
  pgdata:
  redisdata:

docker compose up -d        # Start everything in the background
docker compose logs -f api  # Follow API logs
docker compose down         # Stop and remove containers
docker compose down -v      # Also remove volumes (deletes DB data)

Use separate Compose files for different environments:
# docker-compose.dev.yml — override for development
version: '3.8'
services:
  api:
    build:
      target: builder    # Use the build stage, which has dev dependencies
    command: npm run dev # Hot reload
    volumes:
      - ./src:/app/src
    environment:
      - NODE_ENV=development

# Development
docker compose -f docker-compose.yml -f docker-compose.dev.yml up
# Production (just the base file)
docker compose up -d

One file you should never forget is .dockerignore: it keeps unnecessary files from being sent to the Docker daemon as build context:
# .dockerignore
node_modules
.git
.gitignore
dist
build
*.log
.env
.env.*
README.md
.DS_Store
coverage
.nyc_output
Without .dockerignore, every build can send your entire node_modules directory (often hundreds of megabytes or more) to the Docker daemon as context, and COPY . . would then overwrite the freshly installed node_modules inside the image with your local one.
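To sanity-check what you are about to send, you can approximate the context size with plain tar, since Docker ships the context as a tar stream. A sketch; note that tar's --exclude-from matching is close to, but not identical to, Docker's .dockerignore rules:

```shell
# Rough size (in bytes) of the build context after .dockerignore exclusions
tar -cf - --exclude-from=.dockerignore . | wc -c
```

Run it once with and once without --exclude-from to see how much the ignore file is saving you.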
1. Running containers as root. Always add a non-root user. If a compromised container runs as root, the attacker has root inside the container and a much shorter path to the host.
2. Storing secrets in environment variables committed to git. Use Docker secrets, a vault solution, or at minimum a .env file that's gitignored.
3. Using latest tags in production. Pin to specific versions like node:20.11-alpine so deployments are reproducible.
4. Not using health checks. Docker Compose's depends_on without a health check only waits for the container to start, not for the service inside to be ready. Postgres takes a few seconds to be ready after the container starts.
5. Large image sizes. Use Alpine variants, multi-stage builds, and .dockerignore. A 600MB image takes roughly ten times longer to pull than a 60MB one.
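For point 2, a minimal sketch of keeping credentials out of the committed Compose file, assuming a gitignored .env next to docker-compose.yml; Compose loads that file automatically and substitutes ${VAR} references:

```yaml
# .env (gitignored, never committed):
#   POSTGRES_PASSWORD=change-me
#
# docker-compose.yml excerpt — the secret is referenced, not stored:
services:
  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
```

The committed file now contains no credentials; each environment supplies its own .env.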
# See running containers
docker ps
# See all containers including stopped
docker ps -a
# Shell into a running container
docker exec -it container_name sh
# View logs
docker logs container_name -f --tail 100
# Remove all stopped containers, unused images, build cache
docker system prune -a
# Inspect a container's environment variables (useful for debugging)
docker inspect --format '{{json .Config.Env}}' container_name