If you're a frontend dev and have never used Docker, you've probably heard "it works on my machine". Docker solves exactly that — it packages your application with all its dependencies into a container that runs the same everywhere: on your Mac, your colleague's PC, in CI and in production. In this guide, I go from zero to deploy, focusing on real frontend projects.
What is Docker (no fluff)
Docker is a tool that creates isolated environments (containers) to run applications. Unlike a virtual machine, a container is lightweight — it shares the host kernel and starts in seconds.
- Image — the "blueprint". Contains the base OS, Node.js, your dependencies and your code.
- Container — the running "instance" of an image. You can have multiple containers from the same image (see the quick sketch after this list).
- Dockerfile — the "recipe" that defines how to build the image.
- docker-compose — orchestrates multiple containers (app + database + redis) in a single command.
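To make the image vs. container distinction concrete, a quick sketch (assuming an image named my-app, which we'll build below):
# One image...
docker build -t my-app .
# ...many containers created from that same image
docker run -d --name app-1 my-app
docker run -d --name app-2 my-app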
Installation
Install Docker Desktop for your operating system. It includes Docker Engine, Docker CLI and Docker Compose:
- macOS: download from docker.com/products/docker-desktop or use brew install --cask docker
- Windows: download Docker Desktop (requires WSL2)
- Linux: install via apt/yum or follow the official documentation
Verify the installation:
docker --version
# Docker version 24.x.x
docker compose version
# Docker Compose version v2.x.x
First Dockerfile: Next.js application
Let's create a Dockerfile for a Next.js application. The simplest possible version, just to understand the concept:
# Dockerfile
FROM node:20-alpine
WORKDIR /app
# Copy dependencies first (layer caching)
COPY package.json pnpm-lock.yaml ./
RUN corepack enable && pnpm install --frozen-lockfile
# Copy code
COPY . .
# Build
RUN pnpm build
# Run
EXPOSE 3000
CMD ["pnpm", "start"]Each line is a layer. Docker caches each layer — if package.json didn't change, it skips dependency installation. That's why we copy dependencies before the code.
# Build the image
docker build -t my-app .
# Run the container
docker run -p 3000:3000 my-app
# Access: http://localhost:3000
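The -p flag in docker run maps a host port to the container port, and the two don't have to match. A quick sketch using a hypothetical host port 8080:
docker run -p 8080:3000 my-app
# Now the app answers at http://localhost:8080 (host 8080 forwards to container port 3000)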
.dockerignore: what doesn't go into the image
Just like .gitignore, .dockerignore defines what shouldn't be copied into the image. This reduces size and speeds up the build:
# .dockerignore
node_modules
.next
.git
.env
.env.local
README.md
.vscode
.claude
Multi-stage build: optimized production image
The basic Dockerfile works but the final image is heavy — it contains all devDependencies, source code and build artifacts. In production, you only need the build output. Multi-stage build solves this:
# Dockerfile (multi-stage)
# Stage 1: Install dependencies
FROM node:20-alpine AS deps
WORKDIR /app
COPY package.json pnpm-lock.yaml ./
RUN corepack enable && pnpm install --frozen-lockfile
# Stage 2: Build
FROM node:20-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN corepack enable && pnpm build
# Stage 3: Production (only what's needed)
FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
# Copy only what's needed from the build
COPY --from=builder /app/public ./public
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static
EXPOSE 3000
CMD ["node", "server.js"]The difference is significant:
- Basic Dockerfile: ~1GB (full node_modules, source code, devDependencies)
- Multi-stage: ~150MB (only Node.js + build output + static files)
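One caveat: the .next/standalone folder copied above only exists when Next.js is configured to emit the standalone output. A minimal sketch of that setting, assuming a plain next.config.js (adapt if you use next.config.mjs or TypeScript):
// next.config.js
/** @type {import('next').NextConfig} */
const nextConfig = {
  // required so the Dockerfile can copy .next/standalone
  output: 'standalone',
};
module.exports = nextConfig;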
Docker Compose: complete dev environment
For local development, docker-compose orchestrates multiple services. A real scenario: Next.js app + PostgreSQL + Redis:
# docker-compose.yml
version: '3.8'
services:
  app:
    build: .
    ports:
      - '3000:3000'
    volumes:
      - .:/app
      - /app/node_modules
    environment:
      - DATABASE_URL=postgresql://user:pass@db:5432/myapp
      - REDIS_URL=redis://cache:6379
    depends_on:
      - db
      - cache
  db:
    image: postgres:16-alpine
    ports:
      - '5432:5432'
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
      POSTGRES_DB: myapp
    volumes:
      - pgdata:/var/lib/postgresql/data
  cache:
    image: redis:7-alpine
    ports:
      - '6379:6379'
volumes:
  pgdata:
# Start everything with one command
docker compose up -d
# View logs
docker compose logs -f app
# Take everything down
docker compose down
# Take down and clean volumes (database reset)
docker compose down -v
In volumes, the - .:/app entry mounts your local code inside the container — code changes are reflected immediately without a rebuild. The - /app/node_modules entry prevents the local node_modules from overwriting the container's.
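With the stack up, day-to-day commands run inside the services. A few sketches, assuming the service names (app, db) from the compose file above:
# Open a shell inside the app container
docker compose exec app sh
# Run a one-off command in the app service (assuming your package.json defines it)
docker compose exec app pnpm lint
# Open psql against the db service
docker compose exec db psql -U user myapp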
Dev vs production environment
It's common to have two Dockerfiles or to use target in compose (the target variant is sketched after the Dockerfile.dev below):
# docker-compose.dev.yml
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - '3000:3000'
    volumes:
      - .:/app
      - /app/node_modules
    command: pnpm dev
# Dockerfile.dev (simple, no multi-stage)
FROM node:20-alpine
WORKDIR /app
COPY package.json pnpm-lock.yaml ./
RUN corepack enable && pnpm install
COPY . .
EXPOSE 3000
CMD ["pnpm", "dev"]# Dev
# Dev
docker compose -f docker-compose.dev.yml up
# Production
docker compose up --build
Environment variables
Never put secrets in the Dockerfile. Use environment variables via .env or docker-compose:
# .env (don't commit)
DATABASE_URL=postgresql://user:pass@db:5432/myapp
STRIPE_SECRET_KEY=sk_test_...
NEXT_PUBLIC_API_URL=http://localhost:3000/api
# docker-compose.yml
services:
  app:
    env_file:
      - .env
    # or individually:
    environment:
      - NODE_ENV=production
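To check what actually reaches the container at runtime, print its environment from inside the running service (assuming the service is named app as above):
docker compose exec app env | grep DATABASE_URL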
Essential commands
The commands I use daily:
# Images
docker build -t app . # Build image
docker images # List images
docker rmi app # Remove image
# Containers
docker ps # Running containers
docker ps -a # All (including stopped)
docker logs -f <container> # Real-time logs
docker exec -it <container> sh # Enter container
docker stop <container> # Stop
docker rm <container> # Remove
# Compose
docker compose up -d # Start in background
docker compose down # Take down
docker compose down -v # Take down + clean volumes
docker compose logs -f # Logs from all services
docker compose build --no-cache # Rebuild without cache
# Cleanup
docker system prune -a # Remove everything not in use
Production deploy
With the multi-stage image ready, deployment comes down to pushing to a registry and running it on the server:
# Build and tag
docker build -t my-app:latest .
# Push to Docker Hub (or ECR, GCR, etc.)
docker tag my-app:latest user/my-app:latest
docker push user/my-app:latest
# On the server (pull and run)
docker pull user/my-app:latest
docker run -d -p 3000:3000 --env-file .env user/my-app:latest
Services like Render, Railway and Fly.io automatically detect the Dockerfile and handle build + deploy with no extra configuration.
Common mistakes
- node_modules inside image + local volume — the volume overwrites the container's node_modules. Solution: add - /app/node_modules to volumes.
- Image too heavy — using node:20 instead of node:20-alpine. Alpine is ~5x smaller.
- Slow build — not leveraging layer caching. Copy package.json before the code.
- Port not accessible — forgot EXPOSE or the port mapping in docker run -p.
- Environment variables not reaching the app — in Next.js, NEXT_PUBLIC_* variables need to be available at build time, not just at runtime (see the sketch after this list).
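Since NEXT_PUBLIC_* values are inlined into the client bundle when pnpm build runs, they have to enter the image as build arguments rather than runtime variables. A sketch, reusing the NEXT_PUBLIC_API_URL name from the .env example above (the URL below is a placeholder):
# Dockerfile (in the build stage, before RUN pnpm build)
ARG NEXT_PUBLIC_API_URL
ENV NEXT_PUBLIC_API_URL=$NEXT_PUBLIC_API_URL
# Pass the value at build time
docker build --build-arg NEXT_PUBLIC_API_URL=https://api.example.com -t my-app .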
Summary
Docker is not just for DevOps — it's a productivity tool for any developer. With a Dockerfile and a docker-compose, you ensure the environment is consistent from dev to deploy. The learning investment is small compared to the return: fewer environment bugs, faster onboarding, and predictable deploys.
