Every developer has been there. You push a one-line fix, grab your coffee, and wait. And wait. Twelve minutes later, your Docker image is rebuilding from scratch because something about the cache is broken again.
I spent a good part of the last year debugging slow Docker builds across multiple teams. The pattern was always the same: builds that took two minutes were consuming fifteen, and no one knew why. Once I understood what was actually going on under the hood it turned out to be surprisingly manageable to fix.
This guide walks you through how to fix slow Docker builds step by step. We’ll start with how cache actually works, then tear down the most common mistakes, and finish with production-ready patterns you can copy into your projects today.
Prerequisites
To follow along, you’ll need:
A working Docker installation (Docker Desktop or Docker Engine 20.10+)
Basic comfort with writing Dockerfiles
Access to a CI/CD system such as GitHub Actions, GitLab CI, or Jenkins
How Docker Build Cache Actually Works
Each instruction in a Dockerfile generates one layer. Docker stores these layers and reuses them when it detects that nothing has changed. That's the cache. Simple enough in theory, but the details matter a lot.
How are cache keys calculated?
Different instructions calculate their cache keys differently:
| Instruction | Cache key based on | What breaks it |
|---|---|---|
| RUN | The exact command string | Any change to the command text |
| COPY / ADD | Checksums of the source file contents | Any modification to the copied files |
| ENV / ARG | The variable name and value | Changing the value |
| FROM | The base image digest | A new version of the base image |
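To make these rules concrete, here is a minimal annotated sketch (the base image and packages are just placeholders):

```dockerfile
# Keyed on the base image digest: a new node:20-alpine push busts everything below
FROM node:20-alpine

# Keyed on name + value: bumping the value invalidates this layer and all below it
ENV APP_ENV=production

# Keyed on the exact command string: even adding a trailing space re-runs it
RUN apk add --no-cache curl

# Keyed on content checksum: editing package.json re-runs this COPY
COPY package.json ./
```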
The cache chain principle
Here’s what most people miss: Docker cache is sequential. If any layer’s cache becomes invalid, each subsequent layer is rebuilt from scratch, even if those subsequent layers have not changed at all.
Picture a row of dominoes. Knock one over in the middle and everything after it goes down. This is why the order of instructions in your Dockerfile matters so much.
Key insight: The single most effective optimization you can make is to rearrange your Dockerfile so that the things that change most often come last.
How to Identify Common Cache-Busting Mistakes
Before we fix anything, let's see what's breaking your cache right now. I've seen these patterns in almost every unoptimized Dockerfile I've reviewed.
Mistake 1: Copying everything too soon
The big one: putting COPY . . near the top of the Dockerfile, before installing dependencies. Any file change in your project invalidates the cache beyond that point. Changed a README? Cool, now your dependencies get reinstalled.
# BAD: Any file change invalidates the dependency install
FROM node:20-alpine
WORKDIR /app
COPY . . # Cache busted on every commit
RUN npm ci # Reinstalls every single time
RUN npm run build
Mistake 2: Not separating dependency files
Your dependency manifests (package.json, requirements.txt, go.mod, Gemfile) change far less often than your source code. If you don't copy them separately, you reinstall every dependency each time you touch a source file.
Mistake 3: Using ADD instead of COPY
ADD has special behaviors, like automatically extracting local archives and fetching remote URLs. Those behaviors make its caching less predictable. Stick with COPY unless you specifically need archive extraction.
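As a rule of thumb (the filenames here are purely illustrative):

```dockerfile
# GOOD: plain file copy -- predictable content-checksum caching
COPY config.yml /etc/app/config.yml

# Only reach for ADD when you actually want its auto-extraction behavior
ADD vendor.tar.gz /opt/vendor/
```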
Mistake 4: Splitting apt-get update and install
When you put apt-get update and apt-get install in separate RUN commands, the update layer gets cached with a stale package index. The install layer then fails or grabs old packages.
# BAD: Stale package index
RUN apt-get update
RUN apt-get install -y curl # May fail with stale index
# GOOD: Always combine them
RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*
Mistake 5: Embedding timestamps or git hashes too early
Injecting build-time values like timestamps or git commit hashes via ARG or ENV at the top of the file invalidates the cache on every build. Move them to the last layers.
⚠️ Watch out for this: CI/CD systems often inject variables such as BUILD_NUMBER or GIT_SHA as automatic build args. If those ARG declarations sit near the top, they bust your cache on every run.
How to Structure Your Dockerfile for Maximum Cache Reuse
Now let's fix these mistakes. These five steps, applied in order, will get you most of the way to a fast build.
Step 1: Apply the dependency-first pattern.
Copy the dependency files first, install them, and then copy the rest of the source code. This one change can cut your build time in half.
# GOOD: Dependency-first pattern for Node.js
FROM node:20-alpine
WORKDIR /app
# Copy ONLY dependency files
COPY package.json package-lock.json ./
# Install dependencies (cached unless package files change)
RUN npm ci --production
# Copy source code (only this layer rebuilds on code changes)
COPY . .
# Build
RUN npm run build
The same idea works in every language:
| Language | Copy first | Install command |
|---|---|---|
| Node.js | package.json, package-lock.json | npm ci |
| Python | requirements.txt or pyproject.toml | pip install -r requirements.txt |
| Go | go.mod, go.sum | go mod download |
| Rust | Cargo.toml, Cargo.lock | cargo fetch |
| Java (Maven) | pom.xml | mvn dependency:go-offline |
| Ruby | Gemfile, Gemfile.lock | bundle install |
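For compiled languages the pattern needs a small twist. Cargo, for example, won't run against manifests alone, so a common workaround is to compile the dependencies against a stub source file first. A sketch, assuming the default crate layout:

```dockerfile
FROM rust:1.78-slim AS builder
WORKDIR /app
# Copy manifests only, then satisfy cargo with a stub so dependency
# compilation is cached independently of your real source
COPY Cargo.toml Cargo.lock ./
RUN mkdir src && echo 'fn main() {}' > src/main.rs && cargo build --release
# Now copy the real source; only the layers below rebuild on code changes
COPY src ./src
RUN touch src/main.rs && cargo build --release
```

Building (rather than just fetching) in the stub step also caches the compiled dependency artifacts, which is where most Rust build time goes.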
Step 2: Add an aggressive .dockerignore.
A .dockerignore file keeps irrelevant files out of the build context. Fewer files in the context means fewer things that can break your cache.
# .dockerignore
.git
node_modules
dist
*.md
*.log
.env*
docker-compose*.yml
Dockerfile*
.github
tests
coverage
__pycache__
Step 3: Use Multi-Stage Builds
Multi-stage builds let you use a full development image for compiling, then copy only the finished artifacts into a slim runtime image. You get smaller images, better security, and better cache behavior, because build tools and intermediate files never enter the final image.
# Stage 1: Build
FROM node:20-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build
# Stage 2: Production
FROM node:20-alpine AS production
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY package.json ./
EXPOSE 3000
CMD ["node", "dist/index.js"]
Step 4: Sort the layers according to the frequency of change.
Think of your Dockerfile as a stack. Put the boring, stable stuff at the top and the volatile stuff at the bottom:
1. Base image and system dependencies (rarely change)
2. Language runtime configuration (changes occasionally)
3. Application dependencies (change when you add or remove packages)
4. Source code (changes on every commit)
5. Build-time metadata such as git hashes or version labels (changes every build)
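Laid out as a Dockerfile, that ordering looks like this (the specific packages and variables are placeholders):

```dockerfile
# 1. Base image and system dependencies (rarely change)
FROM python:3.12-slim
RUN apt-get update && apt-get install -y --no-install-recommends libpq5 \
    && rm -rf /var/lib/apt/lists/*

# 2. Runtime configuration (changes occasionally)
ENV PYTHONUNBUFFERED=1
WORKDIR /app

# 3. Application dependencies (change when the manifest changes)
COPY requirements.txt .
RUN pip install -r requirements.txt

# 4. Source code (changes on every commit)
COPY . .

# 5. Build-time metadata (changes every build)
ARG GIT_SHA=unknown
LABEL git.sha=$GIT_SHA
```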
Step 5: Use BuildKit Mount Caches.
Docker BuildKit supports RUN --mount=type=cache, which mounts a persistent cache directory that survives across builds. This is a game changer for package managers that keep download caches.
# syntax=docker/dockerfile:1
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
# Mount pip cache so downloads persist across builds
RUN --mount=type=cache,target=/root/.cache/pip \
pip install -r requirements.txt
COPY . .
The best part: mount caches persist even when the layer itself is invalidated. So if you add a new package, pip downloads only the new package instead of fetching everything again.
Common cache targets for popular package managers are:
| Package Manager | Cache target |
|---|---|
| pip | /root/.cache/pip |
| npm | /root/.npm |
| Yarn | /usr/local/share/.cache/yarn |
| Go | /go/pkg/mod |
| apt | /var/cache/apt |
| Maven | /root/.m2/repository |
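The apt entry deserves a caveat: Debian-based images ship a hook that deletes downloaded packages right after install, so the cache mount only helps if you disable it first. A sketch, assuming the standard Debian location for that hook:

```dockerfile
# syntax=docker/dockerfile:1
FROM debian:bookworm-slim
# Keep downloaded .debs in the cache mount instead of deleting them;
# sharing=locked serializes concurrent builds that touch the same cache
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt,sharing=locked \
    rm -f /etc/apt/apt.conf.d/docker-clean && \
    apt-get update && apt-get install -y curl
```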
How to Configure CI/CD Cache Backends
This is where things get tricky. Your local Docker cache works great on your laptop because the layers are persisted between builds. But CI/CD runners are usually transient: each job starts with a completely empty cache. Without explicit cache configuration, every CI build is a cold build.
Option A: Registry-based caches
BuildKit can push and pull cache layers from the container registry. This is the most portable approach and works with any CI system.
docker buildx build \
--cache-from type=registry,ref=myregistry.io/myapp:buildcache \
--cache-to type=registry,ref=myregistry.io/myapp:buildcache,mode=max \
--tag myregistry.io/myapp:latest \
--push .
💡 Use mode=max to cache all layers, including intermediate build stages. The default mode=min only caches layers in the final stage, which means your builder-stage layers are thrown away.
Option B: GitHub Actions Cache
If you’re on GitHub Actions, there is native integration with BuildKit via the GitHub Actions cache API. It’s fast and requires minimal setup.
# .github/workflows/build.yml
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Build and push
uses: docker/build-push-action@v5
with:
context: .
push: true
tags: myregistry.io/myapp:latest
cache-from: type=gha
cache-to: type=gha,mode=max
Option C: S3 or cloud storage
For teams on AWS, GCP, or Azure, cloud object storage makes for a solid cache backend. It’s fast, consistent, and works in any CI system.
docker buildx build \
--cache-from type=s3,region=us-east-1,bucket=my-docker-cache,name=myapp \
--cache-to type=s3,region=us-east-1,bucket=my-docker-cache,name=myapp,mode=max \
--tag myapp:latest .
Option D: Local cache with persistent runners
If your CI runners have persistent storage (self-hosted runners, GitLab runners with a shared volume), you can export the cache to a local directory.
docker buildx build \
--cache-from type=local,src=/ci-cache/myapp \
--cache-to type=local,dest=/ci-cache/myapp,mode=max \
--tag myapp:latest .
How to Implement Advanced Cache Patterns
Once you’ve nailed the basics, these patterns can squeeze out even more performance.
Parallel build stages
BuildKit builds independent stages in parallel. If your app has a frontend and a backend that don't depend on each other during the build, split them into separate stages and let BuildKit run them simultaneously.
# These stages build in parallel
FROM node:20-alpine AS frontend
WORKDIR /frontend
COPY frontend/package.json frontend/package-lock.json ./
RUN npm ci
COPY frontend/ .
RUN npm run build
FROM python:3.12-slim AS backend
WORKDIR /backend
COPY backend/requirements.txt .
RUN pip install -r requirements.txt
COPY backend/ .
# Final stage combines both
FROM python:3.12-slim
COPY --from=backend /backend /app
COPY --from=frontend /frontend/dist /app/static
CMD ["python", "/app/main.py"]
Cache warming for feature branches
Feature branches often start from cold caches because they diverge from main. You can warm the cache by specifying multiple --cache-from sources; Docker checks them in order.
docker buildx build \
--cache-from type=registry,ref=registry.io/app:cache-${BRANCH} \
--cache-from type=registry,ref=registry.io/app:cache-main \
--cache-to type=registry,ref=registry.io/app:cache-${BRANCH},mode=max \
--tag registry.io/app:${BRANCH} .
If the branch cache exists, Docker uses it. If not, it falls back to the main cache, which usually shares most of the layers. This makes a huge difference for short-lived branches.
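One practical wrinkle: branch names like feature/login-fix are not valid image tags, so you'll usually want to sanitize ${BRANCH} before using it in a cache ref. A minimal sketch (the function name is my own):

```shell
#!/bin/sh
# Turn an arbitrary branch name into a valid image tag:
# lowercase, anything outside [a-z0-9._-] replaced with '-', max 128 chars
sanitize_ref() {
  printf '%s' "$1" | tr '[:upper:]' '[:lower:]' | tr -c 'a-z0-9._-' '-' | cut -c1-128
}
```

You would then use something like `--cache-from type=registry,ref=registry.io/app:cache-$(sanitize_ref "$BRANCH")` in the build command.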
Selective Cache Invalidation with Build Args
You can use ARG instructions as cache boundaries. Anything above an ARG stays cached, while anything below it is rebuilt when the arg's value changes.
FROM node:20-alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
# This ARG only invalidates layers below it
ARG CACHE_BUST_CODE=1
COPY . .
RUN npm run build
# This ARG only invalidates the label
ARG GIT_SHA=unknown
LABEL git.sha=$GIT_SHA
How to Measure Your Improvement
Optimization without measurement is just guesswork. Here’s how to prove your changes are working.
The four benchmark scenarios
Run each scenario at least three times and take the median:
1. Cold build: no cache at all (first build, or after docker builder prune)
2. Warm build: no changes, full cache hit
3. Code change: only the source code modified
4. Dependency change: the package manifest modified
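A small helper can keep the measurements honest. This is a sketch of my own, not part of any tool: it shells out to the build command you give it and reports the median wall-clock time over N runs.

```python
import statistics
import subprocess
import time


def median_build_seconds(build_cmd, runs=3):
    """Run build_cmd `runs` times and return the median wall-clock seconds."""
    samples = []
    for _ in range(runs):
        start = time.monotonic()
        subprocess.run(build_cmd, check=True)  # raises if the build fails
        samples.append(time.monotonic() - start)
    return statistics.median(samples)


# Example (image tag is arbitrary):
# median_build_seconds(["docker", "buildx", "build", "--tag", "myapp:bench", "."])
```

Run it once cold (after `docker builder prune`), then again for each of the other scenarios, and compare the medians.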
Real-world before-and-after numbers
Here's what I measured on a medium-sized Node.js project after applying the techniques from this guide:
| Scenario | Before | After | Improvement |
|---|---|---|---|
| Cold build | 12 minutes 34 seconds | 8 minutes 10 seconds | 35% |
| Warm build (no changes) | 12 minutes 34 seconds | 14 seconds | 98% |
| Code change only | 12 minutes 34 seconds | 1 minute 52 seconds | 85% |
| Dependency change | 12 minutes 34 seconds | 4 minutes 20 seconds | 65% |
The "before" column is the same for every row because without cache optimization, each build was essentially a cold build. The 85% improvement on code-only changes is the number that matters most, because that's what happens on the majority of commits.
How to Check Your Cache Hit Rate
Set BUILDKIT_PROGRESS=plain to get detailed output showing which layers hit the cache:
BUILDKIT_PROGRESS=plain docker buildx build . 2>&1 | grep -E 'CACHED|DONE'
Look for the CACHED prefix on layers. Your goal is CACHED on everything except the layers that actually needed to change.
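If you want a single number to track over time, you can compute a rough hit rate from that same output. A sketch of mine, assuming BuildKit's plain progress marks reused steps with CACHED and executed steps with DONE:

```shell
#!/bin/sh
# Percentage of build steps that hit the cache, given a saved plain-progress log
cache_hit_rate() {
  log="$1"
  cached=$(grep -c 'CACHED' "$log")
  total=$(grep -cE 'CACHED|DONE' "$log")
  if [ "$total" -gt 0 ]; then echo $(( cached * 100 / total )); else echo 0; fi
}
```

Usage: `BUILDKIT_PROGRESS=plain docker buildx build . 2>&1 | tee build.log` and then `cache_hit_rate build.log`.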
Fully Optimized Dockerfile Examples
Here are production-ready Docker files that you can adapt for your projects.
Node.js full stack app
# syntax=docker/dockerfile:1
FROM node:20-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN --mount=type=cache,target=/root/.npm npm ci
FROM node:20-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build
FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
RUN addgroup --system --gid 1001 appgroup \
&& adduser --system --uid 1001 appuser
COPY --from=builder --chown=appuser:appgroup /app/dist ./dist
COPY --from=deps /app/node_modules ./node_modules
COPY package.json ./
USER appuser
EXPOSE 3000
CMD ["node", "dist/index.js"]
Python FastAPI app
# syntax=docker/dockerfile:1
FROM python:3.12-slim AS builder
WORKDIR /app
COPY requirements.txt .
RUN --mount=type=cache,target=/root/.cache/pip \
pip install --user -r requirements.txt
FROM python:3.12-slim
WORKDIR /app
COPY --from=builder /root/.local /root/.local
ENV PATH=/root/.local/bin:$PATH
COPY . .
EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
Go microservice
# syntax=docker/dockerfile:1
FROM golang:1.22-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN --mount=type=cache,target=/go/pkg/mod go mod download
COPY . .
RUN --mount=type=cache,target=/root/.cache/go-build \
CGO_ENABLED=0 go build -ldflags="-s -w" -o /app/server ./cmd/server
FROM gcr.io/distroless/static-debian12
COPY --from=builder /app/server /server
EXPOSE 8080
ENTRYPOINT ["/server"]
Troubleshooting Guide
When things go wrong, check this table first:
| Symptom | Likely cause | Fix |
|---|---|---|
| All layers rebuild every time | COPY . . too early, or missing .dockerignore | Move COPY . . after the dependency install; add a .dockerignore |
| Cache never hits in CI | No cache backend configured | Add --cache-from / --cache-to with a registry, gha, or s3 backend |
| Cache hits locally but not in CI | Different Docker versions, or BuildKit not enabled | Set DOCKER_BUILDKIT=1 and align Docker versions |
| Dependency layer always rebuilds | Source files copied before dependencies are installed | Use the dependency-first pattern |
| Image size keeps growing | Build artifacts leaking into the final image | Use multi-stage builds; copy only runtime artifacts |
| Registry cache is very slow | mode=max caching too many layers | Try mode=min, or switch to the faster gha or s3 backends |
Quick Reference Checklist
Print it out and tape it next to your monitor:
( ) Enable BuildKit: set DOCKER_BUILDKIT=1 or use docker buildx
( ) Add a comprehensive .dockerignore file
( ) Use the dependency-first pattern: copy manifests, install, then copy source
( ) Order layers from least to most frequently changed
( ) Combine RUN commands that belong together (apt-get update && install)
( ) Use multi-stage builds to separate build and runtime
( ) Add RUN --mount=type=cache for package manager caches
( ) Move volatile ARGs (git hash, build number) down to the very last layers
( ) Configure a CI/CD cache backend (registry, gha, or s3)
( ) Set up cache warming for feature branches from the main branch
( ) Use COPY instead of ADD unless you need archive extraction
( ) Benchmark all four scenarios: cold, warm, code change, dependency change
Wrapping Up
I used to think slow Docker builds were just something you had to live with. After going through this process on a few projects, I realized they're very fixable once you understand one basic principle: the cache is sequential, and order matters.
Start with the dependency-first pattern and a .dockerignore. Those two changes alone will probably cut your build time in half. Then add multi-stage builds, mount caches, and CI/CD cache backends as needed.
Teams I've worked with typically see a 70-85% reduction in CI/CD pipeline times after spending a few hours on these changes. That's time you get back on every single commit, every single day.
If you found this useful, consider sharing it with your team. There's a good chance whoever last touched your Dockerfile didn't know about half of these tricks. No shade on them; neither did I until I went looking.
Happy building.