Docker Ate Your Disk: Reclaiming 20-50GB on macOS
Docker Desktop quietly consumes 20-60GB+ on macOS through images, containers, volumes, build cache, and the Docker.raw VM disk.
Docker Desktop was using 67GB on my Mac. I didn’t notice until my build failed with “no space left on device” and I started digging.
67GB. For containers I hadn’t run in months, images I’d pulled once for a quick test, build cache from projects I’d abandoned, and a VM disk file that had ballooned to 40GB and refused to shrink.
If you’re a developer running Docker on macOS, you’re almost certainly in the same situation. Docker is designed to cache aggressively and clean up never. On Linux, Docker stores everything directly on the filesystem, so standard disk tools can see it. On macOS, Docker runs inside a Linux VM — and everything it stores is hidden inside a single disk image file. That file only grows. It never shrinks on its own.
This guide covers every category of Docker disk usage on macOS: what it is, how to find it, how to clean it, and how to stop it from coming back.
How Docker Uses Disk on macOS
Before we start cleaning, it helps to understand why Docker on macOS is a special kind of disk hog.
Docker containers are Linux. macOS is not Linux. So Docker Desktop runs a lightweight Linux virtual machine behind the scenes, and all your images, containers, volumes, and build cache live inside that VM’s virtual disk. That disk is a single file on your Mac — typically called Docker.raw or Docker.qcow2.
Here’s the problem: the VM disk file is dynamically allocated. It grows as Docker stores more data inside it. But when you delete images or containers inside Docker, the VM disk file does not shrink. The space is freed inside the VM, but the file on your Mac stays the same size.
This means Docker’s disk usage on macOS is driven by two things:
- What’s inside Docker — images, containers, volumes, build cache
- The VM disk file itself — which remembers its high-water mark
You need to address both.
Finding How Much Space Docker Uses
Start by asking Docker itself:
docker system df
This shows a breakdown by category:
TYPE            TOTAL   ACTIVE   SIZE      RECLAIMABLE
Images          47      3        18.72GB   16.48GB (88%)
Containers      12      1        1.24GB    1.19GB (95%)
Local Volumes   23      2        8.31GB    7.89GB (95%)
Build Cache     184     0        14.2GB    14.2GB (100%)
That’s what Docker knows about. But it doesn’t tell you the size of the VM disk file on your Mac’s filesystem — which is often much larger than the sum of those numbers.
To see the VM disk file size:
ls -lh ~/Library/Containers/com.docker.docker/Data/vms/0/data/Docker.raw
Don’t be surprised if this file is 40-60GB even when docker system df reports only 15-20GB of actual content.
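One wrinkle worth knowing: Docker.raw is created as a sparse file, so ls -lh reports its apparent size while du -h reports the blocks actually allocated, and the two can differ substantially. Check both before panicking. A quick demonstration of the distinction (truncate is in GNU coreutils; stock macOS has mkfile -n instead):

```shell
# Sparse-file demo: apparent size (ls) vs blocks actually allocated (du).
# Docker.raw behaves the same way.
truncate -s 1G sparse-demo.img   # create a 1GB sparse file
ls -lh sparse-demo.img           # apparent size: 1.0G
du -h  sparse-demo.img           # actual allocation: ~0
rm sparse-demo.img
```

So when sizing up Docker.raw, run both ls -lh and du -h on it; the du number is what the file is really costing your disk.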
For a more detailed breakdown, add the verbose flag:
docker system df -v
This lists every image, container, and volume individually with sizes — useful for finding the worst offenders.
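If you want the worst offenders ranked, you can pipe the per-image listing through sort. A sketch, shown here against captured sample output so the pipeline itself is runnable (the image names and sizes are made up); with Docker running you would feed it the docker images --format output shown in the comment instead:

```shell
# Sketch: rank images by size, largest first. With Docker available:
#   docker images --format '{{.Size}}\t{{.Repository}}:{{.Tag}}' | sort -rh
# Demonstrated here on captured sample output (made-up names and sizes):
printf '1.24GB\tnode:20\n241MB\tnode:20-slim\n8.1GB\tml-train:latest\n' | sort -rh
```

sort -h understands Docker's human-readable size suffixes, so 8.1GB sorts above 241MB; -r puts the largest first.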
Images: Dangling and Unused
What they are: Every docker pull and docker build downloads or creates image layers. Over time, you accumulate images from projects you no longer work on, base images for stacks you’ve moved away from, and “dangling” images — intermediate layers that aren’t tagged and aren’t referenced by any other image.
How big they get: 5-25GB easily. A single Node.js or Python image can be 1GB+. If you work across multiple projects with different base images, this stacks up fast.
How to see them:
docker images -a
# Show only dangling images (untagged, unreferenced)
docker images -f "dangling=true"
How to clean them:
Remove only dangling images (safe, minimal impact):
docker image prune
Remove all images not used by a running container (more aggressive, frees the most space):
docker image prune -a
You’ll be prompted to confirm. Add -f to skip the confirmation prompt.
Is it safe? Dangling images are always safe to remove — they’re orphaned layers that serve no purpose. Unused images (the -a flag) are also safe, but you’ll need to re-pull them if you want to use them again. If you have a slow connection or work with large proprietary images, consider being selective.
Re-pulling is cheap: For public images, docker pull node:20 takes 30 seconds on a decent connection. The disk space you save is worth more than the re-download time.
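If pruning everything feels too blunt, docker image prune accepts an until filter that removes only images created before a given cutoff, leaving recent pulls alone. A sketch with a dry-run guard so nothing is deleted until you opt in (the 240h cutoff is an arbitrary example):

```shell
# Sketch: prune only images unused for 10+ days.
# The `until` filter is a documented `docker image prune` option.
# DRY_RUN=1 (the default here) only prints the command; set DRY_RUN=0 to run it.
DRY_RUN=${DRY_RUN:-1}
CMD="docker image prune -a -f --filter until=240h"
if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $CMD"
else
    $CMD
fi
```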
Containers: Stopped and Dead
What they are: Every docker run creates a container. When the process exits, the container stays on disk in a “stopped” state. It retains its filesystem layer (any files written during execution) and metadata. Most developers have dozens of stopped containers they’ve forgotten about.
How big they get: Usually 100MB-2GB total, but containers that wrote large files (logs, databases, build artifacts) can individually be much larger.
How to see them:
docker ps -a
# Show only stopped containers
docker ps -a -f "status=exited"
How to clean them:
docker container prune
This removes all containers with status exited, dead, or created (never started).
Is it safe? If a container is stopped and you don’t plan to docker start it again, yes. Any data not stored in a volume is lost when the container is removed — that’s the point. If you need the data, copy it out first with docker cp.
Volumes: Orphaned Data
What they are: Docker volumes are persistent storage that survives container removal. They’re used for databases, file uploads, configuration — anything that should persist across container restarts. The problem is that volumes are not automatically deleted when the container that created them is removed.
Over time, you accumulate “dangling” volumes — volumes that no running or stopped container references. These are pure dead weight.
How big they get: 2-15GB. Database volumes (Postgres, MySQL, MongoDB) can individually be several GB. If you’ve been spinning up database containers for different projects, this adds up.
How to see them:
docker volume ls
# Show only dangling (unused) volumes
docker volume ls -f "dangling=true"
How to clean them:
docker volume prune
This removes dangling volumes — ones not referenced by any container (running or stopped). One caveat: in Docker 23.0 and later, docker volume prune removes only anonymous volumes by default; add -a (--all) if you also want unused named volumes gone.
Is it safe? Dangling volumes are by definition not attached to any container. But they may contain data you care about — old database dumps, uploaded files, test fixtures. If in doubt, inspect a volume before removing it:
docker volume inspect <volume_name>
Check the Mountpoint path and verify what’s inside if you’re unsure.
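Note that on macOS the Mountpoint path is inside the Linux VM, not on your Mac, so you can't browse it in Finder. A sketch of pulling the path out of the inspect output, demonstrated on captured sample JSON (the volume name pgdata is made up); the comment shows the throwaway-container trick for actually listing a volume's contents:

```shell
# Sketch: extract Mountpoint from `docker volume inspect` output.
# On macOS this path lives inside the Linux VM, so to actually look
# inside a volume, mount it into a throwaway container:
#   docker run --rm -v <volume_name>:/inspect alpine ls -lah /inspect
# Demonstrated here on captured sample JSON (sample volume "pgdata"):
SAMPLE='[{"Name":"pgdata","Mountpoint":"/var/lib/docker/volumes/pgdata/_data"}]'
echo "$SAMPLE" | sed -n 's/.*"Mountpoint": *"\([^"]*\)".*/\1/p'
```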
Warning: Volume data is not recoverable after deletion. Unlike images (which can be re-pulled) and containers (which can be re-run), volume data is gone forever. Be more careful here than with other categories.
Build Cache: The Silent Hog
What it is: Docker’s BuildKit caches intermediate build steps so that subsequent builds are faster. Every RUN instruction in your Dockerfile produces a cached layer. Over time — especially if you work with multiple projects or iterate frequently — this cache grows large.
How big it gets: 5-20GB is common. If you build large images (anything with native compilation, ML dependencies, or monorepos), the cache can exceed 30GB.
How to see it:
docker system df
The “Build Cache” row shows total size and how much is reclaimable. Usually 100% is reclaimable.
How to clean it:
docker builder prune
To remove all build cache including tagged intermediate results:
docker builder prune -a
Is it safe? Completely. Build cache is a performance optimization, not data. Your next build will be slower (it has to re-execute cached steps), but everything still works. If you’re not actively iterating on Docker builds, this is free space.
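If you want an early warning rather than a quarterly surprise, you can scrape the Build Cache row out of docker system df and compare it to a threshold. A sketch against captured sample output (the sizes are made up); with Docker running you would substitute "$(docker system df)" for SAMPLE:

```shell
# Sketch: warn when build cache exceeds a threshold (10GB here, arbitrary).
# SAMPLE stands in for real `docker system df` output.
SAMPLE='TYPE            TOTAL  ACTIVE  SIZE     RECLAIMABLE
Images          47     3       18.72GB  16.48GB (88%)
Build Cache     184    0       14.2GB   14.2GB (100%)'
cache_gb=$(echo "$SAMPLE" | awk '/^Build Cache/ {gsub("GB","",$5); print $5}')
if awk -v c="$cache_gb" 'BEGIN { exit !(c > 10) }'; then
    echo "Build cache is ${cache_gb}GB - consider: docker builder prune"
fi
```

The second awk call exists only to compare the fractional gigabyte value, since plain shell arithmetic is integer-only.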
The Nuclear Option
If you want to clean everything at once — all unused images, all stopped containers, all dangling volumes, and all build cache — Docker provides a single command:
docker system prune -a --volumes
This is the “start fresh” button. It keeps only:
- Running containers
- Images used by running containers
- Volumes used by running containers
Everything else is removed.
When to use it: When you haven’t done a cleanup in months and don’t have any stopped containers or images you need to preserve. I run this roughly quarterly and it typically frees 15-30GB.
When NOT to use it: If you have stopped containers with data you haven’t extracted, or named volumes with database data you still need. Review docker ps -a and docker volume ls first.
Docker.raw: The File That Never Shrinks
This is the part most guides skip, and it’s usually the biggest chunk of wasted space.
What it is: The virtual disk file for Docker’s Linux VM. All Docker data (images, containers, volumes, cache) lives inside this file. On macOS, it’s typically at:
~/Library/Containers/com.docker.docker/Data/vms/0/data/Docker.raw
Depending on your Docker Desktop version and architecture, it might also be at:
~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.raw
~/.docker/desktop/vms/0/data/Docker.raw
The file format may be .raw or .qcow2 depending on your Docker Desktop settings.
Why it’s a problem: Docker.raw uses sparse/dynamic allocation. It grows when Docker needs more space but does not automatically shrink when space is freed inside the VM. If your Docker usage peaked at 50GB at some point — even if you’ve since pruned everything — the file stays at 50GB.
How to check its size:
ls -lh ~/Library/Containers/com.docker.docker/Data/vms/0/data/Docker.raw
Compare this to what docker system df reports. The difference is wasted space — freed inside the VM but not released back to macOS.
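The arithmetic is simple enough to sanity-check by hand. A sketch with sample numbers; substitute your own from ls -lh and docker system df:

```shell
# Sketch: rough reclaimable estimate. Sample numbers, not measurements.
raw_gb=50       # apparent Docker.raw size (from ls -lh)
content_gb=18   # live content total (from docker system df)
echo "Roughly $((raw_gb - content_gb)) GB reclaimable by resetting the VM disk"
```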
How to reclaim the space:
The most reliable way to shrink Docker.raw is to reset the Docker Desktop VM:
- Open Docker Desktop > Settings > Resources > Disk image size
- Note the current allocation
- Quit Docker Desktop completely
- Delete the disk image file:
rm ~/Library/Containers/com.docker.docker/Data/vms/0/data/Docker.raw
- Start Docker Desktop — it creates a fresh, small disk image
- Re-pull only the images you actually need
This is the most effective single action you can take. Going from a 50GB Docker.raw to a fresh 2GB one is a common outcome.
Alternative — reduce the disk size limit:
In Docker Desktop, go to Settings > Resources > Advanced and lower the “Disk image size” slider. This sets the maximum size the VM disk can grow to. Reducing it from the default 64GB to 32GB (or whatever your actual usage requires) prevents future bloat.
After reducing the limit, Docker Desktop may need to recreate the disk image. Back up any volumes you need before doing this.
Not Just Docker Desktop: Colima and OrbStack
If you use Docker alternatives like Colima or OrbStack instead of Docker Desktop, you have the same VM disk problem — just in different locations:
| Runtime | Disk Image Path |
|---|---|
| Docker Desktop | ~/Library/Containers/com.docker.docker/Data/vms/0/data/Docker.raw |
| Colima | ~/.colima/default/colima.qcow2 and ~/.colima/default/diffdisk |
| OrbStack | ~/.orbstack/data/disk.img |
The same cleanup principles apply: prune Docker content first, then address the VM disk file if it’s oversized.
Prevention: Keeping Docker Lean
Cleaning up is good. Not accumulating junk in the first place is better.
Use Multi-Stage Builds
Multi-stage Dockerfiles separate the build environment from the runtime environment. Your final image contains only what’s needed to run — no compilers, no dev dependencies, no build artifacts.
FROM node:20 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Runtime stage — small, only production dependencies
FROM node:20-slim
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
CMD ["node", "dist/index.js"]
This alone can reduce image sizes from 1.5GB to 200MB.
Use .dockerignore
A .dockerignore file prevents large, irrelevant files from being copied into the Docker build context. Without it, Docker sends your entire project directory to the build daemon — including node_modules, .git, test fixtures, and anything else in the folder.
node_modules
.git
*.md
.env
dist
coverage
.DS_Store
Set Up a Prune Schedule
Add a monthly cleanup to your calendar, or add it to your shell profile:
# Add to ~/.zshrc or ~/.bashrc
alias docker-cleanup='docker system prune -a --volumes -f && echo "Docker cleanup complete"'
One caution: the -f and --volumes flags mean this alias deletes unreferenced volume data without asking. Drop --volumes if you keep databases in named volumes.
Use --rm for Temporary Containers
When running containers you don’t need to keep:
docker run --rm -it ubuntu:22.04 bash
The --rm flag automatically removes the container when it exits — no stopped container left behind.
Lower the Disk Size Limit
In Docker Desktop settings, reduce the disk image size from the default 64GB to something reasonable for your workload. 32GB is enough for most developers. This caps the Docker.raw file growth.
The Automated Way
If you’re reading this and thinking “I do not want to remember all these commands and run them manually every month” — I built MegaCleaner for exactly this reason.
MegaCleaner’s Docker scanner covers all five categories described in this guide:
- Docker Images — finds all images, flags dangling ones as definitely safe to remove
- Docker Containers — identifies stopped and exited containers
- Docker Volumes — detects unused/orphaned volumes
- Docker Build Cache — shows reclaimable build cache with exact sizes
- VM Disk Files — locates Docker.raw, qcow2, Colima, and OrbStack disk images and reports their sizes
It pulls data directly from the Docker CLI (same docker images, docker ps, docker volume ls, and docker system df commands described above) and presents everything in a single view with confidence levels: “definitely safe” for dangling images and build cache, “probably safe” for stopped containers and unused volumes, “verify first” for VM disk files.
And Docker is just one of 29 scanners (21 developer tools + 8 system categories). If you’re also running Xcode, node_modules, Python environments, Rust/Cargo, Homebrew, or any of the other tools developers accumulate — MegaCleaner scans all of them in seconds.
- Scanning is always free — see the full breakdown before paying anything
- $49 one-time — not a subscription
- Everything goes to Trash — fully undoable
Download at megacleaner.app.
Quick Reference: All Commands in One Block
Bookmark this. You’ll need it again in three months.
docker system df # Summary by category
docker system df -v # Detailed per-item breakdown
# ── Clean by category ────────────────────────────────
docker image prune # Remove dangling images only
docker image prune -a # Remove ALL unused images
docker container prune # Remove stopped containers
docker volume prune # Remove orphaned volumes
docker builder prune # Remove build cache
docker builder prune -a # Remove ALL build cache
# ── Nuclear option ───────────────────────────────────
docker system prune -a --volumes # Remove everything unused
# ── Check VM disk file size ──────────────────────────
ls -lh ~/Library/Containers/com.docker.docker/Data/vms/0/data/Docker.raw
# ── Prevent container accumulation ───────────────────
docker run --rm -it <image> <cmd> # Auto-remove on exit
# ── Clean alias (add to ~/.zshrc) ────────────────────
alias docker-cleanup='docker system prune -a --volumes -f'
Summary
Docker on macOS consumes disk through five channels: images, containers, volumes, build cache, and the VM disk file. The first four are standard Docker content that docker system prune handles. The fifth — Docker.raw — is the one most people miss, and it’s often the biggest offender because it never shrinks automatically.
A full cleanup looks like this:
- Run docker system df to see the damage
- Prune each category (or use docker system prune -a --volumes for everything)
- Check the Docker.raw file size — if it’s much larger than your Docker content, reset it
- Lower the disk size limit in Docker Desktop settings to prevent future bloat
- Adopt prevention habits: multi-stage builds, .dockerignore, --rm flag, regular prune schedules
Do this quarterly and Docker will never silently eat 50GB of your disk again. Or scan with MegaCleaner and handle it in under a minute.