

# Introduction
Docker has simplified how we build and deploy applications. But when you're starting out learning Docker, the terminology can be confusing. You'll hear terms like "images," "containers," and "volumes" thrown around without really understanding how they fit together. This article will help you understand the basic Docker concepts you need to know.
Let’s begin.
# 1. Docker image
A Docker image is a blueprint containing everything your application needs to run: code, runtime, libraries, environment variables, and configuration files.
Images are immutable. Once you create an image, it doesn't change. It works the same way on your laptop, your co-worker's machine, and in production, which eliminates environment-specific bugs.
Here's how you create an image from a Dockerfile. A Dockerfile is a recipe that describes how to build an image:
docker build -t my-python-app:1.0 .
The -t flag tags your image with a name and version. The . tells Docker to look for a Dockerfile in the current directory. Once built, this image becomes a reusable template for your application.
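As a quick check that the build succeeded (not part of the original walkthrough), you can list your local images:
docker images my-python-app
This prints the repository, tag, image ID, and size of the image you just built.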
# 2. Docker container
A container is what you get when you run an image. It's an isolated environment where your application actually executes.
docker run -d -p 8000:8000 my-python-app:1.0
The -d flag runs the container in the background. -p 8000:8000 maps port 8000 in the container to port 8000 on your host, making your app accessible at localhost:8000.
You can run multiple containers from the same image, and they run independently. This lets you test different versions simultaneously, run ten copies of the same application, or scale horizontally.
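For instance, here's a minimal sketch of running two independent containers from the same image (the names app-one and app-two and the host port 8001 are illustrative):
docker run -d -p 8000:8000 --name app-one my-python-app:1.0
docker run -d -p 8001:8000 --name app-two my-python-app:1.0
docker ps   # both containers show up, each with its own ID and port mapping
Each container gets its own filesystem and process space even though both were created from the same image.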
Containers are lightweight. Unlike virtual machines, they do not boot a full operating system. They start in seconds and share the host’s kernel.
# 3. Dockerfile
A Dockerfile contains the instructions for building an image. It's a text file that tells Docker how to set up your application's environment.
Here is a Dockerfile for a Flask application:
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python", "app.py"]
Let's break down each instruction:
- FROM python:3.11-slim – Start with a base image that has Python 3.11 installed. The slim variant is smaller than the standard image.
- WORKDIR /app – Set the working directory to /app. All subsequent commands run from here.
- COPY requirements.txt . – Copy just the requirements file, not all your code yet.
- RUN pip install --no-cache-dir -r requirements.txt – Install the Python dependencies. The --no-cache-dir flag keeps the image size small.
- COPY . . – Now copy the rest of your application code.
- EXPOSE 8000 – Document that the app uses port 8000.
- CMD ["python", "app.py"] – Specify the command to run when the container starts.
The order of these instructions matters for your build times, which is why you need to understand layers.
# 4. Image layers
Each directive in a Docker file creates a new layer. These layers stack on top of each other to create the final image.
Docker caches every layer. When you rebuild an image, Docker checks if each layer needs to be rebuilt. If nothing changes, it reuses the cached layer instead of rebuilding.
This is why we copy requirements.txt before copying the rest of the application. Your dependencies change less frequently than your code. When you edit app.py, Docker reuses the cached layer that installed the dependencies and only rebuilds the layers after the code copy.
Here is the layer structure from our Dockerfile:
1. Base Python image (FROM)
2. Set working directory (WORKDIR)
3. Copy requirements.txt (COPY)
4. Install dependencies (RUN pip install)
5. Copy the application code (COPY)
6. Metadata about the port (EXPOSE)
7. Default command (CMD)
If you only change your Python code, Docker rebuilds only layers 5 through 7. Layers 1–4 come from the cache, making the build very fast. Understanding layers helps you write an efficient Dockerfile: put rarely changing instructions at the beginning and frequently changing files at the end.
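You can see this in practice by rebuilding after editing only app.py; most steps report that they come from the cache. If you ever need a completely fresh build, Docker's --no-cache flag forces every layer to be rebuilt (the 1.1 tag here is just illustrative):
docker build -t my-python-app:1.1 .             # reuses cached layers where possible
docker build --no-cache -t my-python-app:1.1 .  # rebuilds every layer from scratch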
# 5. Docker volumes
Containers are temporary. When you delete a container, everything inside it disappears, including your application's data.
Docker volumes solve this problem. They are directories that live outside the container's filesystem and persist after the container is removed.
docker run -d \
-v postgres-data:/var/lib/postgresql/data \
postgres:15
This creates a named volume postgres-data and mounts it at /var/lib/postgresql/data inside the container. Your database files survive container restarts and deletions.
You can also mount directories from your host machine, which is useful during development:
docker run -d \
-v $(pwd):/app \
-p 8000:8000 \
my-python-app:1.0
This mounts your current directory at /app inside the container. Changes you make to files on your host are reflected instantly in the container, enabling live development without rebuilding the image.
There are three types of mounts:
- Named volumes (postgres-data:/path) – managed by Docker, great for production data
- Bind mounts (/host/path:/container/path) – mount any host directory, good for development
- tmpfs mounts – store data in memory only, useful for temporary files
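A few commands are handy for managing named volumes; a quick sketch using the postgres-data volume from the example above:
docker volume ls                     # list all volumes on the machine
docker volume inspect postgres-data  # show details, including where the data lives on disk
docker volume rm postgres-data       # delete the volume and its data (only when no container uses it)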
# 6. Docker Hub
Docker Hub is a public registry where people share Docker images. When you write FROM python:3.11-slim, Docker pulls that image from Docker Hub.
You can find images on hub.docker.com and download them to your machine:
docker pull redis:7-alpine
You can also push your own images to share with others or deploy to servers:
docker tag my-python-app:1.0 username/my-python-app:1.0
docker push username/my-python-app:1.0
Docker Hub hosts official images for popular software such as PostgreSQL, Redis, Nginx, Python, and thousands more. These images are maintained by their creators and follow best practices.
For private projects, you can create private repositories on Docker Hub or use alternative registries such as Amazon Elastic Container Registry (ECR), Google Container Registry (GCR), or Azure Container Registry (ACR).
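Note that pushing to Docker Hub (or any registry) requires authenticating first; a minimal example, using the same username placeholder as above:
docker login -u username
Docker prompts for your password or an access token and remembers the credentials for later pushes.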
# 7. Docker Compose
Real applications require multiple services. A typical web app has a Python backend, a PostgreSQL database, a Redis cache, and maybe a worker process.
Docker Compose lets you define all these services in a single YAML file and manage them together.
Here's a docker-compose.yml file:
version: '3.8'
services:
  web:
    build: .
    ports:
      - "8000:8000"
    environment:
      - DATABASE_URL=postgresql://postgres:secret@db:5432/myapp
      - REDIS_URL=redis://cache:6379
    depends_on:
      - db
      - cache
    volumes:
      - .:/app
  db:
    image: postgres:15-alpine
    volumes:
      - postgres-data:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=secret
      - POSTGRES_DB=myapp
  cache:
    image: redis:7-alpine
volumes:
  postgres-data:
Now start your entire application stack with one command:
docker-compose up -d
This starts three containers: web, db, and cache. Docker handles networking automatically: the web service can reach the database at the hostname db and the cache at the hostname cache.
To stop everything, run:
docker-compose down
Rebuilding after code changes:
docker-compose up -d --build
Docker Compose is essential for development environments. Instead of installing PostgreSQL and Redis on your machine, you run them in containers with a single command.
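Two more Compose subcommands you'll reach for constantly during development (standard commands, not specific to this example):
docker-compose ps            # show the state of each service in the project
docker-compose logs -f web   # follow the logs of the web service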
# 8. Container networks
When you run multiple containers, they need to talk to each other. Docker creates virtual networks that connect containers.
By default, Docker Compose creates a network for all the services in your docker-compose.yml. Containers use service names as hostnames. In our example, the web container connects to PostgreSQL at db:5432 because db is the name of the service.
You can also create custom networks manually:
docker network create my-app-network
docker run -d --network my-app-network --name api my-python-app:1.0
docker run -d --network my-app-network --name cache redis:7
Now the api container can reach Redis at cache:6379. Docker provides several network drivers; the ones you will use most often are:
- Bridge – Default network for containers on the same host
- Host – The container uses the host’s network directly (no isolation).
- None – The container has no network access
Networks provide isolation. Containers on different networks cannot communicate unless they are explicitly connected. This is useful for security because you can separate your front-end, back-end and database networks.
To see all networks, run:
docker network ls
To inspect a network and see which containers are connected, run:
docker network inspect my-app-network
# 9. Environment Variables and Docker Secrets
Hardcoding configuration is asking for trouble. Your database password should not be the same in development and production. Your API keys definitely should not live in your codebase.
Docker handles this through environment variables. Pass them at runtime with the -e or --env flag, and your container gets its configuration without baking values into the image.
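As a minimal sketch, here's how the DATABASE_URL from the Compose file above could be passed at run time; the DEBUG variable is just an illustrative extra:
docker run -d -p 8000:8000 \
  -e DATABASE_URL=postgresql://postgres:secret@db:5432/myapp \
  -e DEBUG=false \
  my-python-app:1.0
The application reads these values at startup, so the same image runs with different configuration in each environment.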
Docker Compose makes this clean. Point it at a .env file and keep your secrets out of version control. Swap in .env.production when you deploy, or define variables directly in your Compose file if they are not sensitive.
Docker secrets take this further for production environments, especially in swarm mode. Unlike environment variables, which can show up in logs or process lists, secrets are encrypted in transit and at rest, then mounted as files inside the container. Only the services that need them can access them. They are designed for passwords, tokens, certificates, and anything else that would be disastrous if leaked.
The pattern is simple: keep configuration separate from code. Use environment variables for ordinary settings and secrets for sensitive data.
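Here's a rough sketch of what that split can look like in Compose; the file names .env and db_password.txt and the secret name db_password are illustrative:
services:
  web:
    build: .
    env_file:
      - .env            # ordinary configuration, kept out of version control
    secrets:
      - db_password     # mounted as a file at /run/secrets/db_password
secrets:
  db_password:
    file: ./db_password.txt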
# 10. Container Registry
Docker Hub works fine for public images, but you don't want your company's application images to be publicly available. A container registry is private storage for your Docker images. Popular options include Amazon Elastic Container Registry (ECR), Google Container Registry (GCR), and Azure Container Registry (ACR).
For each of these options, you follow a similar procedure to publish, pull, and use images. For example, here is how the workflow looks with ECR.
Your local machine or Continuous Integration and Continuous Deployment (CI/CD) system first authenticates with ECR. This lets Docker communicate securely with your private registry instead of the public one. A locally built Docker image is then given a fully qualified name that includes:
- AWS account registry address
- Repository name
- Image version
This step tells Docker where the image will reside in ECR. The image is then uploaded to a private ECR repository. Once pushed, the image is centrally stored, versioned, and available to authorized systems.
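A rough sketch of that flow with the AWS CLI; the account ID 123456789012, the region us-east-1, and the repository name my-python-app are placeholders:
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker tag my-python-app:1.0 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-python-app:1.0
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-python-app:1.0
The ECR repository has to exist before the push; it can be created once in the AWS console or with aws ecr create-repository.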
Production servers authenticate with ECR and download the image from the private registry. This keeps your deployment pipeline fast and secure. Instead of building images on production servers (slow and requires access to source code), you build once, push to the registry, and pull to all servers.
Many CI/CD systems integrate with container registries. Your GitHub Actions workflow builds the image, pushes it to ECR, and your Kubernetes cluster pulls it automatically.
# Wrap up
These ten concepts form the foundation of Docker. Here's how they fit into a typical workflow:
- Write a Dockerfile with instructions for your app, and build an image from it
- Run the container from the image
- Use volumes to hold data
- Set environment variables and secrets for configuration and sensitive information
- Write a docker-compose.yml for multi-service apps, and let Docker networks connect your containers
- Push your image to a registry, pull it anywhere, and run it
Start by containerizing a simple Python script. Add dependencies with a requirements.txt file, then introduce a database using Docker Compose. Each step builds on the previous concepts. Once you understand these basics, Docker is not complicated. It is simply a tool that packages applications reproducibly and runs them in isolated environments.
Happy exploring!
Bala Priya C is a developer and technical writer from India. She loves working at the intersection of math, programming, data science, and content creation. Her areas of interest and expertise include DevOps, data science, and natural language processing. She enjoys reading, writing, coding, and coffee! Currently, she is working on learning and sharing her knowledge with the developer community by authoring tutorials, how-to guides, opinion pieces, and more. Bala also creates engaging resource overviews and coding tutorials.