NGINX is a versatile, high-performance server primarily used as a web server, reverse proxy, load balancer, and HTTP cache. It plays a crucial role in modern web infrastructure due to its efficiency and scalability. Here’s a detailed explanation of what NGINX is and why it is so popular:
Core Functions
1. Web Server:
* Static File Serving:
NGINX is highly efficient at serving static content such as HTML, CSS, JavaScript, images, and videos, making it ideal for websites that need to quickly load resources.
* Dynamic Content Handling:
While NGINX itself does not directly process dynamic content (e.g., PHP, Ruby, or Python), it is designed to work seamlessly with backend application servers (like PHP-FPM, Node.js, etc.) to deliver dynamic web applications.
2. Reverse Proxy:
* Request Forwarding:
As a reverse proxy, NGINX receives client requests and forwards them to one or more backend servers. This setup helps to offload tasks such as SSL/TLS termination (decrypting HTTPS traffic), caching responses, or compressing content before sending it back to the client.
* Improved Security and Load Management:
By acting as a shield between the public internet and internal servers, NGINX can provide additional layers of security, manage traffic spikes, and facilitate scalability.
3. Load Balancer:
* Distributing Traffic:
NGINX can distribute incoming network traffic across multiple backend servers using various algorithms (round-robin, least connections, IP-hash, etc.). This ensures that no single server bears too much load, which improves overall system reliability and user experience.
* Fault Tolerance:
In the event that one server goes down, NGINX can redirect traffic to healthy servers, thereby increasing the robustness of the infrastructure.
4. HTTP Cache:
* Caching Content:
NGINX can cache responses from backend services. This not only speeds up content delivery but also reduces the load on backend servers by serving cached responses for subsequent requests.
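To make these roles concrete, here is a minimal sketch of an NGINX server block that proxies requests to a backend and caches responses. The upstream address `backend:8080`, the cache path, and the zone name `app_cache` are illustrative placeholders, not values from this project:

```nginx
# Illustrative sketch: reverse proxy + response caching in one server block.
proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m;

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://backend:8080;   # forward to the backend app server
        proxy_cache app_cache;            # serve cached responses when possible
        proxy_cache_valid 200 10m;        # cache successful responses for 10 minutes
    }
}
```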
WordPress is a free and open-source content management system (CMS) that allows you to create and manage websites easily, even without deep technical knowledge.
Key Features of WordPress:
- User-Friendly Interface: Write, edit, and manage content through an intuitive dashboard.
- Themes & Plugins: Thousands of free and premium themes for design, and plugins to extend functionality (e.g., SEO, contact forms, security).
- Customizable: Full access to the code for developers who want to build custom themes or plugins.
- Open Source: Built with PHP and MySQL, licensed under the GPL.
- Community Support: Massive global community, with extensive documentation and support forums.
- PHP-FPM stands for FastCGI Process Manager.
- It’s a specialized PHP interpreter designed to handle multiple concurrent PHP requests efficiently.
- PHP-FPM runs as a separate service/process pool that manages PHP workers waiting to process requests.
Benefits of PHP-FPM vs plain PHP
| Feature | PHP CLI (php) | PHP-FPM |
|---|---|---|
| Intended Use | Command line scripts, cron jobs | Web server environment |
| Handles Multiple Requests | No (one script at a time) | Yes (multiple PHP workers) |
| Performance | Limited concurrency | High concurrency & process management |
| Integration with Web Servers | Minimal | Full FastCGI support for NGINX/Apache |
| Process Management | None | Manages pools, worker lifetimes, dynamic spawning |
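As a sketch of the process management described above, a PHP-FPM pool configuration might look like the fragment below. The values are illustrative, not tuned recommendations:

```ini
; Illustrative PHP-FPM pool configuration (www.conf-style).
[www]
listen = 9000                 ; FastCGI port that NGINX connects to
pm = dynamic                  ; let FPM grow and shrink the worker pool
pm.max_children = 10          ; hard cap on concurrent PHP workers
pm.start_servers = 2          ; workers created at startup
pm.min_spare_servers = 1      ; keep at least this many idle workers
pm.max_spare_servers = 3      ; kill idle workers beyond this count
```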
Adminer is a lightweight, full-featured database management tool written in a single PHP file. It’s used to interact with databases via a web interface, much like phpMyAdmin, but it’s simpler, faster, and more portable.
Key Features:
- Supports multiple databases: MySQL, PostgreSQL, SQLite, MS SQL, Oracle, MongoDB (via plugins), and others.
- Single-file deployment: Just one PHP file to upload and run.
- Secure by design: Minimal footprint, CSRF protection, and session-based login.
- User-friendly UI: Clean and fast interface to manage tables, run queries, edit data, and manage users.
- Customizable via plugins: Extendable with additional functionality as needed.
Redis (REmote DIctionary Server) is a fast, in-memory data store used as a database, cache, and message broker. It's widely used in modern applications where performance and scalability are crucial.
What Is Redis Used For?
| Use Case | Description |
|---|---|
| Caching | Store frequently accessed data to reduce DB load. |
| Session Storage | Store user session data in web apps. |
| Message Queues | Build pub/sub or task queues (e.g., with Celery or Bull). |
| Real-Time Analytics | Track counters or metrics in real-time. |
| Leaderboard Systems | Use sorted sets for game scores, rankings, etc. |
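For the caching use case, a Redis configuration usually bounds memory and chooses an eviction policy so the cache cannot grow without limit. A hedged sketch (the values are illustrative):

```conf
# Illustrative redis.conf lines for a cache deployment.
maxmemory 256mb                # cap Redis memory usage
maxmemory-policy allkeys-lru   # evict least-recently-used keys when the cap is hit
```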
A file transfer protocol server (commonly known as an FTP server) is computer software that facilitates the exchange of files over a TCP/IP network. It runs the file transfer protocol (FTP), a standard application-level communication protocol, to establish a connection between devices in a client-server architecture and efficiently transmit data over the internet.
Pure-FTPd is a free, open-source, and secure FTP server software designed for Unix-like operating systems (Linux, *BSD, macOS, etc.). It aims to be simple, efficient, and secure, and is often used in environments where ease of configuration and security are important.
Recommended for Docker + WordPress.
🟢 Lightweight and secure
🟢 Easy TLS/SSL setup
🟢 Virtual users support
🟢 Well-maintained and commonly used in containerized environments
🟢 Readily available as a Docker image (stilliard/pure-ftpd)
🔴 Doesn't include a web UI (CLI or config-based)
Best for: Simple, secure, and scriptable FTP container setups.
✅ Why Pure-FTPd?
- Works well in Docker
- Secure by default
- Can limit access to only the WordPress volume
- Supports TLS and passive mode (important for production)
Passive mode (PASV) is one of the two modes in FTP used to establish data connections between the client and server (the other is active mode). It’s particularly important when the client is behind a firewall or NAT — like in most modern setups (including Docker or home networks).
FTP uses two TCP connections:
- Control connection – on port 21, used for commands like login, list, upload, download.
- Data connection – used for transferring files or directory listings.
How the data connection is opened differs between active and passive mode. In passive mode:
- The client initiates both the control and data connections.
- The server tells the client which port to connect to for data.
- It is typically used behind firewalls and NAT, because inbound connections to the client are blocked while outbound ones are allowed.
1. Client connects to server:21 (control)
2. Client sends PASV command
3. Server replies with: "Connect to me on IP:x,y,z,w and port P"
4. Client connects to server:port (data)
5. Data (like a file or directory listing) is transferred

In Docker:
- Servers (like Pure-FTPd) must declare a passive port range (e.g., 30000–30042).
- You expose these ports in docker-compose.yml or with -p flags.
- You configure the server to advertise the public IP and port.
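Putting this together, a Pure-FTPd service in docker-compose.yml might expose the control port plus a passive range like this. This is a sketch: the image is the stilliard/pure-ftpd one mentioned above, and the port range and PUBLICHOST value are illustrative:

```yaml
# Illustrative compose fragment for a passive-mode FTP server.
services:
  ftp:
    image: stilliard/pure-ftpd
    ports:
      - "21:21"                       # control connection
      - "30000-30042:30000-30042"     # passive data ports (must match server config)
    environment:
      PUBLICHOST: "localhost"         # host advertised to clients in PASV replies
```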
Netdata is a real-time performance monitoring and troubleshooting tool for systems and applications. It's open-source and designed to be lightweight, easy to install, and visually rich, making it ideal for both system admins and developers who want instant insights into their infrastructure.
What Netdata Does:
- Monitors: CPU, memory, disks, network, services, applications (like MySQL, NGINX, Docker, etc.), and more.
- Visualizes: Beautiful, interactive dashboards with live updates (per second or faster).
- Alerts: Comes with pre-configured health alarms and supports custom alert rules.
- Troubleshoots: Helps you identify bottlenecks, misbehaving processes, or system issues quickly.
This is when Dockerfile instructions are executed to assemble an image.
- Dockerfile starts
  FROM, COPY, RUN, etc. instructions are processed.
- Copies config + entrypoint
  COPY ./entrypoint.sh /usr/local/bin/
  COPY ./my-mariadb-server.cnf /etc/mysql/conf.d/
- Image is built
  The final image is created with your app + config + entrypoint baked in, tagged, and ready for docker run.
At this point, nothing is "running" yet — it's just building the image layer by layer.
This is when you start a container from that image (docker run, docker-compose up, etc.).
- Container starts
  Docker launches the container based on the image.
- entrypoint.sh executes
  The image defines ENTRYPOINT ["entrypoint.sh"], so this script runs first.
- Custom configs like my-mariadb-server.cnf are picked up
  entrypoint.sh or mysqld reads /etc/mysql/conf.d/my-mariadb-server.cnf. The script may also set env vars, create users, init DBs, etc.
- mysqld starts
  The final command in entrypoint.sh usually ends with exec "$@", which passes control to the CMD (e.g., CMD ["mysqld"]), launching the DB server.
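The exec "$@" pattern can be sketched with a tiny, hypothetical entrypoint script. Here it is written to a temp file and invoked the way Docker would run ENTRYPOINT followed by CMD, with echo standing in for mysqld:

```shell
# Write a minimal, hypothetical entrypoint script to a temp file.
cat > /tmp/entrypoint.sh <<'EOF'
#!/bin/sh
set -eu
echo "entrypoint: setup done"   # setup work would happen here (init DB, users, ...)
exec "$@"                       # hand the process over to the CMD arguments
EOF
chmod +x /tmp/entrypoint.sh

# Simulate ENTRYPOINT ["/tmp/entrypoint.sh"] with CMD ["echo", "mysqld would start here"].
result=$(/tmp/entrypoint.sh echo "mysqld would start here")
echo "$result"
```

Because of exec, the final command replaces the shell entirely, so in a real container it becomes PID 1 and receives signals directly.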
Build Time:
Dockerfile:
↓
Copies config + entrypoint
↓
Sets up image
Run Time:
Container starts
↓
entrypoint.sh executes
↓
my-mariadb-server.cnf configures mysqld
↓
mysqld starts (via CMD or exec "$@")

Writing a Dockerfile is all about defining how to build a Docker image for your application. It is like a blueprint. Here's a simple breakdown, followed by an example:
# 1. Base image
FROM <base-image>
# 2. Metadata (optional but recommended)
LABEL maintainer="yourname@example.com"
# 3. Set working directory
WORKDIR /app
# 4. Copy files from host to container
COPY . .
# 5. Install dependencies
RUN <command to install>
# 6. Expose ports (for apps with networking)
EXPOSE <port-number>
# 7. Set environment variables (optional)
ENV VAR_NAME=value
# 8. Run the application
CMD ["executable", "param1", "param2"]
1. `FROM` – Always first. Sets the base image.
2. `LABEL / ENV` – Optional metadata or configuration.
3. `WORKDIR` – Before any file operations; sets the context for COPY and RUN.
4. `COPY / ADD` – Copy app files after WORKDIR is set.
5. `RUN` – Install dependencies. Run commands.
6. `EXPOSE` – Informational. Comes after setup.
7. `CMD / ENTRYPOINT` – Last, as it defines the default behavior.
FROM: Specifies the base image to use (must be the first non-comment instruction)
RUN: Runs a command in a new container and creates a new image layer
COPY: Copies files/folders from your local machine into the image
ADD: Similar to COPY, but supports URLs and extracting archives
CMD: Sets the default command to run when the container starts
ENTRYPOINT: Sets the main executable, allowing CMD to act as its default arguments
ENV: Sets an environment variable
WORKDIR: Sets the working directory for subsequent commands
EXPOSE: Documents the port the container listens on (informational only)
ARG: Defines build-time variables
LABEL: Adds metadata to the image (e.g., maintainer info)
USER: Sets the user under which to run the container processes
VOLUME: Declares mount points for external storage
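Tying the main instructions together, here is a minimal illustrative Dockerfile. The base image and the package installed are assumptions for the sake of the example, not project requirements:

```dockerfile
# Illustrative Dockerfile combining the instructions above.
FROM alpine:3.20
LABEL maintainer="yourname@example.com"
WORKDIR /app
RUN apk add --no-cache python3            # install a runtime dependency
COPY . .                                  # copy app files into the image
EXPOSE 8080                               # documents the listening port
CMD ["python3", "-m", "http.server", "8080"]
```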
By convention, instructions are written in uppercase. Technically, Docker does not care about case, so run, Run, or RUN all work the same.
Compare CMD and ENTRYPOINT
| Feature | ENTRYPOINT | CMD |
|---|---|---|
| Purpose | Main process | Default arguments |
| Override | Harder to override | Easy to override via docker run |
| Syntax | Usually ["executable", "arg"] | ["arg1", "arg2"] |
Example 1, just use CMD
FROM alpine
CMD ["echo", "Hello from CMD"]

Run:
docker run myimage echo "Hi there"

Output:
Hi there

CMD is overridden by the arguments given to docker run.
Example 2, just use ENTRYPOINT
FROM alpine
ENTRYPOINT ["echo", "This is ENTRYPOINT:"]

Run:
docker run myimage "Hello"

Output:
This is ENTRYPOINT: Hello

ENTRYPOINT can't be overridden by docker run arguments; they are simply appended after it.
Example 3, use both CMD and ENTRYPOINT
FROM alpine
ENTRYPOINT ["echo", "Message:"]
CMD ["Default message"]

Run:
docker run myimage

Output:
Message: Default message

Run:
docker run myimage "Custom message"

Output:
Message: Custom message

In this case, CMD is treated as the default argument to ENTRYPOINT, rather than the command itself.
Always use --no-cache when installing packages in Alpine
Use \ to split long RUN commands across multiple lines
Keep COPY/ADD close to RUN commands that use the copied files
Order matters: changing earlier instructions will invalidate Docker cache for all later steps
When you run:
docker build -t my-image .

Docker reads and executes the Dockerfile line-by-line to build a new image.
1. Dockerfile is Parsed
   Docker reads the Dockerfile top to bottom. Each instruction (FROM, RUN, COPY, etc.) creates a layer in the image (except some special cases like ARG, ENV, LABEL).
2. Base Image is Pulled
   FROM alpine:3.20
   Docker pulls this base image from Docker Hub (if not already in the local cache). This is the starting point of the image.
3. Instructions are Executed One by One
   Each instruction creates a new intermediate container, runs the command inside that container, and then commits the result as a new image layer.
4. Caching Is Used
   Docker caches each layer. If nothing has changed in an instruction or its context, Docker will reuse the cached result to speed up builds. So, order matters: changing a line early in the Dockerfile can invalidate the cache for all subsequent lines.
5. Final Image Is Built
   After all instructions are executed, Docker packages the final state of the last container as an image and tags it (e.g., my-image:latest).
set -eux is a commonly used command combination in shell scripts to improve the transparency, robustness, and debuggability of script execution. It is also very useful in the RUN command of a Dockerfile.
The flags mean:
-e: Exit the script immediately if any command fails (i.e., returns a non-zero exit status).
-u: Exit the script with an error if any unset (undefined) variable is used (helps catch typos in variable names).
-x: Print each command before executing it (useful for debugging).
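The three flags can be demonstrated in isolated subshells, so the deliberate failures don't abort the surrounding script:

```shell
# -e: the subshell exits at the first failing command, so "after" never prints.
out_e=$( (set -e; false; echo "after") 2>/dev/null; true )

# -u: referencing an unset variable is a fatal error inside the subshell.
unset TYPO_VAR
(set -u; echo "$TYPO_VAR") 2>/dev/null && u_status=ok || u_status=failed

# -x: each command is printed (to stderr) before it runs.
trace=$( (set -x; echo hi) 2>&1 )

echo "out_e='$out_e' u_status=$u_status"
echo "$trace"
```

Here out_e ends up empty, u_status is "failed", and trace contains both the echoed command and its output.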
Docker Compose is a tool that helps you define and run multi-container Docker applications. It uses a YAML file (docker-compose.yml) to configure your application's services, networks, and volumes, making it easier to orchestrate complex setups with multiple containers (like a web server, database, and reverse proxy).
- Define multiple services (e.g. NGINX, MariaDB, WordPress)
- Specify how each service should run — image to use, ports, environment variables, mounted volumes, etc.
- Set up dependencies between services, so one container waits for another to be ready
- Automatically create Docker networks and volumes to allow containers to talk to each other and persist data
A docker-compose.yml file typically includes the following main sections:
Specifies the Compose file format version.
version: '3.8' # or '3', '2.4', etc., depending on your Docker version

Defines the different containers (services) that make up your application.
Each service has options like:
- image: Docker image to use.
- build: Path to Dockerfile to build a custom image.
- ports: Host-to-container port mapping.
- volumes: Mount volumes for persistence or code sharing.
- environment: Environment variables.
- depends_on: Define service startup order.
- networks: Which network(s) the service is attached to.

Example:
services:
web:
build: ./web
ports:
- "8080:80"
volumes:
- ./web:/var/www/html
depends_on:
- db
db:
image: mariadb:10.5
environment:
MYSQL_ROOT_PASSWORD: example
volumes:
- db_data:/var/lib/mysql

Defines named volumes that can be reused across services.
volumes:
db_data:

Defines custom networks (bridge, host, or overlay).
networks:
my_network:
driver: bridge

Other useful options:
- restart: Controls the restart policy (always, on-failure, etc.).
- command: Override the default command in the container.
- entrypoint: Override the default entrypoint.
- healthcheck: Define health check instructions.
Example: A Full Setup
version: '3.8'
services:
nginx:
image: nginx:alpine
ports:
- "443:443"
- "80:80"
volumes:
- ./nginx/conf:/etc/nginx/conf.d
depends_on:
- wordpress
wordpress:
image: wordpress:php8.2-fpm
environment:
WORDPRESS_DB_HOST: db
WORDPRESS_DB_USER: root
WORDPRESS_DB_PASSWORD: example
volumes:
- wordpress_data:/var/www/html
depends_on:
- db
db:
image: mariadb:10.5
environment:
MYSQL_ROOT_PASSWORD: example
volumes:
- db_data:/var/lib/mysql
volumes:
wordpress_data:
db_data:

You can start the entire application stack with a single command:

docker-compose up -d

In shell scripting, commands return an exit code, not true/false like in other languages.
- Exit code 0 → success → true
- Exit code 1 (or other non-zero) → failure → false
The until loop works like this:
- It runs the command after until.
- If the command fails (i.e., returns a non-zero exit status), then the body inside do ... done will execute.
- This continues looping until the command succeeds.
In simple words: "Keep doing something until the condition becomes true (success)."
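A minimal until loop, mirroring the while example shown later in these notes:

```shell
# The body runs while the test keeps failing; the loop stops the
# first time the test succeeds (exit code 0).
counter=1
until [ "$counter" -gt 3 ]; do
    echo "Counter is $counter"      # prints Counter is 1, 2, 3
    counter=$((counter + 1))
done
```

After the loop, counter holds 4, the first value for which the condition succeeded.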
syntax:
while <condition>; do
# commands to run while the condition is true
done

How it works:
1. The condition (usually a command or expression) is evaluated.
2. If the exit status is 0 (i.e., the command succeeds, which means "true" in shell terms), then the block inside do ... done runs.
3. After running the block, it goes back and checks the condition again.
4. This repeats as long as the condition stays true.
5. When the condition returns a non-zero exit code (i.e., fails or is "false"), the loop stops.
Example
counter=1
while [ $counter -le 5 ]; do
echo "Counter is $counter"
counter=$((counter + 1))
done

Output:
Counter is 1
Counter is 2
Counter is 3
Counter is 4
Counter is 5

Key Difference in One Line
| Syntax | Loop runs when... | Ends when... |
|---|---|---|
| while loop | Condition is true (exit code 0) | Condition is false (non-zero) |
| until loop | Condition is false (non-zero) | Condition is true (exit code 0) |
if [ condition ]; then
    # commands to run if condition is true
fi

if [ condition ]; then
    # true block
else
    # false block
fi

if [ condition1 ]; then
    # commands if condition1 is true
elif [ condition2 ]; then
    # commands if condition2 is true
else
    # commands if none are true
fi

Integer comparisons:
| Operator | Meaning |
|---|---|
| -eq | equal to |
| -ne | not equal to |
| -lt | less than |
| -le | less than or equal to |
| -gt | greater than |
| -ge | greater than or equal to |
Example:
if [ $x -gt 10 ]; then echo "x > 10"; fi

String comparisons:
| Operator | Meaning |
|---|---|
| = | equal to |
| != | not equal to |
| -z | string is empty |
| -n | string is not empty |
Example:
if [ "$name" = "admin" ]; then echo "Welcome"; fi

File checks:
| Operator | Meaning |
|---|---|
| -e file | file exists |
| -f file | file is a regular file |
| -d file | file is a directory |
| -r file | file is readable |
| -w file | file is writable |
| -x file | file is executable |
Example:
if [ -f /etc/passwd ]; then echo "Found passwd file"; fi

Always add spaces around the brackets and around operators:
if [ "$var" = "hello" ]; then ...
❌ Wrong: [ "$var"="hello" ]
✅ Right: [ "$var" = "hello" ]
Quote variables to prevent word-splitting or errors.
There must be spaces around and inside the square brackets:
if [ $x -gt 3 ]; then echo "x is greater than 3"; fi
❌ Wrong: [$x -gt 3]
✅ Right: [ $x -gt 3 ]
Parameter expansion is a feature in Bash that allows you to manipulate and validate variable values when using them.
The basic syntax is:
${VARIABLE}

Common Forms of Parameter Expansion (with examples)
| Syntax | Meaning | Example |
|---|---|---|
| ${VAR} | Get the value of VAR | echo ${NAME} |
| ${VAR:-default} | Use default if VAR is unset or empty | echo ${NAME:-Anonymous} |
| ${VAR:=default} | Assign default to VAR if it's unset or empty | echo ${NAME:=Default} |
| ${VAR:?error} | Show error and exit if VAR is unset or empty | : ${NAME:?Please set NAME} |
| ${VAR:+alt} | Use alt only if VAR is set and not empty | echo ${NAME:+Set} |
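The table above can be exercised directly in a shell (NAME is deliberately left unset at first):

```shell
unset NAME
greeting="${NAME:-Anonymous}"   # NAME unset -> fall back to "Anonymous", NAME stays unset
: "${NAME:=Default}"            # NAME unset -> assign "Default" to it
assigned="$NAME"
NAME="admin"
alt="${NAME:+Set}"              # NAME set and non-empty -> expands to "Set"

echo "$greeting $assigned $alt"   # prints: Anonymous Default Set
```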
I installed Debian.

1. Download the Debian ISO (https://www.debian.org/distrib/); I chose the "64-bit PC netinst iso" version.
2. Open Oracle VirtualBox Manager...
- Update Your System
sudo apt update && sudo apt upgrade -y
- Install Docker
The latest Docker version isn't included by default in Debian's repositories, so use the official Docker install script or a manual setup.
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
- Install Docker Compose
The current Docker CLI includes Compose as a plugin (docker compose instead of docker-compose). To check:
docker compose version
If not present, install it manually:
sudo apt install docker-compose-plugin
- Enable Docker to Run Without sudo (Optional but useful)
sudo usermod -aG docker $USER
newgrp docker
- (Optional but Useful) Install Supporting Tools
These are helpful during development and troubleshooting:
sudo apt install -y vim htop curl wget git openssl openssh-server net-tools
- Configure DNS for Local Domain (e.g., jingwu.42.fr)
If your project requires access via a local domain, edit /etc/hosts on your host (or within the VM if testing locally):
127.0.0.1 jingwu.42.fr
- sshd_config and ssh_config
If you can't find the sshd_config and ssh_config files in the /etc/ssh folder, it means you haven't installed openssh-server (and openssh-client) yet.
sshd_config:
sudo vim /etc/ssh/sshd_config
Uncomment Port and PermitRootLogin; change the port to 4241 and PermitRootLogin to no.

ssh_config:
sudo vim /etc/ssh/ssh_config
Uncomment Port and change it to 4241.

After the changes you can check if the ports have been changed successfully by below command:
systemctl status ssh | grep 4241
- In the VirtualBox Manager: Settings -> Network. In "Adapter 1", set "Attached to" to "NAT" (it should be the default). Click "Advanced" -> "Port Forwarding".

- In the "Port Forwarding Rules" page, click the add button on the top left, and set both "Host Port" and "Guest Port" to 4241.

- Connect to the VM:
ssh localhost -p 4241
Note: this uses a lowercase 'p'.
- Copy the project from the physical machine to the VM:
scp -P 4241 -r /home/jingwu/projects/inception jingwu@127.0.0.1:/home/jingwu
Note: this uses an uppercase 'P'.
remember:
- change the port "4241" to the one you use
- change "/home/jingwu/projects/inception" to the local path of your project;
- change the user name "jingwu" in "jingwu@127.0.0.1" to your vm username;
- change "/home/jingwu" to the correct remote path;
After "make", if you see the below error:

Reason: This means your user (jingwu) does not have permission to use Docker directly. To fix it:
- Switch to root:
su -
- Run the command below:
usermod -aG docker jingwu
- Restart the VM.
- After restarting, use the groups command to check that docker is listed (it should be).
It is suggested to add your user to the sudoers file:
su -
After entering the root password, open the sudoers file:
vim /etc/sudoers
In the # User privilege specification section, add:
root ALL=(ALL:ALL) ALL
jingwu ALL=(ALL:ALL) NOPASSWD: ALL
Change jingwu to your username.
Then run the command below (again, change jingwu to your username):
usermod -aG sudo jingwu
After make succeeds, we need to make sure all the containers are running.
- Use docker ps to show the list of running containers on your system:
docker ps
- To verify that each container runs correctly, check its logs:
docker logs nginx
docker logs mariadb
docker logs wordpress
If all the containers are up and no container's log shows errors, then all the containers are running successfully.
The frontend's address is https://jingwu.42.fr; remember to change jingwu to your login name.

At https://jingwu.42.fr/wp-admin, you can log in with the admin user you set in the '.env' file.

- Get into the mariadb container:
docker exec -it mariadb mariadb
SHOW DATABASES;
SHOW TABLES;
- Switch to the wordpress database:
USE wordpress_db;
- Check the users table; it should contain the two users we created:
select * from wp_users;
- Test Redis:
docker exec -it redis redis-cli -a <your_redis_password> set testkey "hello"
docker exec -it redis redis-cli -a <your_redis_password> get testkey
- After docker-compose up, try connecting via an FTP client:
ftp localhost
curl -I http://localhost:8080
And visit the website at 'http://localhost:8080'.

Website 'http://localhost:8082'

Docker is a tool that packages your app together with everything it needs to run — like the code, libraries, and settings — into a container.
✅ Think of a container like a lunchbox.
Inside the lunchbox, you have everything your meal (app) needs — no matter where you open it, it works the same.
So whether you’re on your laptop, your colleague’s PC, or a cloud server — the app runs the same way.
⚙️ How does Docker work?
1️⃣ You write instructions (a Dockerfile) that describe:
- What base system to use (e.g., Debian, Alpine)
- What files to copy
- What commands to run
2️⃣ Docker builds an image from that file (a kind of app snapshot).
3️⃣ You tell Docker to run the image → it creates a container and your app runs inside it.
Docker Compose helps you run multiple containers at once, and connect them easily.
Think of Compose like a kitchen recipe for a whole meal — not just the lunchbox for one dish.
It tells Docker:
- Start the web app container
- Start the database container
- Link them together
- Open port 8080 so I can access it
You write these instructions in a simple YAML file (usually docker-compose.yml).
When using the image without docker compose, you run the image manually, you do something like:
docker build -t myapp .
docker run -p 8080:80 myapp

You are:
- Starting a single container at a time.
- Manually managing options (e.g. ports, volumes, env vars).
- If you have multiple services (e.g. app + db), you must start and network them yourself.
If you use the image with Docker Compose, you:
- Still use the exact same image (or build the same one from the Dockerfile).
- But you define how to run it in a docker-compose.yml file.
Compose:
- Automatically sets up networking (so web can reach db by name).
- Manages multiple containers as one unit.
- Makes it easy to start/stop/rebuild all containers (docker-compose up, down).
Docker vs Docker Compose: commands to build, run, and stop
| Action | 🐳 Docker (single container) | 🧩 Docker Compose (multi-container) |
|---|---|---|
| Build image | docker build -t myapp . | docker compose build (if using build: in YAML) |
| Run container | docker run -d -p 8080:80 myapp | docker compose up, or docker compose up -d for detached mode |
| Stop container | docker stop <container_name_or_id> | docker compose stop (stops containers but keeps network/volumes); docker compose down (stops + removes containers, network, default volumes) |
core difference:
- VMs (Virtual Machines) virtualize entire computers — each VM runs its own OS + kernel + apps.
- Docker (containers) virtualizes at the app level — containers share the host OS kernel but isolate the app environment.
Why Docker shines
- Speed: You can spin up a container in seconds → great for CI, testing, microservices.
- Density: You can run 10s or 100s of containers where you might run a few VMs.
- Dev → Prod consistency: Containers ensure “works on my machine” = “works in prod”.
- Easy orchestration: Tools like Docker Compose, Swarm, Kubernetes.
VMs still make sense when you need:
- Full OS isolation (e.g., for untrusted workloads)
- Different kernel versions
- Legacy OSes
A Docker network is how containers talk to:
- each other (container-to-container communication)
- the outside world (internet or your machine)
You can think of it as the virtual bridge or switch that connects your containers.
Types of Docker networks (simple view):
| Network type | What it means |
|---|---|
| bridge | Default for single-host Docker. Containers connect via a private network (note: automatic name resolution only works on user-defined bridges). |
| host | Container shares the host's network (no isolation). |
| none | No network. The container is fully isolated. |
| custom bridge | Like bridge, but created by you; lets containers talk using names and gives more control. |
Related commands: docker network ls and docker network inspect
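A custom bridge can be declared in docker-compose.yml so services resolve each other by name. A sketch (the service and network names are illustrative):

```yaml
# Illustrative compose fragment: two services on a user-defined bridge.
services:
  web:
    image: nginx:alpine
    networks: [app_net]     # web can reach the other service as "db"
  db:
    image: mariadb:10.5
    networks: [app_net]
networks:
  app_net:
    driver: bridge          # user-defined bridge with built-in name resolution
```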












