Docker & Kubernetes: The Practical Guide by Maximilian Schwarzmüller
This repository summarizes this long course; it does not include much code.
Docker version 20.10.7, build f0df350
Click to Contract/Expand
Why would we want an independent, standardized "application package"? \
- We want to have the exact same environment for development and production
  -> This ensures that it works exactly as tested
- It should be easy to share a common development environment/setup with (new) employees and colleagues
- We don't want to uninstall and re-install local dependencies and runtimes all the time
Install Docker Extension on VS Code
docker build .
#=> writing image sha256:b41ebb6d624069022efc4835523b3a18a587eae911a4885dc1dc081b17b7511c
docker run b41ebb6d624069022efc4835523b3a18a587eae911a4885dc1dc081b17b7511c
docker run -p 3000:3000 b41ebb6d624069022efc4835523b3a18a587eae911a4885dc1dc081b17b7511c
docker ps
#CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
#d53a7b8732e8 b41ebb6d6240 "docker-entrypoint.s…" 2 minutes ago Up 2 minutes 0.0.0.0:3000->3000/tcp, :::3000->3000/tcp naughty_mayer
docker stop naughty_mayer
- Create
docker run node
# NodeJS offers an "interactive mode" where you can run basic Node commands (the "REPL"). That's what he's referring to.
# The history of docker Process Status
docker ps -a
# Dive into "node" container to interact
docker run -it node
- Create Dockerfile and code
- Create Docker image
docker build .
#=> => writing image sha256:d9c36df3c92ef2cb043b296a4341544fc68ff6235c1fea9cd8ec6a658817af2
- Run the container based on the created image
docker run d9c36df3c92ef2cb043b296a4341544fc68ff6235c1fea9cd8ec6a658817af2
# http://localhost doesn't work
# -p : publish
# 3000 : Port I want to access
# 80 : Expose port on Dockerfile
docker run -p 3000:80 d9c36df3c92ef2cb043b296a4341544fc68ff6235c1fea9cd8ec6a658817af2
- Stop the docker container
# See docker containers currently running without -a
docker ps
# quizzical_chandrasekhar is the given name
docker stop quizzical_chandrasekhar
When running docker build . after changing only some code (not package.json):
# Copy package.json before npm install
COPY package.json /app
# This won't be executed again unless package.json changes
RUN npm install
# This will be executed always
COPY . /app
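As a concrete illustration of this caching-friendly ordering, here is a minimal sketch of such a Dockerfile written out via a heredoc (the base image, port, and entry file are assumptions, not taken from the course code):
cat > Dockerfile <<'EOF'
FROM node:14
WORKDIR /app
# copied first so the npm install layer stays cached while only source code changes
COPY package.json /app
RUN npm install
# source-code changes only invalidate the layers from this point on
COPY . /app
EXPOSE 80
CMD ["node", "server.js"]
EOF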
# help
docker --help
docker ps --help
# Running with the attached mode (foreground, listening)
# either Container ID or Name work
docker start -a nifty_archimedes
docker run -p 3000:80 25c8a7da66bd
# Running with the detached mode (background)
docker start nifty_archimedes
docker run -p 3000:80 -d 25c8a7da66bd
# Attaching to a container
docker attach nifty_archimedes
docker logs -f nifty_archimedes
# Showing logs of a detached container
docker logs nifty_archimedes
# To interact with a utility application (not a web server)
docker build .
# -i: interactive, -t: Allocate a pseudo-TTY
docker run -it 66b7c26c279eb426620747dbd8b25c5dd410a2161fbbc743e8db2bc7dafe9f2
# -a: attach, -i: interactive
docker start -ai priceless_tereshkova
# remove docker containers
docker rm blissful_goodall
docker rm blissful_goodall nifty_archimedes romantic_grothendieck
# images list
docker images
# remove images and layers on the image
# It won't be deleted if there is any running/stopped container from the image
docker rmi 52bdb6aaae5a d9c36df3c92e
# remove all unused images
docker image prune
# --rm : Automatically remove the container when it exits
docker run -p 3000:80 -d --rm 0b260664df6f
docker image inspect 66b7c26c279e
# Those layers are based on the Dockerfile commands and the original image in FROM
Use case: copying out the latest log files from the running container
docker cp dummy/. thirsty_yalow:/test
rm dummy/test.txt
docker cp thirsty_yalow:/test dummy/.
docker cp thirsty_yalow:/test/test.txt dummy/.
# naming containers
docker run -p 3000:80 -d --rm --name goalsapp 0b260664df6f
# naming & tagging images (NAME:TAG)
docker build -t goals:latest .
# test running
docker run -p 3000:80 -d --rm --name goalsapp goals:latest
Maximilian pinned the version/tag of node and python in the Dockerfile: FROM node:14, FROM python:3.7. That looks better for sure.
# docker build -t pcsmomo/node-hello-world .
docker tag goals:latest pcsmomo/node-hello-world
# it clones from the old image
docker push pcsmomo/node-hello-world
# access denied
docker login
docker push pcsmomo/node-hello-world
# it pushes only the layers that don't already exist on Docker Hub
# remove all images, except images related to running containers
docker image prune -a
docker pull pcsmomo/node-hello-world
docker run -p 3000:80 --rm pcsmomo/node-hello-world
docker rmi pcsmomo/node-hello-world
docker run -p 3000:80 --rm pcsmomo/node-hello-world
# If the image doesn't exist locally, it will reach out to the hub automatically
⚠ Warning: It will look locally first, even if a newer version is on the hub
- Application: Read-only, stored in Images
- Temporary App Data: Read + Write, temporary, stored in Containers
- e.g. entered user input
- Permanent App Data: Read + Write, permanent, stored in Containers & Volumes
- e.g. user accounts
docker build -t feedback-node .
docker run -p 3000:80 -d --name feedback-app --rm feedback-node
After writing a feedback:
http://localhost:3000/feedback/awesome.txt
-> awesome.txt is saved on the container only
docker stop feedback-app
# the container is deleted now due to --rm flag
docker run -p 3000:80 -d --name feedback-app feedback-node
http://localhost:3000/feedback/awesome.txt
-> Can't reach awesome.txt because it was removed when the container was deleted.
docker stop feedback-app
docker start feedback-app
http://localhost:3000/feedback/awesome.txt
-> awesome.txt exists
Volumes are folders on my host machine's hard drive which are mounted (“made available”, mapped) into containers
# Remove the old container and create a new container
docker build -t feedback-node:volumes .
docker stop feedback-app
docker rm feedback-app
docker run -p 3000:80 -d --name feedback-app --rm feedback-node:volumes
http://localhost:3000
-> It won't save the file because of a cross-device error
# Remove the old image and create a new image
docker logs feedback-app
# UnhandledPromiseRejectionWarning: Error: EXDEV: cross-device link not permitted, rename '/app/temp/awesome.txt' -> '/app/feedback/awesome.txt'
docker stop feedback-app
docker rmi feedback-node:volumes
# Fix server.js and rebuild the container
docker build -t feedback-node:volumes .
docker run -p 3000:80 -d --name feedback-app --rm feedback-node:volumes
http://localhost:3000 -> Submit awesome feedback again
# Kill the old container(--rm) and run a new container
docker stop feedback-app
docker run -p 3000:80 -d --name feedback-app --rm feedback-node:volumes
http://localhost:3000/feedback/awesome.txt
-> awesome.txt still doesn't exist. Why?
Anonymous Volumes are removed automatically when a container that was started with --rm is stopped (and removed).
However, if a container is started without --rm, the anonymous volume will NOT be removed, even if you remove the container.
A new anonymous volume is created every time the container is re-created and re-run.
# Check and delete the Anonymous Volume
docker volume --help
docker volume ls
# DRIVER VOLUME NAME
# local 4919100018b2e0443ff8933050148acb34801a0a98769d6af084879fce152936
docker stop feedback-app
docker volume ls
# the volume has been removed
Delete VOLUME on Dockerfile
docker rmi feedback-node:volumes
# Use a Named Volume : it is not tied to a specific container and survives container removal
# -v [volume name]:[container-internal path]
docker build -t feedback-node:volumes .
docker run -d -p 3000:80 --rm --name feedback-app -v feedback:/app/feedback feedback-node:volumes
http://localhost:3000 -> Submit awesome feedback again
# Stop/remove the container and run a new container
docker stop feedback-app
docker volume ls
# DRIVER VOLUME NAME
# local feedback
docker run -d -p 3000:80 --rm --name feedback-app -v feedback:/app/feedback feedback-node:volumes
http://localhost:3000/feedback/awesome.txt -> Ta-da
# add "-v [absolute path of local machine]:[container-internal path]"
# This option is for a developer mode to reflect changes rapidly.
# it will crash and the container will be removed.
docker run -d -p 3000:80 --rm --name feedback-app -v feedback:/app/feedback -v "/Users/noah/Documents/Study/Study_devops/udemy/docker-kubernetes/docker-kubernetes-git/03_data-volumes/03_data-volumes-01":/app feedback-node:volumes
# without --rm, it will still crash.
docker run -d -p 3000:80 --name feedback-app -v feedback:/app/feedback -v "/Users/noah/Documents/Study/Study_devops/udemy/docker-kubernetes/docker-kubernetes-git/03_data-volumes/03_data-volumes-01":/app feedback-node:volumes
# or docker run -d -p 3000:80 --name feedback-app -v feedback:/app/feedback -v "$(pwd)":/app feedback-node:volumes
docker ps -a
docker logs feedback-app
# Error: Cannot find module 'express'
# add "-v /app/node_modules" -> connected to an anonymous volume
# equivalent to "VOLUME [ "/app/node_modules" ]" on Dockerfile
# -v /app/node_modules : Then /app folder will not overwrite them
docker run -d -p 3000:80 --rm --name feedback-app -v feedback:/app/feedback -v "/Users/noah/Documents/Study/Study_devops/udemy/docker-kubernetes/docker-kubernetes-git/03_data-volumes/03_data-volumes-01":/app -v /app/node_modules feedback-node:volumes
If feedback.html changes locally, the change shows up in the browser.
After changing package.json and Dockerfile
docker rmi feedback-node:volumes
docker build -t feedback-node:volumes .
docker run -d -p 3000:80 --rm --name feedback-app -v feedback:/app/feedback -v "/Users/noah/Documents/Study/Study_devops/udemy/docker-kubernetes/docker-kubernetes-git/03_data-volumes/03_data-volumes-01":/app -v /app/node_modules feedback-node:volumes
docker run -d -p 3000:80 --rm --name feedback-app -v feedback:/app/feedback -v "$(pwd)":/app -v /app/node_modules feedback-node:volumes
Change server.js
http://localhost:3000 -> Submit awesome feedback again
docker logs feedback-app
We have used all the different approaches:
- docker run -v /app/data ... : Anonymous Volume
- docker run -v [volume name]:/app/data ... : Named Volume
- docker run -v [physical path]:/app/data ... : Bind Mount
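A minimal sketch combining all three in one command (mirroring the commands used above; names and paths are the same placeholders as before):
# anonymous volume protects /app/node_modules,
# the named volume "feedback" persists /app/feedback,
# and the bind mount maps the current project folder into /app
docker run -d --rm --name feedback-app \
  -v /app/node_modules \
  -v feedback:/app/feedback \
  -v "$(pwd)":/app \
  feedback-node:volumes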
# add ":ro" -> Docker container can't write on this volume
# connect /app/temp to an anonymous volume
docker run -d -p 3000:80 --rm --name feedback-app -v feedback:/app/feedback -v "/Users/noah/Documents/Study/Study_devops/udemy/docker-kubernetes/docker-kubernetes-git/03_data-volumes/03_data-volumes-01":/app:ro -v /app/temp -v /app/node_modules feedback-node:volumes
docker volume create --help
docker volume create feedback-files
docker volume inspect feedback
# "Mountpoint": "/var/lib/docker/volumes/feedback/_data"
# The path is inside of the virtual machine docker created
docker volume rm feedback-files
-v [absolute path of local machine]:[container-internal path]
The Bind Mount option is for development, to reflect changes quickly.
Better to keep "COPY" in the Dockerfile, so it creates a snapshot for production.
# Using ENV from Dockerfile
docker build -t feedback-node:env .
docker run -d --rm -p 3000:80 --name feedback-app -v feedback:/app/feedback -v "/Users/noah/Documents/Study/Study_devops/udemy/docker-kubernetes/docker-kubernetes-git/03_data-volumes/03_data-volumes-01:/app:ro" -v /app/temp -v /app/node_modules feedback-node:env
# Using runtime ENVironment variables
# --env or -e
docker run -d --rm -p 3000:8000 --env PORT=8000 --name feedback-app -v feedback:/app/feedback -v "/Users/noah/Documents/Study/Study_devops/udemy/docker-kubernetes/docker-kubernetes-git/03_data-volumes/03_data-volumes-01:/app:ro" -v /app/temp -v /app/node_modules feedback-node:env
docker run -d --rm -p 3000:8000 -e PORT=8000 --name feedback-app -v feedback:/app/feedback -v "/Users/noah/Documents/Study/Study_devops/udemy/docker-kubernetes/docker-kubernetes-git/03_data-volumes/03_data-volumes-01:/app:ro" -v /app/temp -v /app/node_modules feedback-node:env
# Using .env file
docker run -d --rm -p 3000:8000 --env-file ./.env --name feedback-app -v feedback:/app/feedback -v "/Users/noah/Documents/Study/Study_devops/udemy/docker-kubernetes/docker-kubernetes-git/03_data-volumes/03_data-volumes-01:/app:ro" -v /app/temp -v /app/node_modules feedback-node:env
⚠ Warning: ENV on Dockerfile can be exposed through "docker history <image>"
For credentials and private keys, use a .env file and do not commit it to GitHub.
# Using Dockerfile
docker build -t feedback-node:web-app .
# Manipulate ARG on Dockerfile
docker build -t feedback-node:dev --build-arg DEFAULT_PORT=8000 .
docker build -t favorites-node .
# it crashes as it can't connect to 'mongodb://localhost:27017/swfavorites'
docker run --name favorites -d --rm -p 3000:3000 favorites-node
docker run --name favorites --rm -p 3000:3000 favorites-node
# comment the mongoose part on app.js
docker run --name favorites -d --rm -p 3000:3000 favorites-node
http://localhost:3000/movies -> works
http://localhost:3000/people -> works
Change localhost to "host.docker.internal" in app.js
Rebuild the image and run
http://localhost:3000/favorites -> works if mongodb is installed on the host machine
docker run mongo
docker run -d --name mongodb mongo
docker container inspect mongodb
# "IPAddress": "172.17.0.2",
# Change "host.docker.internal" to "172.17.0.2" on app.js
docker build -t favorites-node .
docker run --name favorites -d --rm -p 3000:3000 favorites-node
# Now two containers are running
docker ps
Run Postman and send data
// http://localhost:3000/favorites
// Method : Post
// Body -> Raw, JSON
{
"name": "A New Hope",
"type": "movie",
"url": "http://swapi.dev/api/films/1/"
}
http://localhost:3000/favorites -> works
# Create a new network
docker stop favorites
docker stop mongodb
docker container prune
docker run -d --name mongodb --network favorites-net mongo
# docker: Error response from daemon: network favorites-net not found.
docker network --help
docker network create favorites-net
# with --network
# it doesn't need -p flag
docker rm mongodb
docker run -d --name mongodb --network favorites-net mongo
# Change "172.17.0.2" to "mongodb" on app.js
# If both are on the same network, using the container name "mongodb" works
docker build -t favorites-node .
docker run --name favorites --network favorites-net -d --rm -p 3000:3000 favorites-node
MongoDB Server
docker run --name mongodb --rm -d -p 27017:27017 mongo
Backend Server
backend % docker build -t goals-node .
backend % docker run --name goals-backend --rm -d -p 80:80 goals-node
Frontend Server
frontend % docker build -t goals-react .
# the server will stop right away (no interactive TTY)
frontend % docker run --name goals-frontend --rm -d -p 3000:3000 goals-react
# add -it -> -i: interactive, -t: Allocate a pseudo-TTY
# React project should run with -it flag
frontend % docker run --name goals-frontend --rm -d -p 3000:3000 -it goals-react
docker network create goals-net
# MongoDB Server
# We no longer need to publish ports
docker run --name mongodb --rm -d --network goals-net mongo
# Backend Server not publishing 80 port
# Need to fix app.js to use the mongodb container name
backend % docker build -t goals-node .
backend % docker run --name goals-backend --rm -d --network goals-net goals-node
# Frontend Server
# NO need to fix App.js to use the goals-backend container name
# Because it runs in the browser, it still needs to use localhost
frontend % docker run --name goals-frontend --rm -d -p 3000:3000 -it goals-react
# Backend Server publishing the port
backend % docker run --name goals-backend --rm -d -p 80:80 --network goals-net goals-node
MongoDB Connection String URI Format
# create data volume to connect mongodb data
docker run --name mongodb -v data:/data/db --rm -d --network goals-net mongo
# Add Authentication
docker stop mongodb
docker volume rm data
docker run --name mongodb -v data:/data/db --rm -d --network goals-net -e MONGO_INITDB_ROOT_USERNAME=noah -e MONGO_INITDB_ROOT_PASSWORD=secret mongo
# Add MongoDB authentication data to app.js and rebuild the backend server
backend % docker build -t goals-node .
backend % docker run --name goals-backend --rm -d -p 80:80 --network goals-net goals-node
# Add nodemon
backend % docker build -t goals-node .
# create logs volume to connect /app/logs
# the longer path takes precedence over the shorter path : /app/logs > /app
# -v /app/node_modules : Then /app folder will not overwrite them
backend % docker run --name goals-backend -v "/Users/noah/Documents/Study/Study_devops/udemy/docker-kubernetes/docker-kubernetes-git/05_docker_multi/backend:/app" -v logs:/app/logs -v /app/node_modules --rm -d -p 80:80 --network goals-net goals-node
# Add ENV MONGODB_USERNAME and ENV MONGODB_PASSWORD to Dockerfile
backend % docker build -t goals-node .
# add -e MONGODB_USERNAME=noah
# it will overwrite MONGODB_USERNAME from Dockerfile
backend % docker run --name goals-backend -v "/Users/noah/Documents/Study/Study_devops/udemy/docker-kubernetes/docker-kubernetes-git/05_docker_multi/backend:/app" -v logs:/app/logs -v /app/node_modules -e MONGODB_USERNAME=noah --rm -d -p 80:80 --network goals-net goals-node
frontend % docker run --name goals-frontend \
-v /Users/noah/Documents/Study/Study_devops/udemy/docker-kubernetes/docker-kubernetes-git/05_docker_multi/frontend/src:/app/src \
--rm \
-d \
-p 3000:3000 \
-it \
goals-react
Compose file version 3 reference
docker image prune -a
docker-compose up
# detached mode
docker-compose up -d
# removing containers and networks
docker-compose down
# including volumes
docker-compose down -v
docker-compose up -d
docker-compose down
The services were created under the names "05_docker_multi_backend_1" and "05_docker_multi_mongodb_1".
The backend server connects to 'mongodb://mongodb:27017/course-goals', i.e. to mongodb, not to 05_docker_multi_mongodb_1.
Since we named the service mongodb in docker-compose.yaml, it works just fine.
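A minimal docker-compose.yaml sketch along these lines (the service names match the notes above; image names, ports, and volume details are assumptions, not the course's exact file):
cat > docker-compose.yaml <<'EOF'
version: "3.8"
services:
  mongodb:
    image: mongo
    volumes:
      - data:/data/db
  backend:
    build: ./backend
    ports:
      - "80:80"
    # the backend can reach Mongo at mongodb://mongodb:27017 because the service name is a hostname
    depends_on:
      - mongodb
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
    # equivalent of -it for the create-react-app dev server
    stdin_open: true
    tty: true
    depends_on:
      - backend
volumes:
  data:
EOF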
✅ MongoDB + Node Backend Server + React (create-react-app) Server, succeeded
docker-compose up -d
# Creating network "05_docker_multi_default" with the default driver
# Creating 05_docker_multi_mongodb_1 ... done
# Creating 05_docker_multi_backend_1 ... done
# Creating 05_docker_multi_frontend_1 ... done
docker-compose down
# it only builds but doesn't start containers
docker-compose build
# a long process... just to run "npm init"
docker run -it -d node
docker exec friendly_mendel node -v
docker exec -it friendly_mendel npm init
# it will create package.json, but inside the container
docker stop friendly_mendel
docker container rm friendly_mendel
# Make the process short
docker run -it node npm init
docker build -t node-util .
docker run -it -v /Users/noah/Documents/Study/Study_devops/udemy/docker-kubernetes/docker-kubernetes-git/06_docker_utility-container:/app node-util npm init
# package.json is created on the local host machine
docker build -t mynpm .
docker run -it -v /Users/noah/Documents/Study/Study_devops/udemy/docker-kubernetes/docker-kubernetes-git/06_docker_utility-container:/app mynpm init
docker run -it -v /Users/noah/Documents/Study/Study_devops/udemy/docker-kubernetes/docker-kubernetes-git/06_docker_utility-container:/app mynpm install express --save
docker-compose run --rm npm-container init
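The mynpm image (and the npm-container service) above presumably wraps npm with an ENTRYPOINT so only the sub-command needs to be passed; a minimal sketch under that assumption (base image and names are illustrative):
cat > Dockerfile <<'EOF'
FROM node:14
WORKDIR /app
# everything passed after the image name is appended to "npm"
ENTRYPOINT [ "npm" ]
EOF
docker build -t mynpm .
docker run -it -v "$(pwd)":/app mynpm init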
homestead is Laravel's default database name
Laravel installation via composer
docker-compose run --rm composer create-project laravel/laravel .
docker-compose up --help
# Usage: up [options] [--scale SERVICE=NUM...] [--] [SERVICE...]
docker-compose up -d server php mysql
# the nginx server exited
docker logs 07_docker_laravel-php_server_1
# nginx: [emerg] "server" directive is not allowed here in /etc/nginx/nginx.conf:1
# fix docker-compose.yaml
docker-compose down
docker-compose up -d server php mysql
# add dependencies on docker-compose.yaml
docker-compose down
docker-compose up -d server
# this is working correctly
# but it will not rebuild images if the images exist
# add --build
# It will be quick as it uses the cached layers
docker-compose down
docker-compose up -d --build server
Add an h1 tag in "src/resources/views/welcome.blade.php" to test.
http://localhost:8000 -> h1 tag appears
# database migration: create tables
docker-compose run --rm artisan migrate
docker-compose down
docker-compose up -d --build server
# http://localhost:8000 -> Permission denied
# add the permission in php.dockerfile
docker-compose down
docker-compose up -d --build server
addgroup laravel and adduser laravel
✅ Nginx + PHP + MySQL, All Servers Succeeded
✅ Composer + Artisan (+ NPM), All Utility Containers Succeeded
# 1. Create a laravel project to /src
docker-compose run --rm composer create-project laravel/laravel .
# 2. Change database variables on /src/.env file
DB_CONNECTION=mysql
DB_HOST=mysql
DB_PORT=3306
DB_DATABASE=homestead
DB_USERNAME=homestead
DB_PASSWORD=secret
# 3. Run servers
docker-compose up -d --build server
# 4. Migrate the database (Why is this needed?)
docker-compose run --rm artisan migrate
# ERROR: Service 'artisan' failed to build : The command '/bin/sh -c docker-php-ext-install pdo pdo_mysql' returned a non-zero code: 11
# Failed at the first attempt
# probably a permission issue?
docker-compose run --rm artisan migrate
# Migration table created successfully.
# 5. Clean up
docker-compose down
docker volume rm [volumes]
(docker network rm [networks])
docker image rm [images]
Click to Contract/Expand
Deploy to AWS EC2
- Create and launch EC2 instance, VPC, and security group
- Configure security group to expose all required ports to WWW
- Connect to instance (SSH), install Docker, and run the container
docker build -t node-dep-example .
docker run -d --rm --name node-dep -p 80:80 node-dep-example
- Go to AWS EC2
- Launch Instance
- Select Amazon Linux 2 AMI
- Choose all default options.
- Create new key pairs file -> save it as "example-1.cer" on my local machine
- Launch
On Instance
- Click Connect
- Choose SSH Client and follow the steps
- chmod 400 example-1.cer
- sudo ssh -i "example-1.cer" ec2-user@ec2-[X-XX-XXX-XX].ap-southeast-2.compute.amazonaws.com (IP address is different when restarted)
sudo yum update -y
sudo amazon-linux-extras install docker
sudo service docker start
docker build -t node-dep-example-1-aws .
docker tag node-dep-example-1-aws pcsmomo/node-example-1-aws
docker login
docker push pcsmomo/node-example-1-aws
sudo docker run -d --rm -p 80:80 pcsmomo/node-example-1-aws
http://3.26.113.49/ -> This site can't be reached \
Allow HTTP from Security Group on AWS
- EC2 -> My Instance running -> Security -> Select the Security groups
- Add inbound rules, HTTP from anywhere
http://3.26.113.49/ -> Works
# Change source codes
docker build -t node-dep-example-1-aws .
docker tag node-dep-example-1-aws pcsmomo/node-example-1-aws
docker push pcsmomo/node-example-1-aws
sudo docker pull pcsmomo/node-example-1-aws
sudo docker run -d --rm -p 80:80 pcsmomo/node-example-1-aws
- Connect to AWS ECS (Elastic Container Service) and click Get Started
- Container definition -> Custom-app -> Configure
- (This configuration is docker run [options])
- Container name: node-demo (--name)
- image: pcsmomo/node-example-1-aws
- Port mappings: 80 (-p 80:80)
- Environment - Entry Point, Command, Working directory, and Environment variables
- Storage and Logging
- Storage is equivalent to (-v)
- Check on Log configuration to see logs
- Task definition
- Compatibilities FARGATE (Serverless, it runs only when it is executed, cost-effective)
- Service: we could set up Load Balancer, but not now
- Cluster: multiple containers would run in this same Cluster
- Create!
- View Service -> tasks -> click running task -> find the Public IP and go!
# Change source codes
docker build -t node-dep-example-1-aws .
docker tag node-dep-example-1-aws pcsmomo/node-example-1-aws
docker push pcsmomo/node-example-1-aws
- ECS -> Cluster -> default -> Tasks -> click the running task definition (not the task)
- Create new revision -> Create -> Action -> Update Service -> Skip to review -> Update Service
- Service -> Tasks -> New task with status Provisioning, Pending, Running
The first task will be removed automatically
- Click the new task -> Find the Public IP and go! (different IP though)
Adding a Load Balancer to a Fargate task
The backend and MongoDB Containers are not in the same docker network
But when they are in the same cluster on ECS, they can use localhost.
@mongodb:27017/ -> @${process.env.MONGODB_URL}:27017/
Set up MONGODB_URL=mongodb on local as compose service name is mongodb
And separately set up MONGODB_URL variable on AWS ECS.
docker build -t goals-node ./backend
docker tag goals-node pcsmomo/goals-node
docker push pcsmomo/goals-node
- Create Cluster
- AWS ECS -> Cluster -> Create Cluster
- Networking Only -> Next
- Cluster Name: goals-app
- Create VPC: check (Take a memo of name of VPC)
- Create, it takes a couple of minutes
- View Cluster
- Create Tasks first (Services are based on tasks)
- AWS ECS -> Task Definitions -> Create new Task Definition
- FARGATE -> Next Step
- Task Definition Name: goals
- Task Role : ecsTaskExecutionRole
- Task Memory : 0.5GB (The smallest one)
- Task CPU : 0.25 vCPU (The smallest one)
- Add container
- container name: goals-backend
- image: pcsmomo/goals-node
- Port mappings: 80
- Environment
- (Because the Dockerfile uses "npm start", which runs nodemon for development mode.)
- command: node, app.js
- Environment variables
- MONGODB_USERNAME=max
- MONGODB_PASSWORD=secret
- MONGODB_URL=localhost
- Add
- Add container
- container name: mongodb
- image: mongo
- Port mappings: 27017
- Environment
- Environment variables
- MONGO_INITDB_ROOT_USERNAME=max
- MONGO_INITDB_ROOT_PASSWORD=secret
- Environment variables
- Create
- Create Service
- AWS ECS -> Cluster -> Services -> Create : Configure service
- Launch type: FARGATE
- Task Definition: goals
- Service name: goals-service
- Number of tasks: 1
- Next Step
- Configure network
- Cluster VPC: choose the one when the cluster created (vpc-0803a9dc38bf99d7e (10.0.0.0/16))
- Subnets: Choose both subnets available (ap-southeast-2a, ap-southeast-2b)
- Auto-assign public IP: ENABLED
- Load balancer type: Application Load Balancer (No load balancer is found)
- Click EC2 Console to create a load balancer
- Application Load Balancer, Configure
- Name: ecs-lb
- VPC: choose the same VPC (vpc-0803a9dc38bf99d7e (10.0.0.0/16))
- Availability Zones: check both (ap-southeast-2a, ap-southeast-2b)
- Next: Configure Security Settings
- Configure Security Settings : Basic (As we are not using HTTPS now)
- (Changed)Configure Security Groups : check both default and goals--xxxx (This opens port 80 to incoming traffic)
- Configure Routing
- Name: tg
- Target type: IP
- (Changed) Health checks
- Protocol: HTTP
- Path: /goals
- Register Targets: As is, ECS is automatically registering targets here.
- Next: Review -> Create
- Application Load Balancer, Configure
- Refresh Load balancer name and choose ecs-lb
- Container name : port : goals-backend:80:80 -> Add to load balancer
- target group name: tg
- Next step
- Set Auto Scaling (optional) : Do not adjust the service’s desired count
- Review -> Create Service
- AWS ECS -> Cluster -> Services -> Create : Configure service
Clusters -> goals-app -> Tasks -> Click the running task -> Two Containers are pending -> Runnings -> Connect to the Public IP 13.211.219.9
http://13.211.219.9 -> This site can’t be reached 13.211.219.9 refused to connect.
The lecture said the load balancer is not configured correctly. See the next lecture.
AWS EC2 -> Load Balancers -> ecs-lb -> DNS name (This is the endpoint)
But still can't reach it. Something was wrong with the target group.
Clusters -> goals-app -> Tasks -> Stopped
You can see some stopped tasks. That means something went wrong and the load balancer is recreating the tasks. (In other words, the load balancer itself works fine.)
- AWS EC2 -> Target Groups -> tg (the one we created) -> Health Checks -> Edit -> change Path from "/" to "/goals"
- AWS EC2 -> Load Balancers -> ecs-lb -> Security groups -> Add goals-xxxxx one beside the default one
It doesn't work for me. So, I created a new revision of Task Definition: goals and updated the service with that one.
✅ It works!!!!!!!, succeeded
Run Postman and send data
// http://ecs-lb-2034865568.ap-southeast-2.elb.amazonaws.com/goals
// Method : Post
// Body -> Raw, JSON
{
"text": "A first test!"
}
// http://ecs-lb-2034865568.ap-southeast-2.elb.amazonaws.com/goals
// Method : Get
{
"goals": [
{
"id": "60e15115465c540021231195",
"text": "A first test!"
}
]
}
// http://ecs-lb-2034865568.ap-southeast-2.elb.amazonaws.com/goals/60e15115465c540021231195
// Method : Delete
{
"message": "Deleted goal!"
}
# Change app.js and re-launch the app
docker build -t goals-node ./backend
docker tag goals-node pcsmomo/goals-node
docker push pcsmomo/goals-node
AWS ECS -> Clusters -> goals-app -> Services -> goals-service -> Update -> Force new deployment: Check -> Skip to Review -> Update Service
No need to create a new revision
⚠ The service created a new task and the stored data has been lost.
- AWS ECS -> Task Definitions -> goals:latest -> Create new revision
- Add volume
- Name: data
- Volume type: EFS
- File system ID
- Click Amazon EFS console to create a new file system
- Create a file system
- Name: db-storage
- Virtual Private Cloud(VPC): choose the same VPC (vpc-0803a9dc38bf99d7e)
- Customize
- Next: Network access -> we would have two subnets
- New tab: AWS EC2 -> Security Groups -> Create security group
- Security group name: efs-sc
- Description: multiple container example sc to be added to the new EFS, db-storage
- VPC: the same VPC (vpc-0803a9dc38bf99d7e)
- Add Inbound rule
- Type: NFS
- Source: Security Groups - goals--xxxx | sg-xxxxxxx (managing my containers)
- Create security group
- Previous and Next to refresh
- Choose the new security group, efs-sc, instead of the default one for both subnets
- Next: File system policy
- Next: Review and create
- Create
- Create a file system
- refresh File system and select db-storage
- Click Amazon EFS console to create a new file system
- Access point: None (You can read the document if you don't want to create a new EFS and use several access points on this volume)
- Add
- (This is a bit like defining the "data" volume with docker-compose)
- Connecting to the container
- click mongodb container
- Mount points
- Source Volume: data (the EFS volume name)
- Container path: /data/db
- (just the same as docker-compose.yaml, mongodb service)
- Update
- Mount points
- click mongodb container
- Create
- Action -> Update Service
- Platform version: Latest (When using EFS, "Latest" sometimes fails to run container then choose "1.4.0")
- Force new deployment: Check
- Skip to review
- Update Service
- Tasks -> the new task will be PROVISIONING, PENDING, and RUNNING
Run Postman and save data
// http://ecs-lb-2034865568.ap-southeast-2.elb.amazonaws.com/goals
// Method : Post
// Body -> Raw, JSON
{
"text": "A third test!"
}
Restart the service, then a new task will be created
AWS ECS -> Clusters -> goals-app -> Services -> goals-service -> Update -> Force new deployment: Check -> Skip to Review -> Update Service
⚠ Warning 1: If I update the service several times before the previous deployment finishes, the updates will be queued and processed in order
⚠ Warning 2: In this scenario, the old task is stopped only once the new task passes its health check.
While both tasks are running at the same time, if users write data on both tasks, it will all be written to the same EFS.
We can stop the old task manually to prevent this problem.
However, we will replace the mongodb container with a different solution soon.
I guess it's MongoDB Atlas.
The AWS setup part is always challenging; an 18-minute lecture such as "145. Using EFS Volumes with ECS" took me almost 2 hours to complete.
We can use the mongodb container for development and MongoDB Atlas for production.
However, the DB versions should be the same; otherwise we might rely on features that are new or deprecated between the versions.
- Atlas -> Current Project -> Network Access -> ADD IP ADDRESS -> ALLOW ACCESS FROM ANYWHERE
- Atlas -> Current Project -> Database Access -> ADD NEW DATABASE USER
- username: max
- password: 8D8mEKSXoFlGaVkj (Autogenerate Secure Password)
- Grant specific privileges or Read and write to any database
- readWrite @ goals-dev
- readWrite @ goals (production)
Update backend.env and Test
docker-compose up
# DB Connected
Test with Postman: http://localhost/goals -> works fine
# Change app.js and backend.env and re-launch the app
docker build -t goals-node ./backend
docker tag goals-node pcsmomo/goals-node
docker push pcsmomo/goals-node
- AWS ECS -> Task Definitions -> goals:latest -> Create new revision
- Delete db container and related volumes
- Container Definitions -> mongodb -> delete
- AWS Elastic File System (EFS) -> db-storage (fs-011d2539) -> Delete
- AWS EC2 -> Security Groups -> efs-sc -> Delete
- Make sure to delete "data" volume on this task definition
- Change Backend Configurations
- Container Definitions -> goals-backend
- MONGODB_URL: noahcluster.pvxa3.mongodb.net
- MONGODB_PASSWORD: 8D8mEKSXoFlGaVkj
- MONGODB_NAME: goals
- Container Definitions -> goals-backend
- Create
- Action -> Update Service
- Platform version: Latest (It's not using EFS anymore so no need to select 1.4.0)
- Force new deployment: Check
- Skip to review
- Update Service
If the docker image is pushed to Docker Hub again, only Update Service is needed
AWS ECS -> Clusters -> goals-app -> Services -> goals-service -> Update -> Force new deployment: Check -> Skip to Review -> Update Service
I forgot to delete the volume part of the task definition after deleting EFS.
Because of this, new tasks failed again and again...😢
Frontend projects need an extra "build" step because of JSX, which browsers cannot understand.
Docker - Use multi-stage builds
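A minimal sketch of such a multi-stage Dockerfile.prod (the stage name "build" matches the --target used later; base images and paths are assumptions, not the course's exact file):
cat > frontend/Dockerfile.prod <<'EOF'
# stage 1: build the React bundle
FROM node:14-alpine AS build
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
RUN npm run build
# stage 2: serve the static build output with nginx
FROM nginx:stable-alpine
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
EOF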
docker build -f frontend/Dockerfile.prod -t goals-react ./frontend
docker tag goals-react pcsmomo/goals-react
docker push pcsmomo/goals-react
- AWS ECS -> Task Definitions -> goals:latest -> Create new revision
- Add Container
- container name: goals-frontend
- image: pcsmomo/goals-react
- Port mappings: 80
- Startup Dependency Ordering
- Container name: goals-backend
- Condition: SUCCESS
- Add
- ⚠ The Create button is disabled
- Because the backend and frontend containers are using the same port, 80
- Container port and protocol combinations must be unique within a Task definition
- Cancel
Create a new task definition for goals-react
- AWS ECS -> Task Definitions -> Create new Task Definition
- FARGATE -> Next Step
- Task Definition Name: goals-react
- Task Role : ecsTaskExecutionRole (the same as the backend)
- Task Memory : 0.5GB (minimum amount)
- Task CPU : 0.25 vCPU (minimum amount)
- Add container
- container name: goals-frontend
- image: pcsmomo/goals-react
- Port mappings: 80
- Add
- Create
- FARGATE -> Next Step
- Create a new load balancer
- Click EC2 Console to create a load balancer
- Application Load Balancer, Configure
- Name: goals-react-lb
- Scheme: internet-facing
- VPC: choose the same VPC (vpc-0803a9dc38bf99d7e (10.0.0.0/16))
- Availability Zones: check both (ap-southeast-2a, ap-southeast-2b)
- Next: Configure Security Settings
- Configure Security Settings : Basic (As we are not using HTTPS now)
- Configure Security Groups : check both default and goals--xxxx (This opens port 80 to incoming traffic)
- Configure Routing
- Target group: New target group
- Name: react-tg
- Target type: IP
- Health checks
- Protocol: HTTP
- Path: /
- Register Targets: As is, ECS is automatically registering targets here.
- Next: Review
- Create
- DNS name: goals-react-lb-1862629005.ap-southeast-2.elb.amazonaws.com
- Application Load Balancer, Configure
- Click EC2 Console to create a load balancer
⚠ So now, the URL in App.js needs to be changed, as we have two separate services for backend and frontend
docker build -f frontend/Dockerfile.prod -t goals-react ./frontend
docker tag goals-react pcsmomo/goals-react
docker push pcsmomo/goals-react
- Create Service
- AWS ECS -> Cluster -> Services -> Create : Configure service
- Launch type: FARGATE
- Task Definition: goals-react
- Cluster: goals-app
- Service name: goals-react-service
- Number of tasks: 1
- Deployment type: Rolling update
- Next Step
- Configure network
- Cluster VPC: choose the one when the cluster created (vpc-0803a9dc38bf99d7e (10.0.0.0/16))
- Subnets: Choose both subnets available (ap-southeast-2a, ap-southeast-2b)
- Security groups: Select existing security group (goals--3617, exposing port 80)
- Auto-assign public IP: ENABLED
- Load balancer type: Application Load Balancer (No load balancer is found)
- Load balancer name: goals-react-lb
- Container name : port : goals-frontend:80:80 -> Add to load balancer
- target group name: react-tg
- Next step
- Set Auto Scaling (optional) : Do not adjust the service’s desired count
- Review
- Create Service
- AWS ECS -> Cluster -> Services -> Create : Configure service
- Tasks -> the new task will be PROVISIONING, PENDING, and RUNNING
✅ Node Server + React Server on AWS, succeeded
Front: http://goals-react-lb-1862629005.ap-southeast-2.elb.amazonaws.com
Backend: http://ecs-lb-2034865568.ap-southeast-2.elb.amazonaws.com/goals
# --target build
docker build --target build -f frontend/Dockerfile.prod -t goals-react ./frontend
It will only run the node build stage and stop before FROM nginx.
This option is helpful when we have complex Dockerfiles with multiple stages.
Click to Contract/Expand
Kubernetes is like Docker-Compose for multiple machines
- What do I need to do
- Create the Cluster and the Node Instance (Worker + Master Nodes)
- Set up the API Server, kubelet and other Kubernetes services / software on Nodes
- Create other (cloud) provider resources that might be needed (e.g. Load Balancer, File systems)
- What Kubernetes does
- Create objects (e.g. Pods) and manage them
- Monitor Pods and re-create them, scale Pods etc.
- Kubernetes utilizes the provided (cloud) resources to apply your configuration/goals
Two tools (and a few more) plus a hypervisor/driver: Docker
- kubectl: The Kubernetes command-line tool for running commands against Kubernetes clusters
- Install kubectl binary with curl on macOS (https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/) : binary
- ln -s ./kubectl /usr/local/bin/kubectl
# 1. Download the latest release:
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl"
# 2. Validate the binary (optional)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl.sha256"
echo "$(<kubectl.sha256) kubectl" | shasum -a 256 --check
# > kubectl: OK
# 3. Make the kubectl binary executable.
chmod +x ./kubectl
# 4. Move the kubectl binary to a file location on your system PATH.
sudo mv ./kubectl /usr/local/bin/kubectl
sudo chown root: /usr/local/bin/kubectl
# 5. Test to ensure the version you installed is up-to-date:
kubectl version --client
# Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"092fbfbf53427de67cac1e9fa54aaa09a28371d7", GitTreeState:"clean", BuildDate:"2021-06-16T12:59:11Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"darwin/amd64"}
- minikube: Local Kubernetes. A dummy cluster for developers
- Installation : binary
- Start with docker driver
# Installation
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64
sudo install minikube-darwin-amd64 /usr/local/bin/minikube
# 2. Start my cluster
minikube start --driver=docker
# it creates a docker image and a running container
# 3. Check minikube status
minikube status
# 4. See minikube web dashboard
minikube dashboard
docker build -t kub-first-app .
# Check minikube running
minikube status
# if it's not running
# minikube start --driver=docker
kubectl help
kubectl create # to see create help
# kubectl is automatically connecting to minikube
# kubectl create deployment first-app --image=kub-first-app
# kubectl get deployments
# kubectl get pods
# kubectl delete deployment first-app
## We can see the deployment and pod but they are not ready 0/1
## Because kub-first-app is only on my local machine.
## So kubectl cannot find the image from the minikube cluster
docker tag kub-first-app pcsmomo/kub-first-app
docker push pcsmomo/kub-first-app
kubectl create deployment first-app --image=pcsmomo/kub-first-app
kubectl get deployments
# NAME READY UP-TO-DATE AVAILABLE AGE
# first-app 1/1 1 1 36s
kubectl get pods
# NAME READY STATUS RESTARTS AGE
# first-app-67468bb98f-l5v9d 1/1 Running 0 38s
minikube dashboard
# can see all details
- ClusterIP : default, reachable only from inside the Cluster
- NodePort : exposed on the IP and port of the worker node
- LoadBalancer : most commonly used, accessible from outside
kubectl expose deployment first-app --type=LoadBalancer --port=8080
kubectl get services
# EXTERNAL-IP keeps <pending>
# this command is for a local specific purpose
minikube service first-app
# http://127.0.0.1:56557/
http://127.0.0.1:56557/error -> Exit the node server and throw an error
-> The server goes down, but Kubernetes restarts the container and the service recovers.
kubectl get pods
# NAME READY STATUS RESTARTS AGE
# first-app-67468bb98f-l5v9d 0/1 Error 1 5h22m
# After a while
# NAME READY STATUS RESTARTS AGE
# first-app-67468bb98f-l5v9d 1/1 Running 2 5h22m
Replica : An instance of a Pod
# Scale up
kubectl scale deployment/first-app --replicas=3
kubectl get pods
# NAME READY STATUS RESTARTS AGE
# first-app-67468bb98f-l5v9d 1/1 Running 2 5h32m
# first-app-67468bb98f-bkzhv 0/1 ContainerCreating 0 2s
# first-app-67468bb98f-s9qgt 0/1 ContainerCreating 0 2s
http://127.0.0.1:56557/error -> One pod is down, but we can still connect to the same URL; the request is served by another running pod.
# CrashLoopBackOff
# NAME READY STATUS RESTARTS AGE
# first-app-67468bb98f-l5v9d 1/1 CrashLoopBackOff 3 5h32m
# Scale down
kubectl scale deployment/first-app --replicas=1
kubectl get pods
# After editing app.js
docker build -t pcsmomo/kub-first-app .
docker push pcsmomo/kub-first-app
kubectl get deployments
# Clarify the container and the new image path
# Check the container name "kub-first-app" inside the pod
kubectl set image deployment/first-app kub-first-app=pcsmomo/kub-first-app
# The new image won't be rolled out because the tag name hasn't changed
docker build -t pcsmomo/kub-first-app:2 .
docker push pcsmomo/kub-first-app:2
kubectl set image deployment/first-app kub-first-app=pcsmomo/kub-first-app:2
# deployment.apps/first-app image updated
kubectl rollout status deployment/first-app
# deployment "first-app" successfully rolled out# make an error
kubectl set image deployment/first-app kub-first-app=pcsmomo/kub-first-app:3
kubectl rollout status deployment/first-app
# Waiting for deployment "first-app" rollout to finish: 1 old replicas are pending termination...
# The new pod failed to run, so the old pod is still running.
kubectl get pods
# NAME READY STATUS RESTARTS AGE
# first-app-567948dbdb-vgbq8 0/1 ErrImagePull 0 16s
# first-app-567948dbdb-vgbq8 0/1 ImagePullBackOff 0 73s
# Roll back to the healthily working pod
kubectl rollout undo deployment/first-app
#deployment.apps/first-app rolled back
kubectl get pods
# The errored pod has been removed
# NAME READY STATUS RESTARTS AGE
# first-app-fdff796fc-gqf75 1/1 Running 0 17m
kubectl rollout history deployment/first-app
kubectl rollout history deployment/first-app --revision=3
# Roll back to specific revision
kubectl rollout undo deployment/first-app --to-revision=1
kubectl get pods
# It runs the old pod, but terminates the current one.
# NAME READY STATUS RESTARTS AGE
# first-app-67468bb98f-v2zlk 1/1 Running 0 7s
# first-app-fdff796fc-gqf75 1/1 Terminating 0 18m
kubectl delete service first-app
# minikube opened the service, but kubectl deletes it
kubectl delete deployment first-app
# deleting the deployment also deletes its pods
All of those commands can be overwhelming.
Let's make it like docker-compose
- Imperative Approach : docker run, kubectl create/expose
- Declarative Approach : docker-compose, kubectl apply
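A handy bridge between the two approaches (a sketch; the image name is taken from the earlier example): generate a manifest imperatively, then manage it declaratively.
# write a starting deployment.yaml instead of hand-crafting it; nothing is created on the cluster
kubectl create deployment first-app --image=pcsmomo/kub-first-app \
  --dry-run=client -o yaml > deployment.yaml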
kubectl apply -f=deployment.yaml
kubectl get pods
kubectl apply -f service.yaml
kubectl get services
minikube service backend
Update the yaml files and just apply again
# All of these work, both imperatively and declaratively
kubectl delete deployment name
kubectl delete service name
kubectl delete -f=deployment.yaml,service.yaml
kubectl delete -f deployment.yaml -f service.yaml
kubectl delete -f=deployment.yaml -f=service.yaml
kubectl apply -f master-deployment.yaml
kubectl delete -f=master-deployment.yaml
# add label on deployment.yaml and service.yaml
kubectl apply -f=deployment.yaml -f=service.yaml
kubectl delete deployment,service -l group=example
# after adding livenessProbe
kubectl apply -f=deployment.yaml -f=service.yaml
minikube service backend
Pull the updated image
- Change tag
- image: pcsmomo/kub-first-app:3
- Use latest
- image: pcsmomo/kub-first-app:latest
- Add Pull Policy
- image: pcsmomo/kub-first-app:2
- imagePullPolicy: Always
docker build -t pcsmomo/kub-first-app:2 .
docker push pcsmomo/kub-first-app:2
kubectl apply -f=deployment.yaml -f=service.yaml
kubectl delete -f=deployment.yaml -f=service.yaml
docker-compose up -d --build
Run Postman and send data
// http://localhost/story
// Method : Post
// Body -> Raw, JSON
{
"text": "A first test!"
}
// http://localhost/story
// Method : Get
{
"story": "The first test~~!\n"
}
docker-compose down
docker-compose up -d --build
# the data is still stored
A Kubernetes Volume's lifetime depends on the Pod's lifetime.
However, Kubernetes Volumes are more powerful than Docker Volumes.
docker build -t pcsmomo/kub-data-demo .
docker push pcsmomo/kub-data-demo
kubectl apply -f=service.yaml -f=deployment.yaml
minikube service story-service
We will touch on three of the many volume types:
- emptyDir
- hostPath
- csi
# change app.js
docker build -t pcsmomo/kub-data-demo:1 .
docker push pcsmomo/kub-data-demo:1
kubectl apply -f=deployment.yaml
# After saving data, if the app crashes via /error, all data will be gone.
# as volume lifetime depends on the pod's lifetime
# Add volume on deployment.yaml
kubectl apply -f=deployment.yaml
# http://127.0.0.1:51643/story -> {"message": "Failed to open file."}
# Because of emptyDir: {}
# But now, after saving data, even if the app crashes via /error, the data will still be there.
The downside of emptyDir shows when we have more than one replica.
If there are multiple nodes, hostPath is still not good enough, but it is a better approach than emptyDir.
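For reference, a rough sketch of how those volume definitions could look in deployment.yaml (the image matches the notes; object names and mount paths are assumptions):
cat > deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: story-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: story
  template:
    metadata:
      labels:
        app: story
    spec:
      containers:
        - name: story
          image: pcsmomo/kub-data-demo:1
          volumeMounts:
            - mountPath: /app/story
              name: story-volume
      volumes:
        # emptyDir lives and dies with the pod and is per-pod (a problem with several replicas)
        - name: story-volume
          emptyDir: {}
        # hostPath alternative: shared by all pods on the same node
        # - name: story-volume
        #   hostPath:
        #     path: /data
        #     type: DirectoryOrCreate
EOF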
# change replicas:2
kubectl apply -f=deployment.yaml
# change to hostPath
kubectl apply -f=deployment.yamlContainer Storage Interface (CSI) volume is kind of special and flexible.
As long as venders(AWS, Azure, Etc.) support this type, we can use csi type
Persistent Volume are detached from nodes and pods
So emptyDir and hostPath types are not available.
- E.g. Gi, Mi
# Storage Class
kubectl get sc
#NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
#standard (default) k8s.io/minikube-hostpath Delete Immediate false 33h
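A minimal sketch of what host-pv.yaml and host-pvc.yaml might contain, consistent with the storage class above and the kubectl get pv/pvc output below (everything else is an assumption):
cat > host-pv.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: host-pv
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data
    type: DirectoryOrCreate
EOF
cat > host-pvc.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: host-pvc
spec:
  volumeName: host-pv
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 1Gi
EOF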
kubectl apply -f=host-pv.yaml
kubectl apply -f=host-pvc.yaml
kubectl apply -f=deployment.yaml
kubectl get pv
#NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
#host-pv 1Gi RWO Retain Bound default/host-pvc standard 22s
kubectl get pvc
# NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
# host-pvc Bound host-pv 1Gi RWO standard 47s
# change app.js
docker build -t pcsmomo/kub-data-demo:2 .
docker push pcsmomo/kub-data-demo:2
kubectl apply -f=deployment.yaml
kubectl apply -f=environment.yaml
kubectl get configmap
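A sketch of what environment.yaml could contain, plus how a container can read it (the ConfigMap name, key, and env variable are assumptions, not the course's exact values):
cat > environment.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: data-store-env
data:
  folder: story
EOF
# in deployment.yaml the container would pull the value in via configMapKeyRef, e.g.:
#   env:
#     - name: STORY_FOLDER
#       valueFrom:
#         configMapKeyRef:
#           name: data-store-env
#           key: folder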
kubectl delete -f=deployment.yaml
kubectl apply -f=deployment.yaml
docker-compose up -d --build
Postman test
// http://localhost:8000/tasks
// Method : Post
// Header -> Key: Authorization, Value: Noah abc
// Body -> Raw, JSON
{
"text": "A first task",
"title": "Do this, too!"
}
# check that no deployments or services are running, except the default service
kubectl get deployments
kubectl get services
users-api % docker build -t pcsmomo/kub-demo-users .
users-api % docker push pcsmomo/kub-demo-users
kubernetes % kubectl apply -f=users-deployment.yaml
kubernetes % kubectl apply -f=users-service.yaml
minikube service users-service
# http://127.0.0.1:56269
auth-api % docker build -t pcsmomo/kub-demo-auth .
auth-api % docker push pcsmomo/kub-demo-auth
users-api % docker build -t pcsmomo/kub-demo-users .
users-api % docker push pcsmomo/kub-demo-users
No need to create a service for auth, as we don't want to expose it to the outside world
kubernetes % kubectl apply -f=users-deployment.yaml
kubectl describe pods
# resources:
# limits:
# memory: '128Mi'
# cpu: '500m'
# It causes an 'Insufficient cpu' warning and new pods get stuck in Pending
- Docker is using AUTH_ADDRESS: auth
- as it can reach the other container by name on the same network
- Kubernetes is using AUTH_ADDRESS: localhost
- as containers in the same pod can communicate via localhost
kubernetes % kubectl apply -f=auth-deployment.yaml,auth-service.yaml
kubectl get services
# change localhost to the ClusterIP from auth-service
kubernetes % kubectl apply -f=users-deployment.yaml
# change to use "AUTH_SERVICE_SERVICE_HOST", which Kubernetes generates for the auth service
users-api % docker build -t pcsmomo/kub-demo-users .
users-api % docker push pcsmomo/kub-demo-users
kubernetes % kubectl delete -f=users-deployment.yaml
kubernetes % kubectl apply -f=users-deployment.yaml
Kubernetes clusters come with a built-in service, CoreDNS.
So we can use [service name].default (the default namespace).
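For example, a container env variable could point at the auth Service through that cluster-internal DNS name (the service name auth-service is an assumption here):
#   env:
#     - name: AUTH_ADDRESS
#       value: "auth-service.default"
# quick check that the name resolves inside the cluster (uses a throwaway busybox pod)
kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- nslookup auth-service.default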
tasks-api % docker build -t pcsmomo/kub-demo-tasks .
tasks-api % docker push pcsmomo/kub-demo-tasks
kubernetes % kubectl apply -f=tasks-service.yaml -f=tasks-deployment.yaml
minikube service tasks-service
The tasks pod does not run; getting a weird error message.
I think when I docker push, some layers are "Mounted from pcsmomo/kub-demo-users".
Can't solve this problem now.
kubectl logs tasks-deployment-647c85d66c-vr7rb
# Error: Cannot find module '/app/users-app.js'
kubectl describe pod tasks-deployment-647c85d66c-9nl9x
# Normal Pulled 2s kubelet Successfully pulled image "pcxxxmo/kub-demo-tasks:latest" in 3.70906067s
# Warning BackOff 0s (x2 over 1s) kubelet Back-off restarting failed container
frontend % docker build -t pcsmomo/kub-demo-frontend .
frontend % docker push pcsmomo/kub-demo-frontend
docker run -p 80:80 --rm -d pcsmomo/kub-demo-frontend
# When fetch tasks, we get CORS error
# Add headers related to CORS on task-app.js
tasks-api % docker build -t pcsmomo/kub-demo-tasks .
tasks-api % docker push pcsmomo/kub-demo-tasks
kubernetes % kubectl delete -f=tasks-deployment.yaml
kubernetes % kubectl apply -f=tasks-deployment.yaml
# add Authorization headers on App.js
frontend % docker build -t pcsmomo/kub-demo-frontend .
frontend % docker push pcsmomo/kub-demo-frontend
docker stop frontendserver
docker run -p 80:80 --rm -d pcsmomo/kub-demo-frontend
# All features work
docker stop frontendserver
kubernetes % kubectl apply -f=frontend-service.yaml -f=frontend-deployment.yaml
minikube service frontend-service
- Using a Reverse Proxy for the Frontend
Reverse Proxy
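A minimal sketch of an nginx reverse-proxy config for this setup (the upstream service name, port, and paths are assumptions, not the course's exact config):
cat > frontend/conf/nginx.conf <<'EOF'
server {
  listen 80;
  # forward API calls to the tasks service inside the cluster
  location /api/ {
    proxy_pass http://tasks-service.default:8000/;
  }
  # serve the React build for everything else
  location / {
    root /usr/share/nginx/html;
    index index.html index.htm;
    try_files $uri $uri/ /index.html;
  }
}
EOF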
frontend % docker build -t pcsmomo/kub-demo-frontend .
frontend % docker push pcsmomo/kub-demo-frontend
kubernetes % kubectl delete -f=frontend-deployment.yaml
kubernetes % kubectl apply -f=frontend-deployment.yaml
Click to Contract/Expand
users-api % docker build -t pcsmomo/kub-dep-users .
users-api % docker push pcsmomo/kub-dep-users
auth-api % docker build -t pcsmomo/kub-dep-auth .
auth-api % docker push pcsmomo/kub-dep-auth
# For testing myself
kubernetes % kubectl apply -f=users.yaml,auth.yaml
minikube service users-service
POSTMAN Test
// http://127.0.0.1:56279/signup
// Method : Post
// Body -> Raw, JSON
{
"email": "test@test.com",
"password": "testpass"
}
// Result
{
"message": "User created.",
"user": {
"_id": "60e67c119bb1ae5f727c841b",
"email": "test@test.com",
"password": "$2a$12$hWFQeJyU9nGY8Vb1Wiz/O.gn7Rt0d90dPiK6sBeBA7dlys9aMhY9C",
"__v": 0
}
}
// http://127.0.0.1:56279/login
// Method : Post
// Body -> Raw, JSON
{
"email": "test@test.com",
"password": "testpass"
}
// Result
{
"token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpYXQiOjE2MjU3MTc4NTEsImV4cCI6MTYyNTcyMTQ1MX0.Qq3OnxXdHFlcI-Bhvxhqnc_Nt8l5hw0lOLciX60KxiU",
"userId": "60e67c119bb1ae5f727c841b"
}
- AWS EKS (Elastic Kubernetes Service)
- cluster name: kub-dep-demo
- Next step
- Configure cluster
- kubernetes version: 1.17
- Create Role
- IAM -> Roles -> Create role
- AWS service -> EKS -> EKS - Cluster -> Next: Permissions
- Permissions -> Next: Tags
- Tags : Next: Review
- Review : Role name: eksClusterRole -> Create Role
- Cluster Service Role: refresh and choose eksClusterRole
- Next
- Specify networking
- AWS CloudFormation -> Create stack
- Create stack
- Specify stack details
- Stack name: eksVpc
- Next
- Tags -> Next
- Review -> Create stack
- VPC : refresh and choose eksVpc
- Cluster endpoint access: Public and private
- Next
- AWS CloudFormation -> Create stack
- Configure logging : Next
- Review: Create
subl /Users/noah/.kube/config
cp config config.minikube # create a backup
- AWS My security credentials -> Create access key and download the csv file
aws configure
# AWS Access Key ID [None]: ABCDE
# AWS Secret Access Key [None]: ZXYW
# Default region name [None]: ap-southeast-2
# Default output format [None]:
aws eks --region ap-southeast-2 update-kubeconfig --name kub-dep-demo
# It adds my EKS cluster configurations to /Users/noah/.kube/config \
minikube delete
kubectl get pods
# kubectl is connected to my EKS Cluster now
- Amazon EKS Cluster - kub-dep-demo
- Compute -> Add Node Group
- Configure Node Group
- Name: demo-dep-nodes
- Create Role
- IAM -> Roles -> Create role
- AWS service -> EC2 -> Next: Permissions
- Permissions
- AmazonEKSWorkerNodePolicy
- AmazonEKS_CNI_Policy
- AmazonEC2ContainerRegistryReadOnly
- Next: Tags
- Tags : Next: Review
- Review : Role name: eksNodeGroup -> Create Role
- Cluster Service Role: refresh and choose eksNodeGroup
- Next
- Set compute and scaling configuration
- Instance types: t3.small (t3.micro can fail)
- Next
- Specify networking
- Allow remote access to nodes: disabled
- Next
- Review -> Create
- Configure Node Group
AWS EC2 -> Instances -> Two instances are running
It is all set up for the cluster.
Now, it works like minikube, but in AWS
kubernetes % kubectl apply -f=auth.yaml -f=users.yaml
kubectl get pods
kubectl get services
# minikube does not have an External IP, but now we have one
# aca2d4a6bd8c9448683bdfa982300344-329379899.ap-southeast-2.elb.amazonaws.com
AWS EC2 -> Load Balancers -> one load balancer has been created
POSTMAN Test
// aca2d4a6bd8c9448683bdfa982300344-329379899.ap-southeast-2.elb.amazonaws.com/signup
// Method : Post
// Body -> Raw, JSON
{
"email": "test2@test.com",
"password": "testpass"
}
// Result
{
"message": "User created.",
"user": {
"_id": "60e78cb8d906e70e9e45fd77",
"email": "test2@test.com",
"password": "$2a$12$Bs4h4K1LifbqGDLbXDGtYuYBZ0QCxszXxfPCsIJ1JwPEU0T5q/GoG",
"__v": 0
}
}
// aca2d4a6bd8c9448683bdfa982300344-329379899.ap-southeast-2.elb.amazonaws.com/login
// Method : Post
// Body -> Raw, JSON
{
"email": "test@test.com",
"password": "testpass"
}
// Result
{
"token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpYXQiOjE2MjU3ODc2NDgsImV4cCI6MTYyNTc5MTI0OH0.Ij9-kBdiDZeux09JhtjvCbX_mLi5puESjp-X9ZttoYA",
"userId": "60e67c119bb1ae5f727c841b"
}
# Change users.yaml
kubernetes % kubectl apply -f=users.yaml
kubectl get pods
✅ Kubernetes deploying to AWS EKS, succeeded
As we have two nodes, we cannot use emptyDir or hostPath.
# Install the AWS EFS CSI driver
kubectl apply -k "github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable/?ref=release-1.3"
We need this EFS driver because AWS EFS is not supported as a volume type otherwise.
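With the driver installed, the manifests can define an EFS-backed StorageClass and PersistentVolume; a rough sketch of that pattern (file/object names are placeholders, and fs-xxxxxxxx stands in for the EFS file system ID created in the steps below):
cat > efs-sc.yaml <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
EOF
cat > efs-pv.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-xxxxxxxx   # replace with the real EFS file system ID
EOF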
Create a security group
- AWS EC2 -> Security Groups -> Create security group
- Security group name: eks-efs-sg
- Description: for efs
- VPC: eksVpc-VPC
- Add Inbound rule
- Type: NFS
- Source
- New Tab -> VPC -> Your VPCs -> eksVpc-VPC -> IPv4 CIDR -> copy 192.168.0.0/16
- Custom: 192.168.0.0/16
- Create security group
- sg-04009b9e7c10462ba - eks-efs-sg
Create EFS
- Click Amazon EFS console to create a new file system
- Create a file system
- Name: eks-efs
- Virtual Private Cloud(VPC): eksVpc-VPC
- Customize
- File system setting -> Next
- Network access
- we would have two availability zones
- click x on default security groups
- choose eks-efs-sg on both zones
- Next
- File system policy -> Next
- Review and create -> Create
- eks-efs - fs-a49fa49c
- Create a file system
[AWS EFS CSI Driver Kubernetes Example](https://github.com/kubernetes-sigs/aws-efs-csi-driver/tree/master/examples/kubernetes/static_provisioning)
users-api % docker build -t pcsmomo/kub-dep-users .
users-api % docker push pcsmomo/kub-dep-users
kubernetes % kubectl delete deployment users-deployment
kubernetes % kubectl apply -f=users.yaml
POSTMAN Test
// aca2d4a6bd8c9448683bdfa982300344-329379899.ap-southeast-2.elb.amazonaws.com/signup
// Method : Post
// Body -> Raw, JSON
{
"email": "test3@test.com",
"password": "testpass"
}
// aca2d4a6bd8c9448683bdfa982300344-329379899.ap-southeast-2.elb.amazonaws.com/logs
// Method : Get
// Result
{
"logs": [
"2021-07-09T00:31:15.343Z - 60e798d338e9fcaf3283d5d6 - test3@test.com",
""
]
}
Check EFS
Amazon EFS -> File systems -> fs-a49fa49c -> Monitoring
One more test, set replicas: 0 and restore
# change replicas:0 on users.yaml
kubernetes % kubectl apply -f=users.yaml
aca2d4a6bd8c9448683bdfa982300344-329379899.ap-southeast-2.elb.amazonaws.com/logs
-> the log data is still stored
users-api % docker build -t pcsmomo/kub-dep-users .
users-api % docker push pcsmomo/kub-dep-users
auth-api % docker build -t pcsmomo/kub-dep-auth .
auth-api % docker push pcsmomo/kub-dep-auth
tasks-api % docker build -t pcsmomo/kub-dep-tasks .
tasks-api % docker push pcsmomo/kub-dep-tasks
kubernetes % kubectl delete -f=users.yaml -f=auth.yaml -f=tasks.yaml
kubernetes % kubectl apply -f=users.yaml -f=auth.yaml -f=tasks.yaml
kubectl get services
# users-service: a301c635f10d049ac917cbfc5c43a82f-2086223343.ap-southeast-2.elb.amazonaws.com
# tasks-service: abcc03a4e1af34b379149fce6d56679b-825297456.ap-southeast-2.elb.amazonaws.com
POSTMAN Test
// Tasks-service
// abcc03a4e1af34b379149fce6d56679b-825297456.ap-southeast-2.elb.amazonaws.com
// Method : Get
// Result
{"message":"Could not authenticate user."}
// Users-service
// a301c635f10d049ac917cbfc5c43a82f-2086223343.ap-southeast-2.elb.amazonaws.com/login
// Method : Post
// Body -> Raw, JSON
{
"email": "test3@test.com",
"password": "testpass"
}
// Result
{
"token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1aWQiOiI2MGU2N2MxMTliYjFhZTVmNzI3Yzg0MWIiLCJpYXQiOjE2MjU3OTMwNTQsImV4cCI6MTYyNTc5NjY1NH0.ePIHEqC_mci0rd4HsFO5MLYr4z0Qn-qNXPB5_Lijb9U",
"userId": "60e67c119bb1ae5f727c841b"
}
// Tasks-service
// abcc03a4e1af34b379149fce6d56679b-825297456.ap-southeast-2.elb.amazonaws.com
// Method : Get
// Header -> Key: Authorization, Value: Noah eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1aWQiOiI2MGU2N2MxMTliYjFhZTVmNzI3Yzg0MWIiLCJpYXQiOjE2MjU3OTMwNTQsImV4cCI6MTYyNTc5NjY1NH0.ePIHEqC_mci0rd4HsFO5MLYr4z0Qn-qNXPB5_Lijb9U
// Result
{"tasks":[]}
// abcc03a4e1af34b379149fce6d56679b-825297456.ap-southeast-2.elb.amazonaws.com
// Method : Post
// Header -> Key: Authorization, Value: Noah eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1aWQiOiI2MGU2N2MxMTliYjFhZTVmNzI3Yzg0MWIiLCJpYXQiOjE2MjU3OTMwNTQsImV4cCI6MTYyNTc5NjY1NH0.ePIHEqC_mci0rd4HsFO5MLYr4z0Qn-qNXPB5_Lijb9U
// Body -> Raw, JSON
{
"title": "Learn Docker",
"text": "Learn it in-depth!!"
}
// Result
{
"task": {
"_id": "60e7a2ddf9d5146059636d05",
"title": "Learn Docker",
"text": "Learn it in-depth!!",
"user": "60e67c119bb1ae5f727c841b",
"__v": 0
}
}
✅ Kubernetes deploying with two services to AWS EKS, succeeded
- git rebase -i HEAD~2
- git stash
- It might be a good idea to use the same tag name for every service in this app
