Docker is a platform that allows us to define, create, run, and coordinate containers (roughly, a virtual "operating system"). Docker is different from a virtual machine because it virtualizes the OS-level primitives instead of the machine's hardware. So, its abstraction is more lightweight than that of a standard virtual machine (i.e., operating-system virtualization instead of hardware virtualization).
Why do we use Docker?
- Deploy and distribute software in a fast and repeatable way.
  A use case is deploying a web service "in the cloud". Each remote machine can have a different operating system, different resources, and can be virtualized in different ways. If the remote machine has Docker installed, then we can deploy our software with Docker in a uniform (and reliable) way.
- Continuous Integration.
  A common use case is running a set of tests on a system (a single piece of software or a distributed system). We want to fix the environment (or environments) used to test the system (e.g., test it with different versions of Java) and run the tests in a clean environment (for automatic continuous integration on GitHub, see TravisCI).
Some use cases we are more interested in:
- Create replication packages for our experiments: when we publish a paper we can set up a Docker image instead of a virtual machine (advantages: more reusable, faster to create, easier to distribute).
- Try different development environments quickly, without polluting your system. For example, run a program requiring a completely different version of glibc, or run a program that works on Linux on Mac or Windows.
- Have and share the development and execution environments for our tools: for example, we could use Docker to define the machine used to develop ROS.
- Another test case (similar to continuous integration) is compiling, running, and evaluating students' programming exercises: every time a student uploads a new program we want to compile it and run some test cases. Clearly we want separation (e.g., not having one student's assignment pollute the results of another student's), so every time we want to run a new, separate system that already has all the necessary dependencies installed.
Docker also allows us to define the composition and orchestration of different containers, which is useful to specify the deployment of distributed systems (e.g., multiple services).
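Composition is typically specified with Docker Compose in a `docker-compose.yml` file. A minimal sketch, assuming a hypothetical system made of a web service and a database (the service names and images here are illustrative, not from this tutorial):

```yaml
# Hypothetical composition: a web service plus a database.
version: "3"
services:
  web:
    build: .            # build the image from the Dockerfile in this directory
    ports:
      - "8080:80"       # map host port 8080 to container port 80
    depends_on:
      - db              # start the database before the web service
  db:
    image: postgres:11  # an existing image pulled from Docker Hub
```

Running `docker-compose up` then starts both containers together.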
Limitations: a container shares the kernel of the host operating system. So, in principle, to run Windows containers you need Windows. Docker on Mac and Windows "cheats" to run Linux containers, since it uses a Linux kernel underneath (virtualized somehow). In practice some combinations do not work, like running a Windows container on macOS.
Test if your Docker installation works:
$ docker run hello-world
Run bash in an Ubuntu system:
$ docker run -it ubuntu bash
- An *image* defines the system we run. Above, `ubuntu` is the name of an existing image.
- When we execute `docker run` we create a *container*: a container instantiates an image. We can instantiate as many containers as we want.
- Where is the code defining the Ubuntu system, the `ubuntu` image? Some magic: Docker is already configured to look for the image in a remote registry called Docker Hub.
In practice:
- you download images locally (more on this with the `docker images` command),
- you can push your images to Docker Hub (so everyone can download them),
- you can create your own registry (e.g., a company registry) if you need to.
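For example, a typical registry workflow looks like this (here `yourname` is a placeholder for a Docker Hub account name):

```shell
# Download an image from Docker Hub to the local machine
$ docker pull ubuntu:18.04
# Tag a local image under your Docker Hub account
$ docker tag hellocosynus yourname/hellocosynus
# Authenticate and upload the image so everyone can download it
$ docker login
$ docker push yourname/hellocosynus
```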
We define an image printing hello world, and then we instantiate a container running that image.
A Dockerfile defines (declaratively) a docker image. Create a file named Dockerfile in a separate directory:
# Base the container on the ubuntu image
FROM ubuntu:18.04
# Execute the command when the container RUNS
CMD echo "Hello Cosynus!"
We create the image from the Dockerfile (`docker build` command):
$ docker build -t hellocosynus .
We can check if we have the hellocosynus image:
$ docker image ls
`docker image ls` lists all the images we have on our local computer.
We run a container instantiating the image:
$ docker run -it --name hellocontainer hellocosynus
We now modify the Dockerfile to install software when building the image:
FROM ubuntu:18.04
# Run a command when CREATING the image
RUN apt-get update -y
RUN apt-get install -y sudo
RUN apt-get install -y figlet toilet
RUN echo "Hello Cosynus!" > hellomsg.txt
# Execute the command when the container runs
CMD figlet -kp < hellomsg.txt
Let's rebuild the image and the container:
$ docker build -t hellocosynus .
$ docker run -it --name hellocontainer hellocosynus
See what containers are running (none now):
$ docker container ls
See the list of the stopped containers:
$ docker container ls -a
Restart a container:
$ docker container start -i hellocontainer
Getting rid of all the stopped containers and dangling images:
$ docker container prune
Alternatives: start the container with the `--rm` flag (i.e., `docker run -it --rm hellocosynus` removes the container when its execution terminates), or remove the container manually (e.g., `docker container rm container_id`).
We now define a new image running an ssh server, extending hellocosynus:
FROM hellocosynus
# Run additional commands (to change the hellocosynus image)
RUN apt-get update -y
RUN apt-get install -y openssh-server
################################################################################
# Set up a new user
################################################################################
RUN mkdir -p /var/run/sshd
RUN chmod 0755 /var/run/sshd
RUN useradd --groups sudo -m cosynus
RUN chown -R cosynus /home/cosynus
# set the right access for ssh (the .ssh directory must be private to the user)
RUN mkdir -p /home/cosynus/.ssh && chmod 700 /home/cosynus/.ssh && chown cosynus /home/cosynus/.ssh
# set up bash as default shell (minor)
# RUN sed -i "s:home/cosynus\:/bin/sh:home/cosynus\:/bin/bash:" /etc/passwd
# In the real world you should set up the access using a ssh key
# Having the password in clear in the container is a bad practice.
RUN echo "cosynus:password" | chpasswd
# Expose the port 22 of the container
EXPOSE 22
# copy entrypoint.sh (the script that runs the ssh service)
# entrypoint.sh becomes part of the image!
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
#
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
# CMD is executed after ENTRYPOINT
# here it is a busyloop to keep the container running
CMD tail -f /dev/null
- We build on the previous image hellocosynus: you can extend an existing image.
- In the Dockerfile there is a lot of "garbage" to create a user and set up the ssh server (don't worry too much about that).
- `EXPOSE 22` tells Docker to expose the network port to the outside world (otherwise, the container cannot be accessed).
- `COPY` copies a file (taken from the current directory where we execute `docker build`) into the image.
- `ENTRYPOINT` is the first command executed when running the container. We execute the script entrypoint.sh, which starts the ssh service. Only the last `ENTRYPOINT` instruction in the Dockerfile has an effect. Also read about `CMD` vs `ENTRYPOINT` to understand the difference.
- `CMD tail -f /dev/null` just runs an infinite loop that prevents the termination of the container's execution.
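The content of entrypoint.sh is not shown above; a minimal sketch of what such a script could look like (assuming the openssh-server package installed in the Dockerfile):

```shell
#!/bin/bash
# entrypoint.sh -- sketch: start the ssh service, then hand control to CMD
service ssh start
# Execute the CMD passed by Docker (here, tail -f /dev/null)
exec "$@"
```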
For more commands and documentation about writing a Dockerfile, see the official Dockerfile reference.
We then build the image ssh:
$ docker build -t ssh .
Run the container:
$ docker run -di -p 3200:22 --name sshcontainer ssh
Notice that this time:
- We run the container in the background (detached, `-d` option).
- We map port 22 of the container to port 3200 of the host: that is, we will be able to connect to the container's service on port 22 (`ssh`) by connecting to the host (our system) on port 3200.
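We can also ask Docker for the port mapping of a running container with `docker port`, which prints something like `22/tcp -> 0.0.0.0:3200`:

```shell
$ docker port sshcontainer
```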
$ docker container ls
We should have a container running.
Now we can connect to the ssh server (with password password):
$ ssh -p 3200 cosynus@localhost
Connect to the server and create a file:
$ ssh -p 3200 cosynus@localhost
$ touch cosynushasbeenhere
$ ls
$ pwd
Let's restart the container:
$ docker container stop sshcontainer
$ docker container start sshcontainer
And check the container's status again:
$ ssh -p 3200 cosynus@localhost
$ ls
$ pwd
The file cosynushasbeenhere is still there.
Careful: changes to the image (e.g., installing software) should be done in the Dockerfile and not on the container.
We can run another container on a different port:
$ docker run -di -p 3201:22 --name sshcontainer2 ssh
We can check if the file cosynushasbeenhere is there (of course it is not).
We now create an image that adds a directory to be used as a mount point:
FROM ssh
RUN mkdir /home/cosynus/persist
We create a new image:
$ docker build -t fs .
Run a container binding a host directory to the container's filesystem:
$ docker run -di -p 3200:22 --name fscontainer --mount type=bind,source=`cd ~/ && pwd`,target=/home/cosynus/persist fs
There are other options for the mount (e.g., a read-only filesystem).
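For instance, adding the `readonly` option to the same `--mount` flag makes the bound directory read-only inside the container (the container name `fsro` below is illustrative):

```shell
$ docker run -di -p 3201:22 --name fsro --mount type=bind,source=`cd ~/ && pwd`,target=/home/cosynus/persist,readonly fs
```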
