Using Docker, provide a simple orchestrated solution for containerised infrastructure that meets the following scenario and requirements:
Given a server running an application container (based on image tactivos/devops-challenge:0.0.1) linked to a backing service container generated by the following compose file:
version: "2"
services:
  app:
    image: tactivos/devops-challenge:0.0.1
    restart: unless-stopped
    depends_on:
      - db
    expose:
      - "3000"
    ports:
      - "9001:3000"
  db:
    image: mongo
Upgrade the solution so it is possible to deploy a new version (based on image tactivos/devops-challenge:0.0.2) of the application while generating no downtime for users. Uptime should be verifiable with a health check by repeatedly calling the /ping endpoint on that same host.
Your solution must include any and all scripts involved for making this work, as well as a thorough description of how your solution was designed/devised.
- Bonus points 1: explain how you would limit/control the amount of compute/memory resources accessible by any member of your solution.
- Bonus points 2: explain how you would make this solution Highly Available and able to scale horizontally while distributing traffic evenly, given that you are running in a cloud provider such as Azure or AWS and can rely on their infrastructure, but knowing any VM can be recycled at any time (though never more than one at a time).
- Bonus points 3: explain how service discovery would work in the solution you provide for "Bonus points 2", whether it is the same application or multiple microservices, and how you would monitor the health and logs of this solution.
The process is as important as the final result, that's why we ask you to:
- Keep (and share) a log of the most important decisions you made at the end of the exercise.
- Deliver your solution as a github repository with enough context and information so it can be analysed/tested by our team. Include everything that you consider relevant when you are about to send the results.
The challenge: my idea was to use Nginx as a load balancer on the server. When tactivos/devops-challenge:0.0.2 is deployed, it is only necessary to reload the Nginx configuration; this keeps the web app up.
Then I asked myself: does a solution for this already exist in the community? I read through the available options and found that my idea matches the best solutions from the community.
But on re-reading the question I noticed it says "a server". How could I implement my approach on a single server? I found a solution.
I will now present a solution where Nginx is configured to delegate requests to the node running the service, all on one server.
- Server/VM OS: Ubuntu Linux 14.04
- Sudo privileges; installed on the server: Nginx web server, Docker Engine, Docker Compose (or use Ansible)
Note: the repository includes an Ansible playbook that installs the solution on your server.
- Install Ansible locally
- Configure ssh authorized_keys for the sudo user on the server
- Configure the Ansible inventory
- Run Ansible

Install Ansible locally:
$ sudo su -
$ apt-get install software-properties-common
$ apt-add-repository ppa:ansible/ansible
$ apt-get update
$ apt-get install ansible
$ exit # leave the root shell
$ cp inventory.dist inventory
Example inventory file:
[your-server-name-tag]
ip.of.your.server
Run:
$ ansible-playbook -i inventory deploy-playbook.yml -l your-server-name-tag
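For reference, a minimal sketch of what deploy-playbook.yml might look like. The role name, variables, and defaults here are assumptions based on the repository layout (roles/deploy/templates), not the exact playbook:

```yaml
# Hypothetical sketch of deploy-playbook.yml -- names and vars are assumptions
- hosts: all
  become: yes
  vars:
    image: tactivos/devops-challenge
    tag: 0.0.1
  roles:
    - deploy   # installs Docker/Nginx, copies the conf templates, runs deploy.sh
```

The `-e "image=... tag=..."` flag shown later overrides these vars, which is how the same playbook serves both the initial deploy and upgrades.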
To install manually instead:
- Connect to the server
- Run the following commands
$ sudo su -
$ apt-get install software-properties-common apt-transport-https ca-certificates
$ apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
$ echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main" > /etc/apt/sources.list.d/docker.list
$ add-apt-repository ppa:nginx/stable
$ apt-get update
$ apt-get install linux-image-extra-$(uname -r) linux-image-generic-lts-trusty git nginx docker-engine
$ curl -L https://github.com/docker/compose/releases/download/1.6.2/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
$ chmod +x /usr/local/bin/docker-compose
$ git clone https://github.com/megui88/devops-challenge.git
$ cd devops-challenge
$ cp roles/deploy/templates/app.conf /etc/nginx/conf.d/
$ cp roles/deploy/templates/app_upstream.conf /etc/nginx/sites-available/
$ mkdir -p /var/docker
$ cp ./*.sh /var/docker
$ cp ./docker-compose.yml /var/docker
$ cd /var/docker
$ docker-compose -f docker-compose.yml up -d #or use deploy.sh
$ echo "upstream app_servers {server $(docker port "docker_app_1" 3000);}" > /etc/nginx/sites-available/app_upstream.conf
$ ln -s /etc/nginx/sites-available/app_upstream.conf /etc/nginx/sites-enabled/
$ rm /etc/nginx/sites-enabled/default
$ service nginx restart
Now visit your server's IP on port 80 in the browser.
At this point the app is deployed at version 0.0.1, because that version is pinned in the compose file. If you need another version or another image, follow the next steps.
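For context, the app.conf copied into /etc/nginx/conf.d/ might look roughly like this. This is a sketch under assumptions; the actual template in the repository may differ:

```nginx
# Hypothetical sketch of app.conf: listen on port 80 and proxy to the
# upstream defined in app_upstream.conf (rewritten on each deploy)
server {
    listen 80;
    location / {
        proxy_pass http://app_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

The key point is that only app_upstream.conf changes between releases, so switching versions is just rewriting the upstream and reloading Nginx.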
- Locally, run Ansible:
$ ansible-playbook -i ./inventory -l your-server-name-tag deploy-playbook.yml -e "image=tactivos/devops-challenge tag=0.0.2"
Or, manually:
- Connect to the server
- Run the following commands
$ sudo su -
$ cd /var/docker
$ ./deploy.sh tactivos/devops-challenge 0.0.2
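deploy.sh is the heart of the zero-downtime upgrade. A minimal sketch of the idea, written as a function so the steps read in isolation. The project name, container name, and the assumption that docker-compose.yml reads `${IMAGE}:${TAG}` from the environment are all hypothetical; the real script in the repository may differ:

```shell
#!/bin/sh
# Hypothetical sketch of deploy.sh: start the new version alongside the old
# one, repoint the Nginx upstream at it, then reload Nginx gracefully.

deploy() {
  image="$1"   # e.g. tactivos/devops-challenge
  tag="$2"     # e.g. 0.0.2
  compose="${COMPOSE_CMD:-docker-compose}"
  conf="${UPSTREAM_CONF:-/etc/nginx/sites-available/app_upstream.conf}"

  # 1. Start the new version under a separate compose project;
  #    the old container keeps serving traffic meanwhile.
  IMAGE="$image" TAG="$tag" $compose -p appnew up -d

  # 2. Ask Docker which host address was mapped to the new container's port 3000.
  addr=$(docker port "appnew_app_1" 3000)

  # 3. Point the Nginx upstream at the new container.
  echo "upstream app_servers {server $addr;}" > "$conf"

  # 4. Graceful reload: in-flight requests finish on the old version,
  #    new requests go to the new one -- no downtime.
  nginx -s reload
}
```

Once the health check confirms the new version is serving, the previous release's containers can be stopped at leisure.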
How the /ping endpoint was added to the app:
The original Docker image does not have this endpoint, and I do not have the Dockerfile or the sources, so I could not solve this through the conventional process. My approach was to modify the image directly: since the container has no terminal editor, I used echo 'new code' to append the endpoint code, then committed the container as a new version and pushed the new image.
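That patch-and-commit flow could be sketched as follows. The container name, source path, and patch file are assumptions for illustration; the real layout of the image is unknown:

```shell
# Hypothetical sketch of patching a running container and republishing it.
patch_and_push() {
  container="$1"   # e.g. docker_app_1
  new_image="$2"   # e.g. tactivos/devops-challenge:0.0.2

  # Append the /ping handler to the app source inside the running container
  # (path and patch file are assumptions -- no editor exists in the image)
  docker exec "$container" sh -c "cat /tmp/ping-patch.js >> /app/server.js"

  # Snapshot the patched container as a new image and publish it
  docker commit "$container" "$new_image"
  docker push "$new_image"
}
```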
The upgrade process has no downtime, as required. This is because the new version is deployed in parallel, and on reload Nginx waits for in-flight requests to finish before sending new requests to the new version.
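Uptime during an upgrade can be verified by polling /ping in a loop, for example (host and port are assumptions):

```shell
# Hypothetical uptime check: poll /ping once per second and report failures.

check_ping() {
  # succeeds only when the endpoint answers HTTP 200
  code=$(curl -s -o /dev/null -w '%{http_code}' "$1")
  [ "$code" = "200" ]
}

watch_ping() {
  url="$1"
  while true; do
    if check_ping "$url"; then
      echo "$(date +%T) up"
    else
      echo "$(date +%T) DOWN"
    fi
    sleep 1
  done
}

# Usage: watch_ping http://your-server-ip/ping
```

Running this in one terminal while deploy.sh executes in another should show an unbroken stream of "up" lines.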
- Bonus points 1: All team members have access to the repository and the docker-compose.yml file; they can modify it and run the deploy/upgrade process. docker-compose supports limiting the compute and memory resources of containers.
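For example, with compose file version "2" syntax, limits can be set per service directly in docker-compose.yml (the values below are illustrative, not tuned):

```yaml
# Illustrative resource limits in compose file version "2" syntax
version: "2"
services:
  app:
    image: tactivos/devops-challenge:0.0.1
    mem_limit: 256m       # hard cap on container memory
    memswap_limit: 512m   # cap on memory plus swap
    cpu_shares: 512       # relative CPU weight (default is 1024)
  db:
    image: mongo
    mem_limit: 512m
```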
- Bonus points 2: This solution already provides the deploy, provision, and upgrade processes needed for cloud deployments and scaling strategies. To scale horizontally, add the new instance to the Ansible inventory file and run deploy-playbook.yml against the new server, upgrading it if necessary, then add it to the load balancer. Because the upgrade process works per VM, you can upgrade one or more VMs at a time and reincorporate them into the balancer, so a recycled VM is simply re-provisioned and re-added.
- Bonus points 3: A load balancer (Nginx, a cloud load balancer, etc.) can stand in for service discovery, because the concept is to target a domain and let some service behind the balancer answer. If you need to handle recycled VMs running different applications on different ports, this solution would need only a minimal change to add a proper service discovery system, which is not difficult. For applications running in Docker, it is necessary to send errors and logs to another service (another app, Elasticsearch, RabbitMQ, Sentry, etc.) or to write them to a shared volume; this gives access to the logs and allows health monitoring.
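As one concrete option for centralizing logs, Docker's logging driver can ship container output off the host; the syslog endpoint below is an illustrative assumption, not part of the repository:

```yaml
# Illustrative: ship container logs to a central syslog/ELK endpoint
services:
  app:
    image: tactivos/devops-challenge:0.0.2
    logging:
      driver: syslog
      options:
        syslog-address: "tcp://logs.example.internal:514"
        tag: "devops-challenge"
```

With logs aggregated centrally and /ping polled by the balancer's health check, both the health and the logs of every instance are observable from one place.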
