The purpose of this repository is to provide a simple way to install the Correctomatic on a single VPS, giving you a cheap working version of the system. The playbook will install the following on the VPS:
- Docker
- A Docker registry
- Nginx acting as a reverse proxy for the registry
- A Redis server
- PENDING: the Correctomatic processes
If you want to install the Correctomatic on multiple servers, you can probably reuse the roles defined in this playbook.
The playbook is configured by modifying the `inventories/<environment>/group_vars/all/config.yml` file. The most important entries are:
- `development_mode`: should be `no` for production. If you want to run the playbook in development mode, follow the instructions in the corresponding section.
- `registry.domain`: update the domain to the one you will use for the Correctomatic's internal registry.
- `docker.domain`: update the domain to a valid value in your domain; it will point to localhost in production, but you will probably use it for debugging.
- `lets_encrypt_email`: TO-DO
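For reference, a minimal `config.yml` might look like the sketch below. All values are hypothetical, and the nested layout of `registry.domain` and `docker.domain` is an assumption based on the dotted entry names above — adapt it to the actual file shipped in the repository:

```yaml
# Hypothetical example values — adapt to your environment
development_mode: no
registry:
  domain: registry.my.correctomatic.com   # Correctomatic's internal registry
docker:
  domain: docker.my.correctomatic.com     # used for debugging; points to localhost in production
lets_encrypt_email: admin@my.correctomatic.com
```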
You must create the production secrets file before running the playbook. The file is located at `inventories/prod/group_vars/all/secrets.yml` and should contain the entries in `.secrets.template.yml`. Encrypt the file with ansible-vault:
```
ansible-vault encrypt inventories/prod/group_vars/all/secrets.yml
```
Install Ansible; you will need it to run the playbook. This is usually done with `pipx install --include-deps ansible`.
Run the playbook in development mode:
```
ansible-playbook playbook.yml
```
Run the playbook in production mode:
```
ansible-playbook playbook.yml -i inventories/prod/hosts --ask-vault-pass
```
If you want to run only a specific tag, you can use the `--tags` option. For example, to run only the nginx and docker roles:
```
ansible-playbook playbook.yml --tags docker,nginx
```
TO-DO: nginx configuration for the API
The Correctomatic works with a private registry (usually, the correction images are kept private). There is another file with documentation on the private registry.
There are two databases, one for the API and one for the App. There is a playbook to dump them; the dumps are downloaded to the `./backups` folder.
You can run the playbook with tags if you want to dump only one of the databases. Omit the tags to dump both databases:
```
ansible-playbook utils/db_backup.yml --tags api,app
```
or, in production:
```
ansible-playbook utils/db_backup.yml --tags api,app -i inventories/prod/hosts --ask-vault-pass
```
There is also a playbook to restore the databases. You must provide the database name and the file to restore. For example, to restore the API database from a dump file:
```
ansible-playbook utils/db_restore.yml -e "db=api" -e "file=./backups/20241119065235_correctomatic.dump.sql.gz"
```
If you want to run the playbook in development mode (for testing changes, for example), follow the instructions in this section.
You will need to create some entries in `/etc/hosts` to replicate the DNS entries that the Correctomatic would have in a real deployment:
```
192.168.56.56 correctomatic_vps
192.168.56.56 <your registry domain, ie, registry.my.correctomatic.com>
# For connecting to the VPS docker's server:
192.168.56.56 <your docker domain, ie, docker.my.correctomatic.com>
```
- Install an Ubuntu 22.04 server. The playbook expects a user `ansible` with password `ansible` (you can change the password by modifying `secrets.yml`).
- Configure the network. You will need two networks in the virtual machine:
- One NAT network, so the VPS can connect to the internet
- One host only network so you can access the VPS from your host
Follow these steps:
- Create a NAT network using the VirtualBox network manager. Assign the `10.10.10.0/24` address to the network; the virtual machine will have the address `10.10.10.10`.
- Create a host-only network using the VirtualBox network manager. The address will be `192.168.56.1/24`. You don't need to have DHCP enabled.
- Configure the interfaces in the virtual machine:
- Add two network interfaces: the first will be connected to the NAT network, and the second to the host only network.
- Create the file `/etc/netplan/01-netcfg.yaml` with this content (adapt the nameservers to your network settings):
```yaml
network:
  version: 2
  ethernets:
    enp0s3:
      dhcp4: no
      addresses:
        - 10.10.10.10/24
      routes:
        - to: default
          via: 10.10.10.1
      nameservers:
        addresses:
          - 8.8.8.8
          - 8.8.4.4
    enp0s8:
      dhcp4: no
      addresses:
        - 192.168.56.56/24
```
Alternatively, you can use a bridged network. In that case, you will need to assign a fixed IP to the virtual machine, either by configuring the DHCP of your network or by modifying the netplan.
- Generate an SSH key. This will generate an `id_ansible` key pair in `~/.ssh`:
```
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_ansible -C "ansible@correctomatic_vps"
```
- Copy the key to the VPS:
```
ssh-copy-id -i ~/.ssh/id_ansible ansible@correctomatic_vps
```
At this point, create a snapshot and name it `clean_state`. You can restore this snapshot later to retry the Ansible playbook with a clean machine. There is a script, `restore_snapshot.sh`, for restoring that snapshot automatically.
If you want to connect to the Docker server in the VPS from your local machine, you need to download the certificates from the VPS and configure the Docker client to use them. Note that this will only work if the playbook has run in development mode. If not, the docker server is not accessible from the outside.
- Download the certificates. YOU MUST DO THIS EACH TIME THE CERTIFICATES ARE REGENERATED.
- Test the connection.
- Optional: create a Docker context for future connections. YOU MUST DO THIS EACH TIME THE CERTIFICATES ARE REGENERATED.
The certificates are stored in the VPS in the following paths:
- CA certificate: `/etc/docker/ca/ca-certificate.pem`
- Client certificate: `/etc/docker/certs/correctomatic-client-certificate.pem`
- Client private key: `/etc/docker/certs/correctomatic-private-key.pem`
They should be copied to a directory on the local machine. For example, you can use `~/.correctomatic/certs/`; the names are, by convention, `ca.pem`, `cert.pem`, and `key.pem`. There is a script that does this for you (`utils/docker_download_certs.sh`); you will need to have sshpass installed before running it.
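If you are scripting this step yourself, the renaming convention can be sketched in Node. This is an illustrative helper, not part of the repository — `installCerts` and its default destination are assumptions; the real tool is the `utils/docker_download_certs.sh` script mentioned above:

```javascript
import fs from 'fs';
import path from 'path';
import os from 'os';

// Map the certificate names used on the VPS to the conventional
// local names expected by the Docker client.
const NAME_MAP = {
  'ca-certificate.pem': 'ca.pem',
  'correctomatic-client-certificate.pem': 'cert.pem',
  'correctomatic-private-key.pem': 'key.pem',
};

// Copy already-downloaded certificates from srcDir into destDir,
// renaming them along the way. Returns the destination directory.
function installCerts(srcDir, destDir = path.join(os.homedir(), '.correctomatic', 'certs')) {
  fs.mkdirSync(destDir, { recursive: true });
  for (const [src, dest] of Object.entries(NAME_MAP)) {
    fs.copyFileSync(path.join(srcDir, src), path.join(destDir, dest));
  }
  return destDir;
}
```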
The connection can be tested using environment variables. The following commands should be executed in the local machine:
```
# This must be the domain configured in config.yml
VPS_HOST=dev.docker.correctomatic.org
export DOCKER_HOST=tcp://$VPS_HOST:2376
export DOCKER_CERT_PATH=~/.correctomatic/certs
export DOCKER_TLS_VERIFY=1
docker info
```
Remember to unset the variables when you are done:
```
unset DOCKER_HOST
unset DOCKER_CERT_PATH
unset DOCKER_TLS_VERIFY
```
You can create a Docker context to avoid setting the environment variables each time you want to connect to the VPS. There is a script that does this for you: `utils/docker_create_context.sh`. Update the script first to use the correct domain name. You will need to recreate the context each time the certificates are regenerated.
Once the context is created, you can activate it and run docker commands as usual, but they will be executed in the VPS:
```
docker context use correctomatic_vps
docker info
docker image pull alpine:latest
...
```
To switch back to the local context, run `docker context use default`.
There is a container, `pretty`, that can be used to format the logs of the Correctomatic processes. You can use it like this:
```
docker logs --follow correctomatic-app | docker exec -i pretty pino-pretty
```
The Correctomatic uses Dockerode to interact with the Docker daemon. Here is an example to test the connection using Dockerode (you will need to add the `dockerode` dependency to your project):
```
import fs from 'fs';
import path from 'path';
import os from 'os'; // For accessing the home directory
import Docker from 'dockerode';

// Get the user's home directory
const homeDir = os.homedir();
const certDir = path.join(homeDir, '.correctomatic', 'certs');

// Define paths to your certificate files relative to the home directory
const caPath = path.join(certDir, 'ca.pem');
const certPath = path.join(certDir, 'cert.pem');
const keyPath = path.join(certDir, 'key.pem');

// Read certificate files synchronously
const ca = fs.readFileSync(caPath);
const cert = fs.readFileSync(certPath);
const key = fs.readFileSync(keyPath);

const docker = new Docker({
  host: 'dev.docker.correctomatic.org',
  port: 2376,
  ca,
  cert,
  key
});

// Example: list containers
docker.listContainers({ all: true }, function (err, containers) {
  if (err) {
    return console.error('Error:', err);
  }
  console.log('Containers:', containers);
});
```
The VPS's Redis server can be accessed using `redis-cli`; use the same password defined in `secrets/redis_password.yml`. Take into account that the Redis server won't be accessible in production mode: the firewall ports are closed and Redis is listening only on localhost:
```
redis-cli -h 192.168.56.56 -p 6379 -a 'your_password'
192.168.56.56:6379> ping
PONG
```
There is a Docker Compose file, `utils/docker_compose_dashboards.yml`, that can be used to run the RedisInsight and BullMQ dashboards:
```
REDIS_PASSWORD=<password> docker compose -f utils/docker_compose_dashboards.yml up
```
If you prefer to launch them by hand, start the RedisInsight web frontend to debug the server:
```
# This is for keeping configuration; run
# "docker volume rm vps-redisinsight" when done
docker volume create vps-redisinsight
docker run \
  --rm \
  --network host \
  --name VPS-redisinsight \
  -v vps-redisinsight:/data \
  redis/redisinsight
```
The server can be accessed at http://localhost:5540.
The container will have an address in the host's network. When configuring RedisInsight, the Redis server will be accessible at the VPS host-only IP (192.168.56.56), port 6379.
If you want to debug BullMQ, run a web dashboard with:
```
docker run \
  --rm \
  --network host \
  --name VPS-bullmq \
  igrek8/bullmq-dashboard \
  --redis-host 192.168.56.56 \
  --bullmq-prefix bull \
  --host 192.168.56.1 \
  --redis-password <redis password here>
```
The server can be accessed at http://localhost:3000.