This repository contains the docker-compose and configuration files for the deployment of the Data Acquisition, Visualization and Monitoring Platform. It makes use of our MQTT Connector container and our Calculator system, which are in charge of receiving the sensors' data and storing it in InfluxDB. Additionally, Grafana is served as a web application for both data visualization and monitoring. Upcoming services: TEC data receiver, notification and alerting systems, and data fusion.
- Install Docker and Docker Compose.
- Clone the repo:

  ```shell
  git clone https://github.com/encresearch/data-assimilation-system.git
  ```

- Build the images and run the containers.
## Development

The `docker-compose.dev.yml` file builds the Docker images from the files on your local machine instead of pulling them from our Docker Hub (our current production images). That way, all your local changes take effect when the containers are built, which makes this the easiest way to develop new features and fix bugs for our system. Simply clone all the currently used systems to your local machine and use our `docker-compose.dev.yml` to build and run the images from your local files. The file also mounts your local files into the containers, so your changes take effect immediately.

The following instructions work on Unix systems (macOS and Linux). On Windows, substitute the equivalent terminal commands.

First, make a new directory where you will store all our backend systems (e.g. `enc-research`):
```shell
mkdir enc-research # Make a new directory
cd enc-research    # Move inside the new directory
```

Then, clone all our backend systems inside the new directory:
```shell
git clone https://github.com/encresearch/data-assimilation-system
git clone https://github.com/encresearch/connector
git clone https://github.com/encresearch/calculator
git clone https://github.com/encresearch/publisher
git clone https://github.com/encresearch/fusor
git clone https://github.com/encresearch/inspector
git clone https://github.com/encresearch/notifier
git clone https://github.com/encresearch/subscriber
```

Navigate inside your local `data-assimilation-system` repo and copy the `docker-compose.dev.yml` file into the parent directory (the one storing all our systems, in this case `enc-research/`):

```shell
cd data-assimilation-system   # Navigate to this repo
cp docker-compose.dev.yml ../ # Copy it to the parent directory
cd ../                        # Navigate back into the parent directory
```

At this point, you can simply spin up the systems using docker-compose:
```shell
docker-compose -f docker-compose.dev.yml up -d
```

This command will spin up `connector`, `calculator`, `nginx`, `grafana`, and `influxdb`. We will be adding the other systems in the future. Open `localhost` in your browser; you should see Grafana's web interface.

Now you have two options: you can either run our `publisher` system right from the hardware (a Raspberry Pi), or simulate a publisher system:

```shell
# From the parent directory:
cd publisher                        # Navigate into your local publisher repo
python3 -m venv env                 # Create a new virtual environment
source env/bin/activate             # Activate the virtual environment
pip install --upgrade pip           # Update pip
pip install -r requirements/dev.txt # Install the dev dependencies
python run.py                       # This will run in simulation mode
```

To see which containers are currently running:
```shell
docker ps
```

To see the logs of a container:

```shell
docker logs name_of_the_container
```

To attach to the output terminal of one of the containers (very helpful when debugging and/or using a debugger like `ipdb`):

```shell
docker attach name_of_the_container
```

To restart a container (say you made some local changes and want to test them right away; most of the time you will have to restart the container so that it runs with the newly added code):

```shell
docker restart name_of_the_container
```

To open a shell inside a container:

```shell
docker exec -it name_of_the_container sh # Depending on the base image, you might have to change "sh" to "/bin/bash" or "/bin/sh"
```

To stop and remove the containers, networks, and images created by `up` (external volumes won't be removed):
```shell
docker-compose -f docker-compose.dev.yml down
```

## Production
The production `docker-compose.yml` file creates persistent Docker volumes and contains its own database containers. For testing environments, use the dev yml file instead.
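As a rough sketch, a production service with a persistent named volume might be declared like this. This fragment is illustrative only (the image tag, port, and paths are assumptions); see `docker-compose.yml` for the real definitions:

```yaml
# Illustrative fragment -- not the repository's actual compose file.
version: "3"
services:
  influxdb:
    image: influxdb:1.8
    ports:
      - "8086:8086"
    volumes:
      - influx_data:/var/lib/influxdb # Named volume keeps data across restarts

volumes:
  influx_data: # Survives `docker-compose down` unless removed explicitly
```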
```shell
docker-compose -f docker-compose.yml up -d
```
### InfluxDB

InfluxDB is used to store our incoming sensor data. For data persistence, a Docker volume named `influx_data` is created. To access the CLI through the container, run `docker exec -it influxdb influx`. For configuration and environment variables, see here.
### Grafana

Grafana is used as the web application for data visualization. It runs on port 3000 in the container, which is mapped to port 80 on the host. For data persistence, a volume named `grafana_data` is created.

#### Configuration

The configuration file can be found in the `grafana/` directory. As of now, the only configuration in use points the root URL to the host's IP. For more info, see the Grafana configuration docs.
### Connector

Python application that receives raw measurement data from an MQTT broker and stores it in InfluxDB, before sending an event message to `calculator`.
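To make the data flow concrete, here is a minimal sketch of the kind of transformation such a connector performs. It assumes a hypothetical `location/sensor` topic layout and a JSON payload of the form `{"time": ..., "values": {...}}`; the real connector's topic scheme and payload format may differ.

```python
import json

def payload_to_points(topic, payload):
    """Convert one raw MQTT message into InfluxDB point dicts.

    Assumes a hypothetical "location/sensor" topic layout and JSON
    payloads; the real connector's schema may differ.
    """
    location, sensor = topic.split("/", 1)
    data = json.loads(payload)
    return [{
        "measurement": sensor,          # One measurement per sensor type
        "tags": {"location": location}, # Tag points by station location
        "time": data["time"],
        "fields": data["values"],       # Raw channel readings
    }]

# Example: one reading from a hypothetical ADC at "station1"
points = payload_to_points(
    "station1/adc",
    '{"time": "2020-01-01T00:00:00Z", "values": {"ch1": 512}}',
)
```

A list of dicts in this shape is what the InfluxDB Python client's `write_points` accepts, which is why the sketch builds points this way.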
### Calculator

Reads raw data from InfluxDB, converts it into the desired values, and writes the results back into new InfluxDB measurements based on their location and sensor, before notifying `inspector`.
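Converting raw readings into actual values typically amounts to applying a per-sensor scaling. The function below is a hypothetical linear ADC conversion, not the repository's actual formula:

```python
def counts_to_volts(raw_counts, vref=5.0, resolution=1024):
    """Hypothetical linear ADC conversion: raw counts -> volts.

    Assumes a 10-bit ADC with a 5 V reference; the real calculator
    applies per-sensor conversions that may differ.
    """
    return raw_counts * vref / resolution
```

For example, with the assumed 10-bit, 5 V setup, `counts_to_volts(512)` gives 2.5.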
### Inspector

Python application that continuously scans the data, detects threshold breaches, and triggers an alert as an email through `notifier`.
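The core of such a scan is a threshold check. This sketch assumes readings arrive as `(timestamp, value)` pairs with a fixed band; the real inspector's thresholds and data shapes may differ:

```python
def find_breaches(readings, low, high):
    """Return the (timestamp, value) pairs outside the [low, high] band.

    `readings` is any iterable of (timestamp, value) pairs; each pair
    returned here would trigger an alert in a system like inspector.
    """
    return [(t, v) for t, v in readings if not (low <= v <= high)]
```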
### Notifier

A service that queries a database of subscribers (and the topics they are subscribed to) and sends an email to the subscribers of the topic specified by `inspector`.
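The subscriber lookup can be sketched as a simple topic filter. Here a plain dict of email addresses to subscribed topics stands in for the notifier's actual database:

```python
def recipients(subscriptions, topic):
    """Return the addresses subscribed to a given alert topic.

    `subscriptions` is a stand-in for the notifier's subscriber
    database: a mapping of email -> set of subscribed topics.
    """
    return sorted(email for email, topics in subscriptions.items()
                  if topic in topics)

# Hypothetical subscriber data
subs = {
    "a@example.com": {"quakes"},
    "b@example.com": {"tec"},
    "c@example.com": {"quakes", "tec"},
}
```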
### Subscriber

Python Flask UI application that lets users update the alerting email notifications they receive from `notifier`.
### Fusor

Application for Earthquake Precursors machine learning algorithms.
### Telegraf

Plugin-driven server agent for collecting and reporting metrics about the host server. The data is also stored in InfluxDB. The configuration file can be found in the `telegraf/` directory.
### Watchtower

Watches for changes to the images the containers were originally started from. If a change is detected, it automatically restarts the container using the new image. For more info, visit here. This service only runs in production.
## Contributing

Pull requests and stars are always welcome. To contribute, please fork the repo, create an issue explaining the bug or feature request, create a branch off that issue, and submit a pull request.

