This repository now uses a single container ("combined_app") to run all three processes:
- Streamlit App
- MQTT Broker
- Background Task
The processes are coordinated by main.py.
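As a rough illustration, the coordination could look like the sketch below. The module names, CLI arguments, and port are assumptions for illustration only, not the repository's actual entry points:

```python
import subprocess
import sys
import time

# Hypothetical commands for the three processes; the real main.py may use
# different entry points, ports, and arguments.
SERVICES = {
    "web_app": [sys.executable, "-m", "streamlit", "run", "web_app.py",
                "--server.port", "8088"],
    "mqtt_broker": [sys.executable, "mqtt_broker.py"],
    "background_task": [sys.executable, "background_task.py"],
}

def launch_all(services):
    """Start each service as a child process and return the handles."""
    return {name: subprocess.Popen(cmd) for name, cmd in services.items()}

def wait_for_any_exit(procs):
    """Block until one child exits, then terminate the rest."""
    while all(p.poll() is None for p in procs.values()):
        time.sleep(1)
    for p in procs.values():
        p.terminate()
```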
- Docker
- Docker Compose
- [Recommended] Visual Studio Code with the following extensions installed:
- Mypy (ms-python.mypy-type-checker)
- Pylint (ms-python.pylint)
- Black Formatter (ms-python.black-formatter)
NOTE: Using these extensions will provide in-editor warnings and help you format your code correctly before running the linter.
NOTE: keep the repository in a folder called `integration`. The docker-compose.yml file references the current directory, and auto-generated container names depend on the folder name, so some of the steps below won't work if you call it something else.
- Clone the repository:

  ```shell
  git clone https://github.com/Hurtec/integration.git
  cd integration
  ```

- Create configuration files (documented below):
  - `.secrets/secrets.toml` (for app login & router info)

- Build and start the combined Docker container:

  ```shell
  docker-compose up --build
  ```
The combined container will start web_app, background_task, and mqtt_broker via main.py.
- Access the Web app running in Docker: open your web browser and go to http://localhost:8088.
For local development without connecting to real external services (like the Peplink router), you can use the "mock mode":
- The docker-compose.yml file already includes the `MOCK_MODE: "true"` environment variable for local development.
- When running in mock mode:
- The background task will generate simulated GPS data instead of connecting to the router
- The background task will generate simulated vehicle data with ignition on/off events:
- Initial state: ignition ON (ign_state=True)
- After ~60 seconds: ignition OFF (ign_state=False)
- No connection to the router is attempted at all (no GPS or vehicle data fetching)
- The web interface will display a "MOCK MODE" indicator
- All other functionality remains unchanged:
- Real PostgreSQL database is still used
- Real MQTT broker is still used
- All background tasks and heartbeats run normally
- Mock mode is completely independent of the environment setting (test/prod/dev):
- You can run mock mode with any environment setting
- The environment setting controls AWS IoT topic prefixes
- Mock mode only affects GPS and vehicle data generation
This allows for testing and development without requiring actual hardware connections.
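The mock-mode behaviour described above can be pictured with a small sketch. The field names and coordinates are illustrative assumptions, not the actual data schema:

```python
import os
import random
import time

# docker-compose sets MOCK_MODE: "true" for local development
MOCK_MODE = os.environ.get("MOCK_MODE", "false").lower() == "true"

def simulated_gps():
    """Fake GPS reading used instead of querying the router."""
    return {
        "lat": -36.85 + random.uniform(-0.01, 0.01),  # illustrative coordinates
        "lon": 174.76 + random.uniform(-0.01, 0.01),
    }

def simulated_ign_state(start_time, now=None):
    """Ignition starts ON (ign_state=True) and flips OFF after ~60 seconds."""
    now = time.time() if now is None else now
    return (now - start_time) < 60
```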
Since you're running the application inside Docker, you need to execute the hash_password.py script within the Docker container. Follow these steps:
- Run the `hash_password.py` script inside Docker:

  First, build to make sure the container is up to date:

  ```shell
  docker-compose up --build --remove-orphans
  ```

  Then stop the container and run:

  ```shell
  docker-compose run --rm combined_app python hash_password.py
  ```
- What this does:
  - `docker-compose run`: runs a one-time command against the `combined_app` service defined in your docker-compose.yml.
  - `--rm`: automatically removes the container after the command completes.
  - `combined_app`: the service where the script will run.
  - `python hash_password.py`: executes the `hash_password.py` script inside the container.
- Enter your desired password:

  When prompted, input the password you wish to hash.

  ```text
  Enter password to hash: your_password_here
  ```

  The script will output something like:

  ```text
  Hashed password: $2b$12$Saltsaltsaltsaltsaltsaltedhashedpassword1234567890
  ```
- Create or update `secrets.toml` with the hashed password:

  Open the `.secrets/secrets.toml` file (located at `/src/.secrets/secrets.toml`) and replace the placeholder with your newly generated hashed password:

  ```toml
  [passwords]
  # Follow the rule: username = "hashed_password"
  admin = "$2b$12$Saltsaltsaltsaltsaltsaltedhashedpassword1234567890" # Replace with the actual bcrypt hashed password
  ```
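Before pasting the hash into secrets.toml, a quick stdlib-only sanity check can catch copy-paste mistakes. This sketch only validates the bcrypt modular-crypt format; it does not verify the password itself:

```python
import re

# bcrypt modular-crypt format: $2b$<2-digit cost>$<22-char salt + 31-char hash>
BCRYPT_RE = re.compile(r"^\$2[aby]\$\d{2}\$[./A-Za-z0-9]{53}$")

def looks_like_bcrypt(value: str) -> bool:
    """Cheap sanity check before writing the hash into secrets.toml."""
    return BCRYPT_RE.fullmatch(value) is not None
```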
The Background Task requires its own set of credentials to interact with the Peplink router's API. Follow these steps to configure the Background Task secrets:
- Create or update `.secrets/secrets.toml`:

  On the local machine where you are building the Docker containers, create a directory named `.secrets` in the root of the project and then create a file named `secrets.toml` inside it:

  ```text
  /src/.secrets/secrets.toml
  ```
Add Peplink API Credentials:
Populate the
.secrets/secrets.tomlfile with your Peplink router's IP address, a an admin UserId and Password. The credentials can be found inSettings > Device System Management.[peplink_api] router_ip = "192.168.82.1" router_userid = "admin" router_password = "some-hashed-value-here"Note: the Peplink API documentation can be found at: Peplink API Documentation. We are using the Transit Pro E at the time of writing.
The Web app requires a login. You can set up credentials by creating a file .secrets/secrets.toml with the following content, changing the username(s) and password(s) as needed - see above if you need to generate a password.
- Go to the "Upload Certificate Files" section.
- Choose a certificate file (e.g., `ca.pem`, `cert.crt`, `private.key`) and upload it.
- The uploaded files will be saved to the shared volume.
- Go to the "Enter Configuration" section.
- Enter the tenant name and device name.
- Click the "Save Configuration" button.
- The configuration will be saved to the shared volume.
- Go to the "View MQTT Messages" section.
- The app will display the MQTT messages stored in the shared SQLite database (`shared_data.db`).
The application uses a shared SQLite database (shared_data.db) to monitor the status of the MQTT Broker and Background Task services. Status information is stored in the service_status table within the database and is displayed in the Web app.
- Open the Web app at http://localhost:8088.
- Navigate to the "Service Status" section.
- View the current status of the MQTT Broker and Background Task services, including when they started and their local and UTC times.
This feature ensures that you can monitor whether all necessary configurations are in place and whether the services are running correctly.
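A sketch of the kind of query behind the Service Status page follows. The column names here are assumptions; check the actual `service_status` schema:

```python
import sqlite3

def fetch_service_status(db_path="shared_data.db"):
    """Read all rows from the service_status table in the shared SQLite DB."""
    conn = sqlite3.connect(db_path)
    try:
        # Assumed columns; the real table may also store local/UTC times.
        return conn.execute(
            "SELECT service_name, status, started_at FROM service_status"
        ).fetchall()
    finally:
        conn.close()
```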
The application uses PostgreSQL for data storage.
This project uses uv as the Python package manager for a streamlined development experience.
You need to start your local integration-postgres container before running alembic commands.
To start the container, run:

```shell
docker-compose up -d postgres
```
- Install `uv` (if not already installed): follow the instructions at https://github.com/astral-sh/uv
- Create a virtual environment:

  ```shell
  uv venv
  ```
- Activate the virtual environment:
  - On Windows:

    ```shell
    .\.venv\Scripts\activate
    ```

  - On macOS/Linux:

    ```shell
    source .venv/bin/activate
    ```
- Install the required packages:

  ```shell
  uv pip install -r requirements-combined.txt
  uv pip install -r requirements-dev.txt
  ```
- Run Alembic commands:

  First, change to the `src` directory, e.g. `cd src`.

  ```shell
  # With environment activated
  alembic revision --autogenerate -m "migration_name"
  alembic upgrade head

  # Or using uv run (without activating environment)
  uv run alembic revision --autogenerate -m "migration_name"
  uv run alembic upgrade head
  ```
When creating a migration:
- Come up with a short migration name that describes what database change you're making, e.g., "add_bms_data".
- Run the command `alembic revision --autogenerate -m "migration_name"` - this should create a new migration file in `src\migrations\versions`.
- Review the migration file carefully and make sure it looks correct - it can be difficult to "undo" or change things in the database, so it's better to get it right the first time if possible. If you need to make further changes, you can delete the migration file and re-generate it at this stage.
- Once you're happy with the migration, with postgres running as per the step above, run `alembic upgrade head` to apply the changes.
- If there are no errors, you should be able to run the whole app now and the database changes should be applied.
Note: The application will automatically detect if it's running locally or in Docker and use the appropriate database host (localhost for local development, postgres for Docker containers).
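A sketch of such detection logic follows. The `/.dockerenv` check and the env-var name are assumptions about how this could work, not the repository's actual implementation:

```python
import os

def database_host():
    """Return 'postgres' when running inside a Docker container, else 'localhost'."""
    # /.dockerenv is created by Docker in most containers; an explicit
    # environment variable is a common fallback signal (assumed name here).
    in_docker = os.path.exists("/.dockerenv") or os.environ.get("RUNNING_IN_DOCKER")
    return "postgres" if in_docker else "localhost"
```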
To run the tests locally, you'll need a Python environment set up using uv as described in the Database Management section. Alternatively, run the tests using docker-compose (so that you don't need a local Python environment set up):

```shell
docker-compose -p integration-dev -f docker-compose-dev.yml up --build --force-recreate test --remove-orphans
```

If you have a local Python environment set up:
- Ensure you're in the correct environment:

  ```shell
  source .venv/bin/activate   # On Linux/macOS
  .\.venv\Scripts\activate    # On Windows
  ```
- Install test dependencies (if not already done):

  ```shell
  uv pip install -r requirements-dev.txt
  uv pip install -r requirements-combined.txt
  ```
- Run the tests:
  - With environment activated:

    ```shell
    # On Linux/macOS
    PYTHONPATH=src pytest tests/ -v --timeout=30
    # On Windows PowerShell
    $env:PYTHONPATH = 'src'; pytest tests/ -v --timeout=30
    ```

  - Run a specific test file:

    ```shell
    # On Linux/macOS
    PYTHONPATH=src pytest tests/test_background_task.py -v --timeout=30
    # On Windows PowerShell
    $env:PYTHONPATH = 'src'; pytest tests/test_background_task.py -v --timeout=30
    ```

  - Run a specific test:

    ```shell
    # On Linux/macOS
    PYTHONPATH=src pytest tests/test_background_task.py::test_process_gps_data_success -v --timeout=30
    # On Windows PowerShell
    $env:PYTHONPATH = 'src'; pytest tests/test_background_task.py::test_process_gps_data_success -v --timeout=30
    ```

  - Using `uv run` (environment activation not needed):

    ```shell
    # On Linux/macOS (uv run inherits current env vars)
    PYTHONPATH=src uv run pytest tests/ -v --timeout=30
    # On Windows PowerShell
    $env:PYTHONPATH = 'src'; uv run pytest tests/ -v --timeout=30
    ```
Note: On Windows PowerShell, set the environment variable before the command, e.g. `$env:PYTHONPATH = "src"; pytest tests/ -v --timeout=30`. When using `uv run` on PowerShell, you might need to set the environment variable differently depending on your shell configuration, or use `uv run --env PYTHONPATH=src -- ...` syntax if supported.
You can run linting and tests separately or together using Docker Compose.
Run the linter using docker-compose (so that you don't need a local python environment set up):

```shell
docker-compose -p integration-dev -f docker-compose-dev.yml up --build --force-recreate linter --remove-orphans
```

Run the tests using docker-compose:

```shell
docker-compose -p integration-dev -f docker-compose-dev.yml up --build --force-recreate test --remove-orphans
```

Run both linting and tests in one go:

```shell
docker-compose -p integration-dev -f docker-compose-dev.yml up --build --force-recreate lint-test --remove-orphans
```

If you have run the steps above around managing database migrations, you may already have a local python virtual environment set up with uv. If that is the case, it may be quicker to run linting locally:
- Ensure you're in the correct environment:

  ```shell
  source .venv/bin/activate   # On Linux/macOS
  .\.venv\Scripts\activate    # On Windows
  ```
- Ensure requirements are installed (if not already done):

  ```shell
  uv pip install -r requirements-dev.txt
  uv pip install -r requirements-combined.txt
  ```
- Run linters:

  ```shell
  # To run Black formatter (with environment activated)
  black .
  # Or using uv run
  uv run black .

  # To run Pylint (with environment activated)
  # Use specific paths to avoid linting packages in the virtual environment
  pylint src/ --rcfile=pylintrc --ignore=migrations
  # Or using uv run (from project root)
  uv run pylint src/ --rcfile=pylintrc --ignore=migrations

  # To run mypy (with environment activated)
  mypy --install-types
  mypy --config-file=mypy.ini src
  # Or using uv run
  uv run mypy --install-types
  uv run mypy --config-file=mypy.ini src
  ```
Note: When running Pylint, always specify explicit paths (`src/` and `tests/`) to avoid analyzing files in the virtual environment's site-packages directory.
This repository uses GitHub Actions to automatically validate code and deploy Docker images.
- PR Build Workflow
- Runs automatically when a pull request is created against the main branch
- Validates code quality (formatting, linting, type checking)
- Runs all tests to ensure functionality
- Skips execution when only documentation files are changed (files in the `documentation/` directory or any Markdown files)
- Main Branch Build Workflow
- Runs automatically when changes are merged to the main branch
- Performs the same validation checks as the PR workflow
- Builds and pushes the Docker image to DockerHub
- Skips execution when only documentation files are changed (files in the `documentation/` directory or any Markdown files)
To enable the Docker image publishing and application configuration, you need to configure:
- In your GitHub repository settings under "Secrets and variables" > "Actions":
Secrets (for sensitive information):

- Add the secret: `DOCKERHUB_TOKEN` (your DockerHub Personal Access Token)
- Add the secret: `ADMIN_PASSWORD` (bcrypt hashed password for admin user)
- Add the secret: `ROUTER_PASSWORD` (password for the Peplink router)

Variables (for non-sensitive information):

- Add the variable: `DOCKERHUB_USERNAME` (your DockerHub username)
- Add the variable: `ROUTER_IP` (IP address of the Peplink router)
- Add the variable: `ROUTER_USERID` (user ID for the Peplink router, typically "admin")
To create a DockerHub Personal Access Token:
- Log in to Docker Hub
- Go to Account Settings > Security
- Create a new access token with appropriate permissions
- Copy the token value (you won't be able to see it again)
For the `ADMIN_PASSWORD` secret:

- Store your plain password in the `ADMIN_PASSWORD` GitHub secret
- The GitHub Actions workflow will automatically hash it during the build process using bcrypt
- This ensures the password is properly formatted for the web app's login system
If you need to deploy manually, follow these steps:
- Docker Hub Account:
  - Ensure you have a PAT (Personal Access Token) from Docker Hub for the HurtecDev repo.
  - Ensure your system supports multi-platform builds:

    ```shell
    docker buildx create --use --name multi-platform-builder
    docker buildx inspect --bootstrap
    ```

  - Log in to the docker CLI with `docker login -u hurtecdev` and enter the password from the PAT.
- If making changes to the mosquitto container:

  ```shell
  docker buildx build --platform linux/arm/v7,linux/arm64,linux/amd64 --push -t hurtecdev/sensible-defaults-eclipse-mosquitto:latest -f Dockerfile.mosquitto .
  ```

- If making changes to the main code (we only need linux/arm64 for the current routers):

  ```shell
  docker buildx build --platform linux/arm64 --push -t hurtecdev/hurtec-dev:latest -f Dockerfile.combined-arm .
  ```
Pull the latest images:

- Remote web admin into the router and go to Advanced > Docker (e.g. https://???.peplink.com/cgi-bin/MANGA/index.cgi?mode=config&option=docker)
- If you need to update the mosquitto image (rare, most likely only needed the first time on a router):
  - Press `Click here to search and download Docker images`
  - Search for `hurtecdev/sensible-defaults-eclipse-mosquitto` and download it
  - Search for `postgres` and download it
- Press `Click here to pull Docker image`.
- Enter the image name (e.g. `hurtecdev/hurtec-dev`), username `hurtecdev`, and password (from PAT)
We use a custom mosquitto image because the default eclipse-mosquitto image doesn't support providing config via environment variables or CLI, so we can't set defaults to get it working.
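A sketch of the idea behind such an image: an entrypoint reads environment variables and writes a mosquitto.conf with sensible defaults. The variable names below are illustrative assumptions, not the image's actual interface:

```python
import os

# Defaults applied when no environment override is present.
DEFAULTS = {"listener": "1883", "allow_anonymous": "true"}

def render_config(env=None):
    """Render mosquitto.conf lines, letting MOSQUITTO_* env vars override defaults."""
    env = os.environ if env is None else env
    return "\n".join(
        f"{key} {env.get('MOSQUITTO_' + key.upper(), default)}"
        for key, default in DEFAULTS.items()
    )
```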
If using a separate VLAN, add `--network vlan1` to the docker run commands below.
```shell
docker run --ip=192.168.82.5 --name mosquitto -p 1883:1883 hurtecdev/sensible-defaults-eclipse-mosquitto
docker run --ip=192.168.82.6 --name postgres -p 5432:5432 -e POSTGRES_USER=hurtec -e POSTGRES_PASSWORD=hurtec -e POSTGRES_DB=hurtec postgres
docker run --ip=192.168.82.7 --name hurtec_combined_app -p 8088:8088 -e MQTT_BROKER_ADDRESS=192.168.82.5 -e DATABASE_URL=postgresql://hurtec:hurtec@192.168.82.6/hurtec hurtecdev/hurtec-dev
```

This repository includes an EventCatalog project in the documentation directory that documents our event-driven architecture.
To work on the EventCatalog documentation locally:
- Navigate to the documentation directory:

  ```shell
  cd documentation
  ```

- Install dependencies:

  ```shell
  npm install
  ```

- Start the development server:

  ```shell
  npm run dev
  ```

- Open your browser and go to http://localhost:3000 to view the documentation.
To build the documentation for production:
```shell
cd documentation
npm run build
```

The built files will be in the `documentation/dist` directory. These can be served using any static file hosting service if needed.
If you use an AI coding tool like Cursor or GitHub Copilot, you should create a custom settings file to tell it to use the correct Python environment and the correct linter and formatter.
We ignore these files from source control as they may be specific to the user, but if you have useful tips for what to add, include them here in the README.md file.
```json
{
    "environment": {
        "conda": {
            "environment": "integration",
            "activateCommand": "./.venv/Scripts/activate"
        },
        "shell": "powershell"
    },
    "codeQuality": {
        "typeChecking": {
            "tool": "mypy",
            "enabled": true
        },
        "linting": {
            "tool": "pylint",
            "enabled": true
        }
    },
    "editor": {
        "formatOnSave": true,
        "defaultFormatter": "black"
    }
}
```
Create a .vscode/settings.json file with the following configuration. Adjust the python.defaultInterpreterPath based on your environment setup (conda, venv, or uv's .venv).
```jsonc
{
    // Example for Conda:
    // "python.defaultInterpreterPath": "C:\\Users\\YourUser\\anaconda3\\envs\\integration\\python.exe",
    // "python.condaPath": "C:\\Users\\YourUser\\anaconda3\\Scripts\\conda.exe",
    // Example for venv/uv:
    "python.defaultInterpreterPath": "${workspaceFolder}/.venv/Scripts/python.exe", // Windows
    // "python.defaultInterpreterPath": "${workspaceFolder}/.venv/bin/python", // Linux/macOS
    "python.analysis.typeCheckingMode": "basic", // Or "strict"
    "python.linting.enabled": true,
    "python.linting.pylintEnabled": true,
    "python.formatting.provider": "black",
    "editor.formatOnSave": true,
    "editor.codeActionsOnSave": {
        "source.organizeImports": "explicit"
    },
    "[python]": {
        "editor.defaultFormatter": "ms-python.black-formatter",
        "editor.formatOnSave": true
    },
    "mypy-type-checker.args": [
        "--config-file=../pyproject.toml" // Assuming pyproject.toml is in the root
    ],
    "pylint.args": [
        "--rcfile=../pyproject.toml" // Assuming pyproject.toml is in the root
    ],
    "github.copilot.enable": {
        "*": true
    },
    "github.copilot.advanced": {} // Keep default advanced settings unless needed
}
```

This configuration:
- Sets up Python interpreter path (adjust as needed for your setup)
- Enables type checking (mypy extension recommended)
- Enables linting with pylint (pylint extension recommended)
- Uses Black as the formatter (black formatter extension recommended)
- Enables format on save
- Configures GitHub Copilot
- Enables automatic import organization
- Points linters/type checkers to configuration in `pyproject.toml` (assuming it exists at the root)
The Dockerized Web App, MQTT Broker, and Background Task run as custom code via main.py in a single container (because Peplink does not allow Docker networks; the original design had them separated).
Separately, there is a mosquitto broker running to support MQTT messages.
The main data flows are as follows:
```mermaid
sequenceDiagram
    participant background_task
    participant router
    participant mosquitto
    participant mqtt_broker
    participant AWS_IoT
    background_task->>router: Get GPS Data
    router->>background_task: Return GPS Data
    background_task->>mosquitto: Publish "out/gps"
    mosquitto->>mqtt_broker: Subscribe to "out/..."
    mqtt_broker->>AWS_IoT: Send Message to <br/> {env}/{tenant_name}/{device_name}/out/gps
```
```mermaid
sequenceDiagram
    participant PLC
    participant Sensors_BMS_Etc
    participant mosquitto
    participant mqtt_broker
    participant AWS_IoT
    PLC->>Sensors_BMS_Etc: Get Device Data
    Sensors_BMS_Etc->>PLC: Return Device Data
    PLC->>mosquitto: Publish "out/device"
    mosquitto->>mqtt_broker: Subscribe to "out/..."
    mqtt_broker->>AWS_IoT: Send Message to <br/> {env}/{tenant_name}/{device_name}/out/device
```
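The topic mapping shown in both diagrams can be expressed as a one-liner (a sketch of the pattern, not the repository's actual code):

```python
def aws_iot_topic(env, tenant_name, device_name, local_topic):
    """Map a local 'out/...' topic to the {env}/{tenant_name}/{device_name}/... AWS IoT topic."""
    return f"{env}/{tenant_name}/{device_name}/{local_topic}"
```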