TL;DR
This repo deploys the infrastructure that lets workshop participants access data from MongoDB through a web interface and via Jupyter notebooks.
Users connect to the same network as the host (the workshop organiser's computer). The host IP is fetched manually by the organiser and specified during the deployment phase.
The original database is dumped from the DREAM server. The name of the database to dump locally on the host computer and restore inside docker-mongodb-omniboard is entered manually by the workshop organiser during the deployment phase.
Important:
- you need to have docker desktop running
- you need login credentials for the original MongoDB in order to dump it and restore it inside the Docker container
set ATLAS_USER=your_username
set ATLAS_PASSWORD=your_password
set ATLAS_URL=your_cluster_url
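For reference, a deploy script might assemble these three variables into a standard MongoDB connection string. This is a hedged sketch: the mongodb+srv scheme is the usual Atlas format, but the exact URI your scripts expect may differ.

```python
import os
from urllib.parse import quote_plus


def atlas_uri() -> str:
    """Build an Atlas connection string from the ATLAS_* environment variables.

    The mongodb+srv scheme and credential placement follow the standard
    MongoDB connection-string format; adjust if your deploy script differs.
    """
    user = quote_plus(os.environ["ATLAS_USER"])          # URL-escape special chars
    password = quote_plus(os.environ["ATLAS_PASSWORD"])
    url = os.environ["ATLAS_URL"]                        # e.g. cluster0.example.mongodb.net
    return f"mongodb+srv://{user}:{password}@{url}/"


# Example with placeholder values (not real credentials):
os.environ.update(ATLAS_USER="alice", ATLAS_PASSWORD="p@ss",
                  ATLAS_URL="cluster0.example.mongodb.net")
print(atlas_uri())  # mongodb+srv://alice:p%40ss@cluster0.example.mongodb.net/
```

Note that `quote_plus` matters: a password containing `@` or `:` would otherwise break the URI.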
If the first run fails, run cleanup_workshop.bat and try again.
User access (the deployment script prints the detected host IP, e.g. DEBUG: HOST_IP value: X.X.X.X):
MongoDB (with Sacred data): mongodb://X.X.X.X:27018
Omniboard (Sacred web UI): http://X.X.X.X:9004
Sacred Access JupyterLab: http://X.X.X.X:8888
JupyterHub: http://X.X.X.X:8000
Each user signs up when connecting to http://X.X.X.X:8000 and then executes the Jupyter notebooks stored in the "notebooks" folder.
A complete Docker-based workshop environment providing multi-user access to MongoDB, Omniboard visualization, and Jupyter environments for Sacred experiment data analysis.
This workshop consists of three main components:
- MongoDB + Omniboard (docker-mongodb-omniboard/) - Database with Sacred experiments and web visualization
- Sacred Access JupyterLab (docker-sacred-access/) - Individual Jupyter environment for data analysis
- JupyterHub Multi-user (jupyterhub-deploy-docker/) - Multi-user Jupyter server with authentication
Before starting, ensure you have the following installed on your Windows machine:
- Docker Desktop for Windows (with WSL2 backend recommended)
- Git for Windows (to clone repositories if needed)
- Windows PowerShell or Command Prompt
1. Start Docker Desktop
   - Ensure Docker Desktop is running and containers can be created
   - Verify Docker is working: open PowerShell and run docker --version
2. Clone/Download Workshop Files
# If using Git
git clone <repository-url>
cd workshop_data
For complete workshop deployment:
1. Open Command Prompt or PowerShell as Administrator
2. Navigate to the workshop directory:
cd "C:\path\to\workshop_data"
3. Run the deployment script:
deploy_workshop.bat
The script will automatically:
- Build and start MongoDB with Sacred data (includes automatic database restoration)
- Set up Omniboard visualization interface
- Create JupyterLab environment for data analysis
- Deploy multi-user JupyterHub with authentication
- Configure all networking between services
- Restore Sacred experiment data (runs, metrics, fs.files, fs.chunks collections)
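The restoration step above boils down to a mongodump from the source cluster followed by a mongorestore inside the container. A minimal Python sketch of the equivalent commands, built as argument lists; the container name and dump path follow this README, but your deploy script may differ:

```python
def dump_cmd(uri: str, db: str, out_dir: str) -> list[str]:
    # mongodump from the source cluster into a local directory
    return ["mongodump", f"--uri={uri}", f"--db={db}", f"--out={out_dir}"]


def restore_cmd(container: str, db: str, dump_dir: str) -> list[str]:
    # mongorestore run inside the MongoDB container via docker exec;
    # mongorestore expects the per-database subdirectory of the dump
    return ["docker", "exec", container,
            "mongorestore", f"--db={db}", f"{dump_dir}/{db}"]


print(" ".join(restore_cmd("docker-mongodb-omniboard", "workshop_data",
                           "/docker-entrypoint-initdb.d")))
```

These lists can be passed to subprocess.run() directly; building them as lists avoids shell-quoting issues on Windows.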
Once deployment is complete, you can access:
| Service | URL | Description |
|---|---|---|
| Omniboard | http://localhost:9004 | Sacred experiments visualization |
| JupyterLab | http://localhost:8888 | Individual Jupyter environment |
| JupyterHub | http://localhost:8000 | Multi-user Jupyter server |
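Before sharing URLs, you can verify that each service is actually listening. A small standard-library sketch; the ports come from the table above, and 'localhost' assumes you run it on the host machine:

```python
import socket


def service_up(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# Workshop service ports from the table above
for name, port in [("Omniboard", 9004), ("JupyterLab", 8888),
                   ("JupyterHub", 8000), ("MongoDB", 27018)]:
    status = "up" if service_up("localhost", port) else "DOWN"
    print(f"{name:10s} port {port}: {status}")
```

This only checks that the port accepts connections; a service can listen but still be mid-startup, so give the containers a minute before trusting a DOWN result.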
To allow other users on your network to access the workshop:
1. Find your machine's IP address:
ipconfig | findstr "IPv4"
Look for your WiFi adapter IP (usually 192.168.x.x or 10.x.x.x)
2. Share these URLs with workshop participants:
- Omniboard: http://YOUR_IP:9004
- JupyterLab: http://YOUR_IP:8888
- JupyterHub: http://YOUR_IP:8000
Example: Omniboard at http://192.168.1.100:9004
If you prefer to start components individually:
cd docker-mongodb-omniboard
docker-compose up -d
cd docker-sacred-access
docker-compose up -d
cd jupyterhub-deploy-docker\workshop_DREAM
docker-compose up -d
| Parameter | Value |
|---|---|
| Host | localhost (or your machine's IP for network access) |
| Port | 27018 |
| Database | workshop_data |
| Authentication | None (for workshop purposes) |
from pymongo import MongoClient
from sacred.observers import MongoObserver
# Local access
client = MongoClient('mongodb://localhost:27018/')
# Network access
# client = MongoClient('mongodb://YOUR_IP:27018/')
db = client['workshop_data']
# Verify Sacred collections are loaded
collections = db.list_collection_names()
print("Available collections:", collections)
# Expected: ['runs', 'metrics', 'fs.files', 'fs.chunks', 'omniboard.settings', ...]
# Example: query experiment runs
for run in db.runs.find().limit(5):
    print(f"Run {run['_id']}: {run.get('experiment', {}).get('name', 'Unknown')}")

When the workshop is complete, clean up all Docker resources:
cleanup_workshop.bat
This will remove all workshop-related:
- Docker containers
- Docker images
- Docker volumes
- Docker networks
Docker Desktop not running:
- Start Docker Desktop and wait for it to fully initialize
- Check system tray for Docker whale icon
Port already in use:
- Ports 8000, 8888, 9004, and 27018 must be available
- Stop other services using these ports or modify the docker-compose.yml files
Permission denied:
- Run Command Prompt or PowerShell as Administrator
- Ensure your user is in the docker-users group
Cannot access from network:
- Check Windows Firewall settings
- Ensure Docker Desktop allows network access
- Verify IP address with ipconfig
MongoDB data not loading:
- Check if Sacred collections exist: docker exec -it <mongodb-container> mongosh workshop_data --eval "db.getCollectionNames()"
- Rebuild with clean volumes: docker-compose down -v then docker-compose up --build
- Verify dump files exist in container: docker exec -it <mongodb-container> ls -la /docker-entrypoint-initdb.d/
Docker build failures:
- Clean Docker cache: docker system prune -f
- Rebuild without cache: docker build --no-cache
- Check disk space and Docker Desktop resources

- Check deployment status: docker ps -a to see all containers
- View service logs: docker-compose logs in the relevant directory
- MongoDB-specific logs: docker-compose logs mongodb in docker-mongodb-omniboard/
- Check database restoration: access the MongoDB container and verify collections exist
- Verify network ports: netstat -an | findstr ":8000 :8888 :9004 :27018"
- Clean restart: use cleanup_workshop.bat then deploy_workshop.bat for a fresh deployment
- Pre-loaded Sacred experiment data with automatic restoration
- Complete Sacred collections: runs, metrics, fs.files, fs.chunks
- Web-based experiment browser and visualization
- Real-time experiment tracking
- Omniboard configuration and custom columns
- Incense library for Sacred experiment querying
- PyGWalker for interactive data visualization
- Pre-configured workshop notebooks
- Individual Docker containers per user
- Built-in user authentication and signup
- Persistent user data storage
- Network-accessible for workshop participants
The workshop includes:
- Interactive tutorials on Sacred experiment framework
- Data analysis and visualization examples using real Sacred experiment data
- Hands-on exercises with pre-loaded experiment runs and metrics
- Multi-user collaboration capabilities
- GridFS file storage examples (Sacred artifact management)
- Omniboard dashboard customization and advanced querying
The workshop database contains:
- Experiment runs with configuration, results, and metadata
- Metrics and measurements from multiple experiments
- Artifacts stored in GridFS (files, plots, models)
- Omniboard settings for dashboard customization
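Sacred's MongoObserver stores each metric as a single document with parallel steps/values lists keyed to a run. A minimal sketch of flattening one such document into rows for analysis; the sample document below is invented for illustration:

```python
# A metrics document as Sacred's MongoObserver typically stores it
# (the concrete values here are made up for illustration):
metric_doc = {
    "run_id": 1,
    "name": "validation.loss",
    "steps": [0, 1, 2],
    "values": [0.91, 0.55, 0.42],
}


def metric_rows(doc: dict) -> list[tuple]:
    """Pair each recorded step with its value, e.g. for loading into pandas."""
    return list(zip(doc["steps"], doc["values"]))


print(metric_rows(metric_doc))  # [(0, 0.91), (1, 0.55), (2, 0.42)]
```

With the workshop database loaded, the same helper applies to real documents fetched via db.metrics.find({"run_id": some_id}).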
Happy experimenting!