| title | AquaIntel Vision |
|---|---|
| emoji | 🌊 |
| colorFrom | blue |
| colorTo | indigo |
| sdk | docker |
| app_port | 8501 |
| pinned | false |
Production-grade underwater image enhancement, real-time threat detection, and model analytics — in one sleek dashboard.
AquaIntel Vision is a full-stack MLOps dashboard for underwater computer vision research and production deployment. Upload a degraded underwater image or video feed, and the platform instantly:
- Enhances it with a trained U-Net deep-learning model (SSIM + MSE loss)
- Detects threats — divers, boats, and foreign objects — using a fine-tuned YOLO11 detector
- Measures quality objectively with PSNR, SSIM, and brightness/contrast delta metrics
- Tracks experiments with a full model registry and comparative analytics
- U-Net image enhancement — Custom architecture trained on the UIEB underwater dataset, producing vivid, defogged output from murky underwater frames
- YOLO11 threat detection — Fine-tuned on a custom underwater annotation dataset for real-time bounding-box inference across 9+ threat classes
- Multi-model checkpoint switching — Pick any trained `.keras` checkpoint from `models/checkpoints` via a sidebar dropdown without restarting the app
- Automated model recommendation — The Model Arena tab surfaces the best-performing run from your registry based on composite PSNR/SSIM scoring
- Objective image metrics — PSNR (dB), SSIM, brightness Δ, and contrast Δ computed per inference using `scikit-image`
- Interactive drag comparison slider — A pure-JS side-by-side slider (no external Streamlit component needed) to visually compare original vs. enhanced output
- Session analytics dashboard — Real-time Plotly charts of per-inference latency and video-job FPS across the active session
- In-app video enhancement — Upload `.mp4`/`.avi` files; the app processes them frame by frame and packages a download-ready enhanced video
- Batch image processing — ZIP upload support: compress a folder of images, upload once, and download the enhanced batch as a ZIP archive
- Real-time modes — Webcam, video file, RTSP stream, and batch-folder processing via the headless `video_processor.py` CLI
- Dockerized — Single-command deployment with a non-root user, OpenCV system deps, and a health-check endpoint baked in
- Security-first HTML rendering — All user-supplied values are passed through `html.escape` before injection; no raw-string interpolation into HTML
- GPU auto-detection — `utils/gpu.py` selects GPU/CPU at startup and applies a memory-growth policy; overridable with env flags
- Centralized config — `config.yaml` drives augmentation profiles, model paths, dataset location, and runtime knobs — no magic constants scattered through the code
- Deep-ocean dark design system — Custom CSS design tokens, glassmorphism panels, `Orbitron` + `Inter` typography, and animated glow micro-interactions
- Fully responsive — Grid layouts collapse gracefully to a single column on narrow viewports
- Plotly-themed charts — All data visualisations share the same dark-ocean colour palette
🚧 Demo link goes here once deployed to Hugging Face Spaces or Streamlit Community Cloud.
| Platform | Link |
|---|---|
| Hugging Face Spaces | 🚧 Coming soon |
| Layer | Technology |
|---|---|
| UI Framework | Streamlit ≥ 1.38 |
| Enhancement Model | TensorFlow / Keras 2.x — Custom U-Net |
| Detection Model | Ultralytics YOLO11 ≥ 8.2 |
| Deep Learning (PyTorch) | PyTorch ≥ 2.0 + TorchVision |
| Computer Vision | OpenCV 4.x, Pillow |
| Data Science | NumPy, Pandas, scikit-learn |
| Image Quality | scikit-image (PSNR, SSIM) |
| Visualisation | Plotly 5.x, Seaborn, Matplotlib |
| Augmentation | Albumentations ≥ 1.4 |
| Model Export | ONNX, onnxruntime, onnxslim |
| Containerisation | Docker (Python 3.11 slim base) |
| Config | PyYAML 6.x |
- Python 3.11 (tested; other 3.x versions may work)
- `git`
- (Optional) NVIDIA GPU with CUDA for accelerated inference
```bash
git clone https://github.com/SohamLone77/Aquaintel_Vision.git
cd Aquaintel_Vision
```

```powershell
# Windows (PowerShell)
py -3.11 -m venv .venv311
.\.venv311\Scripts\activate
```

```bash
# macOS / Linux
python3.11 -m venv .venv311
source .venv311/bin/activate
```

```bash
pip install --upgrade pip
pip install -r requirements.txt
```

```bash
streamlit run streamlit_app.py
```

The app will open automatically at http://localhost:8501.
For a fully reproducible, production-equivalent environment:
```bash
# Build
docker build -t aquaintel-vision .

# Run
docker run -p 8501:8501 aquaintel-vision
```

Then navigate to http://localhost:8501.
The container runs as a non-root user, exposes port 8501, and includes a built-in health-check endpoint.
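If you want to script a liveness probe against the running container, recent Streamlit versions answer on `/_stcore/health`. A small stdlib-only check — the exact endpoint path is a Streamlit convention and may vary by version:

```python
import urllib.request

def is_healthy(base_url: str = "http://localhost:8501") -> bool:
    """Ping Streamlit's built-in health endpoint; True iff it returns HTTP 200."""
    try:
        with urllib.request.urlopen(f"{base_url}/_stcore/health", timeout=3) as resp:
            return resp.status == 200
    except OSError:
        # Connection refused, DNS failure, timeout, etc.
        return False
```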
All runtime knobs live in `config.yaml`. Edit this file to change model paths, dataset location, and augmentation behaviour without touching any Python code.

```yaml
# config.yaml (excerpt)
model:
  checkpoint_dir: models/checkpoints

data:
  raw_dir: data/raw
  reference_dir: data/reference

augmentation:
  enabled: true
  profile: light        # one of: none | light | standard | strong
  flip_prob: 0.5
  brightness_delta: 0.1
  contrast_lower: 0.8
  contrast_upper: 1.2
```

You can override GPU behaviour with environment variables — no code edits required:
```powershell
# Windows PowerShell
$env:USE_GPU = "1"            # 1 = enable GPU, 0 = force CPU
$env:GPU_MEMORY_GROWTH = "1"  # prevent pre-allocating all VRAM
$env:MIXED_PRECISION = "1"    # enable float16 mixed precision
streamlit run streamlit_app.py
```

```bash
# macOS / Linux
USE_GPU=1 GPU_MEMORY_GROWTH=1 streamlit run streamlit_app.py
```

The dataset location can be pointed elsewhere the same way:

```powershell
$env:DATA_PATH = "D:\datasets\uieb"
streamlit run streamlit_app.py
```

Note: TensorFlow 2.11+ does not support native GPU acceleration on Windows. Use one of the options below.
Option A — WSL2 + CUDA (recommended for NVIDIA):

```bash
wsl
pip install tensorflow[and-cuda]
```

Option B — Legacy native Windows (TF 2.10 + CUDA 11.2 + cuDNN 8.1):

```powershell
py -3.10 -m venv .venv310
.\.venv310\Scripts\python.exe -m pip install tensorflow==2.10.* "numpy<2"
```

| Step | What to do |
|---|---|
| 1. Select a model | Use the 🧠 Model section in the left sidebar to pick a trained checkpoint |
| 2. Choose a tab | Navigate using the top tab bar: Enhance, Detect, Video, Batch, Model Arena, Analytics |
| 3. Upload an image | On the Enhance tab, drag-and-drop or browse for a .jpg, .png, or .bmp file |
| 4. Run enhancement | Click Enhance Image and watch the Orbitron-styled metrics appear |
| 5. Compare | Drag the slider handle left/right on the comparison panel to visually inspect the before/after |
| 6. Download | Click ⬇ Download Enhanced PNG to save the result |
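Under the hood, the Enhance flow is preprocess → U-Net → postprocess. The NumPy ends of that pipeline can be sketched like this — the [0, 1] scaling and the batch-of-1 shape are assumptions about the model contract, and resizing to the network's input resolution is omitted for brevity:

```python
import numpy as np

def preprocess(img: np.ndarray) -> np.ndarray:
    """uint8 HWC image -> float32 batch of 1, scaled to [0, 1] for the U-Net."""
    return img.astype(np.float32)[np.newaxis, ...] / 255.0

def postprocess(batch: np.ndarray) -> np.ndarray:
    """Model output batch -> displayable uint8 HWC image."""
    return np.clip(np.rint(batch[0] * 255.0), 0, 255).astype(np.uint8)
```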
- Switch to the Detect tab
- Upload your image (enhanced output is auto-carried over if you came from Enhance)
- Adjust the Confidence threshold slider in the sidebar
- Click Run Detection — annotated bounding boxes and threat counts appear in real time
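Conceptually, the confidence slider is just a filter applied to raw detections before they are drawn and tallied. In pure-Python terms — the `dets` record shape here is illustrative, not the Ultralytics API:

```python
from collections import Counter

def filter_detections(dets: list[dict], conf_threshold: float = 0.5) -> list[dict]:
    """Drop boxes below the sidebar confidence threshold."""
    return [d for d in dets if d["conf"] >= conf_threshold]

def threat_counts(dets: list[dict]) -> Counter:
    """Per-class tally for the threat-count readout."""
    return Counter(d["cls"] for d in dets)
```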
- Zip a folder of images: `Compress-Archive -Path .\input_images -DestinationPath batch.zip` (PowerShell)
- Switch to the Batch tab and upload `batch.zip`
- Click Process Batch — a progress bar tracks each frame
- Download the results as a ZIP archive
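The batch round-trip — unpack the uploaded ZIP, transform each image, repack — needs nothing beyond the standard library. A sketch (the `repack` helper and its callback signature are illustrative):

```python
import io
import zipfile

def repack(zip_bytes: bytes, transform) -> bytes:
    """Apply `transform(name, data) -> bytes` to every file in a ZIP archive.

    Returns the bytes of a new ZIP with the transformed contents.
    """
    out = io.BytesIO()
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zin, \
         zipfile.ZipFile(out, "w", zipfile.ZIP_DEFLATED) as zout:
        for name in zin.namelist():
            zout.writestr(name, transform(name, zin.read(name)))
    return out.getvalue()
```

In the app, `transform` would decode the image, run the enhancement model, and re-encode the result.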
- Go to the Model Arena tab
- Select two experiment runs from the dropdowns
- The app renders metric deltas, validation loss curves, and automatically crowns a winner 🏆
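The "crown a winner" step amounts to ranking registry entries by a composite PSNR/SSIM score. A sketch — the 0.5/0.5 weights, the 40 dB normaliser, and the field names are assumptions, not the registry's actual schema:

```python
def composite_score(run: dict, w_psnr: float = 0.5, w_ssim: float = 0.5) -> float:
    """Blend PSNR (normalised to ~[0, 1] by capping at 40 dB) with SSIM."""
    return w_psnr * min(run["psnr"] / 40.0, 1.0) + w_ssim * run["ssim"]

def best_run(registry: list[dict]) -> dict:
    """Return the registry entry with the highest composite score."""
    return max(registry, key=composite_score)
```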
```text
Aquaintel_Vision/
├── streamlit_app.py         # Main dashboard (3,100+ lines)
├── production_detector.py   # UnderwaterThreatDetector class
├── video_processor.py       # Headless video / RTSP / webcam CLI
├── train_unet.py            # U-Net training entrypoint
├── train_complete.py        # Core trainer implementation
├── finetune_yolo.py         # YOLO fine-tuning pipeline
├── config.yaml              # Centralised runtime config
├── requirements.txt         # Pinned dependency ranges
├── Dockerfile               # Production container definition
├── utils/
│   ├── config_loader.py     # Reads config.yaml at startup
│   └── gpu.py               # TF device selection & memory growth
├── training/
│   └── data_loader.py       # UIEB dataset loader
├── models/
│   └── checkpoints/         # Trained .keras checkpoints
├── data/
│   ├── raw/                 # Raw (degraded) input images
│   └── reference/           # Clean reference images
└── results/
    └── model_registry.json  # Run metadata for all experiments
```
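For orientation, `utils/config_loader.py` above presumably boils down to something like the following — a sketch only; the real module may cache, validate, or merge defaults:

```python
import yaml

def load_config(path: str = "config.yaml") -> dict:
    """Read the central YAML config; callers index into the returned dict,
    e.g. cfg["model"]["checkpoint_dir"]."""
    with open(path, "r", encoding="utf-8") as f:
        return yaml.safe_load(f) or {}
```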
Contributions, issues, and feature requests are welcome! Please follow these steps:
- Fork the repository
- Create a feature branch — `git checkout -b feat/my-amazing-feature`
- Commit your changes — `git commit -m 'feat: add amazing feature'`
- Push — `git push origin feat/my-amazing-feature`
- Open a Pull Request and describe what you changed
Please follow Conventional Commits for commit messages and ensure any new code is covered by a test in tests/.
Distributed under the MIT License. See LICENSE for full text.
MIT License
Copyright (c) 2026 Soham Lone
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
Made with 🌊 by Soham Lone · GitHub
AquaIntel Vision v2026.04
