Add interactive 3D Gradio viewer for virtual mIF visualization + UV-based setup #13

johnyquest7 wants to merge 1 commit into prov-gigatime:main from
Conversation
- Interactive 3D point-cloud viewer for 21 virtual mIF channels
- requirements.txt for pip/uv installation (replaces conda environment.yml)
- SETUP_UV.md with fast setup instructions using uv
If you need any information or code change to get it merged, please let me know.
Thank you for this PR, @johnyquest7. Let me take a look.
Pull request overview
Adds a web-based interactive viewer to run GigaTIME inference on an uploaded H&E tile and visualize predicted virtual mIF channels as a 3D point-cloud (plus a 2D per-channel gallery), along with UV-based setup guidance.
Changes:
- Introduces a new Gradio app + separate HTTP server to render a Three.js 3D point-cloud viewer for predicted channels.
- Adds UV-based setup documentation for fast environment creation and running the viewer.
- Adds a requirements.txt to support pip/uv installation.
Reviewed changes
Copilot reviewed 2 out of 3 changed files in this pull request and generated 10 comments.
| File | Description |
|---|---|
| scripts/gigatime_3d_integrated.py | New two-server Gradio + Three.js workflow to run inference and display 3D/2D outputs. |
| SETUP_UV.md | New UV-based setup and run instructions for the 3D viewer. |
| requirements.txt | Declares Python dependencies for pip/uv installation. |
```python
def load_model():
    print("Loading GigaTIME model...")
    model = archs.gigatime(NUM_CLASSES, INPUT_CHANNELS)
    local_dir = snapshot_download(repo_id="prov-gigatime/GigaTIME")
    state_dict = torch.load(os.path.join(local_dir, "model.pth"), map_location="cpu")
    model.load_state_dict(state_dict)
    model.eval()
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model.to(device)
    print(f"Model loaded on {device}")
    return model, device


MODEL, DEVICE = load_model()
```
The model is downloaded and loaded at import time (MODEL, DEVICE = load_model()), which will block startup and can fail before the UI renders (e.g., missing HF auth, no network). Consider lazy-loading the model inside run_pipeline (or on first request) and surface a clear error in the UI instead of crashing on import.
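A minimal sketch of such a lazy-loading pattern (the helper name `get_model` and the module-level cache are illustrative, not part of the PR):

```python
import threading

_MODEL_LOCK = threading.Lock()
_MODEL_CACHE = {}  # holds the loaded model after the first request


def get_model(loader):
    """Load the model on first use instead of at import time.

    `loader` is any zero-argument callable (e.g. the PR's load_model).
    Exceptions propagate to the caller, which can surface them in the
    UI rather than crashing the whole app on import.
    """
    with _MODEL_LOCK:
        if "model" not in _MODEL_CACHE:
            _MODEL_CACHE["model"] = loader()
        return _MODEL_CACHE["model"]
```

`run_pipeline` would then call something like `get_model(load_model)` on each request; only the first call pays the download/load cost, and the lock keeps concurrent first requests from loading twice.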
```python
    # Write data.json into the viewer directory
    data_path = os.path.join(VIEWER_DIR, "data.json")
    with open(data_path, "w") as f:
        json.dump(payload, f)

    # Return iframe pointing to the 3D viewer on port 7861
    # Add timestamp to bust cache
    import time
    ts = int(time.time() * 1000)
    iframe = (
        f'<iframe src="http://localhost:{VIEWER_PORT}/viewer.html?t={ts}" '
        f'style="width:100%;height:700px;border:none;border-radius:10px;" '
        f'allow="accelerometer;autoplay"></iframe>'
    )
    return gallery, iframe
```
data.json is overwritten on each inference run in a shared directory. With concurrent requests or multiple users, the viewer can read the wrong payload (race condition) and users will interfere with each other. Write a uniquely named JSON per run (e.g., UUID) and pass that filename/token to viewer.html so each iframe fetches the correct data.
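One way to sketch the per-run naming (the function name and the query-parameter scheme are hypothetical):

```python
import json
import os
import uuid


def write_run_payload(viewer_dir, payload):
    """Write the payload under a unique per-run filename to avoid races."""
    token = uuid.uuid4().hex
    path = os.path.join(viewer_dir, f"data_{token}.json")
    with open(path, "w") as f:
        json.dump(payload, f)
    # The token would be passed to the iframe, e.g. viewer.html?run=<token>,
    # so each viewer instance fetches exactly its own data file.
    return token, path
```

The viewer's JavaScript would read the `run` query parameter and fetch `data_<token>.json` instead of a shared `data.json`.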
```python
    # Add timestamp to bust cache
    import time
    ts = int(time.time() * 1000)
    iframe = (
        f'<iframe src="http://localhost:{VIEWER_PORT}/viewer.html?t={ts}" '
        f'style="width:100%;height:700px;border:none;border-radius:10px;" '
        f'allow="accelerometer;autoplay"></iframe>'
    )
    return gallery, iframe
```
The iframe URL is hardcoded to http://localhost:7861/.... If the Gradio app is accessed from another machine (e.g., when launched on 0.0.0.0), the client's browser will try to reach its own localhost and the 3D viewer will not load. Derive the host dynamically from the incoming request (or use a relative/proxied URL) so remote access works.
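A sketch of deriving the viewer host from the client's request (assuming the handler has access to the request's Host header; the helper name is illustrative):

```python
def viewer_iframe_src(request_host, viewer_port, ts):
    """Build the viewer URL from the host the browser actually used.

    request_host is the Host header value, e.g. "192.168.1.5:7860";
    keep that hostname and substitute the viewer server's port.
    """
    hostname = request_host.split(":")[0]
    return f"http://{hostname}:{viewer_port}/viewer.html?t={ts}"
```

In Gradio, the event handler can accept a `gr.Request` argument and read the Host header from it, so the iframe points at the same machine the client already reached.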
```python
class CORSHandler(SimpleHTTPRequestHandler):
    """Serves files from VIEWER_DIR with CORS headers."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, directory=VIEWER_DIR, **kwargs)

    def end_headers(self):
        self.send_header('Access-Control-Allow-Origin', '*')
        self.send_header('Cache-Control', 'no-cache, no-store, must-revalidate')
        super().end_headers()

    def log_message(self, format, *args):
        pass  # Quiet


def start_viewer_server():
    server = HTTPServer(('0.0.0.0', VIEWER_PORT), CORSHandler)
    print(f"3D viewer server running at http://localhost:{VIEWER_PORT}")
    server.serve_forever()
```
The viewer server binds to 0.0.0.0 and also sends Access-Control-Allow-Origin: *. This exposes the local file server and its contents (including data.json) to the entire network and to any origin. Default to binding 127.0.0.1 and avoid wildcard CORS (or make both configurable via env vars) unless you explicitly intend network exposure.
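A hedged sketch of the env-var approach (the variable names `GIGATIME_VIEWER_HOST`/`GIGATIME_VIEWER_PORT` are made up for illustration):

```python
import os


def viewer_bind_address():
    """Default to loopback; network exposure is an explicit opt-in."""
    host = os.environ.get("GIGATIME_VIEWER_HOST", "127.0.0.1")
    port = int(os.environ.get("GIGATIME_VIEWER_PORT", "7861"))
    return host, port
```

`start_viewer_server` would then call `HTTPServer(viewer_bind_address(), CORSHandler)`, and the CORS header could similarly be restricted to the Gradio app's origin instead of `*`.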
```python
if __name__ == "__main__":
    # Start the 3D viewer HTTP server in a background thread
    t = threading.Thread(target=start_viewer_server, daemon=True)
    t.start()

    # Start Gradio
    demo.launch(server_name="0.0.0.0", server_port=7860, share=False)
```
demo.launch(server_name="0.0.0.0" ...) exposes the Gradio app on all interfaces by default, which is risky for a demo that downloads model weights and serves an embedded local HTTP server. Consider defaulting to 127.0.0.1 and letting users opt into LAN exposure via a flag/env var.
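One possible shape for the opt-in flag (the `--listen` flag name is an assumption, not from the PR):

```python
import argparse


def gradio_server_name(argv=None):
    """Bind 127.0.0.1 unless the user explicitly passes --listen."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--listen", action="store_true",
                        help="expose the app on all interfaces (0.0.0.0)")
    args = parser.parse_args(argv)
    return "0.0.0.0" if args.listen else "127.0.0.1"
```

The launch line would then become `demo.launch(server_name=gradio_server_name(), server_port=7860, share=False)`.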
```python
# ── Static directory for the 3D viewer ────────────────────────────────────────
VIEWER_DIR = os.path.join(SCRIPT_DIR, "_gigatime_viewer")
os.makedirs(VIEWER_DIR, exist_ok=True)
```
VIEWER_DIR is created inside the repo (scripts/_gigatime_viewer) and will accumulate generated artifacts (viewer.html, data.json). Consider using a temp/user cache directory (e.g., tempfile.gettempdir() or platform cache dirs) and/or ensure the generated folder is ignored by git to avoid accidental commits.
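A minimal sketch using the system temp directory (the subdirectory name is illustrative):

```python
import os
import tempfile


def make_viewer_dir():
    """Keep generated artifacts (viewer.html, data.json) out of the repo tree."""
    d = os.path.join(tempfile.gettempdir(), "gigatime_viewer")
    os.makedirs(d, exist_ok=True)
    return d
```

Alternatively, keep the in-repo path but add `scripts/_gigatime_viewer/` to `.gitignore` so generated files are never committed.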
```python
from PIL import Image
from huggingface_hub import snapshot_download
from http.server import HTTPServer, SimpleHTTPRequestHandler
from pathlib import Path
```
from pathlib import Path is unused in this script. Removing it avoids lint noise and keeps imports minimal.
Suggested change:

```diff
-from pathlib import Path
```
```text
1. Place this file in GigaTIME/scripts/ (next to archs.py)
2. conda activate gigatime
3. pip install gradio
4. export HF_TOKEN=<your_huggingface_token>
5. python gigatime_3d_integrated.py
```
The module docstring setup steps mention conda activate gigatime and pip install gradio, but this script also requires torch + huggingface_hub (and the repo’s code). Update the setup section to match the actual installation flow (e.g., using requirements.txt / uv) so users don’t hit missing-dependency errors.
Suggested change:

```diff
-1. Place this file in GigaTIME/scripts/ (next to archs.py)
-2. conda activate gigatime
-3. pip install gradio
-4. export HF_TOKEN=<your_huggingface_token>
-5. python gigatime_3d_integrated.py
+1. Check out the GigaTIME repository and `cd` to the repo root
+2. conda activate gigatime
+3. Install the project dependencies (recommended: `uv sync`;
+   alternatively: `pip install -r requirements.txt`)
+4. export HF_TOKEN=<your_huggingface_token>
+5. python scripts/gigatime_3d_integrated.py
+This script imports repository code (for example `archs`) and also
+depends on packages such as torch, gradio, Pillow, numpy, and
+huggingface_hub, so installing the full project dependencies is required.
```
```html
    <div id="hint">Drag to rotate · Scroll to zoom · Toggle channels in sidebar</div>
  </div>
</div>
<script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/r128/three.min.js"></script>
```
The viewer loads Three.js from a public CDN without an integrity pin. For reproducibility and supply-chain safety, consider vendoring the dependency (or at least adding SRI + crossorigin and pinning a known-good version).
Suggested change:

```diff
-<script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/r128/three.min.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/r128/three.min.js" integrity="sha512-dLxUj8DqRLVQjaxAg/P6MqxsVXni4eWh05rq6ArlTc95xJ3Adxpv8uKXu95syEHCqB6f+GO6zkRgZNpmjDoE7A==" crossorigin="anonymous"></script>
```
# Quick Setup with UV

[UV](https://github.com/astral-sh/uv) is a fast Python package manager that resolves and installs dependencies significantly faster than pip/conda.

## 1. Install UV

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```

## 2. Clone and set up

```bash
git clone https://github.com/prov-gigatime/GigaTIME.git
cd GigaTIME

# Create virtual environment (Python 3.11 recommended)
uv venv --python 3.11
source .venv/bin/activate
```
PR description says requirements.txt replaces the conda environment.yml, but the repo still contains environment.yml and README.md currently recommends conda setup. Consider updating the main README (or explicitly stating UV as an alternative) so users can discover the new setup path and aren’t given conflicting instructions.
This PR adds an interactive web-based viewer (scripts/gigatime_3d_integrated.py) that runs GigaTIME inference and renders the 21 predicted mIF channels as a 3D point-cloud visualization.
Features:
- Upload H&E tile → real model inference → 3D + 2D output
- Three.js point clouds: one color-coded layer per protein channel, stacked above the H&E base
- Sidebar: per-channel toggles, layer spacing/height sliders, auto-rotate
- 2D gallery tab with per-channel colored heatmaps
- Single file; no extra dependencies beyond gradio

To run:

```bash
cd scripts/
python gigatime_3d_integrated.py
# or: uv run gigatime_3d_integrated.py
```