DualProtoSeg: Simple and Efficient Design with Text- and Image-Guided Prototype Learning for Weakly Supervised Histopathology Image Segmentation
## Quick setup
- Download the BCSS dataset and place it under the project `data/` folder.
  - Example layout (common):
    ```
    data/train/
    data/val/
    data/test/
    ```
  - Adjust the dataset paths in the config if your layout differs.
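Before training, a small shell snippet can sanity-check that the expected split folders exist. This is a sketch assuming the common `data/train`, `data/val`, `data/test` layout above; adjust the names if yours differs:

```shell
# Sanity-check sketch: confirm the assumed BCSS split folders exist.
# mkdir -p only creates empty placeholders for this demo; with a real
# dataset, drop that line and let the loop report what is missing.
mkdir -p data/train data/val data/test
for split in train val test; do
  if [ -d "data/$split" ]; then
    echo "ok: data/$split"
  else
    echo "missing: data/$split" >&2
  fi
done
```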
- Install CONCH (project-specific):
  - If a pip package is available, you can try:
    ```shell
    # inside a venv (recommended)
    python3 -m venv .venv
    . .venv/bin/activate
    pip install --upgrade pip
    pip install open_clip_torch
    ```
  - Or install from the CONCH repo (if provided by your team / HF):
    ```shell
    # example - replace with the actual CONCH repo URL if needed
    git clone https://github.com/MahmoodLab/conch.git
    cd conch
    pip install -e .
    ```
- Install Python dependencies from `requirements.txt`:
  ```shell
  # activate your virtualenv first
  . .venv/bin/activate
  pip install -r requirements.txt
  ```
  Note: for torch, pick the wheel that matches your CUDA version. Example (change `cu121` to your CUDA version):
  ```shell
  pip install --index-url https://download.pytorch.org/whl/cu121 torch torchvision
  ```
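To avoid copy-paste mistakes with the `cuXY` tag, here is a small sketch that derives the wheel tag from a CUDA version string. The `cuda_tag` helper is hypothetical (it just strips the dot), and the index-URL pattern mirrors the example above; check pytorch.org for the currently supported tags:

```shell
# Sketch: turn a CUDA version like "12.1" into the wheel tag "cu121".
# cuda_tag is a hypothetical helper, not part of any tool; the URL pattern
# copies the example command above.
cuda_tag() {
  echo "cu$(echo "$1" | tr -d '.')"
}
TAG=$(cuda_tag "12.1")
echo "pip install --index-url https://download.pytorch.org/whl/$TAG torch torchvision"
```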
- Run training with the Makefile:
  ```shell
  # run with defaults: CONFIG=config.yaml, GPU=0
  make run
  # or explicitly
  make run CONFIG=config.yaml GPU=0
  ```
  The script defaults to saving outputs under a `runs/` folder if the config path has no directory component.
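For reference, the `make run CONFIG=... GPU=...` interface above could be backed by a target like this minimal sketch; the `--config` flag on `train.py` is an assumption, so check the repo's actual Makefile:

```make
# Minimal sketch of a run target matching "make run CONFIG=... GPU=..."
# (the train.py --config flag is an assumption, not confirmed by the repo).
CONFIG ?= config.yaml
GPU ?= 0

run:
	CUDA_VISIBLE_DEVICES=$(GPU) python3 train.py --config $(CONFIG)
```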
## Notes & tips
- If your `config.yaml` points to a CONCH checkpoint (e.g., `clip.checkpoint_path`), ensure that file path exists.
- To resume training from a checkpoint, pass `--resume path/to/checkpoint.pth` to `train.py` (edit the `Makefile` or run `python3 train.py ...` directly).
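The checkpoint check in the first tip can be scripted. This is a hedged sketch: the demo config below is fabricated for illustration, and the `grep` assumes a simple `checkpoint_path: <path>` line, so adjust the lookup for a real, nested config:

```shell
# Sketch: fail fast when the CONCH checkpoint named in a config is missing.
# The demo config is fabricated; the grep assumes a flat
# "checkpoint_path: <path>" line, per the clip.checkpoint_path note above.
cat > /tmp/demo_config.yaml <<'EOF'
clip:
  checkpoint_path: /tmp/conch_checkpoint.pt
EOF
CKPT=$(grep -m1 'checkpoint_path' /tmp/demo_config.yaml | awk '{print $2}')
if [ -f "$CKPT" ]; then
  echo "checkpoint ok: $CKPT"
else
  echo "checkpoint missing: $CKPT"
fi
```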
## Acknowledgements
Parts of the `utils/` folder were inspired by the excellent PBIP implementation by QingchenTang. We sincerely thank the authors for making their code publicly available.