DeepMed-Lab-ECNU/BiGen
Historical Report Guided Bi-modal Concurrent Learning for Pathology Report Generation [MICCAI 2025 Oral, Best Computational Pathology Paper]
=====

Historical report guided bi-modal concurrent learning for pathology report generation. [Link]
Ling Zhang, Boxiang Yun, Qingli Li, Yan Wang
Summary: We propose a Historical Report Guided Bi-modal Concurrent Learning Framework for Pathology Report Generation (BiGen) that emulates pathologists' diagnostic reasoning. (We provide the knowledge bank file, the captions, and the WSI features extracted with UNI at link data1, and the WSI features extracted with PLIP at link data2.) By incorporating the visual branch and knowledge branch modules, our method provides rich WSI-relevant semantic content and suppresses information redundancy in WSIs.

Pre-requisites:

The slide-text captions are from PathText, the dataset collected by WsiCaption.

Downloading TCGA Slides

To download diagnostic WSIs (formatted as .svs files), please refer to the NIH Genomic Data Commons Data Portal. WSIs for each cancer type can be downloaded using the GDC Data Transfer Tool.

Processing Whole Slide Images

To process WSIs, tissue regions in each biopsy slide are first segmented by applying Otsu's thresholding to a downsampled WSI read with OpenSlide. Non-overlapping 256 x 256 patches are then extracted from the segmented tissue regions at 10x magnification. Finally, UNI encodes the raw image patches into 1024-dim feature vectors, which we save as one .pt file per WSI. We perform this pre-processing with CLAM.
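The segmentation and patching step above can be sketched as follows. This is a minimal NumPy illustration of Otsu thresholding plus non-overlapping patch selection, not the CLAM implementation: the 50% tissue-fraction cutoff and the tissue-is-darker-than-threshold assumption are our illustrative choices.

```python
import numpy as np

def otsu_threshold(gray):
    """Pure-NumPy Otsu threshold on a uint8 grayscale image."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = gray.size
    weighted_sum = np.dot(np.arange(256), hist)
    sum_b, w_b = 0.0, 0.0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_b += hist[t]          # pixels at or below candidate threshold
        if w_b == 0:
            continue
        w_f = total - w_b       # pixels above candidate threshold
        if w_f == 0:
            break
        sum_b += t * hist[t]
        mean_b = sum_b / w_b
        mean_f = (weighted_sum - sum_b) / w_f
        between = w_b * w_f * (mean_b - mean_f) ** 2  # between-class variance
        if between > best_var:
            best_var, best_t = between, t
    return best_t

def tissue_patch_coords(gray, patch=256):
    """Top-left coords of non-overlapping patches whose tissue fraction
    (pixels at or below the Otsu threshold, i.e. darker) exceeds 50%."""
    t = otsu_threshold(gray)
    mask = gray <= t
    h, w = gray.shape
    coords = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            if mask[y:y + patch, x:x + patch].mean() > 0.5:
                coords.append((x, y))
    return coords

# Toy slide thumbnail: dark tissue (50) on a bright background (230).
thumb = np.full((512, 512), 230, dtype=np.uint8)
thumb[:, :256] = 50
print(tissue_patch_coords(thumb))  # → [(0, 0), (0, 256)]
```

In the real pipeline the thumbnail would come from a downsampled OpenSlide level and the selected coordinates would be scaled back up to read patches at 10x magnification.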

Running Experiments

Experiments can be run using the following generic command lines (the code is modified from WsiCaption):

Training model

python main.py --mode 'Train' --n_gpu <GPUs to be used> --image_dir <SLIDE FEATURE PATH USING UNI> --image_dir_plip <SLIDE FEATURE PATH USING PLIP> --ann_path <CAPTION PATH> --split_path <PATH to the directory containing the train/val/test splits> --bank_path <KNOWLEDGE BANK PATH> --save_dir <SAVING CKPT PATH>

Testing model

python main.py --mode 'Test' --image_dir <SLIDE FEATURE PATH USING UNI> --image_dir_plip <SLIDE FEATURE PATH USING PLIP> --ann_path <CAPTION PATH> --split_path <PATH to the directory containing the train/val/test splits> --bank_path <KNOWLEDGE BANK PATH> --checkpoint_dir <PATH TO CKPT> --save_dir <PATH TO SAVING RESULTS>
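Before launching training or testing, it can help to sanity-check the extracted feature files against the layout described above: one 2-D matrix of patch features per WSI, with 1024-dim vectors in the UNI case. The sketch below is illustrative only; real feature files are .pt files loaded with torch.load, and NumPy arrays stand in for the tensors here.

```python
import numpy as np

def check_wsi_features(features, expected_dim=1024):
    """Validate one WSI's patch-feature matrix of shape (num_patches, feat_dim)
    and return the number of patches. expected_dim=1024 matches UNI features;
    pass the appropriate dimension for other extractors such as PLIP."""
    feats = np.asarray(features)
    if feats.ndim != 2:
        raise ValueError(f"expected 2-D (patches, dim), got shape {feats.shape}")
    if feats.shape[1] != expected_dim:
        raise ValueError(f"expected {expected_dim}-dim features, got {feats.shape[1]}")
    return feats.shape[0]

# Stand-in for torch.load("<slide_id>.pt"): 300 patches of 1024-dim UNI features.
n = check_wsi_features(np.zeros((300, 1024), dtype=np.float32))
print(n)  # → 300
```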

Basic Environment

  • Linux (Tested on Ubuntu 20.04.6 LTS (Focal Fossa))
  • NVIDIA GPU (Tested on NVIDIA A40) with CUDA 12.1
  • Python (3.8)
