MMV-Lab/manual_annotations_risk

On the risk of manual annotations in 3D confocal microscopy image segmentation

This repository provides the code to reproduce our results from the manuscript "On the risk of manual annotations in 3D confocal microscopy image segmentation".

Setup

All of our processing was done on Linux, so this repository assumes a Linux shell; with minor tweaks, the code should also run on Windows and macOS. Setting up the Anaconda environments for the segmentation tools is beyond the scope of this repository; please refer to the respective projects (EmbedSeg MMV_Im2Im version, StarDist-3D, Cellpose). For downloading the data, you can use the provided environment.yml file.

Download the data

To download the DNA-dye and Lamin B1 image data published here, run python get_nuclei_data.py. To download the provided annotated masks and trained models from Zenodo, run python get_masks_models.py.
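For illustration, a download step like the one get_masks_models.py performs might look as follows. This is a hypothetical sketch, not the repository's actual code; the real Zenodo URLs and archive names live in the scripts themselves.

```python
# Hypothetical sketch of a Zenodo download-and-unpack step; the actual
# URLs and filenames are defined in get_masks_models.py, not here.
import urllib.request
import zipfile
from pathlib import Path

def fetch_and_extract(url: str, dest: Path) -> None:
    """Download a zip archive to dest and unpack it next to the archive."""
    dest.parent.mkdir(parents=True, exist_ok=True)
    urllib.request.urlretrieve(url, dest)   # plain download, no auth needed
    with zipfile.ZipFile(dest) as zf:
        zf.extractall(dest.parent)          # unpack alongside the archive
```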

Training

To match the Cellpose and EmbedSeg APIs, the training scripts slightly reorganize the image files, so each folder will contain duplicated files; these can be removed after training. All reorganization steps are handled by our training scripts. To train nuclei instance segmentation models, go to the corresponding folder and run train.sh for Cellpose and EmbedSeg, or python train.py for StarDist-3D. By default, the Napari-GT masks are used as training data, but this can easily be changed.
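The duplication step above can be sketched as a simple copy of images into a per-tool staging folder. This is a minimal sketch under assumed folder names and file patterns; the real layout is defined in train.sh and train.py.

```python
# Minimal sketch (assumed layout) of the reorganization the training scripts
# perform: images are duplicated into a folder structure each tool's API
# expects. Folder names and the *.tif pattern here are hypothetical.
import shutil
from pathlib import Path

def stage_images(source_dir: Path, target_dir: Path, pattern: str = "*.tif") -> int:
    """Copy every matching image into target_dir; returns the number copied."""
    target_dir.mkdir(parents=True, exist_ok=True)
    copied = 0
    for img in sorted(source_dir.glob(pattern)):
        shutil.copy2(img, target_dir / img.name)  # duplicate; source kept intact
        copied += 1
    return copied
```

Because the originals stay in place, the staged copies can simply be deleted after training.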

Inference

To get predictions from our pretrained Cellpose or StarDist-3D models, run python predict.py in the respective folder; for EmbedSeg, run predict.sh. To use your own models, adjust the model path accordingly.

Evaluation

To evaluate the manual annotations, go to Evaluation => annotations and run compare_annotations.sh. This automatically compares the volumes of the Napari-GT and Slicer-GT annotations against bioGT. For the predictions, go to Evaluation => predictions and run compare_predictions.sh. By default, StarDist-3D's predictions are analyzed; to analyze the other results, simply adjust the method variable in each Python script.
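The kind of volume comparison the evaluation scripts automate can be illustrated as follows. This is an illustrative sketch, not the repository's code: it counts voxels per instance label in two labeled masks and reports the annotated-to-reference volume ratio, assuming matching instance IDs.

```python
# Illustrative sketch of a per-instance volume comparison between a manual
# annotation and a reference (bioGT) mask. Assumes both masks are labeled
# integer arrays with matching instance IDs and background label 0.
import numpy as np

def instance_volumes(mask: np.ndarray) -> dict:
    """Voxel count per instance label, ignoring background (label 0)."""
    labels, counts = np.unique(mask, return_counts=True)
    return {int(l): int(c) for l, c in zip(labels, counts) if l != 0}

def volume_ratios(annotation: np.ndarray, reference: np.ndarray) -> dict:
    """Ratio of annotated to reference volume for labels present in both."""
    a, r = instance_volumes(annotation), instance_volumes(reference)
    return {l: a[l] / r[l] for l in a.keys() & r.keys()}
```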

Citation

If you find this repository useful, please consider citing:

@inproceedings{sonneck2023risk,
  author    = {Justin Sonneck and Shuo Zhao and Jianxu Chen},
  title     = {On the risk of manual annotations in 3D confocal microscopy image segmentation},
  booktitle = {ICCV Workshops},
  year      = {2023},
  doi       = {10.1109/ICCVW60793.2023.00421}
}
