Research codebase for low-dose CT reconstruction/denoising on LoDoPaB-CT. It unifies FBP-input, sinogram-input, and GAN branches with shared training/eval tooling.
- Integrated models: `fbp_unet`, `fbp_edcnn`, `redcnn`, `sist`, `iradonmap`, `attrdn`, `fedm2ct`, `dugan`.
- Unified trainer (`src/trainer.py`) and evaluator (`tests/test_eval.py`), with outputs under `results/` and `test_results/`.
- FBP cache and timing helpers (`src/data/fbp_dataset.py`, `tests/measure_fbp_time.py`).
- Metrics plotting (`tests/plot_metrics.py`).
- `configs/`: training configs (reconstructor/data/optim/trainer)
- `src/`: training, model, and dataset implementations
- `tests/`: evaluation and utility scripts
- `test_results/`: evaluation outputs
```shell
pip install -r requirements.txt
pip install dival lpips astra-toolbox
```

- CUDA is used by the `astra_cuda` backend (fallbacks: `astra_cpu` / `skimage`).
- `lpips` is required only when computing LPIPS metrics.
This project uses utilities from the DIVAL project for dataset loading and some reconstructor logic. The overall design, models, and features in this repository are independently implemented. Thanks to the DIVAL authors and contributors for their work.
- The data path is configured in `~/.dival/config.json` under `lodopab_dataset/data_path`.
- Linux helper script: `bash src/get_lodopab.sh /path/to/lodopab`
- Alternatively, use `LoDoPaBDataset` and follow the interactive download prompt.
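For reference, a minimal `~/.dival/config.json` with the key above might look like this (the path is a placeholder; adjust it to your setup):

```json
{
  "lodopab_dataset": {
    "data_path": "/path/to/lodopab"
  }
}
```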
`src/trainer.py` currently hardcodes `config_path` at the bottom of the file. Update it to your desired `configs/*.yaml` first, then run:

```shell
python src/trainer.py
```

Evaluation:

```shell
python -m tests.test_eval --config configs/train_fbp_fbpunet.yaml --weights /path/to/weights.pt
```

FBP cache and timing details are documented in the Notes section below.
Metrics plotting:

```shell
python -m tests.plot_metrics --metrics results/sist_stop18/metrics_history.json
```

- FBP-input: `fbp_unet`, `fbp_edcnn`, `redcnn`, `fedm2ct`, `dugan`
- Sinogram-input: `sist`, `iradonmap`, `attrdn`
- Experimental: `src/models/networks/dnst.py` defines DnST without a default config.
- Add the network in `src/models/networks/` (or reuse an existing one).
- Add a reconstructor in `src/models/reconstructors/` and register it with `@register("reconstructor", "<name>")`.
- Export the reconstructor from `src/models/reconstructors/__init__.py`.
- Create `configs/train_<name>.yaml` and set `reconstructor.name` plus parameters.
- If the model needs special inputs (multi-input / FBP auxiliary inputs), extend the inference logic in `tests/test_eval.py`.
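The registration step can be illustrated with a small sketch. The repo's real `register` decorator and reconstructor base class are not shown here, so this stand-in registry only mimics the pattern:

```python
# Illustrative sketch only: the repo's real `register` lives in its own
# model registry; this stand-in mimics the decorator pattern described above.
REGISTRY = {}

def register(kind, name):
    """Stand-in for the repo's @register decorator: maps (kind, name) -> class."""
    def wrap(cls):
        REGISTRY[(kind, name)] = cls
        return cls
    return wrap

@register("reconstructor", "my_model")
class MyModelReconstructor:
    def __init__(self, **params):
        self.params = params

# The trainer can then look up the class by the name set in the YAML:
cls = REGISTRY[("reconstructor", "my_model")]
recon = cls(lr=1e-4)
```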
Run `tests/test_eval.py` with model weights to evaluate on the test split (requires downloading the LoDoPaB test set and installing `lpips`).

- For `fbp_*` models that want to use the test-split FBP cache:
  - Add `--use-fbp-cache`.
  - For the first run, you may add `--generate-fbp-cache`.
- For non-`fbp_*` models (sinogram-input models), the FBP cache is not needed:
  - Do not use `--use-fbp-cache` / `--generate-fbp-cache`.
Example (with an existing weights path):

```shell
python -m tests.test_eval --config configs/train_fbp_fbpunet.yaml --weights src/results/fbp_unet_stop61/final.pt
```

Example (using the FBP cache):

```shell
python -m tests.test_eval --config configs/train_fbp_fbpunet.yaml --weights src/results/fbp_unet_stop61/final.pt --use-fbp-cache
```

Example (`fbp_edcnn`, verified):

```shell
python -m tests.test_eval --config configs/train_fbp_edcnn.yaml --weights src/results/fbp_edcnn_stop80/final.pt --use-fbp-cache
```

Example (`dugan`, verified):

```shell
python -m tests.test_eval --config configs/train_dugan.yaml \
    --weights src/results/dugan/best_loss.pt \
    --use-fbp-cache
```

- `fbp_*` models take FBP-reconstructed images as input:
  - If the cache exists and its parameters match (`impl`/`filter_type`/`frequency_scaling`), use `--use-fbp-cache`.
  - If the cache is missing or you are unsure whether the parameters match, delete the old cache and rebuild it with `--generate-fbp-cache` (most reliable).
- Non-`fbp_*` models take sinograms as input:
  - Do not use `--use-fbp-cache` / `--generate-fbp-cache`.
`tests.test_eval` logs per-sample inference time and writes it to `test_results/<model_name>/inference_time_<model_name>.txt` (or the directory set by `--output_dir`). The file includes `t_net_mean`/`t_net_std`/`t_net_p95`; if the FBP cache is used, it also includes `t_pre_mean` and `t_total_mean`.
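These statistics can be reproduced from a list of per-sample timings. A minimal sketch (not the repo's implementation) using a nearest-rank 95th percentile:

```python
import math
import statistics

def timing_stats(times):
    """Mean, population std, and nearest-rank p95 of per-sample timings (seconds)."""
    ts = sorted(times)
    idx = max(0, math.ceil(0.95 * len(ts)) - 1)  # nearest-rank 95th percentile
    return {
        "t_net_mean": statistics.fmean(ts),
        "t_net_std": statistics.pstdev(ts),
        "t_net_p95": ts[idx],
    }

stats = timing_stats([0.010, 0.012, 0.011, 0.013, 0.050])
```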
For `fbp_*` models, first generate a timing profile:

```shell
python -m tests.measure_fbp_time --config configs/train_fbp_fbpunet.yaml
```

This produces `cache_lodopab_test_fbp.npy` and `fbp_time.json`. By default, both are written under the project root. Then set the following in the YAML:

```yaml
fbp_time_path: ./fbp_time.json
```

When evaluating with the cache:

```shell
python -m tests.test_eval --config configs/train_fbp_fbpunet.yaml --weights <your_weights>.pt --use-fbp-cache
```

If you only want to measure time without overwriting the cache:

```shell
python -m tests.measure_fbp_time --config configs/train_fbp_fbpunet.yaml --only-time
```

`fbp_time.json` can be reused only if all of the following are the same:

- Same machine
- Same `impl` (`astra_cuda` / `skimage` / `astra_cpu`)
- Same `filter_type` and `frequency_scaling`
- The model is FBP-input (e.g., `fbp_unet`, `fbp_swinir`, `fbp_edcnn`)
- General test-split evaluation: run `tests/test_eval.py` with weights. Results are written to `test_results/<model>/`, including `metrics_summary.json`, `per_sample_metrics.json`, and `inference_time_<model>.txt`.
- Timing for FBP-input models: run `tests.measure_fbp_time ...` to generate `fbp_time.json`, set `fbp_time_path: ./fbp_time.json` in the YAML, then evaluate with `--use-fbp-cache`.
- Cache consistency: if the parameters do not match, delete the old cache and rebuild it with `--generate-fbp-cache`.
- Disk expansion is not memory expansion: an SSD affects storage only and does not increase container memory limits.
- Container memory has an upper bound (cgroup): when usage approaches `memory.max`, the process may be killed by SIGKILL.
- File cache counts as memory: when reading large HDF5 files from LoDoPaB, the OS page cache can accumulate and be counted against the cgroup memory limit.
- GPU VRAM is unrelated here: a CUDA OOM would raise an error; this case is a CPU memory limit.
To inspect the limit and current usage:

```shell
cat /sys/fs/cgroup/memory.max
cat /sys/fs/cgroup/memory.current
cat /sys/fs/cgroup/memory.events
```

Mitigations:

- Disable DataLoader `pin_memory` (already set to `false` for train/val in `train_iradonmap.yaml`).
- Explicitly set `impl: 'astra_cuda'` (already added in `train_iradonmap.yaml`).
- Drop the page cache after reading HDF5 (`posix_fadvise(DONTNEED)`, implemented in `src/data/lodopab_dataset.py`; Linux-only, skipped on Windows).

Potential impact: `pin_memory: false` may slightly reduce host-to-GPU transfer throughput, but that is more stable than being killed and restarted.
- This repo integrates the official FedM2CT global SpatialNet branch (FBP-input) for fair comparison on LoDoPaB.
- It does not include the original federated/local sinogram branches or the mutual distillation/aggregation workflow.
- Config: `configs/train_fedm2ct.yaml`. FBP cache behavior is the same as for `fbp_*` models.

Training command:

```shell
python trainer.py --config configs/train_fedm2ct.yaml
```

To support the full federated workflow (local sinograms + mutual distillation + aggregation), the Trainer would have to be extended to support multiple models/optimizers/clients. This is not implemented in the current version.
Reproduction clarification:
- This repo implements "FedM2CT-Global (SpatialNet only)": input is FBP images, and only the global SpatialNet branch is kept.
- LoDoPaB does not provide multi-center/protocol metadata or federation settings, so the cross-site sharing and meta-model aggregation of CPML/FMDL cannot be reproduced.
- This implementation is used only as an image-domain baseline and does not represent full federated FedM2CT performance.
- Config: `configs/train_dugan.yaml`. Default `trainer.gan_mode: true`. Uses LoDoPaB FBP input (the cache can be used, as for `fbp_*` models).
- Model: generator + image/gradient discriminators, LS-GAN + CutMix consistency regularization (CR), with an EMA generator used for validation/saving.

Training command:

```shell
python trainer.py --config configs/train_dugan.yaml
```

No impact on other models: the GAN branch is enabled only when `gan_mode` is true (either `trainer.gan_mode` or `recon.gan_mode`); other models follow the original single-optimizer workflow.

Cache and evaluation: same as `fbp_unet`. Use the `generate_cache` / `use_fbp_cache` workflow and evaluate via `tests/test_eval`.
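For context, CutMix-style consistency regularization mixes real and fake discriminator inputs inside a random rectangle. A minimal sketch of the box sampling (illustrative only; the repo and the official DU-GAN code may sample differently, e.g. with a Beta-distributed mixing ratio):

```python
import math
import random

def cutmix_box(h, w, lam, rng=random):
    """Return (y0, y1, x0, x1) of a CutMix rectangle covering ~(1 - lam)
    of an h x w image. Illustrative sketch, not the repo's implementation."""
    cut_h = int(h * math.sqrt(1.0 - lam))  # rectangle height
    cut_w = int(w * math.sqrt(1.0 - lam))  # rectangle width
    cy = rng.randrange(h)                  # random rectangle center
    cx = rng.randrange(w)
    y0 = max(0, cy - cut_h // 2); y1 = min(h, cy + cut_h // 2)
    x0 = max(0, cx - cut_w // 2); x1 = min(w, cx + cut_w // 2)
    return y0, y1, x0, x1

# The mixed discriminator input/target then copies the fake image into
# the box region of the real image (and vice versa for the CR target).
```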
- All models are evaluated via `tests/test_eval.py` on the LoDoPaB test split with the same metrics (PSNR/SSIM/RMSE/LPIPS/NPS, `data_range=1.0`) and identical sample indices.
- The input domain must match each model's design for a fair comparison:
  - FBP-input models: `fbp_unet`, `fbp_edcnn`, `fbp_swinir`, `redcnn`, `fedm2ct`, `dugan`. Use the FBP cache or on-the-fly FBP with the same `filter_type`/`frequency_scaling` across models.
  - Sinogram-input models: `sist`, `iradonmap`, `attrdn` (operate on raw sinograms; no FBP cache).
  - Image-domain model: `dnst` expects image-domain input (FBP images). If you evaluate it here, ensure the input is image-domain (e.g., FBP); otherwise results are not comparable.
- DUGAN differs from the official repo only in dataset adaptation: discriminator inputs are padded to handle LoDoPaB's 362×362 size; the generator and losses remain unchanged.
- The AttrDN integration uses the attrdn-info AttRDN architecture:
  - The model denoises sinograms.
  - The loss is computed on sinograms (L1 + MS-SSIM) against forward-projected ground truth.
  - Evaluation metrics are computed on FBP-reconstructed images for comparability with other models.
  - To keep the FBP output well-defined, training uses full sinograms (no patch cropping).
- AttrDN stability tweak: MS-SSIM in `src/models/networks/attrdn/losses.py` clamps intermediate values to `[1e-6, 1.0]` and applies `nan_to_num` to the final score. This prevents NaNs when sinogram-domain CS values become non-positive on LoDoPaB, while preserving the original loss form.
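The clamping idea can be sketched in plain Python (illustrative only; the real code operates on tensors in `losses.py`, and the per-scale exponent weights of MS-SSIM are omitted here for brevity):

```python
def stable_cs(value, eps=1e-6):
    """Clamp an MS-SSIM contrast-structure (CS) term to [eps, 1.0].

    Sketch of the stability tweak above: non-positive CS values (possible on
    sinogram-domain inputs) would make the MS-SSIM product undefined/NaN,
    so each scale's term is clamped before multiplying.
    """
    return min(max(value, eps), 1.0)

def stable_msssim(cs_levels, ssim_last):
    """Multiply clamped per-scale CS terms with the last-scale SSIM."""
    score = stable_cs(ssim_last)
    for cs in cs_levels:
        score *= stable_cs(cs)
    # nan_to_num-style guard on the final score (NaN != NaN).
    return 0.0 if score != score else score
```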
- The current `trainer.py` does not parse `--config` from the command line.
- It uses a hardcoded default at the bottom of the file: `configs/train_iradonmap.yaml`.
- If you run `python trainer.py --config ...`, it will still train `iradonmap` unless you update the script to read CLI arguments.
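If you want `trainer.py` to honor `--config`, a minimal `argparse` addition could look like this (a sketch; only the hardcoded default path is taken from the source):

```python
import argparse

def parse_args(argv=None):
    """Minimal CLI so `python trainer.py --config ...` picks the right YAML."""
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--config",
        default="configs/train_iradonmap.yaml",  # current hardcoded default
        help="path to a configs/*.yaml training config",
    )
    return parser.parse_args(argv)

# At the bottom of trainer.py, replace the hardcoded config_path with:
# config_path = parse_args().config
```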
Configure this in the YAML (if not set, resuming is disabled):

```yaml
train:
  resume_from: src/results/sist_stop18/final.pt
  resume_epoch: 19
```

Notes:

- `resume_from` loads model weights only; it does not restore optimizer/scheduler state.
- `resume_epoch` is the starting epoch for continued training (e.g., if `stop=18`, set it to `19`).
If you want to continue the previous validation-loss curve and early-stopping counter, set the following in the YAML:

```yaml
train:
  resume_metrics_path: results/sist_stop18/metrics_history.json
```

Design rationale:

- `metrics_history.json` is the stable log source during training; reading it allows reuse of the historical validation-loss curve.
- Replaying the historical validation loss reconstructs `best_loss` and `es_wait`, making early stopping consistent with the state before resuming.
- This is optional; if not set, it does not affect the default training flow of other models.
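The replay idea can be sketched as follows. The exact schema of `metrics_history.json` is not documented here, so this sketch assumes one validation loss per epoch:

```python
def replay_early_stopping(val_losses):
    """Reconstruct best_loss and es_wait from a validation-loss history.

    Sketch of the replay described above: walk the per-epoch losses in order,
    tracking the best value and how many epochs have passed since it improved.
    Assumes `val_losses` is one float per epoch (actual schema may differ).
    """
    best_loss = float("inf")
    es_wait = 0
    for loss in val_losses:
        if loss < best_loss:
            best_loss = loss
            es_wait = 0
        else:
            es_wait += 1
    return best_loss, es_wait
```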
If training has finished but no plot was generated, you can draw it from `metrics_history.json`:

```shell
python -m tests.plot_metrics --metrics results/sist_stop18/metrics_history.json
```

- Only 5 test samples are saved by default: two from the first half, one from the middle, and two from the second half.
- If `--num_samples` is set, indices are picked within the first N samples.
- For FBP-input models (`fbp_*`, `redcnn`, `fedm2ct`, `dugan`):
  - Both `sinogram.png` (raw observation) and `fbp.png` (FBP reconstruction) are always saved.
  - `observation.png` equals `fbp.png` when `--use-fbp-cache` is on; otherwise it equals `sinogram.png`.
- For non-FBP models: only `observation.png`, `reconstruction.png`, and `ground_truth.png` are saved.
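The default index selection can be sketched like this (illustrative spacing; the repo's exact picks may differ):

```python
def pick_sample_indices(n):
    """Pick 5 spread-out sample indices from n test samples: two in the
    first half, the middle, and two in the second half (illustrative)."""
    assert n >= 5
    return sorted({0, n // 4, n // 2, (3 * n) // 4, n - 1})

idx = pick_sample_indices(100)
```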
@inproceedings{edcnn,
author={Liang, Tengfei and Jin, Yi and Li, Yidong and Wang, Tao},
booktitle={2020 15th IEEE International Conference on Signal Processing (ICSP)},
title={EDCNN: Edge enhancement-based Densely Connected Network with Compound Loss for Low-Dose CT Denoising},
year={2020},
volume={1},
number={},
pages={193-198},
doi={10.1109/ICSP48669.2020.9320928}
}
@article{sist,
title={Low-Dose CT Denoising via Sinogram Inner-Structure Transformer},
author={Yang, Liutao and Li, Zhongnian and Ge, Rongjun and Zhao, Junyong and Si, Haipeng and Zhang, Daoqiang},
journal={IEEE Transactions on Medical Imaging},
year={2022},
volume={42},
pages={910-921},
url={https://api.semanticscholar.org/CorpusID:248006190}
}
@article{redcnn,
author = {Chen, Hu and Zhang, Yi and Kalra, Mannudeep and Lin, Feng and Chen, Yang and Liao, Peixi and Zhou, Jiliu and Wang, Ge},
year = {2017},
month = {06},
pages = {2524-2535},
title = {Low-Dose CT With a Residual Encoder-Decoder Convolutional Neural Network},
volume = {36},
journal = {IEEE Transactions on Medical Imaging},
doi = {10.1109/TMI.2017.2715284}
}
@article{attrdn,
title = {Sinogram denoising via attention residual dense convolutional neural network for low-dose computed tomography},
volume = {32},
issn = {2210-3147},
url = {https://doi.org/10.1007/s41365-021-00874-2},
doi = {10.1007/s41365-021-00874-2},
number = {4},
journal = {Nuclear Science and Techniques},
author = {Ma, Yin-Jin and Ren, Yong and Feng, Peng and He, Peng and Guo, Xiao-Dong and Wei, Biao},
month = apr,
year = {2021},
pages = {41}
}
@article{fbpunet,
author={Jin, Kyong Hwan and McCann, Michael T. and Froustey, Emmanuel and Unser, Michael},
journal={IEEE Transactions on Image Processing},
title={Deep Convolutional Neural Network for Inverse Problems in Imaging},
year={2017},
volume={26},
number={9},
pages={4509-4522},
doi={10.1109/TIP.2017.2713099}
}
@article{iradonmap,
author={He, Ji and Wang, Yongbo and Ma, Jianhua},
journal={IEEE Transactions on Medical Imaging},
title={Radon Inversion via Deep Learning},
year={2020},
volume={39},
number={6},
pages={2076-2087},
doi={10.1109/TMI.2020.2964266}
}
@article{dugan,
author={Huang, Zhizhong and Zhang, Junping and Zhang, Yi and Shan, Hongming},
journal={IEEE Transactions on Instrumentation and Measurement},
title={DU-GAN: Generative Adversarial Networks With Dual-Domain U-Net-Based Discriminators for Low-Dose CT Denoising},
year={2022},
volume={71},
number={},
pages={1-12},
doi={10.1109/TIM.2021.3128703}
}