
ALPA: Taming the Tail

This repository contains the implementation of the paper "Taming the Tail: Leveraging Asymmetric Loss and Padé Approximation to Overcome Medical Image Long-Tailed Class Imbalance". The code addresses the long-tailed class distributions common in medical imaging, combining an asymmetric loss with a Padé approximation to improve classification performance on imbalanced datasets.


Features

  • Asymmetric Loss (AL): Reduces the impact of over-represented classes while improving performance for under-represented ones.
  • Padé Approximation: Enhances numerical stability for gradient computation, addressing the skewness of class distributions.
  • K-Fold Cross-Validation: Supports robust evaluation across multiple folds (a minimal splitting sketch follows this list).
  • Scalable Design: Supports ConvNeXt and other backbones for experiments with configurable parameters.
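
The repository handles fold splitting internally; the snippet below is only a minimal, hypothetical sketch of how stratified K-fold loaders could be built in PyTorch (the function name kfold_loaders and its arguments are illustrative and not part of this codebase). Stratifying on the class labels preserves the long-tailed class ratios in every fold.

    import numpy as np
    from sklearn.model_selection import StratifiedKFold
    from torch.utils.data import DataLoader, Subset

    def kfold_loaders(dataset, labels, n_splits=5, batch_size=128, seed=0):
        """Yield (fold, train_loader, val_loader) tuples.
        `labels` is a 1-D array of class indices used for stratification,
        so every fold keeps the same long-tailed class proportions."""
        skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
        for fold, (train_idx, val_idx) in enumerate(skf.split(np.zeros(len(labels)), labels)):
            train_loader = DataLoader(Subset(dataset, train_idx.tolist()),
                                      batch_size=batch_size, shuffle=True)
            val_loader = DataLoader(Subset(dataset, val_idx.tolist()),
                                    batch_size=batch_size, shuffle=False)
            yield fold, train_loader, val_loader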

Loss function code

  • The ALPA loss implementation is in models/get_optimizer.py; a simplified sketch of the underlying idea is shown below.
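
The real implementation lives in models/get_optimizer.py; the sketch below only illustrates the general idea of combining an asymmetric focusing loss with a Padé-approximated logarithm, and its class name, default hyperparameters, and exact formulation are assumptions rather than the paper's definitive loss. The [1/1] Padé approximant log(x) ≈ 2(x − 1)/(x + 1) keeps gradients bounded as predicted probabilities approach zero, which is where extreme class imbalance tends to destabilize training.

    import torch
    import torch.nn as nn

    def pade_log(x, eps=1e-8):
        # [1/1] Padé approximant of log(x) around x = 1: log(x) ≈ 2(x - 1)/(x + 1).
        # Unlike the exact log, it stays bounded (≈ -2) as x -> 0.
        x = x.clamp(min=eps)
        return 2.0 * (x - 1.0) / (x + 1.0)

    class AsymmetricPadeLoss(nn.Module):
        """Illustrative asymmetric loss with a Padé-approximated log term
        (hypothetical sketch; see models/get_optimizer.py for the actual ALPA loss)."""

        def __init__(self, gamma_pos=0.0, gamma_neg=4.0, clip=0.05):
            super().__init__()
            self.gamma_pos = gamma_pos  # focusing exponent for positive (ground-truth) terms
            self.gamma_neg = gamma_neg  # stronger focusing for negative terms (majority classes)
            self.clip = clip            # probability margin that down-weights easy negatives

        def forward(self, logits, targets):
            # logits: (N, C) raw scores; targets: (N, C) one-hot / multi-hot labels.
            p = torch.sigmoid(logits)
            p_neg = (p - self.clip).clamp(min=0)  # margin-shifted negative probability

            loss_pos = targets * (1 - p).pow(self.gamma_pos) * pade_log(p)
            loss_neg = (1 - targets) * p_neg.pow(self.gamma_neg) * pade_log(1 - p_neg)
            return -(loss_pos + loss_neg).sum(dim=1).mean()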

Installation

  1. Clone the repository:

    git clone https://github.com/ipankhi/ALPA.git
    cd ALPA
  2. Install dependencies:

    pip install -r requirements.txt
  3. Ensure GPU compatibility with PyTorch:

    conda install pytorch torchvision torchaudio pytorch-cuda=11.x -c pytorch -c nvidia

Training

To train the model with ALPA, use the following command:

python3 main.py \
    --is_master=1 \
    --train=1 \
    --gpu_ids="cuda:3" \
    --criterion="Pade" \
    --seed=0 \
    --model=convnext \
    --batchsize=128 \
    --epochs=50 \
    --img_size=256 \
    --model_path="result" \
    --save_model=1 \
    --store_name="_kfold"

Key Arguments:

  • --is_master: Specifies if the process is the main training process (1 for master).
  • --train: Set to 1 to enable training mode.
  • --gpu_ids: Specify the GPU(s) for training (e.g., "cuda:0").
  • --criterion: Set to "Pade" for the Padé Approximation loss.
  • --seed: Random seed for reproducibility.
  • --model: Backbone model for training (e.g., convnext).
  • --batchsize: Batch size for training.
  • --epochs: Total number of epochs.
  • --img_size: Input image resolution (e.g., 256x256).
  • --model_path: Path to save the trained model.
  • --save_model: Set to 1 to save the best-performing model.
  • --store_name: Suffix for the model filename to track experiments.

Evaluation

You can evaluate a trained model with test.py. For orientation, a generic evaluation sketch is shown below.
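
The exact command-line interface of test.py is not documented here, so the snippet below is a generic, hypothetical evaluation loop rather than the repository's actual script: it assumes you already have a trained model and a test DataLoader, and it reports per-class precision and recall, which is what matters most for tail classes.

    import torch
    from sklearn.metrics import classification_report

    @torch.no_grad()
    def evaluate(model, test_loader, device="cuda:0"):
        # Run the model over the test set and print per-class metrics,
        # so tail-class recall is visible rather than hidden in overall accuracy.
        model.eval().to(device)
        y_true, y_pred = [], []
        for images, labels in test_loader:
            logits = model(images.to(device))
            y_pred.extend(logits.argmax(dim=1).cpu().tolist())
            y_true.extend(labels.tolist())
        print(classification_report(y_true, y_pred, zero_division=0))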


Citation

If you find this code helpful for your research, please cite the paper.


License

This project is licensed under the MIT License. See the LICENSE file for details.


Contact

For questions or feedback, feel free to reach out to pankhikashyap.research@gmail.com.
