Brain Tumor Segmentation (3D) Using FPN-3D U-Net

This repository contains the implementation of a deep learning-based brain tumor segmentation model using a hybrid FPN-3D U-Net architecture. This project addresses the critical need for automated, accurate, and efficient segmentation of brain tumors in MRI scans, aiding medical professionals in diagnosis and treatment planning.


Table of Contents

  1. Project Overview
  2. Dataset
  3. Preprocessing Pipeline
  4. Model Architecture
  5. Training and Evaluation
  6. Results and Analysis
  7. How to Use
  8. References

Project Overview

Objective

This project aims to develop a 3D segmentation model that can accurately segment brain tumors into four classes:

  • Necrotic core
  • Peritumoral edema
  • Enhancing tumor
  • Background

Why Automated Brain Tumor Segmentation?

Manual segmentation of brain tumors is time-consuming, prone to variability, and dependent on expert radiologists. This project leverages deep learning to:

  • Automate segmentation tasks.
  • Provide consistent and reliable results.
  • Improve diagnostic workflows, especially in resource-limited settings.

Key Features of the Project

  • Uses the BraTS21 dataset, which provides four MRI modalities: FLAIR, T1, T1Gd, and T2.
  • Integrates a Feature Pyramid Network (FPN) with a 3D U-Net for enhanced multi-scale feature extraction.
  • Employs a custom combined Dice + cross-entropy loss to handle class imbalance and improve segmentation accuracy.
  • Includes a comprehensive preprocessing pipeline to standardize and augment the data.

Dataset

The BraTS21 Dataset is a benchmark dataset for brain tumor segmentation. It contains 3D MRI scans with expert-annotated segmentation labels.

MRI Modalities

  1. FLAIR: Highlights abnormalities such as edema.
  2. T1-weighted: Provides anatomical details of brain structures.
  3. Post-contrast T1-weighted (T1Gd): Enhances active tumor regions.
  4. T2-weighted: Emphasizes fluid-rich areas and complements FLAIR.

Dataset Characteristics

  • 1,251 MRI scans.
  • Four modalities per scan.
  • Each scan is resampled to a uniform voxel size of 1 mm isotropic.
  • Segmentation annotations include four classes.

[Figure: Visualization of the four MRI modalities]


Preprocessing Pipeline

A robust preprocessing pipeline was developed to standardize and prepare the dataset for training:

  1. Skull-Stripping: Removes non-brain tissue so the model focuses solely on the brain.
  2. Resampling: Resamples scans to a uniform voxel resolution of 1 mm isotropic.
  3. Cropping to Foreground: Reduces unnecessary background by cropping around the brain.
  4. Intensity Normalization: Standardizes voxel intensities to zero mean and unit variance.
  5. Augmentation: Applies random flips, rotations, and gamma adjustments to increase data diversity.
  6. Resizing: Rescales scans to a fixed size of 128 × 128 × 128 voxels.
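
The repository's preprocess.py implements the full pipeline; as an illustration only, a minimal NumPy sketch of steps 3 and 4 (cropping to the foreground and z-score intensity normalization) could look like the following. The function names and the synthetic volume are hypothetical, not the repository's actual code:

```python
import numpy as np

def zscore_normalize(volume, mask):
    """Z-score normalize intensities using only in-mask (brain) voxels."""
    brain = volume[mask]
    return (volume - brain.mean()) / (brain.std() + 1e-8)

def crop_to_foreground(volume, mask, margin=2):
    """Crop the volume to the bounding box of the foreground mask."""
    coords = np.argwhere(mask)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + 1 + margin, volume.shape)
    return volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]

# Synthetic example: a 64^3 "scan" with a cubic region standing in for the brain.
vol = np.random.rand(64, 64, 64).astype(np.float32)
mask = np.zeros_like(vol, dtype=bool)
mask[16:48, 16:48, 16:48] = True

norm = zscore_normalize(vol, mask)       # brain voxels: zero mean, unit variance
cropped = crop_to_foreground(norm, mask) # drops most of the empty background
```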

[Figure: Example of a preprocessed image]


Model Architecture

The proposed model combines the 3D U-Net with a Feature Pyramid Network (FPN) to leverage multi-scale feature extraction.

Key Components

  1. Encoder: Extracts hierarchical features using convolutional blocks and downsampling.
  2. FPN Layers: Integrate multi-scale features for better context and detail.
  3. Decoder: Reconstructs high-resolution segmentation maps using skip connections and upsampling layers.
  4. Custom Loss Function: Combines Dice loss and cross-entropy loss to address class imbalance.
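
The actual architecture lives in the repository code; the following is a deliberately small PyTorch sketch of the general idea only: an encoder with FPN-style 1×1×1 lateral connections feeding a top-down decoder pathway. All class and layer names here are illustrative assumptions, not the repository's implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    # Two 3D convolutions with instance norm and ReLU.
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, 3, padding=1),
        nn.InstanceNorm3d(out_ch), nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, 3, padding=1),
        nn.InstanceNorm3d(out_ch), nn.ReLU(inplace=True),
    )

class TinyFPNUNet3D(nn.Module):
    """Toy 3D U-Net with FPN-style lateral connections (illustrative only)."""
    def __init__(self, in_ch=4, n_classes=4, base=8):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.enc3 = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool3d(2)
        # Lateral 1x1x1 convolutions project each encoder stage to one width.
        self.lat1 = nn.Conv3d(base, base * 2, 1)
        self.lat2 = nn.Conv3d(base * 2, base * 2, 1)
        self.lat3 = nn.Conv3d(base * 4, base * 2, 1)
        self.dec2 = conv_block(base * 2, base * 2)
        self.dec1 = conv_block(base * 2, base * 2)
        self.head = nn.Conv3d(base * 2, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)              # full resolution
        e2 = self.enc2(self.pool(e1))  # 1/2 resolution
        e3 = self.enc3(self.pool(e2))  # 1/4 resolution
        # Top-down pathway: upsample and fuse with lateral features.
        p3 = self.lat3(e3)
        p2 = self.dec2(self.lat2(e2) + F.interpolate(p3, scale_factor=2))
        p1 = self.dec1(self.lat1(e1) + F.interpolate(p2, scale_factor=2))
        return self.head(p1)           # per-voxel class logits
```

A real BraTS-scale network would use more stages and channels; this sketch only demonstrates the lateral-fusion pattern that distinguishes the FPN hybrid from a plain U-Net.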

FPN-3D U-Net Architecture

[Figure: FPN-3D U-Net architecture diagram]


Training and Evaluation

Training Details

  • Batch Size: 1
  • Epochs: 10
  • Learning Rate: 0.0003
  • Optimizer: Adam
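
A skeleton training loop matching these settings (Adam, learning rate 0.0003, batch size 1, 10 epochs) might look like the following; the 1×1×1 convolution and random tensors are stand-ins for the actual model and data loader, not the repository's pipeline:

```python
import torch

# Hypothetical loop with the stated hyperparameters.
model = torch.nn.Conv3d(4, 4, kernel_size=1)  # stands in for the FPN-3D U-Net
optimizer = torch.optim.Adam(model.parameters(), lr=0.0003)
loss_fn = torch.nn.CrossEntropyLoss()

for epoch in range(10):                        # Epochs: 10
    x = torch.randn(1, 4, 16, 16, 16)          # batch of 1, four modalities
    y = torch.randint(0, 4, (1, 16, 16, 16))   # voxel-wise class labels
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```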

Loss Function

The training objective combines two complementary terms:

  • Dice Loss: Measures overlap between predicted and true segmentation.
  • Cross-Entropy Loss: Penalizes incorrect class predictions.
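
A common way to combine the two terms, shown here as a sketch rather than the repository's exact implementation, is to add the cross-entropy to one minus the mean soft Dice score over classes:

```python
import torch
import torch.nn.functional as F

def dice_ce_loss(logits, target, eps=1e-6):
    """Combined soft Dice + cross-entropy loss.

    logits: (N, C, D, H, W) raw scores; target: (N, D, H, W) integer labels.
    """
    ce = F.cross_entropy(logits, target)
    probs = torch.softmax(logits, dim=1)
    onehot = F.one_hot(target, logits.shape[1]).permute(0, 4, 1, 2, 3).float()
    dims = (0, 2, 3, 4)                         # sum over batch and space
    intersection = (probs * onehot).sum(dims)
    cardinality = probs.sum(dims) + onehot.sum(dims)
    dice = (2 * intersection + eps) / (cardinality + eps)
    return ce + (1 - dice.mean())               # both terms decrease to 0
```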

Evaluation Metrics

  • Dice Similarity Coefficient (DSC): Evaluates segmentation accuracy.
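
For reference, the DSC between a predicted and a ground-truth binary mask is 2|A∩B| / (|A| + |B|); a small NumPy sketch with synthetic masks:

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-8):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Identical synthetic masks give a DSC of 1.0 (perfect overlap).
a = np.zeros((8, 8, 8)); a[2:6, 2:6, 2:6] = 1
b = a.copy()
```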

How to Use

Clone the Repository

git clone https://github.com/ay05h/Brain-Tumor-Segmentation-3D-Using-FPN-3D-U-Net.git
cd Brain-Tumor-Segmentation-3D-Using-FPN-3D-U-Net

Install Dependencies

pip install -r requirements.txt

Preprocess the Dataset

python preprocess.py

Train the Model

python main.py --mode train

Evaluate the Model

python main.py --mode test --weights final_model.pth

References

  1. Havaei, M., et al., "Brain Tumor Segmentation with Deep Neural Networks," Medical Image Analysis, 2017.
  2. Bakas, S., et al., "Segmentation Labels and Radiomic Features for TCGA-GBM Collection," The Cancer Imaging Archive, 2017.
  3. Çiçek, Ö., et al., "3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation," MICCAI 2016.
  4. Sun, H., et al., "Brain tumor image segmentation based on improved FPN," BMC Medical Imaging.
