simone-panico/Brain-MRI-SA

Brain MRI Tumor Segmentation

Semantic segmentation of brain tumors from MRI scans using a U-Net architecture implemented in PyTorch.
Built as a semester project at Kantonsschule Frauenfeld, Switzerland, to explore modern CNNs for medical image analysis and transparent model reasoning (feature-map visualizations).

Visualisation

(Images: model training preview · segmentation example)

To support my presentation on March 13, 2025 at the Kantonsschule Frauenfeld,
I created several animations with Manim.

These short clips explain key ideas such as convolutions and the flow of information through a neural network.
They made the concepts behind the U-Net much easier to visualize and to share with others.

You can find all animations here:

They are not only useful for presentations but also for building an intuitive understanding of how the U-Net processes MRI images.

Dataset

For this project I used the LGG Brain MRI Segmentation Dataset from Kaggle.
It contains MRI brain scans of 110 patients from The Cancer Imaging Archive (TCIA), along with manual tumor segmentation masks.

  • Each case is provided as .tif images with 3 channels (pre-contrast, FLAIR, post-contrast).
  • If one of the sequences is missing, it is replaced by the FLAIR sequence so that every case still has 3 channels.
  • The masks are single-channel binary images (0 = background, 1 = tumor) marking abnormal regions in the brain.
  • Images and masks are paired and organized per patient.

(Images: MRI brain scan example · tumor mask example)
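The channel-fallback rule above (substituting FLAIR for a missing sequence) can be sketched as follows; `stack_sequences` is a hypothetical helper for illustration, not code from this repository:

```python
import numpy as np

def stack_sequences(pre, flair, post):
    """Stack the three MRI sequences into one (H, W, 3) array.

    If the pre- or post-contrast slice is missing (None), the FLAIR
    slice is used in its place, so every case ends up with 3 channels,
    mirroring the dataset's convention. Each input is a 2-D array.
    """
    pre = flair if pre is None else pre
    post = flair if post is None else post
    return np.stack([pre, flair, post], axis=-1)

# Example: a case where only FLAIR is available.
flair = np.full((4, 4), 2.0)
volume = stack_sequences(None, flair, None)
print(volume.shape)  # (4, 4, 3)
```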

Feature Maps

One of the most interesting parts of this project was looking at the feature maps inside the U-Net.
They show what the network “sees” at different stages of processing the MRI images.

(Images: feature maps before training, left · after training, right)

  • Before Training (left):
    The activations look noisy and unclear. The model has not yet learned which parts of the MRI are important, so the feature maps don’t contain meaningful structures.

  • After Training (right):
    Taken from the best-performing model (Model_4), the feature maps are much clearer. You can see how the model focuses on relevant areas of the brain, especially the tumor region.
    This shows that the U-Net actually learned to extract and refine useful features layer by layer.

These visualizations confirmed that the network was learning meaningful patterns rather than random noise, and it was also pretty interesting to watch this happen during training. :)
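Feature maps like these can be captured with PyTorch forward hooks. A minimal sketch, using a small stand-in conv stack rather than the actual U-Net from this project:

```python
import torch
import torch.nn as nn

# Hypothetical small conv stack standing in for one U-Net encoder block.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 8, kernel_size=3, padding=1),
    nn.ReLU(),
)

feature_maps = {}

def save_activation(name):
    # Forward hooks receive (module, inputs, output) on every forward pass.
    def hook(module, inputs, output):
        feature_maps[name] = output.detach()
    return hook

# Register a hook on each conv layer to capture its output tensor.
for idx, layer in enumerate(model):
    if isinstance(layer, nn.Conv2d):
        layer.register_forward_hook(save_activation(f"conv{idx}"))

x = torch.randn(1, 3, 64, 64)  # dummy MRI-sized input
_ = model(x)

print(sorted(feature_maps))         # ['conv0', 'conv2']
print(feature_maps["conv0"].shape)  # torch.Size([1, 8, 64, 64])
```

Each captured tensor has one 2-D map per output channel, which can then be plotted (e.g. with matplotlib) to produce images like the ones above.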

Mathematical Analysis

As part of this project, I also looked at the math behind the U-Net model.
This step was not needed to get the model running, but it helped me understand more clearly what happens inside the network.

Main Points

  • Double Convolution
    Each block applies two small convolutions with ReLU activation.
    → This lets the network pick up finer details in the MRI images.

  • Encoder (Downsampling)
    Reduces the image size step by step while keeping the important features.
    → Think of it as compressing the image into its most useful information.

  • Bottleneck
    The “lowest point” of the U-Net.
    → Here the most abstract features of the MRI are stored before reconstruction.

  • Decoder (Upsampling)
    Scales the image back up to the original size.
    → With the help of skip connections, it reuses details from the encoder so that important edges (like tumor borders) are not lost.

  • Output Layer
    A final convolution creates the segmentation mask.
    → In this case: tumor vs. non-tumor.
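The five building blocks above can be combined into a minimal one-level sketch. Assumptions: 3 input channels as in the dataset, a sigmoid output to pair with BCELoss; the real model is deeper, so this is an illustration, not the project's implementation:

```python
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, as in each U-Net block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    """One-level U-Net: encoder -> bottleneck -> decoder with a skip."""
    def __init__(self):
        super().__init__()
        self.enc = double_conv(3, 16)
        self.pool = nn.MaxPool2d(2)                        # downsampling
        self.bottleneck = double_conv(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)  # upsampling
        self.dec = double_conv(32, 16)                     # 16 skip + 16 upsampled
        self.out = nn.Conv2d(16, 1, 1)                     # 1-channel tumor mask

    def forward(self, x):
        skip = self.enc(x)                          # encoder features
        x = self.bottleneck(self.pool(skip))        # most abstract features
        x = self.up(x)                              # back to input resolution
        x = self.dec(torch.cat([skip, x], dim=1))   # skip connection
        return torch.sigmoid(self.out(x))           # probabilities for BCELoss

mask = MiniUNet()(torch.randn(1, 3, 64, 64))
print(mask.shape)  # torch.Size([1, 1, 64, 64])
```

The concatenation in the decoder is the skip connection: it reinjects the encoder's fine detail (e.g. tumor borders) that pooling would otherwise discard.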

Why do this analysis?

By looking at the math, I could see why the U-Net works so well:
how features are compressed, why skip connections matter, and how the final output mask is built.
This gave me a better understanding of the architecture, instead of just treating it as a black box.

If you are interested in the full mathematical breakdown, you can find it in the documentation-german.pdf (page 13).


Results

Model     Epochs   Batch Size   Loss Function   Optimizer   Learning Rate   F1-Score   Accuracy
Model_0   50       4            BCELoss         Adam        0.001           0.77       ---
Model_1   100      4            BCELoss         Adam        0.001           0.79       0.997
Model_2   100      8            BCELoss         Adam        0.001           0.58       0.993
Model_3   100      8            BCELoss         Adam        0.001           0.63       0.993
Model_4   100      4            BCELoss         Adam        0.001           0.81       0.997
Model_5   100      2            BCELoss         Adam        0.001           0.67       0.994
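The F1 scores above compare predicted and ground-truth masks pixel by pixel. A minimal sketch of such a computation on binary masks (a hypothetical helper, not the project's evaluation code):

```python
import numpy as np

def f1_score(pred, target, eps=1e-8):
    """Pixel-wise F1 (Dice) between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = np.logical_and(pred, target).sum()    # true positives
    fp = np.logical_and(pred, ~target).sum()   # false positives
    fn = np.logical_and(~pred, target).sum()   # false negatives
    return 2 * tp / (2 * tp + fp + fn + eps)

pred = np.array([[1, 1, 0, 0]])
target = np.array([[1, 0, 1, 0]])
print(round(f1_score(pred, target), 2))  # 0.5
```

Because tumors cover only a small fraction of each scan, accuracy is inflated by the easy background pixels (hence the uniformly high ~0.99 values), which is why F1 is the more informative metric here.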

Assessment

Since this was a school project, it was formally assessed.
I received a grade of 6, which is the highest possible score in Switzerland (equivalent to an A).

The complete teacher assessment can be found here:
