Hello! Thanks for your great work, I enjoyed reading through the paper and the project :)
I am working on a segmentation task for breast tumors and would like to utilize your pre-trained model for this purpose. I have the following questions:
1) You wrote in another issue that you "do not have approval yet from the UK Biobank to release the labels". I suppose this also means that the final_model.pt of your Swin-BOB model pretrained on the UKBOB dataset is not yet publicly available, correct? At least I did not find a download link in the README.md. Do you know when the UKBOB pretrained model will be made available? (e.g. 2 weeks, 2 months, end of year?)
2) What I did find were two models for BRATS and BTCV. What exactly are the weights of these models, and on which dataset(s) have they been trained? Have these models also been pretrained on UKBOB, or is it just your Swin-BOB baseline architecture trained exclusively on BRATS/BTCV without prior pretraining?
I am asking because, according to your proposed ETTA methodology, if both models had been pretrained on UKBOB and then fine-tuned on BRATS/BTCV, they would have to share the same weights in all layers except the norm layers, due to the freezing. However, the final_model.pt weights of the BRATS and BTCV checkpoints differ in every layer.
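For reference, this is roughly how I compared the two checkpoints (a minimal self-contained sketch using toy `nn.Linear` modules in place of the real checkpoints; `diff_layers` is just my own helper, not from your repo):

```python
import torch
import torch.nn as nn

def diff_layers(sd_a, sd_b, atol=1e-6):
    """Names of parameters (shared keys only) whose values or shapes differ."""
    return sorted(
        k for k in sd_a.keys() & sd_b.keys()
        if sd_a[k].shape != sd_b[k].shape
        or not torch.allclose(sd_a[k], sd_b[k], atol=atol)
    )

# Two independently initialised layers differ in every parameter ...
torch.manual_seed(0)
m1, m2 = nn.Linear(4, 4), nn.Linear(4, 4)
print(diff_layers(m1.state_dict(), m2.state_dict()))  # ['bias', 'weight']

# ... whereas a copy of the same weights reports no differences.
m3 = nn.Linear(4, 4)
m3.load_state_dict(m1.state_dict())
print(diff_layers(m1.state_dict(), m3.state_dict()))  # []
```

Running the same comparison on the two released final_model.pt files reports differences in every shared key, which is what makes me think they were not fine-tuned from a common UKBOB checkpoint with frozen backbones.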
3) In your test_etta.py you have created a new class SwinUNETRWithTTA(), which uses the model weights of the pretrained BTCV model. Inside the class, the method _replace_bn_layers() replaces all nn.BatchNorm3d modules with your custom module EntropyAdaptiveBatchNorm(). I suppose this script is meant to show the ETTA process of taking a pretrained model and optimizing the BatchNorm layers on my own dataset.
However, when I looked at the layers of the pretrained BTCV model, there are actually no nn.BatchNorm3d modules at all. What I found were modules.normalization.LayerNorm and modules.instancenorm.InstanceNorm3d. This means that no BN layers are replaced in this script and no fine-tuning happens. I don't know if this is on purpose? Could you please clarify the code in this script? Is it even necessary to use EntropyAdaptiveBatchNorm() and replace existing BN layers? Or can I just freeze all layers except the LayerNorm and InstanceNorm3d layers (maybe re-initializing their weights and biases) and then train for n epochs?
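To be concrete, this is the kind of norm-only fine-tuning setup I have in mind (a minimal sketch; `freeze_all_but_norm` is my own hypothetical helper, and the toy `nn.Sequential` merely stands in for the real Swin-BOB network):

```python
import torch.nn as nn

def freeze_all_but_norm(model):
    """Freeze every parameter, then unfreeze LayerNorm / InstanceNorm3d ones."""
    for p in model.parameters():
        p.requires_grad = False
    for m in model.modules():
        if isinstance(m, (nn.LayerNorm, nn.InstanceNorm3d)):
            for p in m.parameters():
                p.requires_grad = True

# Toy stand-in for the real network. Note that InstanceNorm3d needs
# affine=True to have learnable parameters at all (the default is affine=False).
model = nn.Sequential(
    nn.Conv3d(1, 4, kernel_size=3, padding=1),
    nn.InstanceNorm3d(4, affine=True),
    nn.LayerNorm(4),
)
freeze_all_but_norm(model)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(trainable, total)  # only the 16 norm parameters remain trainable
```

Would training just these unfrozen norm parameters for a few epochs on my data be equivalent to your intended ETTA procedure, or does EntropyAdaptiveBatchNorm() do something essential beyond that?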
Thank you so much in advance.
Best
Matthias