We've had good questions about the model's inner workings, and this is an opportunity to try Captum (most likely) or a similar toolkit for model explainability.
https://github.com/pytorch/captum/blob/master/tutorials/Resnet_TorchVision_Ablation.ipynb - a promising notebook on inspecting influential image parts via feature ablation, which we could adapt to our semantic segmentation model in PyTorch.
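The core idea behind that notebook's ablation approach can be sketched without Captum: occlude one patch of the input at a time, re-run the model, and attribute to each patch the resulting drop in the score. Below is a minimal framework-free sketch of this, using NumPy and a toy scoring function standing in for a real network; the function and parameter names are illustrative assumptions, not Captum's API (Captum wraps this pattern in `captum.attr.Occlusion` / `captum.attr.FeatureAblation`).

```python
import numpy as np

def occlusion_attribution(score_fn, image, window=4, baseline=0.0):
    """Occlusion-style ablation: slide a window over the image, replace
    each patch with the baseline value, and record how much the score
    drops. Larger drops mark more influential image parts."""
    h, w = image.shape
    base_score = score_fn(image)
    attr = np.zeros_like(image, dtype=float)
    for y in range(0, h, window):
        for x in range(0, w, window):
            occluded = image.copy()
            occluded[y:y + window, x:x + window] = baseline
            # Attribution for this patch = score lost when it is ablated.
            attr[y:y + window, x:x + window] = base_score - score_fn(occluded)
    return attr

# Toy "model": total brightness of the top-left quadrant (an assumption
# standing in for a real segmentation network's per-class score).
img = np.zeros((8, 8))
img[:4, :4] = 1.0
heat = occlusion_attribution(lambda im: im[:4, :4].sum(), img)
```

Here `heat` is high exactly over the top-left quadrant, since only occluding that region lowers the toy score; with a real model, `score_fn` would be the logit (or mean mask probability) for the class of interest.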