model size mismatch when loading for inference #27

@frikyng

Hi,

I recently installed SUPPORT on a new PC and noticed a bug when opening a model in the test GUI. When I open a model that I trained on the other PC, the error does not occur, so I think it has to do with training models on the new machine. Do you know what the issue could be?
I only found one other post (#6) that reports similar behaviour, but that issue was supposed to be fixed. So maybe this is something else?

(SUPPORT) C:\Users\~\~\~\SUPPORT>python -m src.GUI.test_GUI
Traceback (most recent call last):
  File "C:\Users\~\~\~\SUPPORT\src\GUI\test_GUI.py", line 112, in run
    model.load_state_dict(state)
  File "C:\Users\~\anaconda3\envs\SUPPORT\lib\site-packages\torch\nn\modules\module.py", line 2189, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for SUPPORT:
        size mismatch for enc_layers.0.weight: copying a param with shape torch.Size([64, 60, 3, 3]) from checkpoint, the shape in current model is torch.Size([16, 60, 3, 3]).
        size mismatch for enc_layers.0.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
        size mismatch for enc_layers.1.weight: copying a param with shape torch.Size([128, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 16, 3, 3]).
        size mismatch for enc_layers.1.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
        size mismatch for enc_layers.2.weight: copying a param with shape torch.Size([256, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 32, 3, 3]).
        size mismatch for enc_layers.2.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([64]).
        size mismatch for enc_layers.3.weight: copying a param with shape torch.Size([512, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 64, 3, 3]).
        size mismatch for enc_layers.3.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([128]).
        size mismatch for enc_layers.4.weight: copying a param with shape torch.Size([1024, 512, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 128, 3, 3]).
        size mismatch for enc_layers.4.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([256]).
        size mismatch for dec_layers.0.weight: copying a param with shape torch.Size([512, 1536, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 384, 3, 3]).
        size mismatch for dec_layers.0.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([128]).
        size mismatch for dec_layers.1.weight: copying a param with shape torch.Size([256, 768, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 192, 3, 3]).
        size mismatch for dec_layers.1.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([64]).
        size mismatch for dec_layers.2.weight: copying a param with shape torch.Size([128, 384, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 96, 3, 3]).
        size mismatch for dec_layers.2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
        size mismatch for dec_layers.3.weight: copying a param with shape torch.Size([64, 192, 3, 3]) from checkpoint, the shape in current model is torch.Size([16, 48, 3, 3]).
        size mismatch for dec_layers.3.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
        size mismatch for unet_1_convs.0.weight: copying a param with shape torch.Size([32, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 16, 1, 1]).
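As a diagnostic sketch (not part of the original report): every mismatched channel dimension in the traceback is exactly 4x larger in the checkpoint than in the model the GUI builds (64 vs 16, 128 vs 32, and so on), which suggests the training run constructed a uniformly wider network than the one the test GUI instantiates. A small helper, using shapes copied from the error message above, makes the pattern explicit; the function name is hypothetical.

```python
def shape_ratios(ckpt_shapes, model_shapes):
    """Map each mismatched parameter name to the elementwise
    ratio checkpoint_dim // model_dim, to reveal whether one
    architecture is a uniform multiple of the other."""
    out = {}
    for name, ckpt in ckpt_shapes.items():
        model = model_shapes[name]
        if ckpt != model:
            out[name] = tuple(c // m for c, m in zip(ckpt, model))
    return out

# Shapes taken verbatim from the RuntimeError above.
ckpt = {
    "enc_layers.0.weight": (64, 60, 3, 3),
    "enc_layers.0.bias": (64,),
    "dec_layers.0.weight": (512, 1536, 3, 3),
}
model = {
    "enc_layers.0.weight": (16, 60, 3, 3),
    "enc_layers.0.bias": (16,),
    "dec_layers.0.weight": (128, 384, 3, 3),
}

print(shape_ratios(ckpt, model))
# every channel dimension in the checkpoint is 4x the current model's,
# so the trained network appears uniformly 4x wider
```

If that pattern holds, the fix would be to make sure the test GUI builds the model with the same width/channel hyperparameters used at training time, rather than forcing the load with `strict=False`.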

This is the command I used for training:
python -m src.train --exp_name trained_model_test_vid --noisy_data C:\Users\~\Documents\Data_FK\videos\test_vid.tif --n_epochs 150 --patch_size 61 400 180 --results_dir C:\Users\~\Documents\Data_FK\videos\VP8_denoised

Thanks and best,
Friedrich
