Question about training #38

@SungHwanYoo

Description

Thank you for this nice work.

I have a question about training.

In the paper, you say the model was trained for 200k iterations with a batch size of 16, but GoPro-AdaRevIDB-pretrain-4gpu.yml says in a comment:

Split 300k iterations into two cycles.

1st cycle: fixed 3e-4 LR for 92k iters.

2nd cycle: cosine annealing (3e-4 to 1e-6) for 208k iters.
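For reference, my understanding of the schedule described in that comment can be sketched as follows (this is just my reading of the comment, not the repository's actual scheduler code; the function name and defaults are mine):

```python
import math

def lr_at(it, base_lr=3e-4, min_lr=1e-6,
          fixed_iters=92_000, cosine_iters=208_000):
    """Two-cycle schedule from the config comment:
    cycle 1 holds base_lr constant for fixed_iters,
    cycle 2 cosine-anneals from base_lr down to min_lr
    over cosine_iters, for 300k iterations total."""
    if it < fixed_iters:
        return base_lr
    # progress through the cosine cycle, in [0, 1)
    t = (it - fixed_iters) / cosine_iters
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * t))
```

Under this reading, the LR is 3e-4 until iteration 92k, passes through the midpoint (3e-4 + 1e-6) / 2 at iteration 196k, and approaches 1e-6 at iteration 300k.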

Since 92k + 208k = 300k iterations, which differs from the 200k in the paper, I'm wondering how the model was actually trained.

I also wonder whether the encoder's weights are kept frozen from start to finish, or whether the encoder is trained as well.

Thank you!
