so hard to train #9

@buptlj

Description

Hi, I trained with the preprocessed CelebA dataset you provided, but the attention masks saturate to 1 after several iterations. I didn't change the parameters in the code:
lambda_cls=160, lambda_rec=10, lambda_gp=10, lambda_sat=0.1, lambda_smooth=1e-4.
When I change lambda_sat to 1, the attention masks still saturate to 1, but when I change lambda_sat to 1.5, they collapse to 0 instead.
Your pretrained model works well. Did you use the same parameters?
Thanks!
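For reference, a minimal NumPy sketch of how attention-saturation and smoothness regularizers like lambda_sat and lambda_smooth are commonly combined (the exact formulation in this repo isn't shown in the issue, so the function name and shapes here are illustrative assumptions): the saturation term penalizes the mean of the mask, so an all-ones mask incurs the maximum penalty, while the smoothness term is a total-variation penalty on neighboring pixels.

```python
import numpy as np

def attention_reg_losses(attention, lambda_sat=0.1, lambda_smooth=1e-4):
    """Illustrative sketch (hypothetical helper, not this repo's code).

    attention: array of shape (N, 1, H, W) with values in [0, 1].
    Returns (saturation_loss, smoothness_loss).
    """
    # Saturation term: penalizes masks that drift toward all-ones.
    sat_loss = lambda_sat * attention.mean()
    # Total-variation smoothness term over vertical and horizontal neighbors.
    tv_h = np.abs(attention[:, :, 1:, :] - attention[:, :, :-1, :]).sum()
    tv_w = np.abs(attention[:, :, :, 1:] - attention[:, :, :, :-1]).sum()
    smooth_loss = lambda_smooth * (tv_h + tv_w)
    return sat_loss, smooth_loss

# A fully saturated (all-ones) mask maximizes the saturation penalty
# while its smoothness penalty is zero, which matches the failure mode
# described above: raising lambda_sat too far pushes the mask toward 0 instead.
ones_mask = np.ones((2, 1, 8, 8))
sat, smooth = attention_reg_losses(ones_mask)
```

With this formulation, lambda_sat directly trades off how strongly the generator is discouraged from using the attention mask everywhere, which is consistent with the behavior reported: too small and the mask saturates to 1, too large and it collapses to 0.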
