Hi, I used the preprocessed CelebA dataset you provided for training, but the attention masks saturate to 1 after several iterations. I didn't change any of the parameters in the code:
lambda_cls=160, lambda_rec=10, lambda_gp=10, lambda_sat=0.1, lambda_smooth=1e-4.
When I change lambda_sat to 1, the attention masks still saturate to 1.
But when I change lambda_sat to 1.5, the attention masks collapse to 0.
Your pretrained model works well; did you use the same parameters?
Thanks!
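For context, here is a minimal NumPy sketch of the two attention regularizers these weights control, assuming GANimation-style losses (a mean-activation saturation penalty plus a total-variation smoothness term). The function name `attention_losses` is hypothetical and not from this repo; it only illustrates why a large lambda_sat pushes the mask toward 0 while a small one lets it drift toward 1.

```python
import numpy as np

def attention_losses(A, lambda_sat=0.1, lambda_smooth=1e-4):
    """Sketch of GANimation-style regularizers on an attention mask A in [0, 1].

    - saturation term: penalizes the mean mask value, pushing A toward 0
      (if lambda_sat is too small, the mask can saturate to 1)
    - smoothness term: total-variation penalty on neighboring pixels
    """
    sat = lambda_sat * A.mean()
    tv = lambda_smooth * (
        np.abs(np.diff(A, axis=-1)).sum()   # horizontal differences
        + np.abs(np.diff(A, axis=-2)).sum() # vertical differences
    )
    return sat, tv

# A fully saturated mask incurs the maximum saturation penalty and zero TV:
A = np.ones((4, 4))
sat, tv = attention_losses(A)
```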