Hi,
I've been exploring your implementation for a while now and noticed something that may introduce unintended bias during training when `--eval_during_training` is enabled.
Here, you run:

```python
for index, seed in enumerate(allseeds):
    ...
    fixseed(seed)
```
...which overwrites the RNG seed on every iteration. However, after this loop finishes, the original seed is never restored.
Since evaluation is invoked inside the training loop, this means that after the first evaluation, training resumes with the last seed in `allseeds`, and that state persists for the rest of the run.
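A minimal sketch of one possible fix: snapshot the RNG state before the evaluation loop and restore it afterwards, so training's random stream continues where it left off. The `fixseed` and `evaluate` shown here are simplified stand-ins (the repo's real `fixseed` also seeds numpy and torch, which would need the same save/restore treatment via `np.random.get_state()` and `torch.get_rng_state()`):

```python
import random

def fixseed(seed):
    # stand-in for the repo's fixseed(); only seeds Python's RNG here
    # (the real helper also seeds numpy and torch)
    random.seed(seed)

def evaluate(allseeds):
    # snapshot the global RNG state before evaluation reseeds it
    saved_state = random.getstate()
    try:
        for index, seed in enumerate(allseeds):
            fixseed(seed)
            # ... run one evaluation replication ...
    finally:
        # restore, so training resumes its original random stream
        random.setstate(saved_state)

# demo: training's RNG state is unchanged by evaluate()
random.seed(10)          # hypothetical training seed
_ = random.random()      # some training-side draws
before = random.getstate()
evaluate([0, 1, 2])
assert random.getstate() == before
```

The `try/finally` ensures the state is restored even if an evaluation replication raises.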
Kind regards.