For me, decreasing the batch size of the test dataloader noticeably reduced GPU memory usage and improved speed.
So in `train.py`, changing

```python
test_loader = data.DataLoader(test_set, batch_size=len(test_set), shuffle=False, collate_fn=test_set.collate_fn)
```

to

```python
test_loader = data.DataLoader(test_set, batch_size=min(len(test_set), config["batch_size"] // 2), shuffle=False, collate_fn=test_set.collate_fn)
```
There doesn't seem to be any advantage to loading the entire test set onto the GPU at once. I tried to open a pull request for this change, but I don't think I'm allowed to. Hope this is helpful!
Thanks,
Bryan