Hello.
Thanks for your great work on representation-based active learning. This method really does save time compared to Ensemble, Core-set, or MC Dropout when querying informative samples.
However, I have a couple of questions.
- I find the transductive representation learning quite time-consuming, since it trains on all the images in the train set. This looks like duplicated work at the later stages (15%, 20%, ..., 40%), because the majority of the labeled and unlabeled splits of the train set haven't changed between stages.
- Also, if we train the VAE and Discriminator for 100 epochs, the task model ends up trained for far more than 100 effective epochs: the shared iteration count is computed over the full train set, while the task model's batches are drawn only from the labeled subset, and len(labelset) < len(trainset). I think splitting the task model's training from the VAE and Discriminator may be a better choice; see the sketch after this list.
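
To make the second point concrete, here is a minimal sketch of what I mean by splitting the loops. All names (`task_model`, `vae`, `discriminator`, the loaders) and the optimizer settings are placeholders for illustration, not the repo's actual API, and the VAE/Discriminator update itself is elided since only the decoupled loop structure is the point:

```python
import torch
import torch.nn.functional as F

def train_stage(task_model, vae, discriminator,
                labeled_loader, full_loader,
                task_epochs=100, vae_epochs=100, device="cuda"):
    """Decoupled alternative: the task model and the VAE/Discriminator
    each get their own epoch budget over their own data."""
    task_opt = torch.optim.SGD(task_model.parameters(), lr=0.01, momentum=0.9)

    # 1) Task model: exactly `task_epochs` passes over the *labeled* subset.
    task_model.train()
    for _ in range(task_epochs):
        for x, y in labeled_loader:
            x, y = x.to(device), y.to(device)
            task_opt.zero_grad()
            F.cross_entropy(task_model(x), y).backward()
            task_opt.step()

    # 2) VAE + Discriminator: `vae_epochs` passes over the *full* train set.
    #    The usual reconstruction/KL/adversarial updates would go here.
    for _ in range(vae_epochs):
        for x, _ in full_loader:
            x = x.to(device)
            ...  # vae/discriminator update on x

```
This way, changing the VAE's budget no longer inflates the task model's effective epochs, and each component can be tuned (or skipped) per stage independently.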
CIFAR10 train process:

```
python main.py --cuda --dataset cifar10 --data_path /nfs/xs/Datasets/CIFAR10 \
    --batch_size 128 --train_epochs 100 \
    --latent_dim 32 --beta 1 --adversary_param 1
```

```
Iter 1000/39062, task loss: 0.213, vae: 3.931, dsc: 1.389:  3%|█ | 1046/39062 [21:38<13:05:31, 1.24s/it]
```
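
For reference, the 39062 total iterations in that log are exactly what a single shared loop over the full train set gives. Assuming the initial labeled budget is 10% of CIFAR-10 (5,000 of 50,000 images; that split size is my assumption), a quick check shows the task model gets roughly 1,000 effective epochs:

```python
train_set_size = 50_000   # CIFAR-10 training images
labeled_size = 5_000      # assumed initial 10% budget
batch_size = 128
train_epochs = 100

total_iters = train_set_size * train_epochs // batch_size
effective_task_epochs = total_iters * batch_size / labeled_size

print(total_iters)             # 39062, matching the log above
print(effective_task_epochs)   # ~1000 passes over the labeled set
```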
I'm looking forward to your advice.
Thanks a lot.
Sincerely