Training Losses and optimal parameters #23

@adderbyte


Hi,
Thank you so much for this excellent work!

I have trained a model using the script you provided but with a different data set. I got the following output :

Pretrain_Epoch:96, trainLoss:17.692522, validLoss:103.652969, validReverseLoss:0.000000
Pretrain_Epoch:97, trainLoss:17.549919, validLoss:104.221916, validReverseLoss:0.000000
Pretrain_Epoch:98, trainLoss:17.376888, validLoss:104.022125, validReverseLoss:0.000000
Pretrain_Epoch:99, trainLoss:17.238510, validLoss:104.839447, validReverseLoss:0.000000
Epoch:0, d_loss:0.436150, g_loss:3.820657, accuracy:1.000000, AUC:1.000000
Epoch:1, d_loss:0.005911, g_loss:3.363690, accuracy:1.000000, AUC:1.000000
Epoch:2, d_loss:0.007880, g_loss:1.667129, accuracy:0.999333, AUC:0.999994
Epoch:3, d_loss:0.031970, g_loss:0.164756, accuracy:0.999583, AUC:1.000000
Epoch:4, d_loss:0.010160, g_loss:0.155293, accuracy:1.000000, AUC:1.000000
Epoch:5, d_loss:0.004382, g_loss:0.106739, accuracy:0.999500, AUC:1.000000
Epoch:6, d_loss:0.005284, g_loss:0.098650, accuracy:0.999667, AUC:1.000000

Are these losses similar to what you obtained? Is there any way to tweak the model to improve them?
I changed _VALIDATION_RATIO to 0.2 to avoid some errors; I assume this should not affect my results.
Since the learning rate was not explicitly passed to the optimizer, what learning rate would you recommend?
What batch size and number of epochs do you recommend, and how should they change as the dimensionality of the data increases?
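For context, here is a minimal sketch of how I imagine threading an explicit learning rate through to the optimizer; the names (`build_optimizer`, `lr`) are my own illustration, not the repo's actual API, and the 1e-3 default matches what TensorFlow's Adam uses when no rate is given:

```python
# Hypothetical sketch -- not the repo's real code.
def build_optimizer(lr=1e-3):
    """Return optimizer settings. TensorFlow's AdamOptimizer falls back to
    a learning rate of 1e-3 when none is passed, so that is presumably the
    implicit value the training script is using now."""
    return {"optimizer": "Adam", "learning_rate": lr}

# A smaller rate such as 1e-4 is a common starting point for GAN training:
config = build_optimizer(lr=1e-4)
print(config["learning_rate"])
```

If this roughly matches how the script is wired, I could try sweeping the rate over, say, {1e-3, 1e-4, 1e-5} and compare validation losses.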

Thank you!
