In DPSH_model.py, the loss is
loss = tf.div(
    (-2.0 * tf.reduce_sum(tf.mul(S, theta) - (tf.max(0, theta) + tf.log(tf.exp(tf.abs(-theta)) + 1))))
    + config.lamda * tf.reduce_sum(tf.pow((B_code - U0), 2)),
    float(config.N_size * config.batch_size))
but it cannot reach a good accuracy (it stays at about 0.12), and the loss hovers around 41.
I modified it to
loss = tf.div(
    (-2.0 * tf.reduce_sum(tf.mul(S, theta) - (theta + tf.log(tf.exp(-theta) + 1))))
    + config.lamda * tf.reduce_sum(tf.pow((B_code - U0), 2)),
    float(config.N_size * config.batch_size))
and with this change the loss keeps decreasing during training.
The tf.abs() in the original code is meaningless, and tf.max() is not needed either; I simply followed the paper (DPSH, Wu-Jun Li).
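For reference, a minimal sketch of the same objective written with tf.nn.softplus, which evaluates log(1 + exp(theta)); mathematically, theta + tf.log(tf.exp(-theta) + 1) equals this same quantity. The variable names (S, theta, B_code, U0, lamda, N_size, batch_size) are assumed placeholders mirroring the snippets above, not the repository's actual API:

import tensorflow as tf

def dpsh_loss(S, theta, B_code, U0, lamda, N_size, batch_size):
    # Pairwise likelihood term of the DPSH objective:
    # -2 * sum_ij ( s_ij * theta_ij - log(1 + exp(theta_ij)) )
    # tf.nn.softplus(x) computes log(1 + exp(x)).
    likelihood = -2.0 * tf.reduce_sum(S * theta - tf.nn.softplus(theta))
    # Quantization penalty: lamda * sum_i ||b_i - u_i||^2
    quantization = lamda * tf.reduce_sum(tf.square(B_code - U0))
    return (likelihood + quantization) / float(N_size * batch_size)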
Your code helped me a lot. Thank you very much!