
Should the loss be modified? #1

Description

@DanVane

In DPSH_model.py, the loss is

loss = tf.div(
    (- 2.0 * tf.reduce_sum(tf.mul(S, theta) - (tf.max(0, theta) + tf.log(tf.exp(tf.abs(-theta)) + 1))))
    + config.lamda * tf.reduce_sum(tf.pow((B_code - U0), 2)),
    float(config.N_size * config.batch_size))
but it cannot reach a good accuracy (it stays at 0.12), and the loss hovers around 41.
I modified it to

loss = tf.div(
    (- 2.0 * tf.reduce_sum(tf.mul(S, theta) - (theta + tf.log(tf.exp(-theta) + 1))))
    + config.lamda * tf.reduce_sum(tf.pow((B_code - U0), 2)),
    float(config.N_size * config.batch_size))
and the loss then keeps getting smaller.
The tf.abs() in the raw code is meaningless, and tf.max() is not needed; I just followed the paper (DPSH, Wu-Jun Li).
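For reference, here is a minimal standalone sketch (not the repository's code; theta here is just a placeholder vector, and it assumes a TF 1.x graph-style API like the snippets above) comparing the three ways of computing log(1 + exp(theta)) that come up in this issue:

import numpy as np
import tensorflow as tf  # assumes TF 1.x

theta = tf.placeholder(tf.float32, shape=[None])

# Term from the raw code: max(0, theta) + log(exp(|theta|) + 1)
# (tf.max in the snippet presumably means tf.maximum). exp(+|theta|)
# grows instead of shrinking, so this is not log(1 + exp(theta)) and
# it overflows for large |theta|.
raw_term = tf.maximum(0.0, theta) + tf.log(tf.exp(tf.abs(-theta)) + 1.0)

# Term from the modification in this issue: theta + log(exp(-theta) + 1)
# Algebraically equal to log(1 + exp(theta)), but exp(-theta) can still
# overflow for very negative theta.
modified_term = theta + tf.log(tf.exp(-theta) + 1.0)

# Numerically stable identity:
# log(1 + exp(theta)) = max(0, theta) + log(1 + exp(-|theta|))
stable_term = tf.maximum(0.0, theta) + tf.log(1.0 + tf.exp(-tf.abs(theta)))

with tf.Session() as sess:
    vals = np.array([-100.0, -1.0, 0.0, 1.0, 100.0], dtype=np.float32)
    raw_v, mod_v, stable_v = sess.run([raw_term, modified_term, stable_term],
                                      feed_dict={theta: vals})
    print("raw:     ", raw_v)      # wrong values / inf for large |theta|
    print("modified:", mod_v)      # correct except inf at theta = -100
    print("stable:  ", stable_v)   # correct everywhere

Running this shows the raw term diverging from log(1 + exp(theta)), while the stable form agrees with the modified one on moderate values and stays finite at the extremes.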
Your code helped me a lot. Thank you very much!
