Description
I have a few queries regarding the loss function implementation:
a) The lambda value in the loss function is set to 0, so the l2_norm term is effectively dropped from the loss equation.
b) The embedding loss is just the squared distance between the two embedding spaces, as per the code:
return tf.reduce_mean(tf.square(Fx - Fe)) #tf.trace(tf.matmul(C1, tf.transpose(C1))) + self.config.solver.lagrange_const * tf.trace(tf.matmul(C2, tf.transpose(C2))) + self.config.solver.lagrange_const * tf.trace(tf.matmul(C3, tf.transpose(C3)))
This objective may not converge to the optimal point.
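To make the distinction in (b) concrete, here is a minimal NumPy sketch (function names are mine, not the repo's): the implemented loss is a plain mean squared distance between the two embedding matrices, whereas a correlation-style objective only cares about how the dimensions co-vary, not about their scale.

```python
import numpy as np

def dist_loss(Fx, Fe):
    # What the quoted line computes: mean squared distance
    # between the two embedding matrices (DistAE-style).
    return np.mean((Fx - Fe) ** 2)

def corr_per_dim(Fx, Fe, eps=1e-8):
    # A correlation-style objective (CorrAE-style sketch, my naming):
    # Pearson correlation of each embedding dimension across the batch.
    Fx_c = Fx - Fx.mean(axis=0)
    Fe_c = Fe - Fe.mean(axis=0)
    num = (Fx_c * Fe_c).sum(axis=0)
    den = np.sqrt((Fx_c ** 2).sum(axis=0) * (Fe_c ** 2).sum(axis=0)) + eps
    return num / den  # per-dimension correlations in [-1, 1]

Fx = np.array([[0.0, 1.0], [2.0, 3.0], [4.0, 5.0]])
Fe = 2.0 * Fx  # same structure, different scale
# dist_loss(Fx, Fe) is large even though every dimension is
# perfectly correlated (corr_per_dim(Fx, Fe) is ~1 everywhere),
# which is why the two objectives can drive training to
# different optima.
```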
c) The output loss uses (1/y1 * y0), whereas the equation in the paper specifies (1/(y1 * y0)), resulting in a different alpha value for the output loss function.
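With toy counts (the numbers below are illustrative, not from the repo), the two readings of the normalizer in (c) differ by a factor of y0 squared:

```python
# y1 = number of positive labels, y0 = number of negative labels
# (example values chosen only to show the scale difference).
y1, y0 = 3, 7

alpha_paper = 1.0 / (y1 * y0)   # 1/(y1*y0), as written in the paper
alpha_code = (1.0 / y1) * y0    # (1/y1)*y0, as implemented

# The two alphas differ by a factor of y0**2, so the output loss
# is rescaled relative to the other loss terms.
ratio = alpha_code / alpha_paper
```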
The output loss is similar to the BP-MLL loss, for which several implementations are already available. I came across a paper that cited this implementation, and the implementation here corresponds more to a DistAE than to a CorrAE as described in Wang et al.'s paper: https://ttic.uchicago.edu/~wwang5/papers/icml15a.pdf
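For reference, the BP-MLL pairwise loss that the output loss resembles can be sketched as follows (my NumPy rendering of the Zhang and Zhou formulation, not this repo's code): for each example, every (positive, negative) label pair is penalized by exp of the negative score margin, normalized by the number of such pairs, i.e. 1/(|Y| * |Y_bar|).

```python
import numpy as np

def bp_mll_loss(scores, labels):
    """BP-MLL-style pairwise loss for one example (sketch).

    scores: (n_labels,) real-valued label scores
    labels: (n_labels,) 0/1 ground-truth indicators
    """
    pos = scores[labels == 1]  # scores of relevant labels
    neg = scores[labels == 0]  # scores of irrelevant labels
    if len(pos) == 0 or len(neg) == 0:
        return 0.0
    # exp(-(c_pos - c_neg)) over all positive/negative pairs,
    # normalized by 1/(|Y| * |Y_bar|) as in the paper's equation.
    diffs = pos[:, None] - neg[None, :]
    return float(np.exp(-diffs).sum() / (len(pos) * len(neg)))

# Well-ordered scores (positives above negatives) give a small loss;
# inverted scores give a large one.
good = bp_mll_loss(np.array([5.0, -5.0]), np.array([1, 0]))
bad = bp_mll_loss(np.array([-5.0, 5.0]), np.array([1, 0]))
```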