```python
norm = torch.linalg.norm(y - den_rec, dim=dim, ord=2)
rec_grads = torch.autograd.grad(outputs=norm, inputs=x)
rec_grads = rec_grads[0]
normguide = torch.linalg.norm(rec_grads) / x.shape[-1]**0.5
# normalize scaling
s = self.xi / (normguide * t_i + 1e-6)
# optionally apply a threshold to the gradients
if self.treshold_on_grads > 0:
    # apply thresholding to the gradients; a dirty trick, but it helps avoid bad artifacts
    rec_grads = torch.clip(rec_grads, min=-self.treshold_on_grads, max=self.treshold_on_grads)
score = (x_hat.detach() - x) / t_i**2
```
Inside the sampler, the error norm and the gradient normalization (`normguide`) are computed as plain L2 norms, but according to the paper they should be squared norms (i.e., powers of the error and of the gradient). This seems to mismatch the paper. Is this on purpose?
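To make the discrepancy concrete, here is a minimal NumPy sketch (using a hypothetical residual `e = y - den_rec` and analytic gradients, to avoid a torch dependency) showing that the gradient of the plain L2 norm and the gradient of the squared norm differ by a factor of 2‖e‖, so the effective guidance strength depends on which one the sampler uses:

```python
import numpy as np

# Hypothetical flattened residual e = y - den_rec (illustrative values only).
e = np.array([3.0, 4.0])            # ||e||_2 = 5

# Gradient of the plain L2 norm ||e|| w.r.t. e is e / ||e|| (always unit length).
grad_norm = e / np.linalg.norm(e)   # [0.6, 0.8]

# Gradient of the squared norm ||e||^2 w.r.t. e is 2e (grows with the error).
grad_sq = 2.0 * e                   # [6.0, 8.0]

# The two gradients differ elementwise by the factor 2 * ||e||_2.
ratio = grad_sq / grad_norm         # [10.0, 10.0]
```

With the plain norm, the gradient magnitude is independent of the error size, which the subsequent division by `normguide` partly compensates for; with the squared norm it would scale linearly with the residual.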