
optim.checkgrad dimensions dependent #161

Description

@bermanmaxim

I'm cross-posting from a question on Google Groups because I believe there is a problem with checkgrad, although it's possible I have overlooked something. Consider this function:

function f(x)
    -- f(x) = <x, 1>: dot product of flattened x with a same-sized ones tensor
    local vect = x.new(x:size()):fill(1)
    local fx = torch.dot(x:view(-1), vect:view(-1))
    -- return the value and the analytic gradient (all ones)
    return fx, vect
end

fx is the dot product between flattened x and flattened vect, so the gradient of fx with respect to x should simply be vect (a tensor of ones with the same size as x).

Yet when checking with optim.checkgrad:

th> a = torch.rand(2, 5)
th> diff, dC, dC_est = optim.checkgrad(f, a)
th> dC_est
 5.0000  5.0000  5.0000  5.0000  5.0000
 5.0000  5.0000  5.0000  5.0000  5.0000
[torch.DoubleTensor of size 2x5]
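
The factor of 5 matches x:size(2), which suggests the finite-difference loop in checkgrad runs over the first dimension only: in Torch, indexing a 2D tensor with a single index selects a whole row, so a perturbation like x[i] = x[i] + eps moves all five entries of that row at once. A minimal sketch of that indexing behavior (g below is just a stand-in sum, not checkgrad itself):

require 'torch'

local eps = 1e-7
local a = torch.rand(2, 5)

-- stand-in objective: sums every element, so the true gradient is all ones
local function g(x) return x:sum() end

-- a[1] selects the entire first row, so this perturbs 5 entries at once
a[1] = a[1] + eps
local C1 = g(a)
a[1] = a[1] - 2 * eps
local C2 = g(a)
a[1] = a[1] + eps  -- restore a

print((C1 - C2) / (2 * eps))  -- prints ~5: one eps contribution per column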

This would explain why dC_est evaluates to 5.0 * (ones of the size of x). PyTorch's autograd, by contrast, returns the result I would expect, a tensor of ones:

>>> import torch
>>> from torch.autograd import Variable
>>> a = torch.rand(2, 5)
>>> av = Variable(a, requires_grad=True)
>>> b = Variable(torch.ones(10))
>>> c = torch.dot(av.view(-1), b)
>>> c.backward()
>>> print(av.grad)
Variable containing:
 1  1  1  1  1
 1  1  1  1  1
[torch.FloatTensor of size 2x5]
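
For reference, a dimension-independent check can be written by perturbing one element of a flattened view at a time. This is only a minimal sketch, not optim's implementation; checkgradFlat and its defaults are my own names and choices:

require 'torch'

-- Hypothetical flat finite-difference check: iterates over every element
-- through a 1D view, so the shape of x no longer matters.
local function checkgradFlat(opfunc, x, eps)
    eps = eps or 1e-7
    local _, dC = opfunc(x)          -- analytic gradient from the caller
    local xFlat = x:view(-1)         -- assumes x is contiguous
    local dC_est = x.new(xFlat:size(1))
    for i = 1, xFlat:size(1) do
        local orig = xFlat[i]
        xFlat[i] = orig + eps
        local C1 = opfunc(x)
        xFlat[i] = orig - eps
        local C2 = opfunc(x)
        xFlat[i] = orig              -- restore the element
        dC_est[i] = (C1 - C2) / (2 * eps)
    end
    dC_est = dC_est:viewAs(x)
    local diff = torch.norm(dC - dC_est) / torch.norm(dC + dC_est)
    return diff, dC, dC_est
end

With f from above, checkgradFlat(f, torch.rand(2, 5)) should give a dC_est of all ones and a diff near zero, independent of the tensor's shape.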
