Hi friends, I ran into a problem when running
"python main.py --data_type "event" --use_vfeatures --use_siamese --use_gfeatures --use_gcn --use_cd"
The error is as follows:
Traceback (most recent call last):
File "main.py", line 297, in <module>
train(args, fout)
File "main.py", line 248, in train
step = train_epoch(epoch, step)
File "main.py", line 190, in train_epoch
output = model(w2v_idxs_l, w2v_idxs_r, v_feature, adj, g_feature, g_vertice) # what if batch > 1 ?
File "//users10/yhwu/miniconda/envs/match_article/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/users10/yhwu/Project/match_article/src/models/CCIG/models/se_gcn.py", line 124, in forward
x_siamese = self.gc_w2v[n_l](x_siamese, adj)
File "//users10/yhwu/miniconda/envs/match_article/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/users10/yhwu/Project/match_article/src/models/CCIG/models/layers.py", line 60, in forward
output = SparseMM()(adj, support)
File "//users10/yhwu/miniconda/envs/match_article/lib/python3.8/site-packages/torch/autograd/function.py", line 159, in __call__
raise RuntimeError(
RuntimeError: Legacy autograd function with non-static forward method is deprecated. Please use new-style autograd function with static forward method. (Example: https://pytorch.org/docs/stable/autograd.html#torch.autograd.Function)
By searching online, I found that this problem is likely due to the PyTorch version (for PyTorch > 1.3, torch.autograd.Function must use a static forward method instead of a non-static one). So I tried to fix it by changing the class SparseMM from non-static to static. My modified code is as follows:
class SparseMM(torch.autograd.Function):
    """
    Sparse x dense matrix multiplication with autograd support.
    Implementation by Soumith Chintala:
    https://discuss.pytorch.org/t/
    does-pytorch-support-autograd-on-sparse-matrix/6156/7
    """
    @staticmethod
    def forward(ctx, matrix1, matrix2):
        ctx.save_for_backward(matrix1, matrix2)
        return torch.mm(matrix1, matrix2)

    @staticmethod
    def backward(ctx, grad_output):
        matrix1, matrix2 = ctx.saved_tensors
        grad_matrix1 = grad_matrix2 = None
        if ctx.needs_input_grad[0]:
            grad_matrix1 = torch.mm(grad_output, matrix2.t())
        if ctx.needs_input_grad[1]:
            grad_matrix2 = torch.mm(matrix1.t(), grad_output)
        return grad_matrix1, grad_matrix2
However, the problem is not solved and the error message is unchanged.
I don't know what to do next (unless I change the PyTorch version). I hope to get help from the author and everyone. Thanks a lot!
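Update: while investigating further, I noticed that a new-style (static) autograd Function must be invoked through its .apply classmethod rather than by instantiating it, so I suspect the call site in layers.py (output = SparseMM()(adj, support)) also needs to be updated, not just the class itself. A minimal self-contained sketch of the pattern (using dense tensors for simplicity; this is my guess at the fix, not a confirmed solution):

```python
import torch

class SparseMM(torch.autograd.Function):
    """New-style autograd function: matrix multiply with custom backward."""
    @staticmethod
    def forward(ctx, matrix1, matrix2):
        ctx.save_for_backward(matrix1, matrix2)
        return torch.mm(matrix1, matrix2)

    @staticmethod
    def backward(ctx, grad_output):
        matrix1, matrix2 = ctx.saved_tensors
        grad_matrix1 = grad_matrix2 = None
        if ctx.needs_input_grad[0]:
            grad_matrix1 = torch.mm(grad_output, matrix2.t())
        if ctx.needs_input_grad[1]:
            grad_matrix2 = torch.mm(matrix1.t(), grad_output)
        return grad_matrix1, grad_matrix2

a = torch.randn(3, 4, requires_grad=True)
b = torch.randn(4, 5, requires_grad=True)

# New-style functions are called via .apply, not via SparseMM()(a, b):
out = SparseMM.apply(a, b)
out.sum().backward()
print(out.shape, a.grad.shape)  # torch.Size([3, 5]) torch.Size([3, 4])
```

If that is right, the fix in layers.py would be to change SparseMM()(adj, support) to SparseMM.apply(adj, support).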