
Error when I run the main.py #1

Closed

lxtGH opened this issue Nov 9, 2017 · 1 comment

lxtGH commented Nov 9, 2017

```
File "/home/lxt/pytorch/CapsNet/main.py", line 45, in <module>
  digit_caps = model(data, target)
File "/home/lxt/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 224, in __call__
  result = self.forward(*input, **kwargs)
File "/home/lxt/pytorch/CapsNet/capsnet.py", line 29, in forward
  digit_caps = self.digit_caps(primary_caps)
File "/home/lxt/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 224, in __call__
  result = self.forward(*input, **kwargs)
File "/home/lxt/pytorch/CapsNet/functions.py", line 53, in forward
  pred = [self.W[i](in_vec) for i, group in enumerate(u) for in_vec in group]
File "/home/lxt/pytorch/CapsNet/functions.py", line 53, in <listcomp>
  pred = [self.W[i](in_vec) for i, group in enumerate(u) for in_vec in group]
File "/home/lxt/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 224, in __call__
  result = self.forward(*input, **kwargs)
File "/home/lxt/anaconda3/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 53, in forward
  return F.linear(input, self.weight, self.bias)
File "/home/lxt/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py", line 555, in linear
  output = input.matmul(weight.t())
File "/home/lxt/anaconda3/lib/python3.6/site-packages/torch/autograd/variable.py", line 560, in matmul
  return torch.matmul(self, other)
File "/home/lxt/anaconda3/lib/python3.6/site-packages/torch/functional.py", line 168, in matmul
  return torch.mm(tensor1.unsqueeze(0), tensor2).squeeze_(0)
File "/home/lxt/anaconda3/lib/python3.6/site-packages/torch/autograd/variable.py", line 579, in mm
  return Addmm.apply(output, self, matrix, 0, 1, True)
File "/home/lxt/anaconda3/lib/python3.6/site-packages/torch/autograd/_functions/blas.py", line 26, in forward
  matrix1, matrix2, out=output)
TypeError: torch.addmm received an invalid combination of arguments - got (int, torch.cuda.FloatTensor, int, torch.cuda.FloatTensor, torch.FloatTensor, out=torch.cuda.FloatTensor), but expected one of:

 * (torch.cuda.FloatTensor source, torch.cuda.FloatTensor mat1, torch.cuda.FloatTensor mat2, *, torch.cuda.FloatTensor out)
 * (torch.cuda.FloatTensor source, torch.cuda.sparse.FloatTensor mat1, torch.cuda.FloatTensor mat2, *, torch.cuda.FloatTensor out)
 * (float beta, torch.cuda.FloatTensor source, torch.cuda.FloatTensor mat1, torch.cuda.FloatTensor mat2, *, torch.cuda.FloatTensor out)
 * (torch.cuda.FloatTensor source, float alpha, torch.cuda.FloatTensor mat1, torch.cuda.FloatTensor mat2, *, torch.cuda.FloatTensor out)
 * (float beta, torch.cuda.FloatTensor source, torch.cuda.sparse.FloatTensor mat1, torch.cuda.FloatTensor mat2, *, torch.cuda.FloatTensor out)
 * (torch.cuda.FloatTensor source, float alpha, torch.cuda.sparse.FloatTensor mat1, torch.cuda.FloatTensor mat2, *, torch.cuda.FloatTensor out)
 * (float beta, torch.cuda.FloatTensor source, float alpha, torch.cuda.FloatTensor mat1, torch.cuda.FloatTensor mat2, *, torch.cuda.FloatTensor out)
   didn't match because some of the arguments have invalid types: (int, torch.cuda.FloatTensor, int, torch.cuda.FloatTensor, !torch.FloatTensor!, out=torch.cuda.FloatTensor)
 * (float beta, torch.cuda.FloatTensor source, float alpha, torch.cuda.sparse.FloatTensor mat1, torch.cuda.FloatTensor mat2, *, torch.cuda.FloatTensor out)
```

torch version 2.0.3; I tried both Python 3 and Python 2.
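For reference, the error message boils down to a device mismatch: `F.linear` receives a CUDA input but a CPU weight (note the `!torch.FloatTensor!` among `torch.cuda.FloatTensor` arguments), which suggests the `self.W[i]` modules were never moved to the GPU. A minimal sketch of the fix in current PyTorch (the names below are illustrative, not from this repo; 0.2-era code would call `.cuda()` on the module instead):

```python
import torch
import torch.nn as nn

layer = nn.Linear(8, 16)   # parameters start on CPU (torch.FloatTensor)
x = torch.randn(4, 8)      # stand-in input; on a GPU this would be a CUDA tensor

# Move the layer's parameters to the same device as its input before calling it;
# mixing CPU weights with CUDA inputs is what triggers the TypeError above.
layer = layer.to(x.device)
out = layer(x)
print(tuple(out.shape))    # (4, 16)
```

Wrapping per-capsule layers in `nn.ModuleList` (rather than a plain Python list) also ensures `.cuda()` on the parent model moves them along with everything else.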

leftthomas (Owner) commented Nov 9, 2017

@lxtGH I have updated the code; it should run properly now.
