Dear Hailanyi:
When I was training VirConv-L, I ran into a puzzling error:

```
[Exception|implicit_gemm_backward]feat=torch.Size([0]),w=torch.Size([32, 3, 3, 32]),pair=torch.Size([9, 166197]),issubm=True,do=torch.Size([166197, 32])
epochs:  24%|██▍       | 12/50 [6:29:43<20:34:06, 1948.59s/it, loss=2.18, lr=0.00698]
Traceback (most recent call last):
  File "/media/xd/hpc/xyy/VirConv-master/tools/train_utils/train_utils.py", line 95, in train_model
    accumulated_iter = train_one_epoch(
  File "/media/xd/hpc/xyy/VirConv-master/tools/train_utils/train_utils.py", line 47, in train_one_epoch
    loss.backward()
SPCONV_DEBUG_SAVE_PATH not found, you can specify SPCONV_DEBUG_SAVE_PATH as debug data save path to save debug data which can be attached in a issue.
  File "/home/xd/anaconda3/envs/vircompletion/lib/python3.8/site-packages/torch/_tensor.py", line 307, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/home/xd/anaconda3/envs/vircompletion/lib/python3.8/site-packages/torch/autograd/__init__.py", line 154, in backward
    Variable._execution_engine.run_backward(
  File "/home/xd/anaconda3/envs/vircompletion/lib/python3.8/site-packages/torch/autograd/function.py", line 199, in apply
    return user_fn(self, *args)
  File "/home/xd/anaconda3/envs/vircompletion/lib/python3.8/site-packages/torch/autograd/function.py", line 340, in wrapper
    outputs = fn(ctx, *args)
  File "/home/xd/anaconda3/envs/vircompletion/lib/python3.8/site-packages/torch/cuda/amp/autocast_mode.py", line 111, in decorate_bwd
    return bwd(*args, **kwargs)
  File "/home/xd/anaconda3/envs/vircompletion/lib/python3.8/site-packages/spconv/pytorch/functional.py", line 258, in backward
    raise e
  File "/home/xd/anaconda3/envs/vircompletion/lib/python3.8/site-packages/spconv/pytorch/functional.py", line 234, in backward
    input_bp, filters_bp = ops.implicit_gemm_backward(
  File "/home/xd/anaconda3/envs/vircompletion/lib/python3.8/site-packages/spconv/pytorch/ops.py", line 1188, in implicit_gemm_backward
    if features.dtype == torch.int8 or features.dtype == torch.qint8:
RuntimeError: unsupported scalarType
python-BaseException
```
But when I trained VirConv-S, this problem never occurred. Have you encountered it before? Thank you!
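For reference, the shapes in the exception suggest that the sparse tensor reaching this submanifold conv has zero active voxels (`feat=torch.Size([0])`). Below is a small diagnostic helper I am considering (purely hypothetical, not part of VirConv or spconv) that could be dropped in front of the suspect layer to confirm whether an empty input is what triggers the backward failure:

```python
import spconv.pytorch as spconv


def warn_if_empty(x: spconv.SparseConvTensor,
                  name: str = "sparse_tensor") -> spconv.SparseConvTensor:
    # Diagnostic only: report sparse tensors that reach a conv layer with
    # zero active voxels, matching feat=torch.Size([0]) in the traceback.
    if x.features.shape[0] == 0:
        print(f"[warn] {name} has 0 active voxels "
              f"(features {tuple(x.features.shape)}, "
              f"indices {tuple(x.indices.shape)})")
    return x
```

If the warning fires for the same layer right before the crash, that would point to an empty voxel set for that sample as the trigger, which might also explain why the same setup trains fine with VirConv-S.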