Unknown cause, requesting help to resolve #85

Open
johndialys opened this issue Sep 16, 2021 · 2 comments

Comments

@johndialys

Traceback (most recent call last):
  File "demo.py", line 16, in <module>
    model = create_model(opt)
  File "/viton/Global-Flow-Local-Attention/model/__init__.py", line 32, in create_model
    instance = model(opt)
  File "/viton/Global-Flow-Local-Attention/model/pose_model.py", line 64, in __init__
    norm='instance', activation='LeakyReLU', extractor_kz=opt.kernel_size)
  File "/viton/Global-Flow-Local-Attention/model/networks/__init__.py", line 37, in define_g
    return create_network(netG_cls, opt, **parameter_dic)
  File "/viton/Global-Flow-Local-Attention/model/networks/__init__.py", line 28, in create_network
    net.init_weights(opt.init_type)
  File "/viton/Global-Flow-Local-Attention/model/networks/base_network.py", line 55, in init_weights
    self.apply(init_func)
  File "/root/anaconda3/envs/gflab/lib/python3.6/site-packages/torch/nn/modules/module.py", line 242, in apply
    module.apply(fn)
  File "/root/anaconda3/envs/gflab/lib/python3.6/site-packages/torch/nn/modules/module.py", line 242, in apply
    module.apply(fn)
  File "/root/anaconda3/envs/gflab/lib/python3.6/site-packages/torch/nn/modules/module.py", line 242, in apply
    module.apply(fn)
  [Previous line repeated 1 more time]
  File "/root/anaconda3/envs/gflab/lib/python3.6/site-packages/torch/nn/modules/module.py", line 243, in apply
    fn(self)
  File "/viton/Global-Flow-Local-Attention/model/networks/base_network.py", line 47, in init_func
    init.orthogonal_(m.weight.data, gain=gain)
  File "/root/anaconda3/envs/gflab/lib/python3.6/site-packages/torch/nn/init.py", line 356, in orthogonal_
    q, r = torch.qr(flattened)
RuntimeError: cuda runtime error (11) : invalid argument at /pytorch/aten/src/THC/generic/THCTensorMathPairwise.cu:225
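
To check whether this is specific to the GFLA code or comes from the CUDA/PyTorch installation itself, a minimal sketch like the one below runs the same orthogonal_ initialization path on the GPU. The tensor shape is an arbitrary placeholder, not taken from the model:

# Minimal repro sketch: orthogonal_ internally calls torch.qr on the CUDA tensor,
# which is the call that fails in the traceback above.
import torch
import torch.nn.init as init

w = torch.empty(64, 147, device='cuda')   # placeholder shape, any 2-D weight works
init.orthogonal_(w, gain=1.0)
print('orthogonal_ on CUDA succeeded:', tuple(w.shape))

If this small script fails with the same "invalid argument" error, the problem is in the PyTorch/CUDA setup rather than in the repository's code.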

@zhangle9012

The CUDA version may not match the torch version.
See https://pytorch.org/get-started/previous-versions/
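
A quick way to check this, using only standard torch calls, is to print the CUDA version the installed build was compiled against next to the GPU it is actually running on:

# Sketch: the CUDA version torch was built with must support the installed driver/GPU.
import torch

print('torch version:', torch.__version__)
print('built against CUDA:', torch.version.cuda)
print('CUDA available:', torch.cuda.is_available())
if torch.cuda.is_available():
    print('device:', torch.cuda.get_device_name(0))
    print('compute capability:', torch.cuda.get_device_capability(0))

If torch was built against a CUDA version too old for the GPU architecture (for example, a Turing card such as the T4 in the comment below requires CUDA 10.0 or newer, so a cuda90 build of torch 1.0.0 ships no kernels for it), GPU ops such as torch.qr can fail with exactly this kind of "invalid argument" runtime error.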

@matrixlibing

matrixlibing commented Dec 22, 2022

I have the same problem.
My machine environment is as follows:
CUDA driver version (10.1) > CUDA toolkit version (9.0) == the CUDA version this PyTorch build needs (9.0).

(gfla) [gfla@k8s-master gfla]$ conda list | grep cuda
cuda90 1.0 h6433d27_0 pytorch
pytorch 1.0.0 py3.7_cuda9.0.176_cudnn7.4.1_1 pytorch

(gfla) [gfla@k8s-master gfla]$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2017 NVIDIA Corporation
Built on Fri_Sep__1_21:08:03_CDT_2017
Cuda compilation tools, release 9.0, V9.0.176

(gfla) [gfla@k8s-master gfla]$ nvidia-smi
Thu Dec 22 15:26:05 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.67 Driver Version: 418.67 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla T4 Off | 00000000:3B:00.0 Off | 0 |
| N/A 45C P0 29W / 70W | 11535MiB / 15079MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 1 Tesla T4 Off | 00000000:AF:00.0 Off | 0 |
| N/A 43C P0 29W / 70W | 12174MiB / 15079MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
