Replies: 4 comments 14 replies
-
cu118 in the URL means CUDA 11.8.
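If you want to confirm which build you actually ended up with, a quick check from the same Python environment (a minimal sketch; run it inside the webui venv) looks like this:

import torch

print(torch.__version__)          # e.g. "2.0.0+cu118", the suffix matches the wheel index
print(torch.version.cuda)         # CUDA version the wheel was built against, e.g. "11.8"
print(torch.cuda.is_available())  # False on a DirectML-only setup, since these are CUDA wheels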
-
@lshqqytiger I really appreciate all the help you've given me so far, and I was hoping you could help me out again. I'm still having a problem: I've been reading a lot and trying different things, but I can't find much information on the error I'm getting, and I don't know whether I'm doing something wrong. This is what I get:
python .\run.py
No GPU being used. Careful, inference might be very slow!
0%|          | 0/100 [00:00<?, ?it/s]
Traceback (most recent call last):
File "C:\Users\NoeXVanitasXJunk\bark\run.py", line 13, in <module>
audio_array = generate_audio(text_prompt)
File "C:\Users\NoeXVanitasXJunk\bark\bark\api.py", line 107, in generate_audio
semantic_tokens = text_to_semantic(
File "C:\Users\NoeXVanitasXJunk\bark\bark\api.py", line 25, in text_to_semantic
x_semantic = generate_text_semantic(
File "C:\Users\NoeXVanitasXJunk\bark\bark\generation.py", line 460, in generate_text_semantic
logits, kv_cache = model(
File "C:\Users\NoeXVanitasXJunk\miniconda3\envs\tfdml_plugin\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\NoeXVanitasXJunk\bark\bark\model.py", line 208, in forward
x, kv = block(x, past_kv=past_layer_kv, use_cache=use_cache)
File "C:\Users\NoeXVanitasXJunk\miniconda3\envs\tfdml_plugin\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\NoeXVanitasXJunk\bark\bark\model.py", line 121, in forward
attn_output, prev_kvs = self.attn(self.ln_1(x), past_kv=past_kv, use_cache=use_cache)
File "C:\Users\NoeXVanitasXJunk\miniconda3\envs\tfdml_plugin\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\NoeXVanitasXJunk\bark\bark\model.py", line 50, in forward
q, k ,v = self.c_attn(x).split(self.n_embd, dim=2)
File "C:\Users\NoeXVanitasXJunk\miniconda3\envs\tfdml_plugin\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\NoeXVanitasXJunk\miniconda3\envs\tfdml_plugin\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
RuntimeError: Cannot set version_counter for inference tensor
0%|          | 0/100 [00:00<?, ?it/s]

I'm on Python 3.9.16. Could you help me with this? One other thing: I've read that with torch-mlir it's possible to use an AMD card, but I'm not sure whether that works on Windows. I tried it, but I don't know if I need something more, or some special DirectML setup.
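For context, "Cannot set version_counter for inference tensor" generally means a tensor created under torch.inference_mode() is later mutated or reused somewhere that needs autograd metadata (here, most likely the kv_cache), and non-CUDA backends seem to trip over it more often. One workaround people suggest, sketched here as a hypothetical monkeypatch rather than anything bark documents, is to swap inference mode for plain no_grad before running generation:

import torch

# Hypothetical workaround: have torch.inference_mode hand back a no_grad
# context instead. no_grad() still disables gradient tracking, but does not
# create the stricter "inference tensors" that this error complains about.
torch.inference_mode = lambda *args, **kwargs: torch.no_grad()

from bark import generate_audio  # import after the patch so bark picks it up

audio_array = generate_audio("Hello, world!")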
-
I ran into another problem: after updating to torch 2.0, sd-webui uses almost all of the VRAM during the first image generation, and then runs very slowly when generating the second and subsequent images.
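As far as I can tell, the DirectML backend has no equivalent of torch.cuda.empty_cache(), so memory cached during the first generation is never handed back, which matches this symptom. A rough mitigation between generations (a sketch only; it mainly helps when Python still holds references to old tensors):

import gc
import torch

def free_memory():
    # Drop lingering Python references, then ask the caching allocator to
    # release its blocks. The empty_cache() call is CUDA-only; on DirectML
    # only the gc step applies, as far as I can tell.
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()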
-
I am getting this error.
-
With the release of PyTorch 2.0, is there any way to update to it? I've tried manually updating via the CLI, but it seems to want to stay on 1.13.1. I tried removing my venv folder and adding "set TORCH_COMMAND=pip install torch torchvision pytorch-triton --extra-index-url https://download.pytorch.org/whl/cu118" to force an update, but that seems to break the install. Is it possible to update and try it out?
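If the goal is just to force the upgrade into the existing venv, a one-off along these lines should work (a sketch, assuming you run it with the webui venv's Python; pytorch-triton is deliberately left out because it may not ship Windows wheels, which could be what breaks the install):

import subprocess
import sys

# One-off upgrade into whatever environment this interpreter lives in.
# The index URL is the cu118 one from the command above.
subprocess.check_call([
    sys.executable, "-m", "pip", "install", "--upgrade",
    "torch", "torchvision",
    "--extra-index-url", "https://download.pytorch.org/whl/cu118",
])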