Is it possible to use Python CUDA libraries from a virtual env? Original whisper is super easy to install since it doesn't require me to change my system CUDA version and simply pulls in the needed libs using pip. #153
Comments
Currently you can already make use of these libraries, but you need to install them and set the environment variable manually:

```bash
pip install nvidia-cublas-cu11 nvidia-cudnn-cu11
export LD_LIBRARY_PATH=`python3 -c 'import os; import nvidia.cublas.lib; import nvidia.cudnn.lib; print(os.path.dirname(nvidia.cublas.lib.__file__) + ":" + os.path.dirname(nvidia.cudnn.lib.__file__))'`
```

(Note that this only works on Linux systems.) I will look into whether we can load these libraries automatically when they are installed. It's an improvement we should make in the underlying CTranslate2 library.
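The export one-liner above can be unpacked into a small helper that degrades gracefully when the packages are missing. A sketch; the function names are mine, not part of faster-whisper:

```python
import importlib
import os


def module_dir(name):
    """Directory containing the named module, or None if it is not installed."""
    try:
        module = importlib.import_module(name)
    except ModuleNotFoundError:
        return None
    return os.path.dirname(module.__file__)


def build_ld_library_path(names=("nvidia.cublas.lib", "nvidia.cudnn.lib")):
    """Colon-joined path of the lib directories from the installed NVIDIA pip packages."""
    return ":".join(d for d in map(module_dir, names) if d)
```

Note that the resulting value still has to be exported before the Python process that loads CTranslate2 starts.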
Is it possible to get it to work with a venv on Windows by any chance? I have CUDA 11.8 and cuDNN 11.x all installed properly, but it's not working.

Would installing PyTorch help? What's really weird is that a few days ago it worked fine without any CUDA, cuDNN, or zlib installation. After a clean install of Windows, it doesn't work. Edit: the same error occurs even in a non-virtual environment. I did everything, including the CUDA 11.8 installation, cuDNN 11.x installation, zlib installation, and PATH addition, but I don't know why this is happening.
This solution does not work on Windows because NVIDIA only provides Linux binaries for the cuDNN package: https://pypi.org/project/nvidia-cudnn-cu11/#files. Installing PyTorch will not help in this case.
Maybe you should double-check the PATH setting. I know it can be tricky to get it right. You could also look at @Purfview's standalone executable: https://github.com/Purfview/whisper-standalone-win. There you can download the NVIDIA libraries and simply put them in the same directory as the executable.
Thanks, I'll try again to make sure I'm not missing anything.
Yes, these libraries are required for GPU execution. An error is raised when you try to use the GPU but these libraries cannot be found.
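One quick way to check whether the dynamic loader can actually see these libraries is `ctypes.util.find_library`. A sketch; the bare names `cublas` and `cudnn` are assumptions, and real installs may only expose versioned filenames such as `libcudnn.so.8`:

```python
import ctypes.util


def probe_libraries(names=("cublas", "cudnn")):
    """Map each library name to the filename the loader resolves, or None if unseen."""
    return {name: ctypes.util.find_library(name) for name in names}


# Prints None for any library the loader cannot locate on its search path.
print(probe_libraries())
```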
Following this solution, I ended up with the code below:

```python
try:
    import os
    import nvidia.cublas.lib
    import nvidia.cudnn.lib

    cublas_path = os.path.dirname(nvidia.cublas.lib.__file__)
    cudnn_path = os.path.dirname(nvidia.cudnn.lib.__file__)
    os.environ["LD_LIBRARY_PATH"] = f"{cublas_path}:{cudnn_path}"
except ModuleNotFoundError:
    pass
```

But I still get the same error, as if the path is not set.
The environment variable `LD_LIBRARY_PATH` must be set before the Python process starts. Setting it from within the running process has no effect, because the dynamic loader only reads it at startup.
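Because the loader only reads `LD_LIBRARY_PATH` at startup, one workaround sometimes used is to set the variable and then re-exec the interpreter so a fresh process sees it. A Linux-only sketch with hypothetical helper names:

```python
import os
import sys


def path_missing(required, env_var="LD_LIBRARY_PATH"):
    """True if `required` is not already a component of the environment variable."""
    return required not in os.environ.get(env_var, "").split(":")


def reexec_with_library_path(required):
    """Prepend `required` to LD_LIBRARY_PATH and restart this script in-place."""
    current = os.environ.get("LD_LIBRARY_PATH", "")
    os.environ["LD_LIBRARY_PATH"] = required + (":" + current if current else "")
    os.execv(sys.executable, [sys.executable] + sys.argv)


# Typical use at the very top of a script, before importing faster_whisper:
# if path_missing("/path/to/nvidia/libs"):
#     reexec_with_library_path("/path/to/nvidia/libs")
```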
@guillaumekln oh, thank you for explaining this. It seems like a once-and-for-all solution is not easy 😂.
I'm pretty certain that NVIDIA offers cuDNN files of some sort for Windows at https://developer.nvidia.com/rdp/cudnn-download. I do remember getting faster-whisper to work on the GPU on Windows, but it was quite the hassle, whether due to my inexperience or genuine difficulty.
Yes, you can download cuDNN binaries for Windows from the NVIDIA website. But this issue is about installing cuDNN via PyPI with `pip install nvidia-cudnn-cu11`, which does not work on Windows.
If you're on Windows, a functioning workaround I've found is to install torch with CUDA support, then add its "lib" subfolder to your PATH. This works because the lib folder contains DLLs for both cuBLAS and cuDNN.
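That workaround can be sketched in Python as below. The helper names are hypothetical, and it assumes a CUDA-enabled torch wheel whose `lib` folder ships the cuBLAS/cuDNN DLLs:

```python
import os


def torch_cuda_lib_dir():
    """Return the 'lib' folder inside the installed torch package, or None."""
    try:
        import torch
    except ModuleNotFoundError:
        return None
    return os.path.join(os.path.dirname(torch.__file__), "lib")


def add_to_dll_search_path(lib_dir):
    """Make lib_dir visible when the process resolves shared libraries."""
    if hasattr(os, "add_dll_directory"):  # Windows, Python 3.8+
        os.add_dll_directory(lib_dir)
    # Prepending to PATH also works and covers older setups.
    os.environ["PATH"] = lib_dir + os.pathsep + os.environ.get("PATH", "")
```

This must run before the first import of `faster_whisper`, since the DLLs are loaded when the model is created.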
The trick in #153 (comment) is not working for me. What am I doing wrong?

```
(faster-whisper) vadi@barbar:~$ pip install nvidia-cublas-cu11 nvidia-cudnn-cu11
Requirement already satisfied: nvidia-cublas-cu11 in ./Programs/miniconda3/envs/faster-whisper/lib/python3.10/site-packages (11.11.3.6)
Requirement already satisfied: nvidia-cudnn-cu11 in ./Programs/miniconda3/envs/faster-whisper/lib/python3.10/site-packages (8.9.2.26)
(faster-whisper) vadi@barbar:~$ export LD_LIBRARY_PATH=`python3 -c 'import os; import nvidia.cublas.lib; import nvidia.cudnn.lib; print(os.path.dirname(nvidia.cublas.lib.__file__) + ":" + os.path.dirname(nvidia.cudnn.lib.__file__))'`
(faster-whisper) vadi@barbar:~$ echo $LD_LIBRARY_PATH
/home/vadi/Programs/miniconda3/envs/faster-whisper/lib/python3.10/site-packages/nvidia/cublas/lib:/home/vadi/Programs/miniconda3/envs/faster-whisper/lib/python3.10/site-packages/nvidia/cudnn/lib
(faster-whisper) vadi@barbar:~$ python ~/Downloads/faster-whisper.py
Traceback (most recent call last):
  File "/home/vadi/Downloads/faster-whisper.py", line 6, in <module>
    model = WhisperModel(model_size, device="cuda", compute_type="float16")
  File "/home/vadi/Programs/miniconda3/envs/faster-whisper/lib/python3.10/site-packages/faster_whisper/transcribe.py", line 124, in __init__
    self.model = ctranslate2.models.Whisper(
RuntimeError: CUDA failed with error unknown error
```

Running NVIDIA driver 535.54.03 on an RTX 4080 on Ubuntu 22.04.
This is probably another issue. I think it happens when the GPU driver is not loaded correctly (e.g. it was just updated to a new version). Rebooting the system will often fix this type of error. |
It did. Thanks! |
Why is my `nvidia.cublas.lib.__file__` attribute None? Because of that the environment variable fails to be set, and when I run faster_whisper I encounter the error.
The error is about cuDNN, not cuBLAS. You should double-check that you correctly installed the pip packages as shown in #153 (comment).
Doesn't work for me on Ubuntu (WSL), even though the libraries are installed. Setting the paths manually worked.
Very cool, thanks for the tip! Btw, if you're using a conda env you can set the variable directly in the environment's configuration. It will overwrite the default path.
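A sketch of the conda-native way to set a per-environment variable so it is applied on every activation; the path and the env name `myenv` are illustrative:

```shell
conda activate myenv
conda env config vars set LD_LIBRARY_PATH=/path/to/nvidia/cublas/lib:/path/to/nvidia/cudnn/lib
conda deactivate && conda activate myenv   # re-activate so the variable takes effect
conda env config vars list                 # verify what is set
```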
For the error on Windows: download https://github.com/Purfview/whisper-standalone-win/releases/download/libs/cuBLAS.and.cuDNN_win_v3.zip
It's been a while, but what command did you use to install torch specifically? |
Oh, it was just the usual install command, just torch with CUDA support.
Hi @guillaumekln, may I know if this improvement has been made in CTranslate2? I'm using CUDA 12.1 and I'm facing the same issue. It works by exporting the `LD_LIBRARY_PATH` manually. Happy to help with anything you'd need.
Yes, look at my gui.py script.