Is docker image nvidia/cuda:12.2.2-cudnn8-runtime-ubuntu22.04 more suitable than nvidia/cuda:12.0.0-runtime-ubuntu22.04 for faster-whisper's cudnn requirements? #998

yasu-kondo opened this issue Sep 10, 2024 · 0 comments


I'm playing with faster-whisper in a Docker environment, and I'd like to suggest a small modification to README.md.
I struggled with the following error message while using the wrong environment (cuDNN 9 instead of cuDNN 8):

Could not load library libcudnn_ops_infer.so.8. Error: libcudnn_ops_infer.so.8: cannot open shared object file: No such file or directory

Suggestion

Modify README.md, specifically the Use Docker section.

  • Suggested
### Use Docker
The libraries (cuBLAS, cuDNN) are installed in these official NVIDIA CUDA Docker images: nvidia/cuda:12.2.2-cudnn8-runtime-ubuntu20.04 or nvidia/cuda:12.2.2-cudnn8-runtime-ubuntu22.04.
  • Current
### Use Docker
The libraries (cuBLAS, cuDNN) are installed in these official NVIDIA CUDA Docker images: nvidia/cuda:12.0.0-runtime-ubuntu20.04 or nvidia/cuda:12.0.0-runtime-ubuntu22.04.
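For reference, a minimal Dockerfile sketch built on the suggested image could look like the following. This is only an illustration of the suggestion, not a tested setup: it assumes faster-whisper is installed from PyPI and that the installed CTranslate2 build expects cuDNN 8 (the library named in the error above).

FROM nvidia/cuda:12.2.2-cudnn8-runtime-ubuntu22.04

# The runtime image ships cuBLAS and cuDNN 8, but no Python interpreter,
# so install Python and pip on top (illustrative, not part of the suggestion).
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*

# faster-whisper from PyPI; CTranslate2 comes in as a dependency.
RUN pip3 install faster-whisper

Running such a container with GPU access (docker run --gpus all ...) should then be able to find libcudnn_ops_infer.so.8, which the 12.0.0-runtime images cannot provide.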

Some nvidia/cuda images don't have cudnn

faster-whisper needs the cuDNN package, but some of the Docker images suggested in README.md don't include it.

$ docker run nvidia/cuda:12.2.2-cudnn8-runtime-ubuntu22.04 dpkg -l | grep cudnn
hi  libcudnn8                       8.9.6.50-1+cuda12.2                     amd64        cuDNN runtime libraries
$ docker run nvidia/cuda:12.2.2-cudnn8-runtime-ubuntu22.04 dpkg -l | grep cublas
hi  libcublas-12-2                  12.2.5.6-1                              amd64        CUBLAS native runtime libraries
$ docker run nvidia/cuda:12.0.0-runtime-ubuntu22.04 dpkg -l | grep cudnn  # NO OUTPUT: this image doesn't have the cudnn package
$ docker run nvidia/cuda:12.0.0-runtime-ubuntu22.04 dpkg -l | grep cublas
hi  libcublas-12-0                  12.0.1.189-1                            amd64        CUBLAS native runtime libraries
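
As an extra sanity check (optional; the output is omitted here because it depends on the image tag), listing the files of the libcudnn8 package should show the exact library from the error message:

$ docker run nvidia/cuda:12.2.2-cudnn8-runtime-ubuntu22.04 dpkg -L libcudnn8 | grep ops_infer
# expected to list libcudnn_ops_infer.so.8 under /usr/lib/x86_64-linux-gnu (not verified on every tag)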

I'm happy to make a PR if this suggestion looks good ;)
