Replies: 2 comments
-
I removed onnxruntime==1.19.2 and installed onnxruntime-gpu==1.18.0, which, according to the release notes, should support CUDA 12.x and cuDNN 8.9, but this time it needs libcublasLt.so.11 while my env only has libcublasLt.so.12. God, why so many limitations!
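A small sketch of the mismatch described above. The mapping of wheel builds to required shared libraries is partly an assumption reconstructed from this thread (1.18.0 asked for libcublasLt.so.11, 1.19.2 asked for libcudnn.so.9); verify it against the official CUDA Execution Provider requirements table before relying on it:

```python
# Assumed requirements, reconstructed from this thread -- verify against the
# official onnxruntime CUDA Execution Provider requirements table.
REQUIRED_LIBS = {
    "onnxruntime-gpu==1.19.2 (CUDA 12 build)": ["libcublasLt.so.12", "libcudnn.so.9"],
    "onnxruntime-gpu==1.18.0 (default PyPI wheel)": ["libcublasLt.so.11", "libcudnn.so.8"],
}

def missing_libs(required, present):
    """Return the required libraries that are not in the set of present ones."""
    return [lib for lib in required if lib not in present]

# Example: an environment that only ships the CUDA 12 cuBLASLt, as above.
present = {"libcublasLt.so.12"}
for build, required in REQUIRED_LIBS.items():
    print(build, "-> missing:", missing_libs(required, present))
```

The point is that each wheel build is linked against one specific major version of each CUDA library, so a wheel and an environment have to be matched as a pair.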
-
see #21684 (comment)
-
My system is CentOS 7.2, with conda Python 3.8, CUDA 12.1.1, and cuDNN 8.9.
My first install used
pip install onnxruntime-gpu
directly, but when I initialized the ONNX session it reported: UserWarning: Specified provider 'CUDAExecutionProvider' is not in available provider names. Available providers: 'AzureExecutionProvider, CPUExecutionProvider'
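That warning means the installed wheel simply does not contain the CUDA provider. A minimal sketch of the check worth doing before creating a session (the `choose_provider` helper is mine, not an onnxruntime API; the real call is `onnxruntime.get_available_providers()`):

```python
def choose_provider(available,
                    preferred=("CUDAExecutionProvider", "CPUExecutionProvider")):
    """Pick the first preferred execution provider that is actually available.

    `available` is the list returned by onnxruntime.get_available_providers().
    """
    for provider in preferred:
        if provider in available:
            return provider
    raise RuntimeError(f"none of {preferred} available in {available}")

# With onnxruntime installed, you would use it like this:
#   import onnxruntime as ort
#   provider = choose_provider(ort.get_available_providers())
#   session = ort.InferenceSession("model.onnx", providers=[provider])
```

If `get_available_providers()` only lists AzureExecutionProvider and CPUExecutionProvider, as in the warning above, the GPU provider was never built into that wheel, so passing 'CUDAExecutionProvider' can only fall back or warn.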
Then, following https://onnxruntime.ai/docs/install/#install-onnx-runtime-gpu-cuda-12x, I removed and reinstalled onnxruntime-gpu like this:
pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/
This time, when I initialized the onnxruntime session, it reported:
provider_bridge_ort.cc:1992 TryGetProviderInfo_CUDA] /onnxruntime_src/onnxruntime/core/session/provider_bridge_ort.cc:1637 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_cuda.so with error: libcudnn.so.9: cannot open shared object file: No such file or directory
Both installs ended up with onnxruntime-gpu 1.19.2.
My question is: why does onnxruntime lock the libcudnn version, and how can I install a usable onnxruntime-gpu in my environment?
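The "cannot open shared object file" error just means the dynamic loader cannot find libcudnn.so.9 on its search path. A quick, self-contained way to see which CUDA libraries the loader can actually open (the list of library names is taken from the errors in this thread):

```python
import ctypes

def can_load(libname):
    """Return True if the dynamic loader can find and open libname."""
    try:
        ctypes.CDLL(libname)
        return True
    except OSError:
        return False

# Library names taken from the errors reported in this thread.
for lib in ("libcudnn.so.8", "libcudnn.so.9",
            "libcublasLt.so.11", "libcublasLt.so.12"):
    print(lib, "found" if can_load(lib) else "missing")
```

If libcudnn.so.9 prints as missing, the fix is on the environment side (install cuDNN 9, or add its location to LD_LIBRARY_PATH), or else install an onnxruntime-gpu build that targets the cuDNN major version you already have.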