[Doc] CUDA and cuDNN requirements for 1.18.1 GPU packages #21377

Merged 5 commits into gh-pages from tlwu/doc_ort_1.18.1_cuda_version on Jul 17, 2024

Conversation

@tianleiwu (Contributor) commented Jul 16, 2024

Description

Add CUDA and cuDNN requirements for the 1.18.1 GPU packages.

Motivation and Context

#21354
#21173

@jywu-msft (Member) commented Jul 17, 2024

Isn't this a little misleading? This makes it look like our default packages are built with CUDA 12.x, which is not the case.
I think until that becomes the default, we should list both CUDA 11.8/cuDNN 8.x and CUDA 12.x/cuDNN 9.x as the requirements and call this out explicitly.
@yf711 is looking into changing the defaults for ORT 1.19 (make the PyPI/NuGet packages default to CUDA 12.x and host the 11.8 packages on the alt feed).

@tianleiwu (Contributor, Author) replied:


Installation is described in another doc; there are different instructions for installing the CUDA 11.x and CUDA 12.x packages:
https://onnxruntime.ai/docs/install/#install-onnx-runtime-gpu-cuda-11x
https://onnxruntime.ai/docs/install/#install-onnx-runtime-gpu-cuda-12x
That part will need to change for the next release, 1.19.

Here we describe the CUDA and cuDNN versions. For example, a user with CUDA 12 looks up the first table to find the required cuDNN version; a user with CUDA 11 looks at the second table instead.
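
As an aside (not part of this PR): a minimal Python sketch of the kind of check these tables support, i.e. confirming that an installed onnxruntime-gpu build actually picks up CUDA at runtime. It assumes onnxruntime-gpu 1.18.1 and a CUDA/cuDNN stack matching the documented tables are already installed, and "model.onnx" is a placeholder path to any valid ONNX model.

```python
# Minimal sketch: verify that the installed onnxruntime-gpu build can use CUDA
# with the local CUDA/cuDNN versions listed in the requirements tables.
# Assumptions (not from this PR): onnxruntime-gpu 1.18.1 is installed and
# "model.onnx" is a placeholder path to a valid ONNX model.
import onnxruntime as ort

print(ort.__version__)                # e.g. 1.18.1
print(ort.get_available_providers())  # should include "CUDAExecutionProvider"

# If the local CUDA/cuDNN versions do not match the table for this build,
# CUDA initialization fails and the session falls back to the CPU provider.
sess = ort.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
print(sess.get_providers())           # shows which providers were actually enabled
```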

@tianleiwu merged commit 31f7cfb into gh-pages on Jul 17, 2024
6 of 7 checks passed
@tianleiwu deleted the tlwu/doc_ort_1.18.1_cuda_version branch on Jul 17, 2024 at 17:44
@jywu-msft (Member) commented Jul 17, 2024


Ok, understood. Your modifications improve on the previous docs, thanks! It's just that people might expect CUDA 12.x to be the default since it's listed first, and they may not go on to read the additional instructions. In any case, I think we need to switch CUDA 12.x to be the default for our package installations as soon as possible.
