fix: update/pin dependencies to get ONNX runtime working again (#107)
#### Motivation

Internal regression tests are failing when using the ONNX Runtime, with an error indicating a dependency mismatch between ONNX Runtime and cuDNN:

```
Shard 0: 2024-07-31 19:38:04.423164988 [E:onnxruntime:Default, provider_bridge_ort.cc:1745 TryGetProviderInfo_CUDA] /onnxruntime_src/onnxruntime/core/session/provider_bridge_ort.cc:1426 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_cuda.so with error: libcudnn.so.9: cannot open shared object file: No such file or directory
```

I found that ORT 1.18.1 started building against cuDNN 9 (noted in the [release notes](https://github.com/Microsoft/onnxruntime/releases/tag/v1.18.1)). However, PyTorch does not use cuDNN 9 until 2.4.0, so I pinned ONNX Runtime to 1.18.0. While updating poetry.lock, I let other dependencies update as well, but ran into further compatibility issues and had to pin transformers and optimum too to get the internal tests passing.

#### Modifications

- Pin onnxruntime to 1.18.0
- Pin transformers to 4.40.2 (and remove the separate `pip install` for it)
- Pin optimum to 1.20
- Run `poetry update` to refresh poetry.lock

#### Result

`DEPLOYMENT_FRAMEWORK=hf_optimum_ort` will start working again and the internal tests will pass.

---------

Signed-off-by: Travis Johnson <[email protected]>
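The pins above can be sketched as a `pyproject.toml` dependency fragment. This is a hedged illustration, not the repository's actual file: the real dependency table, Python constraint, and any extras may differ.

```toml
# Sketch of the pinned dependencies (assumed layout; the actual
# pyproject.toml in this repo may group or constrain these differently).
[tool.poetry.dependencies]
# ORT 1.18.1 builds against cuDNN 9, but the PyTorch in use here ships
# cuDNN 8 (cuDNN 9 arrives with torch 2.4.0), so hold ORT at 1.18.0.
onnxruntime = "1.18.0"
# Pinned for compatibility with the internal tests; previously installed
# via a separate `pip install`.
transformers = "4.40.2"
optimum = "1.20"
```

After editing the pins, `poetry update` regenerates poetry.lock so the locked versions match the new constraints.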