
[Build] libonnxruntime_providers_cuda.so with error: libcublasLt.so.12: cannot open shared object file: No such file or directory #22968

Closed
wuhongsheng opened this issue Nov 28, 2024 · 3 comments
Labels
build build issues; typically submitted using template

Comments

@wuhongsheng

Describe the issue

C++ inference produces the error shown below.

Urgency

No response

Target platform

linux/amd64

Build script

ScopedTimer timer("Overall detection");

float* blobPtr = nullptr; // Pointer to hold preprocessed image data
// Define the shape of the input tensor (batch size, channels, height, width)
std::vector<int64_t> inputTensorShape = {1, 3, inputImageShape.height, inputImageShape.width};

// Preprocess the image and obtain a pointer to the blob
cv::Mat preprocessedImage = preprocess(image, blobPtr, inputTensorShape);

// Compute the total number of elements in the input tensor
size_t inputTensorSize = utils::vectorProduct(inputTensorShape);

// Create a vector from the blob data for ONNX Runtime input
std::vector<float> inputTensorValues(blobPtr, blobPtr + inputTensorSize);

delete[] blobPtr; // Free the allocated memory for the blob

// Create an Ort memory info object (can be cached if used repeatedly)
static Ort::MemoryInfo memoryInfo = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
// Create input tensor object using the preprocessed data
Ort::Value inputTensor = Ort::Value::CreateTensor<float>(
    memoryInfo,
    inputTensorValues.data(),
    inputTensorSize,
    inputTensorShape.data(),
    inputTensorShape.size()
);

// Run the inference session with the input tensor and retrieve output tensors
std::vector<Ort::Value> outputTensors = session.Run(
    Ort::RunOptions{nullptr},
    inputNames.data(),
    &inputTensor,
    numInputNodes,
    outputNames.data(),
    numOutputNodes
);

// Determine the resized image shape based on input tensor shape
cv::Size resizedImageShape(static_cast<int>(inputTensorShape[3]), static_cast<int>(inputTensorShape[2]));

// Postprocess the output tensors to obtain detections
std::vector<Detection> detections = postprocess(image.size(), resizedImageShape, outputTensors, confThreshold, iouThreshold);

return detections; // Return the vector of detections

Error / output

Build files have been written to: /home/zhibo/whs/yolos-cpp/build
root@zhibo:/home/zhibo/whs/yolos-cpp/build# make -j4
[ 16%] Building CXX object CMakeFiles/image_inference.dir/src/image_inference.cpp.o
[ 33%] Building CXX object CMakeFiles/camera_inference.dir/src/camera_inference.cpp.o
[ 50%] Building CXX object CMakeFiles/video_inference.dir/src/video_inference.cpp.o
[ 66%] Linking CXX executable camera_inference
[ 83%] Linking CXX executable image_inference
[100%] Linking CXX executable video_inference
[100%] Built target image_inference
[100%] Built target camera_inference
[100%] Built target video_inference
root@zhibo:/home/zhibo/whs/yolos-cpp/build# ./video_inference
Inference device: GPU
terminate called after throwing an instance of 'Ort::Exception'
what(): /onnxruntime_src/onnxruntime/core/session/provider_bridge_ort.cc:1539 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_cuda.so with error: libcublasLt.so.12: cannot open shared object file: No such file or directory

Visual Studio Version

No response

GCC / Compiler Version

No response

@wuhongsheng wuhongsheng added the build build issues; typically submitted using template label Nov 28, 2024
@wuhongsheng
Author

root@zhibo:/home/zhibo/whs/yolos-cpp/build# ls /usr/local/cuda/lib64/ |grep libcublasLt.so.12
libcublasLt.so.12
libcublasLt.so.12.6.4.1
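The library clearly exists on disk, so the failure is most likely a search-path issue: `/usr/local/cuda/lib64` is generally not on the dynamic linker's default search path unless it has been registered. A quick way to check (a sketch, assuming a standard glibc setup) is to query the loader cache:

```shell
# Ask the dynamic linker's cache whether libcublasLt.so.12 is registered.
# /usr/local/cuda/lib64 is usually NOT searched unless it is listed in
# /etc/ld.so.conf.d/ or exported via LD_LIBRARY_PATH.
ldconfig -p | grep libcublasLt || echo "libcublasLt is not in the ld.so cache"
```

If the `grep` prints nothing and the fallback message appears, the loader cannot see the library even though `ls` can.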

@skottmckay
Contributor

Is /usr/local/cuda/lib64 on the library search path when you're running ORT, and do the permissions allow ORT to load it?

Are there any dependencies of libcublasLt.so.12 that are not satisfied?
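One way to answer the dependency question is to run `ldd` against the provider library itself. This is a sketch; the `ORT_CUDA_LIB` path below is an assumption and should be pointed at the actual onnxruntime install location:

```shell
# Hypothetical install path -- point ORT_CUDA_LIB at your actual
# libonnxruntime_providers_cuda.so before running this.
ORT_CUDA_LIB=${ORT_CUDA_LIB:-/usr/local/lib/libonnxruntime_providers_cuda.so}

if [ -f "$ORT_CUDA_LIB" ]; then
    # Any line containing "not found" is an unresolved shared-library dependency.
    ldd "$ORT_CUDA_LIB" | grep "not found" || echo "all dependencies resolved"
else
    echo "library not found at $ORT_CUDA_LIB"
fi
```

For this issue, `ldd` would be expected to list `libcublasLt.so.12 => not found` until the CUDA library directory is made visible to the loader.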

@wejoncy
Contributor

wejoncy commented Nov 29, 2024

You can set LD_LIBRARY_PATH=/usr/local/cuda/lib64/ temporarily to test whether it works.
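For reference, a minimal sketch of both the temporary test and a permanent alternative, assuming the CUDA install path shown earlier in this thread:

```shell
# Temporary fix: prepend the CUDA 12 library directory for this shell only.
# Re-run the failing binary (e.g. ./video_inference) afterwards.
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:${LD_LIBRARY_PATH}

# Permanent alternative (needs root): register the directory with the loader
# so every process can resolve libcublasLt.so.12 without LD_LIBRARY_PATH:
#   echo "/usr/local/cuda/lib64" > /etc/ld.so.conf.d/cuda-12.conf
#   ldconfig
```

The export only affects the current shell session, which makes it a safe way to confirm the diagnosis before changing system-wide loader configuration.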
