Deploying the GPU version fails with "error while loading shared libraries: libcuda.so.1" #1798
qingfenghcy asked this question in Q&A (unanswered).
I have my own k8s cluster, in which an A100 server is managed. The specific environment information is as follows:
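(The environment details themselves are not reproduced here. Roughly, they correspond to the standard checks below; the node name is a placeholder.)

```sh
# Confirm the A100 node exposes GPUs to Kubernetes (requires the NVIDIA device plugin)
kubectl describe node <a100-node> | grep -A2 'nvidia.com/gpu'

# Confirm the NVIDIA driver works on the node itself
nvidia-smi

# Kubernetes and Helm versions
kubectl version --short
helm version --short
```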
With reference to xxx, I deployed chart version 3.2.0 of LocalAI via Helm, having downloaded the local-ai:v2.9.0-cublas-cuda11 image and the ggml-gpt4all-j.bin model file in advance. I added F16: true, GPU_LAYERS: 20, and gpus: all as environment variables on the LocalAI workload. The container starts normally, but it cannot serve external requests. The request and container log are as follows.
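For reference, the deployment was roughly along these lines. This is a sketch, not my exact values: the repo URL, chart name, and `--set` paths are assumptions about the go-skynet Helm chart's schema.

```sh
# Illustrative only: install the LocalAI chart with the GPU image and the env vars mentioned above.
helm repo add go-skynet https://go-skynet.github.io/helm-charts/
helm repo update
helm install local-ai go-skynet/local-ai --version 3.2.0 \
  --set deployment.image=quay.io/go-skynet/local-ai:v2.9.0-cublas-cuda11 \
  --set deployment.env.F16=true \
  --set deployment.env.GPU_LAYERS=20 \
  --set deployment.env.gpus=all
```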
Request and response:
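(The exact request is not reproduced here; it was along the lines of LocalAI's OpenAI-compatible chat-completion call. The host/port refer to the in-cluster Service and are placeholders.)

```sh
# Illustrative request against LocalAI's OpenAI-compatible API
curl http://local-ai:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "ggml-gpt4all-j",
        "messages": [{"role": "user", "content": "How are you?"}],
        "temperature": 0.9
      }'
```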
The container log is as follows; the key line is the loading error from the title: `error while loading shared libraries: libcuda.so.1`.
I also installed the LocalAI CPU version through Helm, and the CPU image serves requests successfully, but the GPU version does not. I have consulted the official documentation and GitHub issues and have been unable to find a solution. Thanks!