
Error installing through full linux bash script #1707

Closed
juerware opened this issue Jun 24, 2024 · 2 comments

Comments

@juerware

OS: Ubuntu 22.04
Commit: 9a7c07b (latest at the time of this issue)
CUDA: 12.1
NVIDIA driver: 535.183.01

I got the following error when trying to install via:

bash docs/linux_install_full.sh

Can someone help with this error?

Building wheels for collected packages: llama-cpp-python
  Building wheel for llama-cpp-python (pyproject.toml) ... error
  error: subprocess-exited-with-error

  × Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [93 lines of output]
      *** scikit-build-core 0.9.6 using CMake 3.27.1 (wheel)
      *** Configuring CMake...
      2024-06-24 12:54:28,896 - scikit_build_core - WARNING - Can't find a Python library, got libdir=/root/miniconda3/envs/h2ogpt/lib, ldlibrary=libpython3.10.a, multiarch=x86_64-linux-gnu, masd=None
      loading initial cache file /tmp/tmpcjgjaztp/build/CMakeInit.txt
      -- The C compiler identification is GNU 11.4.0
      -- The CXX compiler identification is GNU 11.4.0
      -- Detecting C compiler ABI info
      -- Detecting C compiler ABI info - done
      -- Check for working C compiler: /usr/bin/cc - skipped
      -- Detecting C compile features
      -- Detecting C compile features - done
      -- Detecting CXX compiler ABI info
      -- Detecting CXX compiler ABI info - done
      -- Check for working CXX compiler: /usr/bin/c++ - skipped
      -- Detecting CXX compile features
      -- Detecting CXX compile features - done
      -- Found Git: /usr/bin/git (found version "2.34.1")
      -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
      -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
      -- Found Threads: TRUE
      CMake Warning at vendor/llama.cpp/CMakeLists.txt:387 (message):
        LLAMA_CUBLAS is deprecated and will be removed in the future.

        Use LLAMA_CUDA instead


      -- Unable to find cublas_v2.h in either "/usr/local/cuda/include" or "/usr/math_libs/include"
      -- Found CUDAToolkit: /usr/local/cuda/include (found version "12.1.66")
      -- CUDA found
      -- The CUDA compiler identification is NVIDIA 12.1.66
      -- Detecting CUDA compiler ABI info
      -- Detecting CUDA compiler ABI info - done
      -- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc - skipped
      -- Detecting CUDA compile features
      -- Detecting CUDA compile features - done
      -- Using CUDA architectures: all
      -- CUDA host compiler is GNU 11.4.0

      -- ccache found, compilation results will be cached. Disable with LLAMA_CCACHE=OFF.
      -- CMAKE_SYSTEM_PROCESSOR: x86_64
      -- x86 detected
      CMake Warning (dev) at CMakeLists.txt:26 (install):
        Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
      This warning is for project developers.  Use -Wno-dev to suppress it.

      CMake Warning (dev) at CMakeLists.txt:35 (install):
        Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
      This warning is for project developers.  Use -Wno-dev to suppress it.

      -- Configuring done (5.7s)
      CMake Error at vendor/llama.cpp/CMakeLists.txt:1225 (target_link_libraries):
        Target "ggml" links to:

          CUDA::cublas

        but the target was not found.  Possible reasons include:

          * There is a typo in the target name.
          * A find_package call is missing for an IMPORTED target.
          * An ALIAS target is missing.



      CMake Error at vendor/llama.cpp/CMakeLists.txt:1232 (target_link_libraries):
        Target "ggml_shared" links to:

          CUDA::cublas

        but the target was not found.  Possible reasons include:

          * There is a typo in the target name.
          * A find_package call is missing for an IMPORTED target.
          * An ALIAS target is missing.



      CMake Error at vendor/llama.cpp/CMakeLists.txt:1249 (target_link_libraries):
        Target "llama" links to:

          CUDA::cublas

        but the target was not found.  Possible reasons include:

          * There is a typo in the target name.
          * A find_package call is missing for an IMPORTED target.
          * An ALIAS target is missing.



      -- Generating done (0.0s)
      CMake Generate step failed.  Build files cannot be regenerated correctly.

      *** CMake configuration failed
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (llama-cpp-python)
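The root cause is visible in the log above: CMake could not find cublas_v2.h, so FindCUDAToolkit never created the CUDA::cublas imported target that ggml and llama try to link against. A quick diagnostic sketch (paths assume a default /usr/local/cuda layout; adjust if your toolkit lives elsewhere):

```shell
# Check for the cuBLAS development header -- its absence is what makes CMake
# skip the CUDA::cublas imported target during configuration.
if [ -f /usr/local/cuda/include/cublas_v2.h ]; then
  echo "cuBLAS headers found"
else
  echo "cublas_v2.h missing: the CUDA toolkit (not just the driver) is incomplete"
fi

# Extract the toolkit version nvcc reports, e.g. "12.1":
ver=$(nvcc --version 2>/dev/null | sed -n 's/^.*release \([0-9.]*\),.*$/\1/p')
echo "nvcc release: ${ver:-not found}"
```

If the header is missing or nvcc reports a different release than the one the driver was installed for, the wheel build will keep failing the same way.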

juerware commented Jun 25, 2024

I think I have resolved the problem by following these steps:

  1. Follow the instructions at: https://developer.nvidia.com/cuda-12-1-0-download-archive?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=22.04&target_type=deb_local
  2. apt -y install nvidia-cuda-toolkit (be careful: this installs version 11.5, not 12.1, since 11.5 is the default package in Ubuntu 22.04; it is needed to get the header files required for compilation with nvcc)
  3. Check the versions of the installed packages:
# dpkg -l | grep -iP '(cuda|nvidia)' | grep -i toolkit
ii  cuda-toolkit-12-1-config-common             12.1.55-1                               all          Common config package for CUDA Toolkit 12.1.
ii  cuda-toolkit-12-config-common               12.1.55-1                               all          Common config package for CUDA Toolkit 12.
ii  cuda-toolkit-config-common                  12.1.55-1                               all          Common config package for CUDA Toolkit.
ii  nvidia-cuda-toolkit                         11.5.1-1ubuntu1                         amd64        NVIDIA CUDA development toolkit
ii  nvidia-cuda-toolkit-doc                     11.5.1-1ubuntu1                         all          NVIDIA CUDA and OpenCL documentation
  4. Run the repository install command again: bash docs/linux_install_full.sh

With these steps the build now comes up, but I think the script should be reviewed in order to automate the final process correctly and keep the CUDA versions coordinated.
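For reference, a minimal sketch of an alternative to the apt workaround, assuming the 12.1 toolkit landed under /usr/local/cuda-12.1 (the default location for NVIDIA's deb packages): point the build environment at that toolkit explicitly before rebuilding. The LLAMA_CUDA flag is the replacement the CMake warning in the log suggests for the deprecated LLAMA_CUBLAS.

```shell
# Point the build at the 12.1 toolkit explicitly (paths are assumptions
# based on the default NVIDIA deb install layout; verify on your system):
export CUDA_HOME=/usr/local/cuda-12.1
export PATH="$CUDA_HOME/bin:$PATH"
export LD_LIBRARY_PATH="$CUDA_HOME/lib64:${LD_LIBRARY_PATH:-}"

# Then rebuild the wheel against that toolkit, or rerun the repo script:
# CMAKE_ARGS="-DLLAMA_CUDA=on" pip install --no-cache-dir --force-reinstall llama-cpp-python
# bash docs/linux_install_full.sh
```

This avoids mixing the Ubuntu-packaged 11.5 toolkit with the 12.1 one from NVIDIA's repository, which is the version coordination problem noted above.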

@pseudotensor
Collaborator

The steps say to install the CUDA 12.1 toolkit first. Yes, depending on your drivers etc., this may be more involved, since old drivers would need to be updated.
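On the driver point: each driver series supports CUDA toolkits up to a maximum version (the 535 series supports up to CUDA 12.2, so the reporter's 535.183.01 driver is new enough for the 12.1 toolkit). A small sketch of that check, where cuda_at_least is a hypothetical helper name, using GNU sort -V for version comparison:

```shell
# Hypothetical helper: succeeds if version $1 (what the driver supports)
# is at least version $2 (what the toolkit needs). Relies on GNU sort -V.
cuda_at_least() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Driver series 535 supports up to CUDA 12.2; the toolkit here is 12.1:
cuda_at_least "12.2" "12.1" && echo "driver is new enough for toolkit 12.1"
```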
