Conversation

@luav commented Jul 11, 2023

  • CUDA_ARCHITECTURES now defaults to "all" instead of OFF when (a) it is not defined explicitly and (b) CUBLAS is used; "all" works on various platforms where "native" does not (a sketch follows this list);
  • pthread is linked properly on Linux;
  • the CUDA standard is enabled when CUBLAS is used;
  • the CMake version stated in the README is fixed to reflect CMakeLists.txt (3.17 is required for LLAMA_CUBLAS).
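A minimal CMake sketch of what the first two changes could look like. The option name LLAMA_CUBLAS matches the flag used in the build commands below, but the "llama" target name is a placeholder; this is an illustration under those assumptions, not the merged diff:

# Default the CUDA architectures to "all" when cuBLAS is requested and the
# user has not set them explicitly. This must happen before
# enable_language(CUDA), which initializes CMAKE_CUDA_ARCHITECTURES.
# "all" targets every architecture the installed toolkit supports, whereas
# "native" only works on hosts where a GPU is visible at configure time.
if (LLAMA_CUBLAS AND NOT DEFINED CMAKE_CUDA_ARCHITECTURES)
    set(CMAKE_CUDA_ARCHITECTURES "all")
endif()

# Link pthreads on Linux via the portable Threads package instead of
# hard-coding -lpthread ("llama" is a hypothetical target name here).
find_package(Threads REQUIRED)
target_link_libraries(llama PRIVATE Threads::Threads)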

@luav (Author) commented Jul 11, 2023

Compilation was validated on Ubuntu 20.04 x64 (Linux) with a GeForce MX150 and on Windows 10 x64 with a GeForce RTX 3050 Ti, for both CPU and CUDA GPU builds.
The following build commands were used for the GPU builds:

build$ cmake -DCMAKE_CUDA_ARCHITECTURES="all" -DLLAMA_F16C=0 -DLLAMA_FMA=0 -DLLAMA_AVX=0 -DLLAMA_AVX2=0 -DCMAKE_C_FLAGS="-march=native" -DLLAMA_CUBLAS=1 ..
build$ cmake --build . --config Release -j 4

@cmp-nct (Owner) commented Jul 12, 2023

I'll need to look at that in greater detail.
I'm not sure that switching CUBLAS auto-off is the right solution: people who want to compile with CUDA would silently get a CPU-only binary, which is probably more confusing than an error saying that CUDA was not found. It also means everything is compiled wrongly, which takes time and has to be wiped once the actual problem is solved.
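For context, the fail-fast alternative described above, erroring out at configure time instead of silently disabling cuBLAS, could look roughly like this in CMake (a sketch under the same LLAMA_CUBLAS naming assumption, not code from this repository):

# Abort configuration when cuBLAS is requested but no CUDA toolkit is found,
# rather than silently falling back to a CPU-only build.
# FindCUDAToolkit is available since CMake 3.17.
if (LLAMA_CUBLAS)
    find_package(CUDAToolkit)
    if (NOT CUDAToolkit_FOUND)
        message(FATAL_ERROR "LLAMA_CUBLAS is ON but no CUDA toolkit was found")
    endif()
endif()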

Regarding the fixes, I recall there were troubles with the architecture changes made in llama.cpp, but I don't know the actual implications of this change.

I've compiled it fine on Linux, Windows, and WSL, both with and without CUDA support. I'm not sure which exact scenarios are improved now (and whether that introduces issues we didn't have before).
