Describe the bug
Any call to the chat completions API in llama.cpp returns a repeated string of "undefined" when using the DeepSeek-Coder-V2-Lite-Instruct-Q6_K.gguf model.
To Reproduce
Download the model and run:
./llama-server -ngl 99 --ctx-size 16384 -m models/DeepSeek-Coder-V2-Lite-Instruct/DeepSeek-Coder-V2-Lite-Instruct-Q6_K.gguf
Then add a chat provider for llama.cpp at localhost:8080/v1/chat/completions.
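Outside the chat provider, the same failure can be reproduced with a direct request to the server. Below is a minimal sketch of the request body for llama.cpp's OpenAI-compatible endpoint, assuming the default port 8080 from the command above; the prompt text is illustrative only.

```python
import json

# Minimal chat completions payload for llama-server's OpenAI-compatible API.
# The "model" field is included for client compatibility; the prompt content
# is an arbitrary example, not from the original report.
payload = {
    "model": "DeepSeek-Coder-V2-Lite-Instruct-Q6_K.gguf",
    "messages": [
        {"role": "user", "content": "Write a hello-world program in C."}
    ],
}

body = json.dumps(payload)
print(body)

# POST this body to http://localhost:8080/v1/chat/completions, e.g.:
#   curl http://localhost:8080/v1/chat/completions \
#     -H "Content-Type: application/json" -d "$body"
```

With the affected model, the `content` field of the response consists of repeated "undefined" tokens instead of coherent text.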
Expected behavior
"undefined" is not repeated, and the generated text is at least somewhat coherent.
Screenshots
Logging
logs.txt
API Provider
As above: llama.cpp on the default port, localhost:8080/v1/chat/completions.
Chat or Auto Complete?
chat
Model Name
DeepSeek-Coder-V2-Lite-Instruct-Q6_K.gguf
Desktop (please complete the following information):
Additional context