-
OS: Ubuntu 23.04. In a local deployment using the Docker AIO image, calling the chat/completions endpoint via llama.cpp with a basic prompt returns an unexpected response. This has happened repeatedly over multiple attempts.
Does anyone have any information on why this could be happening?
-
Don't use the model file as `model` in the request unless you want to handle the prompt template yourself. Just use the model names like you would with OpenAI. For instance, `gpt-4-vision-preview` or `gpt-4` are already present in the AIO images; just use those as `model` when doing the curl calls.
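For reference, a minimal sketch of such a call, assuming LocalAI is listening on localhost:8080 (the AIO image's default; adjust the host and port to your deployment):

```bash
# Call the OpenAI-compatible chat/completions endpoint with a model name,
# not a model file name. Assumes the AIO image's default port 8080.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Hello, how are you?"}]
  }'
```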
-
Thank you @mudler - responses are coming through in a more coherent manner now.