v3.0.0

@rjmacarthy rjmacarthy released this 25 Jan 14:46
  • Added support for hosted llama.cpp servers
  • Added configuration options for separate FIM and chat completion server endpoints, since a llama.cpp server can only host one model at a time and FIM and chat don't work interchangeably with the same model
  • Some settings have been renamed, but the defaults stay the same
  • Removed support for deepseek models, as they were causing code smell inside the prompt templates (model support needs improving)
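
Since a single llama.cpp server instance can only serve one model, running separate servers for FIM and chat might look something like the sketch below (the model filenames are placeholders, not part of this release):

```shell
# Serve a FIM-capable model on one port...
./server -m models/fim-model.gguf --port 8080 &

# ...and a chat model on another, then point the extension's
# FIM and chat endpoint settings at the respective ports.
./server -m models/chat-model.gguf --port 8081 &
```

The extension's separate endpoint settings can then be pointed at each port independently.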