Releases · twinnydotdev/twinny
v3.10.0
- Enable a fully configurable API for both chat and FIM endpoints
- Remove defaults that were causing headaches; users can add their own
- Add support for custom FIM templates (see the sketch below)
- Add LiteLLM support
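As an illustration of how a custom FIM template might be filled in, here is a minimal TypeScript sketch. The `{prefix}`/`{suffix}` placeholder syntax and the `renderFimPrompt` helper are assumptions made for this example, not twinny's actual template format; the `<PRE>`/`<SUF>`/`<MID>` markers are CodeLlama's documented infill tokens.

```typescript
// Sketch of filling a custom FIM template. The placeholder names and
// this helper are illustrative assumptions, not twinny's actual API.
interface FimContext {
  prefix: string; // code before the cursor
  suffix: string; // code after the cursor
}

function renderFimPrompt(template: string, ctx: FimContext): string {
  return template.replace("{prefix}", ctx.prefix).replace("{suffix}", ctx.suffix);
}

// Example template using CodeLlama's infill markers.
const codellamaTemplate = "<PRE> {prefix} <SUF>{suffix} <MID>";

console.log(
  renderFimPrompt(codellamaTemplate, {
    prefix: "function add(a: number, b: number) {\n  return ",
    suffix: "\n}",
  })
);
```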
v3.8.0
- Support automatic multiline completions
- Enable multiline completions by default
- Keep the option to disable them
- Use more sophisticated methods to determine when a completion should span multiple lines (see the sketch below)
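For illustration, one common heuristic for deciding where a multiline completion should stop is to keep lines until every opened bracket has been closed again, or until a blank line is reached. The sketch below shows the general technique only; it is not twinny's actual implementation.

```typescript
// Illustrative heuristic: trim a multiline completion at the first
// syntactically complete unit. Not twinny's actual logic.
function trimMultilineCompletion(completion: string): string {
  const lines = completion.split("\n");
  const kept: string[] = [];
  let depth = 0;
  for (const line of lines) {
    // Stop at a blank line once we have kept something.
    if (kept.length > 0 && line.trim() === "") break;
    for (const ch of line) {
      if ("([{".includes(ch)) depth += 1;
      else if (")]}".includes(ch)) depth -= 1;
    }
    kept.push(line);
    // All opened brackets closed: the completion is a complete unit.
    if (depth <= 0) break;
  }
  return kept.join("\n");
}
```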
v3.7.0
Updates Ollama chat completions to follow the OpenAI chat specification, `/v1/chat/completions`.

This is a minor update which affects the chat completions API. If you were previously using `/api/generate` or `/api/chat` for Ollama chat completions, please change the path to `/v1/chat/completions` or chat will no longer work.
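For example, a minimal request against the new path might look like the sketch below. It assumes a local Ollama server on its default port (11434); the model name is illustrative.

```typescript
// Minimal sketch of a chat request against Ollama's OpenAI-compatible
// endpoint. Assumes a local server on the default port; the model name
// is an example, not a twinny default.
async function chat(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "codellama:7b-instruct",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  // OpenAI-style response shape: choices[0].message.content
  return data.choices[0].message.content;
}
```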
v3.6.6
- Better context
- Style updates
- Bug fixes
- Other features
v3.5.0
- Add a new document button to code blocks
- Fix some style issues with code blocks
- New button styling in code blocks
- Add LMStudio support
- Automatically set port and path when selecting a provider
v3.4.0
- Add and edit custom templates
- Choose default templates for the chat window
v3.1.0
Major refactor of types, event handlers, and more.
v3.0.0
- Add support for hosted llama.cpp servers
- Add configuration options for separate FIM and chat completion endpoints, since a llama.cpp server can only host one model at a time and FIM and chat do not work interchangeably with the same model (see the sketch below)
- Some settings have been renamed, but the defaults stay the same
- Remove support for deepseek models, as they were causing code smell inside the prompt templates (model support needs to be improved)
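To illustrate why the endpoints are configured separately, the sketch below routes FIM and chat requests to two llama.cpp server instances, each hosting its own model. The ports and paths here are assumptions for the example, not twinny defaults.

```typescript
// Illustrative routing of FIM vs. chat to two separate llama.cpp
// servers, since one server instance serves one model. Ports and
// paths are example assumptions.
const endpoints = {
  fim: { host: "http://localhost:8080", path: "/completion" },
  chat: { host: "http://localhost:8081", path: "/v1/chat/completions" },
};

async function complete(kind: "fim" | "chat", body: unknown): Promise<unknown> {
  const { host, path } = endpoints[kind];
  const res = await fetch(host + path, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  return res.json();
}
```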
v2.6.14
Enable cancelling the model download when twinny starts, with an option to re-enable it.
v2.6.13
- Add an option to click the status bar icon to stop generation and destroy the stream (see the sketch below)
- Add max token options for FIM and chat
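As an illustration of the stop mechanism, streamed requests are typically torn down with an AbortController, as in the sketch below. This shows the general pattern rather than twinny's actual code.

```typescript
// General pattern for cancelling a streamed completion with an
// AbortController; not twinny's actual implementation.
const controller = new AbortController();

async function streamCompletion(url: string, body: unknown): Promise<void> {
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
    signal: controller.signal, // aborting this signal tears down the stream
  });
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    process.stdout.write(decoder.decode(value));
  }
}

// Wired to a status bar click: abort the request and the underlying stream.
function onStatusBarClick(): void {
  controller.abort();
}
```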