[WIP] Elixir code completion #2332
Conversation
Set the `OPENAI_API_KEY` env var to play around with it.
Ah sorry, I meant to open this under my own fork. Shall I move it there?
Feel free to leave it here for people to play with :)
This means you can now use any model that llama.cpp can run for completion. Just compile llama.cpp and run the server like this:

```shell
./server -m codellama-7b.Q5_K_M.gguf -c 4096
```

I've tested this with CodeLlama 7B quantised (codellama-7b.Q5_K_M.gguf) and it works well. But I have no idea if the special `/infill` endpoint works for other models, as I don't know how llama.cpp would know about their infilling tokens.
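For reference, here is a minimal sketch of what a request to that endpoint could look like from Elixir, assuming the Req HTTP client; the field names follow llama.cpp's server README at the time of writing and may change, and the URL and code snippets are placeholders:

```elixir
# Hypothetical sketch: llama.cpp's /infill endpoint takes the code before
# and after the cursor and returns the generated middle section.
resp =
  Req.post!("http://localhost:8080/infill",
    json: %{
      input_prefix: "defmodule Greeter do\n  def hello(name) do\n    ",
      input_suffix: "\n  end\nend",
      n_predict: 64
    }
  )

# The completion text comes back in the JSON body (under "content",
# per the server docs at the time of writing).
resp.body["content"]
```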
- Refactored the way copilot completion backends work
- Added Livebook.Copilot.BumblebeeBackend (including attempting to run the Serving under a new DynamicSupervisor)
- Added Livebook.Copilot.DummyBackend for testing
- Added Livebook.Copilot.LlamaCppHttpBackend for running models locally in llama.cpp's server
- Added Livebook.Copilot.OpenaiBackend for running on OpenAI
- Added Livebook.Copilot.HuggingfaceBackend to use HF inference endpoints
- Played around with adding some user feedback via flash messages
- Fixed a whole bunch of edge cases and bugs in the client-side logic
- Request completions instantly (instead of debounced) when manually requested
- Added special comments you can put in Livebook cells to override the configured copilot backend
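To sketch what the refactored backend contract might look like (module and callback names here are illustrative assumptions, not the actual code in this PR):

```elixir
# Hypothetical shape of a shared backend behaviour: each backend
# (Bumblebee, llama.cpp, OpenAI, HF, ...) implements one callback that
# turns the code around the cursor into a suggested completion.
defmodule Livebook.Copilot.Backend do
  @callback completion(config :: map(), prefix :: String.t(), suffix :: String.t()) ::
              {:ok, String.t()} | {:error, term()}
end
```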
Reverting to GPT2
Just to give a little update on this:
Will hopefully have a model that is demonstrably better than bumblebee-1.3b by the end of the week
Bumblebee or deepseekr? :)
I would deeply love it if, when I added a @doc above a function definition, it would autocomplete:

```elixir
@doc ~S"""
Describe this function here

## Examples

    iex> ThisModule.Thisfunction("CREATE party\n")
    {:ok, {:create, "party"}}
"""
```

I forget the indentation and keywords needed for doctests to work. They are super useful in Livebook.
The aim of this PR is to eventually offer Elixir inline code completion within Livebook.
The high-level design is like this:
For more context on the project, see this random document with (slightly outdated) notes.
Status
This is a very minimal implementation of copilot style code completion.
At the moment the only LLM supported is the GPT-4 API. Set the `OPENAI_API_KEY` env var to play around with it.
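As an illustration, a completion request to the GPT-4 API could look roughly like this. This is a sketch assuming the Req HTTP client and OpenAI's public chat completions endpoint; the prompt wording and `<cursor>` marker are made up for this example:

```elixir
# Hypothetical sketch of a GPT-4 completion request via OpenAI's public
# chat completions API, using the Req HTTP client.
api_key = System.fetch_env!("OPENAI_API_KEY")

resp =
  Req.post!("https://api.openai.com/v1/chat/completions",
    auth: {:bearer, api_key},
    json: %{
      model: "gpt-4",
      messages: [
        %{role: "system", content: "Complete the Elixir code at <cursor>. Reply with only the inserted code."},
        %{role: "user", content: "defmodule Greeter do\n  def hello(name) do\n    <cursor>\n  end\nend"}
      ]
    }
  )

# The suggested completion is the first choice's message content.
get_in(resp.body, ["choices", Access.at(0), "message", "content"])
```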
Inline completion should appear 500ms after you stop typing, or you can press Ctrl + Space to force it to appear immediately.
TODO in Livebook
- Frontend polish
- Livebook plumbing
- Model inference
- Tests!
TODO for fine-tuning a model
The hardest task is to actually fine-tune a model
One of the most fiddly bits seems to be properly tokenising the special infilling tokens (both in Bumblebee and llama.cpp); the models often output garbage if you get this wrong. There is some good context in these llama.cpp threads: [1] [2]
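To illustrate why this is error-prone, here is a rough sketch of assembling a CodeLlama-style fill-in-the-middle prompt. The token strings are specific to CodeLlama and are assumptions here; other model families use entirely different infill tokens, and the tokenizer must encode them as single special tokens rather than plain text:

```elixir
# Hypothetical sketch of a CodeLlama-style FIM prompt. The tokenizer must
# map <PRE>/<SUF>/<MID> to the exact special token IDs the model was
# trained with; tokenising them as ordinary text is what produces garbage.
defmodule FimPrompt do
  @pre "<PRE>"
  @suf "<SUF>"
  @mid "<MID>"

  def build(prefix, suffix) do
    "#{@pre} #{prefix} #{@suf}#{suffix} #{@mid}"
  end
end

FimPrompt.build("def add(a, b) do\n  ", "\nend")
```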