
[WIP] Elixir code completion #2332

Open · wants to merge 8 commits into base: main

Commits on Nov 9, 2023

  1. MVP copilot completion using the GPT-4 API

     Set the OPENAI_API_KEY env var to play around with it.

     jonastemplestein committed Nov 9, 2023 · c8c537b

Commits on Nov 10, 2023

  1. Added llama.cpp HTTP API client

     This means you can now use any model for completion that llama.cpp can run.

     Just compile llama.cpp and run the server like this:

     ./server -m codellama-7b.Q5_K_M.gguf -c 4096

     I've tested this with quantised CodeLlama 7B (codellama-7b.Q5_K_M.gguf) and it works well. But I have no idea whether the special `/infill` endpoint works for other models, as I don't know how llama.cpp would learn their infilling tokens.

     jonastemplestein committed Nov 10, 2023 · afc40e9
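With the server running as above, the `/infill` endpoint can be exercised directly. A minimal sketch, assuming llama.cpp's server listens on its default port 8080 and accepts the `input_prefix`/`input_suffix` fields described in its server README (the port and field names are assumptions from llama.cpp's docs, not something this PR pins down):

```shell
# Build an infill request: ask the model to complete the body of an
# Elixir function, given the code before and after the cursor.
# NOTE: "input_prefix"/"input_suffix" and port 8080 are assumptions
# based on llama.cpp's server documentation.
PAYLOAD='{
  "input_prefix": "defmodule Math do\n  def add(a, b) do\n    ",
  "input_suffix": "\n  end\nend",
  "n_predict": 32
}'

# Sanity-check that the payload is valid JSON before sending it.
echo "$PAYLOAD" | python3 -m json.tool > /dev/null && echo "payload ok"

# Send it to the running llama.cpp server (uncomment with the server up):
# curl -s http://localhost:8080/infill -d "$PAYLOAD"
```

Whether this works for a given model depends on the infilling tokens baked into its vocabulary, which is exactly the open question raised in the commit message.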

Commits on Nov 15, 2023

  1. Lots of code above my paygrade

     - Refactored how copilot completion backends work
     - Added Livebook.Copilot.BumblebeeBackend (including an attempt to run the Serving under a new DynamicSupervisor)
     - Added Livebook.Copilot.DummyBackend for testing
     - Added Livebook.Copilot.LlamaCppHttpBackend for running models locally in llama.cpp's server
     - Added Livebook.Copilot.OpenaiBackend for running on OpenAI
     - Added Livebook.Copilot.HuggingfaceBackend to use HF inference endpoints
     - Played around with adding some user feedback via flash messages
     - Fixed a whole bunch of edge cases and bugs in the client-side logic
     - Request completions instantly (instead of debounced) when manually requested
     - Added special comments you can put in Livebook cells to override the configured copilot backend

     jonastemplestein committed Nov 15, 2023 · 71a0958
  2. Can't get Bumblebee CUDA to work on the Livebook machine

     Reverting to GPT-2.

     jonastemplestein committed Nov 15, 2023 · bd44d31
  3. Fix typo

     jonastemplestein committed Nov 15, 2023 · 4a3ab78

Commits on Nov 16, 2023

  1. Add static assets

     jonastemplestein committed Nov 16, 2023 · bec936a
  2. 6e1ea76
  3. 644f414