
LlamaTerm

LlamaTerm is a simple CLI utility that makes it easy to use local LLM models, with some additional features.

⚠️ Currently this project only supports models that use the ChatML prompt format or something close to it, such as Gemma-2 or Phi-3 GGUFs.
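For reference, ChatML-style templates wrap each conversation turn in role-tagged markers, roughly like this (a generic sketch of the format, not LlamaTerm's exact template):

    <|im_start|>user
    Hello!<|im_end|>
    <|im_start|>assistant
    Hi! How can I help?<|im_end|>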

Preview

[Demo: basic usage]

[Demo: injecting file content]

Features

  • Give local files to the model by wrapping their names in square brackets (see the example session after this list):
    User: Can you explain the code in [helloworld.c] please?
  • More coming soon
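A session might look like the following; the model's reply is illustrative, and the Assistant: label is an assumption rather than LlamaTerm's exact output format:

    $ llamaterm
    User: Can you explain the code in [helloworld.c] please?
    Assistant: The file defines a main() that prints "Hello, world!" ...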

Setup

You can set up LlamaTerm as follows:

  1. Rename example-<model_name>.env to .env
  2. Edit the .env so that the model path points to your local model file; you may also need to adjust EOS and PREFIX_TEMPLATE for a non-standard model (a sketch of a finished .env follows these steps)
  3. If you want syntax highlighting for code and Markdown, set REAL_TIME=0 in the .env. Note that you will lose real-time output generation.
  4. Install the Python dependencies with pip install -r requirements.txt
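As a rough sketch, a configured .env might look like this. The field names come from this README, but the values and comment syntax are assumptions; start from the bundled example-*.env for your model:

    # Path to your local GGUF model (illustrative value)
    MODEL_PATH=/path/to/gemma-2-9b-it.gguf
    # Standard ChatML end-of-sequence token; non-standard models differ
    EOS=<|im_end|>
    # Set to 0 to enable syntax highlighting (disables real-time output)
    REAL_TIME=1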

Run

Run LlamaTerm by adding the project directory to your PATH and then running llamaterm.

Alternatively, you can run ./llamaterm directly from the project directory.
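For example, assuming the repository was cloned to ~/LlamaTerm (the path is illustrative):

    # Option 1: put llamaterm on your PATH
    export PATH="$PATH:$HOME/LlamaTerm"
    llamaterm

    # Option 2: run it directly from the project directory
    cd ~/LlamaTerm && ./llamaterm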

Models supported out of the box

For models that ship with a bundled example-*.env file (for example Gemma-2 or Phi-3), you only need to rename the corresponding file to .env and set its MODEL_PATH field.

Any other model with a prompt template similar to ChatML is also supported, but you will need to customize fields such as PREFIX_TEMPLATE and EOS in the .env.
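As a concrete but hypothetical example, Gemma-2 marks turns with <start_of_turn>/<end_of_turn> rather than ChatML's <|im_start|>/<|im_end|> tags, so an override for such a model might change the end token like this; the exact field syntax is defined by the bundled example-*.env files, not by this sketch:

    # Hypothetical override for a Gemma-style template
    EOS=<end_of_turn>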
