
Commit 02bea7b

Add detailed Readme and Dockerfile
Signed-off-by: Sebastiano Mariani <[email protected]>
1 parent 9be3325 commit 02bea7b

4 files changed: +103 −15 lines changed

Dockerfile

+13 lines changed (new file)
```dockerfile
FROM python:3.11

WORKDIR /app

# Copy the source code and the project file
COPY ./src /app
COPY ./pyproject.toml /app

# Install the package and its requirements
RUN pip install -e .

# Run the app
ENTRYPOINT ["llm-repl"]
```

README.md

+73 −3 lines changed
# LLM REPL

## What is this?

The goal of this project is to create a simple, interactive **REPL** (Read-Eval-Print Loop) that allows users to interact with a variety of Large Language Models (**LLMs**). The project is mainly built on top of two Python libraries: [langchain](https://github.com/hwchase17/langchain), which provides a convenient and flexible interface for working with LLMs, and [rich](https://github.com/Textualize/rich), which provides a user-friendly interface for the REPL.

Currently, the project is in development and only supports interaction with ChatGPT, but it has been structured to make it easy to extend to any LLM, including custom ones, by extending `BaseLLM` in `./src/llm_repl/llms/__init__.py` (see the sketch below).

ChatGPT can be used through the models `gpt-3.5-turbo` and `gpt4` (the latter for users who have access to the GPT-4 API beta).
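As a rough illustration, a custom LLM could look like the sketch below. The members (`load`, `name`, `is_in_streaming_mode`, `process`) are inferred from how `repl.py` uses the model in this commit; the real `BaseLLM` interface may differ.

```python
# Sketch of a hypothetical custom LLM. The members below are inferred from
# how repl.py calls the model in this commit; check BaseLLM in
# ./src/llm_repl/llms/__init__.py for the actual interface.
from llm_repl.llms import BaseLLM


class EchoLLM(BaseLLM):
    """A toy model that echoes the user's message back."""

    @classmethod
    def load(cls, repl):
        # Returning None tells the REPL the model could not be loaded
        # (repl.run() bails out in that case).
        return cls()

    @property
    def name(self) -> str:
        return "Echo"

    @property
    def is_in_streaming_mode(self) -> bool:
        return False

    def process(self, msg: str) -> str:
        return f"You said: {msg}"
```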
## Features

The REPL supports the following features:

### Streaming Mode

Instead of waiting for the model to finish generating the output, the REPL starts printing it as soon as it is available.
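This commit does not show the streaming plumbing itself; with langchain, one plausible mechanism is a callback handler that prints each token as it arrives. A sketch under that assumption (`StreamingPrinter` is an illustrative name, not part of the codebase):

```python
# Illustrative sketch only: langchain's BaseCallbackHandler exposes
# on_llm_new_token, which fires once per generated token when an LLM runs
# with streaming enabled. How llm-repl actually wires this up is not shown
# in this commit.
from langchain.callbacks.base import BaseCallbackHandler
from rich.console import Console


class StreamingPrinter(BaseCallbackHandler):
    """Prints tokens as soon as the model emits them."""

    def __init__(self, console: Console):
        self.console = console

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        # end="" keeps the tokens flowing on the same line
        self.console.print(token, end="")
```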
### Pretty Printing

The REPL supports Markdown rendering of both the input and the output.

PS: In this initial version of the REPL, the full Markdown syntax is only rendered when running the tool in `non-streaming` mode. In `streaming` mode, only code sections will be pretty-printed.

### Model Switching

The REPL supports switching between different models. At the moment, the only supported LLMs are `chatgpt` and `chatgpt4`.
## Installation

```bash
pip install llm-repl
```
## Usage

First, export your OpenAI API key as an environment variable:

```bash
export OPENAI_API_KEY=<OPENAI_KEY>
```

Then run the REPL:

```bash
llm-repl
```

Or, if you want to use a specific model:

```bash
llm-repl --llm chatgpt4
```
### Run inside Docker

```bash
docker run -it --rm -e OPENAI_API_KEY=<OPENAI_KEY> phate/llm-repl
```

Or, if you want to source the environment variables from a file, first create a file called `.env` with the following content:

```bash
OPENAI_API_KEY=<OPENAI_KEY>
```

And then run the following command:

```bash
docker run -it --rm --env-file .env phate/llm-repl
```
## Development

To install the REPL in development mode, install the package with:

```bash
pip install -e ".[DEV]"
```

Before contributing, please make sure to install the `pre-commit` hooks:

```bash
pre-commit install
```

src/llm_repl/__main__.py

+2 −2 lines changed

```diff
@@ -15,7 +15,7 @@
 def main():
     parser = argparse.ArgumentParser(description="LLM REPL")
     parser.add_argument(
-        "--model",
+        "--llm",
         type=str,
         default="chatgpt",
         help="The LLM model to use",
@@ -32,4 +32,4 @@ def _(event):
     repl.handle_enter(event)
 
     # Run the REPL
-    repl.run(MODELS[args.model])
+    repl.run(MODELS[args.llm])
```
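The commit does not include the `MODELS` registry that `--llm` indexes into. For orientation, it plausibly looks something like this sketch; only the keys `chatgpt` and `chatgpt4` come from the README, and the import path and class names are invented:

```python
# Hypothetical sketch of the MODELS registry that --llm selects from.
# Only the keys "chatgpt" and "chatgpt4" are grounded in the README;
# the module path and class names below are invented for illustration.
from llm_repl.llms.chatgpt import ChatGPT, ChatGPT4

MODELS = {
    "chatgpt": ChatGPT,
    "chatgpt4": ChatGPT4,
}
```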

src/llm_repl/repl.py

+15 −10 lines changed

```diff
@@ -14,11 +14,12 @@
 
 class LLMRepl:
 
-    INTRO_BANNER = "Welcome to LLM REPL! Input your message and press enter twice to send it to the LLM (Ctrl+C to exit)"
     LOADING_MSG = "Thinking..."
     SERVER_MSG_TITLE = "LLM"
     CLIENT_MSG_TITLE = "You"
     ERROR_MSG_TITLE = "ERROR"
+    EXIT_TOKEN = "exit"
+    INTRO_BANNER = f"Welcome to LLM REPL! Input your message and press enter twice to send it to the LLM (type '{EXIT_TOKEN}' to quit the application)"
 
     def __init__(self, config: dict[str, Any]):
         self.console = Console()
@@ -35,7 +36,7 @@ def __init__(self, config: dict[str, Any]):
         self.server_color = config["style"]["server"]["color"]
         self.error_color = "bold red"
         self.misc_color = "gray"
-        self.model = None  # Optional[BaseLLM] = None
+        self.llm = None  # Optional[BaseLLM] = None
 
     def handle_enter(self, event):
         """
@@ -104,7 +105,7 @@ def print_misc_msg(self, msg: str, justify: str = "left"):
         """
         self._print_msg("", msg, self.misc_color, justify=justify)
 
-    def run(self, model):
+    def run(self, llm):
         """
         Starts the REPL.
 
@@ -113,31 +114,35 @@ def run(self, llm):
         The user can enter new lines in the REPL by pressing Enter once. The
         REPL will terminate when the user presses Enter twice.
 
-        :param BaseLLM model: The LLM model to use.
+        :param BaseLLM llm: The LLM to use.
         """
 
-        self.model = model.load(self)
-        if self.model is None:
+        self.llm = llm.load(self)
+        if self.llm is None:
             return
 
         self.print_misc_msg(self.INTRO_BANNER, justify="center")
-        self.print_misc_msg(f"Loaded model: {self.model.name}", justify="center")
+        self.print_misc_msg(f"Loaded model: {self.llm.name}", justify="center")
 
         while True:
             user_input = self.session.prompt("> ").rstrip()
+            if user_input == self.EXIT_TOKEN:
+                self.print_misc_msg("Bye!")
+                break
+
             self.print_client_msg(user_input)
 
-            if not self.model.is_in_streaming_mode:
+            if not self.llm.is_in_streaming_mode:
                 self.print_misc_msg(self.LOADING_MSG)
             else:
                 self.console.rule(
                     f"[{self.server_color}]{self.SERVER_MSG_TITLE}",
                     style=self.server_color,
                 )
 
-            resp = self.model.process(user_input)
+            resp = self.llm.process(user_input)
 
-            if not self.model.is_in_streaming_mode:
+            if not self.llm.is_in_streaming_mode:
                 self.print_server_msg(resp)
             else:
                 self.console.print()
```