Added Groq Support #1238

Open · wants to merge 4 commits into main

Changes from 3 commits
52 changes: 52 additions & 0 deletions docs/language-models/hosted-models/groq.mdx
@@ -0,0 +1,52 @@
---
title: Groq
---

To use Open Interpreter with a model from Groq, simply run:

<CodeGroup>

```bash Terminal
interpreter --model groq/mixtral-8x7b-32768
```

```python Python
from interpreter import interpreter

interpreter.llm.model = "groq/mixtral-8x7b-32768"
interpreter.chat()
```

</CodeGroup>

# Supported Models

We support any model on [Groq's models page](https://console.groq.com/docs/models):

<CodeGroup>

```bash Terminal
interpreter --model groq/mixtral-8x7b-32768
interpreter --model groq/llama3-8b-8192
interpreter --model groq/llama3-70b-8192
interpreter --model groq/gemma-7b-it
```

```python Python
interpreter.llm.model = "groq/mixtral-8x7b-32768"
interpreter.llm.model = "groq/llama3-8b-8192"
interpreter.llm.model = "groq/llama3-70b-8192"
interpreter.llm.model = "groq/gemma-7b-it"
```

</CodeGroup>
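
This list is not exhaustive. As a hedged sketch (not part of the docs file itself), you could enumerate the models available to your key with the Groq SDK's `models.list()` call, assuming the `groq` package is installed:

```python
import os

from groq import Groq

# Assumes GROQ_API_KEY is already set in your environment.
client = Groq(api_key=os.environ.get("GROQ_API_KEY"))
for model in client.models.list().data:
    print(model.id)  # e.g. "mixtral-8x7b-32768"
```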

# Required Environment Variables

Set the following environment variables [(click here to learn how)](https://chat.openai.com/share/1062cdd8-62a1-4aa8-8ec9-eca45645971a) to use these models. For example, run `export GROQ_API_KEY='<your-key-here>'`, or add that line to your shell's rc file and re-source it.

| Environment Variable | Description | Where to Find |
| -------------------- | ---------------------------------------------------- | ------------------------------------------------------------------- |
| `GROQ_API_KEY` | The API key for authenticating to Groq's services. | [Groq Account Page](https://console.groq.com/keys) |
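
As a quick sanity check that the key is visible to Open Interpreter, here is a minimal sketch (not part of the docs file itself) that sets the variable from Python before starting a chat; the placeholder value is an assumption you'd replace with your real key:

```python
import os

os.environ["GROQ_API_KEY"] = "<your-key-here>"  # placeholder: substitute your real key

from interpreter import interpreter

interpreter.llm.model = "groq/mixtral-8x7b-32768"
interpreter.chat()
```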

57 changes: 55 additions & 2 deletions interpreter/core/llm/llm.py
@@ -1,4 +1,8 @@
import litellm
from groq import Groq

# Lazily-initialized Groq client; a one-element list lets the nested helper
# in fixed_litellm_completions assign to it without a `global` statement.
groq_client = [None]

import tokentrim as tt

from ...terminal_interface.utils.display_markdown_message import (
@@ -9,6 +13,7 @@
from .utils.convert_to_openai_messages import convert_to_openai_messages

litellm.suppress_debug_info = True
import os
import time


@@ -26,6 +31,7 @@ def __init__(self, interpreter):

        # Settings
        self.model = "gpt-4-turbo"
        self.model = "groq/mixtral-8x7b-32768"  # Can now use models from Groq: `export GROQ_API_KEY="your-key-here"` or use --model
> **Author:** This line should be deleted before merging to the main branch.

        self.temperature = 0
        self.supports_vision = False
        self.supports_functions = None  # Will try to auto-detect
@@ -67,7 +73,7 @@ def run(self, messages):
                    self.supports_functions = False
            except:
                self.supports_functions = False

        # Trim image messages if they're there
        if self.supports_vision:
            image_messages = [msg for msg in messages if msg["type"] == "image"]
@@ -210,7 +216,54 @@ def fixed_litellm_completions(**params):
    # Run completion
    first_error = None
    try:
        yield from litellm.completion(**params)  # removed by this PR in favor of source() below

        def source(**params):
            """Get completions using LiteLLM."""
            yield from litellm.completion(**params)

        if "model" in params and "groq/" in params["model"]:

            def groq_complete(**params):
                # Lazily create the Groq client on first use.
                if groq_client[0] is None:
                    groq_client[0] = Groq(
                        # Reading GROQ_API_KEY is the default and can be omitted
                        api_key=os.environ.get("GROQ_API_KEY"),
                        timeout=2,  # seconds
                        max_retries=3,
                    )
                res = (
                    groq_client[0]
                    .chat.completions.create(
                        messages=params["messages"],
                        # Strip the "groq/" prefix to get Groq's model name
                        model=params["model"].split("groq/")[1],
                    )
                    .choices[0]
                    .message.content
                )
                return res

            def groq_source(**params):
                """Get completions using Groq, yielding LiteLLM-style chunks."""
                params["stream"] = (
                    False  # To keep things simple for now, and Groq is super fast anyway
                )
                word_by_word = False
                if word_by_word:
                    for chunk in groq_complete(**params).split(" "):
                        yield {
                            "choices": [
                                {"delta": {"type": "message", "content": chunk + " "}}
                            ]
                        }
                else:
                    for whole in [groq_complete(**params)]:
                        yield {
                            "choices": [
                                {"delta": {"type": "message", "content": whole}}
                            ]
                        }

            source = groq_source

        yield from source(**params)
    except Exception as e:
        # Store the first error
        first_error = e
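The `groq_source` shim above emits LiteLLM-style streaming chunks even though the underlying Groq call is non-streaming, so downstream consumers need no changes. A minimal, self-contained sketch of that contract (the `fake_stream` helper is hypothetical, not part of the PR):

```python
def fake_stream(text):
    """Yield a whole response as a single LiteLLM-style delta chunk."""
    yield {"choices": [{"delta": {"type": "message", "content": text}}]}

# A consumer accumulates delta contents exactly as it would for a real stream.
message = "".join(
    chunk["choices"][0]["delta"]["content"] for chunk in fake_stream("Hello from Groq")
)
print(message)  # -> Hello from Groq
```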
1 change: 1 addition & 0 deletions pyproject.toml
@@ -29,6 +29,7 @@
matplotlib = "^3.8.2"
toml = "^0.10.2"
posthog = "^3.1.0"
tiktoken = "^0.6.0"
groq = "^4.3.0"


#Optional dependencies