Commit b9b7a5d: doc updates
Parent: ee3b07f

3 files changed (+8, -6 lines)

docs/content/integrations/backend/litellm.md (5 additions, 5 deletions)

````diff
@@ -290,23 +290,23 @@ LiteLLM supports provider-prefixed model names:
 ```python
 import openai
 
-client = openai.OpenAI(base_url="http://localhost:40114/olla/openai")
+client = openai.OpenAI(base_url="http://localhost:40114/olla/litellm/v1")
 
-# Routes to OpenAI
+# Routes to OpenAI via LiteLLM
 response = client.chat.completions.create(
-    model="gpt-4",
+    model="openai/gpt-4",
     messages=[{"role": "user", "content": "Hello!"}]
 )
 
 # Routes to Anthropic via LiteLLM
 response = client.chat.completions.create(
-    model="claude-4-opus",
+    model="anthropic/claude-3-opus-20240229",
     messages=[{"role": "user", "content": "Hello!"}]
 )
 
 # Routes to AWS Bedrock via LiteLLM
 response = client.chat.completions.create(
-    model="claude-4-bedrock",
+    model="bedrock/anthropic.claude-3-sonnet",
     messages=[{"role": "user", "content": "Hello!"}]
 )
 ```
````
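The hunk above switches to LiteLLM's provider-prefixed model names (`openai/gpt-4`, `anthropic/claude-3-opus-20240229`, `bedrock/anthropic.claude-3-sonnet`). As a minimal sketch of how such names decompose, here is a small helper that splits the prefix from the model id; the function is purely illustrative and is not part of Olla or LiteLLM:

```python
def split_model_name(name: str) -> tuple[str, str]:
    """Split a provider-prefixed model name into (provider, model).

    Splits on the first "/" only, so dotted Bedrock ids stay intact.
    Names without a prefix return an empty provider string.
    Illustrative helper only; not an Olla or LiteLLM API.
    """
    provider, sep, model = name.partition("/")
    if not sep:
        # No "/" present: treat the whole string as the model name.
        return "", name
    return provider, model


# Examples taken from the diff above:
print(split_model_name("openai/gpt-4"))                      # ('openai', 'gpt-4')
print(split_model_name("bedrock/anthropic.claude-3-sonnet")) # ('bedrock', 'anthropic.claude-3-sonnet')
```

Splitting on only the first `/` matters because Bedrock model ids themselves contain dots and further structure after the provider prefix.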

docs/content/integrations/backend/lmstudio.md (1 addition, 1 deletion)

````diff
@@ -13,7 +13,7 @@ keywords: LM Studio, Olla, LLM proxy, local inference, OpenAI compatible, model
 </tr>
 <tr>
 <th>Since</th>
-<td>Olla <code>v0.0.7</code></td>
+<td>Olla <code>v0.0.12</code></td>
 </tr>
 <tr>
 <th>Type</th>
````

docs/content/usage.md (2 additions, 0 deletions)

````diff
@@ -35,6 +35,7 @@ Perfect for enthusiasts running multiple LLM instances:
 ```yaml
 # Home lab config - local first, cloud fallback
 discovery:
+  type: "static"
   static:
     endpoints:
       - name: "rtx-4090-mobile"
@@ -86,6 +87,7 @@ Seamlessly combine local and cloud models with native LiteLLM support:
 ```yaml
 # Hybrid setup with LiteLLM
 discovery:
+  type: "static"
   static:
     endpoints:
       # Local models for privacy/cost
````
