Changes to `README.md` (+32, −3):
README lines 24-30:

```python
import any_llm_client

config = any_llm_client.OpenAIConfig(
    url="http://127.0.0.1:11434/v1/chat/completions",
    model_name="qwen2.5-coder:1.5b",
    request_extra={"best_of": 3}
)
```
The same snippet at README lines 57-63:

```python
import any_llm_client

config = any_llm_client.OpenAIConfig(
    url="http://127.0.0.1:11434/v1/chat/completions",
    model_name="qwen2.5-coder:1.5b",
    request_extra={"best_of": 3}
)
```
README lines 164-171:

#### Errors

`any_llm_client.LLMClient.request_llm_message()` and `any_llm_client.LLMClient.stream_llm_message_chunks()` will raise:
- `any_llm_client.LLMError` or `any_llm_client.OutOfTokensOrSymbolsError` when the LLM API responds with a failed HTTP status,
- `any_llm_client.LLMRequestValidationError` when images are passed to the YandexGPT client.
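The error-handling pattern these exceptions suggest can be sketched self-contained; the classes below are stand-ins mirroring the names above (the subclass relationship and the stubbed client call are assumptions for this demo, not the library's real implementation):

```python
# Stand-in exception hierarchy mirroring any_llm_client's names.
class LLMError(Exception): ...
class OutOfTokensOrSymbolsError(LLMError): ...


def request_llm_message(prompt: str) -> str:
    # Stubbed client call that fails the way a context-overflow response would.
    raise OutOfTokensOrSymbolsError("context window exceeded")


def safe_request(prompt: str) -> str:
    try:
        return request_llm_message(prompt)
    except OutOfTokensOrSymbolsError:
        # Catch the more specific subclass first.
        return "<truncated: prompt too long>"
    except LLMError:
        return "<LLM API returned a failed HTTP status>"


print(safe_request("hello"))  # → <truncated: prompt too long>
```

Catching `OutOfTokensOrSymbolsError` before `LLMError` matters: if the broader class came first, the specific branch would be unreachable.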
#### Timeouts, proxy & other HTTP settings
README lines 205-208:

The `extra` parameter is merged with `request_extra` from `OpenAIConfig`.
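The merge can be pictured as plain dict union; note that the precedence shown here (per-request `extra` winning over config-level `request_extra` on key conflicts) is an assumption of this sketch, not something the text above specifies:

```python
# Config-level defaults vs. per-request overrides (illustrative values).
request_extra = {"best_of": 3}                 # from OpenAIConfig
extra = {"temperature": 0.2, "best_of": 1}     # passed to the request call

# In this sketch, per-request keys win on conflict.
merged = {**request_extra, **extra}
print(merged)  # → {'best_of': 1, 'temperature': 0.2}
```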
#### Passing images

You can pass images to the OpenAI client (YandexGPT doesn't support images yet):

```python
await client.request_llm_message(
    messages=[
        any_llm_client.TextContentItem("What's on the image?"),
        ...
    ],
)
```
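OpenAI-compatible chat APIs commonly accept images as base64 data URLs. A minimal, library-independent sketch of preparing one (the helper name and MIME default are this example's own, not part of any_llm_client):

```python
import base64


def to_data_url(image_bytes: bytes, mime: str = "image/png") -> str:
    # Encode raw image bytes as a data URL, the form OpenAI-compatible
    # chat APIs commonly accept for image content parts.
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{b64}"


print(to_data_url(b"\x89PNG"))  # → data:image/png;base64,iVBORw==
```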