Commit e3f2e3a

Merge pull request #1488 from hc-tec/patch-1
Updated the validation logic for `max_tokens` to allow a maximum of 32,000 (previously capped at 16,000).
2 parents 571d8ad + 8421740 commit e3f2e3a

File tree

1 file changed: 1 addition (+), 1 deletion (-)


gpt_researcher/utils/llm.py

Lines changed: 1 addition & 1 deletion
@@ -54,7 +54,7 @@ async def create_chat_completion(
         raise ValueError("Model cannot be None")
     if max_tokens is not None and max_tokens > 32001:
         raise ValueError(
-            f"Max tokens cannot be more than 16,000, but got {max_tokens}")
+            f"Max tokens cannot be more than 32,000, but got {max_tokens}")

     # Get the provider from supported providers
     provider_kwargs = {'model': model}
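As a minimal sketch of the validation shown in the diff (the function name below is hypothetical; the real `create_chat_completion` is async and takes more parameters), the patched guard behaves roughly like this. Note that the condition in the diff is `> 32001`, so values up to 32,001 are accepted even though the message says 32,000:

```python
def validate_max_tokens(max_tokens):
    """Sketch of the max_tokens guard from the patched code path.

    The threshold in the diff is `> 32001`, so the effective cap is
    32,001, one more than the 32,000 stated in the error message.
    """
    if max_tokens is not None and max_tokens > 32001:
        raise ValueError(
            f"Max tokens cannot be more than 32,000, but got {max_tokens}")
    return max_tokens

# Values at or below the threshold pass through unchanged;
# larger values raise ValueError.
validate_max_tokens(32000)
validate_max_tokens(None)
```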
