Commit 98e9899

Author: Changelog Bot

chore: incoming 1747 changelog entry

1 parent: 767c193

1 file changed: 5 additions & 12 deletions
@@ -1,14 +1,7 @@
 ### PR [#1747](https://github.com/danielmiessler/Fabric/pull/1747) by [2b3pro](https://github.com/2b3pro): feat: Add MaxTokens option for AI model output control
 
-- Feat: Add MaxTokens option for AI model output control
-Introduce a new `MaxTokens` flag and configuration option to allow users to specify the maximum number of tokens to generate in AI model responses.
-This option is integrated across:
-- Anthropic: Uses `MaxTokens` for `MessageNewParams`.
-
-- Gemini: Sets `MaxOutputTokens` in `GenerateContentConfig`.
-- Ollama: Sets `num_predict` option in chat requests.
-
-- Dryrun: Includes `MaxTokens` in the formatted output.
-Update example configuration to include `maxTokens` with a descriptive comment.
-Resolved test issues
-Resolved test issues
+- Add MaxTokens flag and configuration option to control the maximum number of tokens generated in AI model responses
+- Integrate MaxTokens support across multiple AI providers: Anthropic (MessageNewParams), Gemini (MaxOutputTokens in GenerateContentConfig), and Ollama (num_predict option)
+- Include MaxTokens parameter in Dryrun formatted output for testing and validation
+- Update example configuration files to include maxTokens setting with descriptive documentation
+- Resolve test issues related to the new MaxTokens functionality
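The flag-and-config pairing described in the first bullet follows a common Go pattern: an integer option whose zero value means "no explicit cap". Below is a minimal sketch of that pattern; the `maxTokens` name comes from the changelog entry itself, but the struct, flag wiring, and messages are illustrative assumptions, not Fabric's actual code.

```go
// Illustrative sketch, not Fabric's real CLI code.
package main

import (
	"flag"
	"fmt"
)

// modelOptions is a hypothetical options struct for this example.
type modelOptions struct {
	MaxTokens int // 0 = no explicit cap; defer to the provider's default
}

func main() {
	var opts modelOptions
	flag.IntVar(&opts.MaxTokens, "maxTokens", 0,
		"maximum number of tokens to generate in AI model responses (0 = provider default)")
	flag.Parse()

	if opts.MaxTokens > 0 {
		fmt.Printf("capping generation at %d tokens\n", opts.MaxTokens)
	} else {
		fmt.Println("no token cap set; using provider defaults")
	}
}
```

Treating zero as "unset" keeps the option backward compatible: existing configurations without `maxTokens` keep each provider's default limit.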
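For the provider integrations in the second bullet, the Ollama mapping is the easiest to sketch, since `num_predict` is a documented option on Ollama's `/api/chat` endpoint; Anthropic's `MaxTokens` on `MessageNewParams` and Gemini's `MaxOutputTokens` on `GenerateContentConfig` play the analogous role in their SDK request types. The request structs, model name, and URL below are assumptions made for illustration, not Fabric code.

```go
// Hypothetical sketch: mapping a MaxTokens value onto Ollama's
// num_predict chat option via the documented REST API.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

type chatMessage struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

type chatRequest struct {
	Model    string         `json:"model"`
	Messages []chatMessage  `json:"messages"`
	Options  map[string]any `json:"options,omitempty"`
	Stream   bool           `json:"stream"`
}

func main() {
	maxTokens := 256 // in practice this would come from the maxTokens flag/config

	req := chatRequest{
		Model:    "llama3", // illustrative model name
		Messages: []chatMessage{{Role: "user", Content: "Say hello."}},
		Stream:   false,
	}
	// Set num_predict only when a cap was requested, so an unset
	// MaxTokens leaves Ollama's default generation limit untouched.
	if maxTokens > 0 {
		req.Options = map[string]any{"num_predict": maxTokens}
	}

	body, err := json.Marshal(req)
	if err != nil {
		panic(err)
	}
	resp, err := http.Post("http://localhost:11434/api/chat", "application/json", bytes.NewReader(body))
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```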
