### PR [#1747](https://github.com/danielmiessler/Fabric/pull/1747) by [2b3pro](https://github.com/2b3pro): feat: Add MaxTokens option for AI model output control
- Add a `MaxTokens` flag and configuration option to control the maximum number of tokens generated in AI model responses
- Integrate `MaxTokens` support across multiple AI providers: Anthropic (`MaxTokens` in `MessageNewParams`), Gemini (`MaxOutputTokens` in `GenerateContentConfig`), and Ollama (`num_predict` option in chat requests); see the sketch after this list
- Include the `MaxTokens` parameter in Dryrun formatted output for testing and validation
- Update example configuration files to include a `maxTokens` setting with a descriptive comment (example below)
- Resolve test issues related to the new `MaxTokens` functionality
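The provider integrations all reduce to the same pattern: one user-facing `MaxTokens` value translated into each backend's own field name. Below is a minimal Go sketch of that mapping, using plain `map[string]any` request bodies instead of the real SDK types; `ChatOptions` and the three helper functions are hypothetical names for illustration, and Fabric's actual plumbing differs.

```go
package main

import "fmt"

// ChatOptions is a hypothetical stand-in for Fabric's option struct.
type ChatOptions struct {
	// MaxTokens caps how many tokens the model may generate;
	// 0 means "leave it to the provider's default".
	MaxTokens int
}

// anthropicParams maps MaxTokens onto an Anthropic-style
// MessageNewParams body ("max_tokens").
func anthropicParams(o ChatOptions) map[string]any {
	p := map[string]any{}
	if o.MaxTokens > 0 {
		p["max_tokens"] = o.MaxTokens
	}
	return p
}

// geminiConfig maps MaxTokens onto a Gemini-style
// GenerateContentConfig ("maxOutputTokens").
func geminiConfig(o ChatOptions) map[string]any {
	c := map[string]any{}
	if o.MaxTokens > 0 {
		c["maxOutputTokens"] = o.MaxTokens
	}
	return c
}

// ollamaOptions maps MaxTokens onto Ollama chat-request
// options ("num_predict").
func ollamaOptions(o ChatOptions) map[string]any {
	opts := map[string]any{}
	if o.MaxTokens > 0 {
		opts["num_predict"] = o.MaxTokens
	}
	return opts
}

func main() {
	o := ChatOptions{MaxTokens: 512}
	fmt.Println(anthropicParams(o)) // map[max_tokens:512]
	fmt.Println(geminiConfig(o))    // map[maxOutputTokens:512]
	fmt.Println(ollamaOptions(o))   // map[num_predict:512]
}
```

Guarding on `MaxTokens > 0` keeps the option strictly opt-in: an unset value sends nothing, so each provider falls back to its own default limit rather than receiving an explicit zero.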
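On the configuration side, the documented entry would look something like this sketch; the `maxTokens` key name comes from the PR, while the comment wording and the default value shown are assumptions:

```yaml
# Maximum number of tokens the model may generate per response.
# Omit (or set to 0) to use the provider's default limit.
maxTokens: 1024
```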