Commit ab5792a

feat: Add MaxTokens option for AI model output control
Introduce a new `MaxTokens` flag and configuration option to allow users to specify the maximum number of tokens to generate in AI model responses. This option is integrated across:

- Anthropic: Uses `MaxTokens` for `MessageNewParams`.
- Gemini: Sets `MaxOutputTokens` in `GenerateContentConfig`.
- Ollama: Sets `num_predict` option in chat requests.
- Dryrun: Includes `MaxTokens` in the formatted output.

Update example configuration to include `maxTokens` with a descriptive comment. Resolved test issues.
1 parent 70f8c01 commit ab5792a
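The commit message names `MessageNewParams` as the Anthropic integration point. Below is a minimal sketch of that mapping; `MessageNewParams` here is a simplified local stand-in (the real type lives in anthropic-sdk-go, whose field wrappers vary by SDK version), and `applyMaxTokens` is a hypothetical helper, not Fabric's actual code:

```go
// Stand-in sketch of the Anthropic mapping: MessageNewParams reproduces
// only the field this commit touches; the Anthropic API serializes it
// as max_tokens in the request body.
package main

import "fmt"

type MessageNewParams struct { // stand-in for the SDK type
	Model     string
	MaxTokens int64
}

// applyMaxTokens copies the generic MaxTokens option onto the request,
// leaving any SDK/provider default in place when the option is unset.
func applyMaxTokens(p *MessageNewParams, maxTokens int) {
	if maxTokens > 0 {
		p.MaxTokens = int64(maxTokens)
	}
}

func main() {
	p := MessageNewParams{Model: "claude-sonnet-4"}
	applyMaxTokens(&p, 1000)
	fmt.Printf("%+v\n", p) // {Model:claude-sonnet-4 MaxTokens:1000}
}
```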

File tree: 9 files changed (+689, -7 lines)
Lines changed: 12 additions & 0 deletions

```diff
@@ -0,0 +1,12 @@
+### PR [#1747](https://github.com/danielmiessler/Fabric/pull/1747) by [2b3pro](https://github.com/2b3pro): feat: Add MaxTokens option for AI model output control
+
+- Feat: Add MaxTokens option for AI model output control
+Introduce a new `MaxTokens` flag and configuration option to allow users to specify the maximum number of tokens to generate in AI model responses.
+This option is integrated across:
+- Anthropic: Uses `MaxTokens` for `MessageNewParams`.
+
+- Gemini: Sets `MaxOutputTokens` in `GenerateContentConfig`.
+- Ollama: Sets `num_predict` option in chat requests.
+
+- Dryrun: Includes `MaxTokens` in the formatted output.
+Update example configuration to include `maxTokens` with a descriptive comment.
```
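For Gemini, the commit sets `MaxOutputTokens` on `GenerateContentConfig`. A rough sketch of that guard follows; the struct is a stand-in so the example stays self-contained (in google.golang.org/genai the real `GenerateContentConfig` carries this field as an int32):

```go
// Stand-in for genai.GenerateContentConfig from google.golang.org/genai;
// only the field relevant to this commit is reproduced here.
package main

import "fmt"

type GenerateContentConfig struct {
	MaxOutputTokens int32
}

// applyMaxTokens copies the generic MaxTokens option onto the Gemini
// config, skipping zero so the provider default stays in effect.
func applyMaxTokens(cfg *GenerateContentConfig, maxTokens int) {
	if maxTokens > 0 {
		cfg.MaxOutputTokens = int32(maxTokens)
	}
}

func main() {
	var cfg GenerateContentConfig
	applyMaxTokens(&cfg, 1000)
	fmt.Printf("%+v\n", cfg) // {MaxOutputTokens:1000}
}
```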

internal/cli/example.yaml

Lines changed: 3 additions & 0 deletions

```diff
@@ -17,6 +17,9 @@ topp: 0.67
 temperature: 0.88
 seed: 42
 
+# Maximum number of tokens to generate
+maxTokens: 1000
+
 stream: true
 raw: false
```
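As a quick check of how that key is consumed: the `yaml:"maxTokens"` tag on the `Flags` struct in the next file is what binds this setting. A self-contained sketch with gopkg.in/yaml.v3, using a trimmed-down config struct rather than Fabric's real `Flags`:

```go
// Minimal demonstration that a `maxTokens:` key binds to a struct field
// through its yaml tag; the config type is a stand-in, not Fabric's Flags.
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

type config struct {
	MaxTokens int `yaml:"maxTokens"`
}

func main() {
	var c config
	if err := yaml.Unmarshal([]byte("maxTokens: 1000\n"), &c); err != nil {
		panic(err)
	}
	fmt.Println(c.MaxTokens) // 1000
}
```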
internal/cli/flags.go

Lines changed: 2 additions & 0 deletions

```diff
@@ -102,6 +102,7 @@ type Flags struct {
 	Notification        bool                 `long:"notification" yaml:"notification" description:"Send desktop notification when command completes"`
 	NotificationCommand string               `long:"notification-command" yaml:"notificationCommand" description:"Custom command to run for notifications (overrides built-in notifications)"`
 	Thinking            domain.ThinkingLevel `long:"thinking" yaml:"thinking" description:"Set reasoning/thinking level (e.g., off, low, medium, high, or numeric tokens for Anthropic or Google Gemini)"`
+	MaxTokens           int                  `long:"max-tokens" yaml:"maxTokens" description:"Maximum number of tokens to generate (provider-specific limits apply)"`
 	Debug               int                  `long:"debug" description:"Set debug level (0=off, 1=basic, 2=detailed, 3=trace)" default:"0"`
 }

@@ -457,6 +458,7 @@ func (o *Flags) BuildChatOptions() (ret *domain.ChatOptions, err error) {
 		Raw:                o.Raw,
 		Seed:               o.Seed,
 		Thinking:           o.Thinking,
+		MaxTokens:          o.MaxTokens,
 		ModelContextLength: o.ModelContextLength,
 		Search:             o.Search,
 		SearchLocation:     o.SearchLocation,
```