CLAUDE_CODE_MAX_OUTPUT_TOKENS limit reached #16
Any advice on workarounds for the API limit? I have tried changing the env variable six different ways, but it appears the Claude API that CodeMachine uses has its own, uneditable limit. The simplest workaround might be to configure the workflow to do the blocking task in batches? My exact situation: the Task Breakdown agent is not able to complete its function, nor can it resume from where the 32,000-token cutoff left off, because all of its work is lost. Being able to find the incomplete task output, so I could preserve it for the fallback/next resume, might help.
Replies: 1 comment 1 reply
You’re right that this is quite an edge case - hitting the output limit during the planning phase usually means you’re working with a very large specification.

The issue you’re experiencing is related to output token limits rather than API limits. When an agent like Task Breakdown generates more than ~32K tokens of output, it gets truncated and the work is lost.

The best workaround is to use the `step` command to give the agent explicit instructions on how to handle large outputs:

```
codemachine step task-breakdown "Use a write-then-append flow. Same file, fixed chunks of 500 lines. Keep appending chunks until the full tasks file is produced to avoid hitting output limits"
```

This tells the agent to break its work into manageable chunks and append them incrementally, preventing truncation.
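To make the write-then-append flow concrete, here is a rough sketch of the pattern in Python. It is illustrative only - the file name, chunk source, and chunk size are placeholders, and the agent actually performs these appends through its own file tools rather than a script like this:

```python
from pathlib import Path

def append_in_chunks(lines, out_path="tasks.md", chunk_size=500):
    """Write the first chunk, then keep appending fixed-size chunks until done."""
    out = Path(out_path)
    out.write_text("")  # start fresh so reruns don't duplicate earlier chunks
    for start in range(0, len(lines), chunk_size):
        chunk = lines[start:start + chunk_size]
        with out.open("a", encoding="utf-8") as f:
            f.write("\n".join(chunk) + "\n")

# e.g. 2,000 placeholder task lines, appended 500 at a time
append_in_chunks([f"- task {i}" for i in range(2000)])
```

The point is simply that no single write ever has to hold the whole output at once, which is what keeps each agent turn under the token ceiling.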
You can apply this technique to any agent hitting output limits, including the other planning phase agents. For example:

```
codemachine step plan-agent "Break the plan into sections and write incrementally to avoid output limits"
```

After manually running the step, mark it as complete by adding its index to your "completedSteps":

```
"completedSteps": [
  0,
  1,
  2,
  3   ← add this
]
```
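If you prefer to script that edit instead of doing it by hand, a small helper like the one below works. The state file path and the surrounding JSON structure are assumptions here, not something CodeMachine documents, so adjust them to wherever your completedSteps list actually lives:

```python
import json
from pathlib import Path

STATE_FILE = Path(".codemachine/state.json")  # assumed location, adjust to your project

def mark_step_complete(step_index):
    """Append a step index to completedSteps if it is not already there."""
    state = json.loads(STATE_FILE.read_text())
    completed = state.setdefault("completedSteps", [])
    if step_index not in completed:
        completed.append(step_index)
        completed.sort()
    STATE_FILE.write_text(json.dumps(state, indent=2) + "\n")

mark_step_complete(3)  # same effect as adding the 3 by hand above
```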
Good news: We’re planning to add built-in output file limits in version 0.4.0 to automatically handle this issue, so you won’t need these manual workarounds in the future.

Let me know if this resolves the issue or if you run into any other problems!