
Conversation


@roomote roomote bot commented Jan 29, 2026

Summary

This PR attempts to address Issue #11071, where GLM4.5 (and potentially other models) gets stuck in a loop repeatedly reading the same file, via both LM Studio and OpenAI-compatible endpoints.

Root Cause

The issue was introduced in commit ed35b09 (Jan 29, 2026), which changed the default of the parallel_tool_calls parameter from false to true for all OpenAI-compatible providers. Not all models support this parameter; when it is sent to a model like GLM4.5 that does not, the model gets stuck in a loop.

Note: This is different from PR #11072 which only fixes LM Studio. This PR fixes both the OpenAI-compatible handler (openai.ts) AND the base provider class (base-openai-compatible-provider.ts) that is used by multiple providers.

Solution

This fix changes the OpenAI handlers to only include the parallel_tool_calls parameter when explicitly set to true (i.e., when metadata?.parallelToolCalls === true). This approach:

  1. Maintains compatibility with models that do not support parallel tool calls (e.g., GLM4.5)
  2. Still allows parallel tool calls when explicitly enabled for models that do support it
  3. Follows the same pattern established in commit 2d4dba0
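The conditional-inclusion approach can be sketched as follows. This is a minimal illustration of the pattern, not the actual Roo Code source; the type and function names here are assumptions:

```typescript
// Hypothetical shape of the per-request metadata (name assumed).
interface ToolMetadata {
	parallelToolCalls?: boolean;
}

// Hypothetical shape of the request options being built (name assumed).
interface RequestOptions {
	model: string;
	parallel_tool_calls?: boolean;
}

function buildRequestOptions(model: string, metadata?: ToolMetadata): RequestOptions {
	return {
		model,
		// Spread the key in only when the flag is explicitly true; otherwise the
		// key is entirely absent from the request body, so backends that do not
		// support parallel_tool_calls never see it.
		...(metadata?.parallelToolCalls === true ? { parallel_tool_calls: true } : {}),
	}
}
```

Note that omitting the key is different from sending `parallel_tool_calls: false`: omission means a model that does not recognize the parameter is never exposed to it in any form.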

Changes

  • Modified src/api/providers/openai.ts to conditionally include parallel_tool_calls (4 locations)
  • Modified src/api/providers/base-openai-compatible-provider.ts to conditionally include parallel_tool_calls
  • Updated tests in src/api/providers/__tests__/openai.spec.ts to verify the new behavior

Testing

  • All existing tests pass (48 tests in openai.spec.ts, 13 tests in base-openai-compatible-provider.spec.ts)
  • Linting and type checking pass

Fixes #11071

Feedback and guidance are welcome.


Important

Fixes issue by conditionally including parallel_tool_calls parameter only when explicitly enabled in OpenAI handlers.

  • Behavior:
    • Conditionally include parallel_tool_calls parameter in openai.ts and base-openai-compatible-provider.ts only when metadata?.parallelToolCalls === true.
    • Ensures compatibility with models that do not support parallel_tool_calls (e.g., GLM4.5).
  • Testing:
    • Updated tests in openai.spec.ts to verify conditional inclusion of parallel_tool_calls.
    • All existing tests pass (48 tests in openai.spec.ts, 13 tests in base-openai-compatible-provider.spec.ts).

This description was created by Ellipsis for 58d5f9d.

This change reverts the parallel_tool_calls handling to only include it
when explicitly set to true (metadata?.parallelToolCalls === true).

Root cause: Commit ed35b09 changed the default from false to true,
which caused models like GLM4.5 that do not support this parameter
to get stuck in a loop when reading files repeatedly.
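The before/after behavior described here can be sketched side by side. The helper names below are hypothetical; only the defaulting logic is taken from the description above:

```typescript
type Metadata = { parallelToolCalls?: boolean }

// Regressed behavior (sketch of ed35b09): the parameter is always sent,
// and defaults to true whenever the caller does not set it.
function regressedOptions(metadata?: Metadata) {
	return { parallel_tool_calls: metadata?.parallelToolCalls ?? true }
}

// Reverted behavior (sketch of this fix): the key is omitted unless the
// caller explicitly enables it, so unsupporting models never receive it.
function fixedOptions(metadata?: Metadata) {
	return metadata?.parallelToolCalls === true ? { parallel_tool_calls: true } : {}
}
```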

The fix affects:
- src/api/providers/openai.ts (OpenAI-compatible handler)
- src/api/providers/base-openai-compatible-provider.ts (base class for other providers)

This ensures compatibility with models that do not support the
parallel_tool_calls parameter while still allowing it for models
that explicitly enable it.

Fixes #11071

roomote bot commented Jan 29, 2026


Review complete. No issues found.

The changes correctly fix the parallel_tool_calls parameter handling by only including it when explicitly enabled (parallelToolCalls === true). This maintains compatibility with models like GLM4.5 that do not support the parameter.




Development

Successfully merging this pull request may close these issues.

[BUG] GLM4.5 via LMStudio as well as via an OpenAI-compatible endpoint stuck repeating file reads
