feat: add diagnostic logging for GLM model detection #11080
Draft
roomote[bot] wants to merge 2 commits into main from
Conversation
…viders

This adds automatic GLM model detection for third-party providers, enabling the same optimizations that Z.ai uses for GLM models:

1. Created isGlmModel() utility function that detects GLM model IDs
2. Created getGlmModelOptions() to get model-specific configuration
3. Modified the LM Studio provider to detect GLM models and apply:
   - the mergeToolResultText option, to prevent dropping reasoning_content
   - parallel_tool_calls disabled by default for GLM models
4. Modified BaseOpenAiCompatibleProvider with the same GLM handling

This addresses the questions in issue #11071 about GLM model detection and ensures the Z.ai improvements are available to LM Studio and OpenAI-compatible endpoints running GLM models.
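The commit names isGlmModel() and getGlmModelOptions() but does not show them. A minimal sketch of what model-detection.ts might look like, assuming the option shape described above; the actual matching rule and option names in the PR may differ:

```typescript
// model-detection.ts (illustrative sketch; the PR's actual matching rule may differ)

export interface GlmModelOptions {
	// Merge tool result text so reasoning_content is not dropped (see commit above).
	mergeToolResultText: boolean
	// GLM models default to parallel tool calls disabled (see commit above).
	parallelToolCalls: boolean
}

// Returns true when the model ID looks like a GLM model (e.g. "glm-4.5",
// "zai/GLM-4.5-Air"). Matching here is case-insensitive and ignores a
// provider prefix; the real heuristic may be stricter or looser.
export function isGlmModel(modelId: string): boolean {
	const normalized = modelId.toLowerCase().split("/").pop() ?? ""
	return normalized === "glm" || normalized.startsWith("glm-")
}

// Returns GLM-specific options for a model ID, or undefined for non-GLM models.
export function getGlmModelOptions(modelId: string): GlmModelOptions | undefined {
	if (!isGlmModel(modelId)) {
		return undefined
	}
	return {
		mergeToolResultText: true,
		parallelToolCalls: false,
	}
}
```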
- Add console logging in getGlmModelOptions() to show when a GLM model is detected
- Log the model ID being used in the LM Studio and OpenAI-compatible providers
- Log the parallel_tool_calls value being applied
- Helps users verify GLM detection is working correctly

Addresses issue #11071, where users cannot verify whether GLM detection is functioning.
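A self-contained sketch of what the provider-side logging described in this commit could look like. The helper name, log text, and GlmModelOptions shape (taken from the sketch above) are assumptions, not the PR's actual code or output format:

```typescript
// Illustrative sketch of provider-side diagnostic logging (helper name and log
// text are assumptions; assumes the GlmModelOptions shape sketched above).
import { getGlmModelOptions } from "./model-detection"

export function logGlmDetection(providerName: string, modelId: string): void {
	// Log the model ID the provider is actually using.
	console.log(`[${providerName}] using model: ${modelId}`)

	// Log whether GLM detection fired and which parallel_tool_calls value will apply.
	const glmOptions = getGlmModelOptions(modelId)
	if (glmOptions) {
		console.log(`[${providerName}] GLM model detected; parallel_tool_calls: ${glmOptions.parallelToolCalls}`)
	} else {
		console.log(`[${providerName}] not a GLM model; provider defaults apply`)
	}
}

// e.g. logGlmDetection("LM Studio", "glm-4.5") vs logGlmDetection("LM Studio", "qwen2.5-coder")
```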
Review complete. No issues found. The implementation is correct: the diagnostic logging output matches the expected format from the PR description.

Mention @roomote in a comment to request specific changes to this pull request or to fix all unresolved issues.
This PR attempts to address Issue #11071 by adding diagnostic logging to help users verify GLM model detection.
Problem
Users testing PR #11077 cannot verify whether GLM model detection is working, or whether the GLM-specific options (such as disabling parallel_tool_calls) are actually being applied.
Solution
Added console logging at key points:

- When getGlmModelOptions() detects a GLM model
- The model ID being used by the LM Studio and OpenAI-compatible providers
- The parallel_tool_calls value being sent to the API
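For context, a rough sketch of where that logged value ends up. The parallel_tool_calls field is the standard OpenAI Chat Completions parameter, but the surrounding function and property names are assumptions rather than the PR's actual provider code:

```typescript
// Sketch of how an OpenAI-compatible provider might apply the GLM options
// when building a request (function and option names are assumed).
import OpenAI from "openai"
import { getGlmModelOptions } from "./model-detection"

async function createChatCompletion(
	client: OpenAI,
	modelId: string,
	messages: OpenAI.Chat.Completions.ChatCompletionMessageParam[],
) {
	const glmOptions = getGlmModelOptions(modelId)

	return client.chat.completions.create({
		model: modelId,
		messages,
		// GLM models get parallel tool calls disabled by default; other models
		// keep the API's own default behaviour.
		...(glmOptions ? { parallel_tool_calls: glmOptions.parallelToolCalls } : {}),
	})
}
```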
Example Output

When using a GLM model like glm-4.5:

When using a non-GLM model:
Testing
Notes
Feedback and guidance are welcome!
Important
Adds diagnostic logging for GLM model detection and optimizations, with utility functions and tests for model identification and option retrieval.
- Adds diagnostic logging in base-openai-compatible-provider.ts and lm-studio.ts.
- Logs the parallel_tool_calls value being applied.
- Adds isGlmModel() and getGlmModelOptions() in model-detection.ts for GLM model detection and option retrieval.
- Adds tests in model-detection.spec.ts for GLM model detection and option retrieval.

This description was created by
for 2c5d905. You can customize this summary. It will automatically update as commits are pushed.
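The tests referenced in the summary are not shown on this page; a minimal vitest-style sketch of what model-detection.spec.ts might cover, assuming the utility shapes sketched earlier in this thread:

```typescript
// model-detection.spec.ts (illustrative sketch, not the PR's actual tests)
import { describe, expect, it } from "vitest"
import { getGlmModelOptions, isGlmModel } from "./model-detection"

describe("isGlmModel", () => {
	it("detects GLM model IDs regardless of case or provider prefix", () => {
		expect(isGlmModel("glm-4.5")).toBe(true)
		expect(isGlmModel("zai/GLM-4.5-Air")).toBe(true)
		expect(isGlmModel("qwen2.5-coder")).toBe(false)
	})
})

describe("getGlmModelOptions", () => {
	it("returns GLM-specific options for GLM models", () => {
		expect(getGlmModelOptions("glm-4.5")).toEqual({
			mergeToolResultText: true,
			parallelToolCalls: false,
		})
	})

	it("returns undefined for non-GLM models", () => {
		expect(getGlmModelOptions("gpt-4o-mini")).toBeUndefined()
	})
})
```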