Open

feat: Add reasoning and capability controls for OpenAI Compatible models #4860

benzntech wants to merge 16 commits into Kilo-Org:main from benzntech:feat/openai-compatible-settings-improvements

Conversation
🦋 Changeset detected

Latest commit: 89ee6c1

The changes in this PR will be included in the next version bump. This PR includes changesets to release 1 package.
Related to Kilo-Org#4506 and #7370

## Summary

This PR addresses the pain points around configuring reasoning and tool capabilities for custom OpenAI-compatible models (like DeepSeek R1).

## Changes

- **UI**: Added checkboxes for 'Supports Reasoning', 'Supports Function Calling', and 'Supports Computer Use' to the OpenAI Compatible settings.
- **UI**: Compacted the capability checkboxes into a 2-column grid layout with tooltip-only descriptions.
- **Backend**: Updated OpenAiHandler to inject the 'thinking' parameter when reasoning is enabled and the model supports it.
- **Backend**: Gated tool inclusion based on the 'supportsNativeTools' flag (see the sketch below).
- **i18n**: Added translation strings for the new settings.

## Findings

The backend logic in base-openai-compatible-provider.ts already had support for reasoning (via `<think>` tags) and tools, but these features were gated behind flags that were unreachable from the UI for custom models.
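To make the gating concrete, here is a minimal TypeScript sketch of the idea described above; `CustomModelInfo`, `buildCompletionOptions`, and the shape of the `thinking` payload are assumptions for illustration, not the exact code in this PR.

```typescript
// Minimal sketch with hypothetical names; not the PR's actual implementation.
interface CustomModelInfo {
	supportsReasoning?: boolean
	supportsNativeTools?: boolean
}

function buildCompletionOptions(info: CustomModelInfo, tools: object[]): Record<string, unknown> {
	const options: Record<string, unknown> = {}

	// Inject a "thinking" parameter only when the user has flagged the model as reasoning-capable.
	if (info.supportsReasoning) {
		options.thinking = { type: "enabled" }
	}

	// Attach tool definitions only when the model is flagged as supporting native function calling.
	if (info.supportsNativeTools && tools.length > 0) {
		options.tools = tools
	}

	return options
}
```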
Force-pushed from 82cbecb to 8a902d2 (Compare)
- Disable Auto-fill button when no model selected in OpenAI Compatible settings
- Provide feedback (undefined) when model info not found to clear UI state
- Fix Gemini provider to pass baseUrl to GoogleGenAI constructor for proxy support
- Bump version to 4.143.3-dev
…ation
- Gemini/Ollama models only fetched when provider is actually selected
- Add forceRefresh flag to Auto-fill button for cache bypass
- Fix Infinity → null JSON serialization bug in model tiers
- Replace Infinity with Number.MAX_SAFE_INTEGER for valid JSON
- Bump version to 4.143.21
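For context on the Infinity → null item in the commit above: `JSON.stringify` converts non-finite numbers to `null`, which is what broke the model tiers. A minimal sketch of the sanitization, assuming a hypothetical `sanitizeTiers` helper and tier shape, could look like this:

```typescript
interface ModelTier {
	contextWindow: number
}

// JSON.stringify({ contextWindow: Infinity }) yields '{"contextWindow":null}',
// so replace non-finite values with a large finite number before serializing.
function sanitizeTiers(tiers: ModelTier[]): ModelTier[] {
	return tiers.map((tier) => ({
		...tier,
		contextWindow: Number.isFinite(tier.contextWindow) ? tier.contextWindow : Number.MAX_SAFE_INTEGER,
	}))
}
```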
…e-settings-improvements # Please enter a commit message to explain why this merge is necessary, # especially if it merges an updated upstream into a topic branch. # # Lines starting with '#' will be ignored, and an empty message aborts # the commit.
…//github.com/benzntech/kilocode into feat/openai-compatible-settings-improvements
- OpenRouter is now the primary source for model info
- Static model maps merged for capabilities (computer use, images, etc.)
- Added fuzzy matching to handle model ID variations
- Normalizes provider prefixes and date suffixes for matching
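As a rough illustration of the prefix and date-suffix normalization described in this commit (the helper name and regexes below are assumptions, not the PR's code):

```typescript
// Hypothetical normalizer for fuzzy model ID matching.
function normalizeModelId(modelId: string): string {
	return modelId
		.toLowerCase()
		.replace(/^[^/]+\//, "") // drop a provider prefix such as "openai/" or "anthropic/"
		.replace(/-\d{4}-?\d{2}-?\d{2}$/, "") // drop a trailing date suffix such as "-2024-08-06" or "-20241022"
}

// Example: both "anthropic/claude-3-5-sonnet-20241022" and "claude-3-5-sonnet"
// normalize to "claude-3-5-sonnet" and can then be matched against a static model map.
```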
Collaborator

Thanks! We'll review and test this soon!
## Summary

This PR enhances the OpenAI Compatible provider settings with:

### 1. Auto-fill Model Capabilities

- Model IDs containing `gemini`, `vision`, `vl`, `omni`, or `gpt-4o` → enables Images support
- Model IDs containing `reasoner`, `thinking`, `r1`, `o1`, or `o3` → enables Reasoning support
- Model IDs containing `computer` → enables Computer Use support (see the sketch below)
- `forceRefresh` flag to bypass cache

### 2. Suppress Unconfigured Provider Errors

- Gemini models are only fetched when `apiProvider === 'gemini'` AND `geminiApiKey` is set
- Ollama models are only fetched when `apiProvider === 'ollama'` AND `ollamaBaseUrl` is set

### 3. Fix JSON Serialization Bug

- Model tiers with `contextWindow: Infinity` are now sanitized before JSON serialization

## Files Changed

- `src/api/providers/openai.ts`: Added tier sanitization in `getOpenAiModelInfo()`
- `src/core/webview/webviewMessageHandler.ts`: Conditional Gemini/Ollama fetching, forceRefresh support, auto-detection heuristics
- `webview-ui/src/components/settings/providers/OpenAICompatible.tsx`: Added forceRefresh flag to Auto-fill button
- `src/package.json`: Version bump to 4.143.21

## Testing

- Auto-fill tested with model IDs like `gemini-3-pro-preview` or `deepseek-r1`

## Related PRs
Fixes #3271
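As a rough sketch of the keyword heuristic behind Auto-fill Model Capabilities (the function name and return shape are assumptions; the keyword lists simply mirror the bullets above):

```typescript
// Hypothetical capability detector; not the exact implementation in this PR.
function detectCapabilities(modelId: string) {
	const id = modelId.toLowerCase()
	const matchesAny = (keywords: string[]) => keywords.some((keyword) => id.includes(keyword))

	return {
		supportsImages: matchesAny(["gemini", "vision", "vl", "omni", "gpt-4o"]),
		supportsReasoning: matchesAny(["reasoner", "thinking", "r1", "o1", "o3"]),
		supportsComputerUse: matchesAny(["computer"]),
	}
}

// detectCapabilities("deepseek-r1")
// -> { supportsImages: false, supportsReasoning: true, supportsComputerUse: false }
```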