Changes from all commits
16 commits
1 change: 1 addition & 0 deletions .changeset/disable-questions-yolo-mode.md
@@ -3,6 +3,7 @@
---

Disable ask_followup_question tool when yolo mode is enabled to prevent the agent from asking itself questions and auto-answering them. Applied to:

- XML tool descriptions (system prompt)
- Native tool filtering
- Tool execution (returns error message if model still tries to use the tool from conversation history)
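The filtering described above amounts to a predicate over the tool list. A minimal sketch, assuming hypothetical names (`ToolDefinition` and `filterToolsForMode` are illustrative, not the actual Kilo Code identifiers):

```typescript
// Hypothetical sketch of yolo-mode tool filtering; the real implementation
// applies this in the system prompt, native tool list, and execution paths.
interface ToolDefinition {
	name: string
	description: string
}

function filterToolsForMode(tools: ToolDefinition[], yoloMode: boolean): ToolDefinition[] {
	if (!yoloMode) return tools
	// In yolo mode the agent must not ask (and then auto-answer) its own questions.
	return tools.filter((t) => t.name !== "ask_followup_question")
}

const tools: ToolDefinition[] = [
	{ name: "read_file", description: "Read a file" },
	{ name: "ask_followup_question", description: "Ask the user a question" },
]

// In yolo mode only read_file survives; otherwise both tools are offered.
const yoloTools = filterToolsForMode(tools, true)
```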
10 changes: 10 additions & 0 deletions .changeset/openai-compatible-settings-improvements.md
@@ -0,0 +1,10 @@
---
"kilo-code": minor
---

Add reasoning and capability controls for OpenAI Compatible models

- Added checkboxes for 'Supports Reasoning', 'Supports Function Calling', and 'Supports Computer Use' to the OpenAI Compatible settings UI.
- Compacted the capability checkboxes into a 2-column grid layout with tooltip-only descriptions.
- Updated OpenAiHandler to inject the 'thinking' parameter when reasoning is enabled and the model supports it.
- Gated tool inclusion based on the 'supportsNativeTools' flag.
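A rough sketch of how the two new gates might compose into the request body; the field names follow the flags above, but the types are simplified stand-ins for the real handler:

```typescript
// Simplified stand-ins: the real handler builds a full OpenAI chat request,
// only the two new conditional spreads are sketched here.
interface CustomModelInfo {
	supportsReasoningBinary?: boolean
	supportsNativeTools?: boolean
}

function buildRequestExtras(
	info: CustomModelInfo,
	reasoningEnabled: boolean,
	tools?: unknown[],
): Record<string, unknown> {
	return {
		// Inject `thinking` only when the user enabled reasoning AND the model supports it.
		...(reasoningEnabled && info.supportsReasoningBinary ? { thinking: { type: "enabled" } } : {}),
		// Include tools unless the model is explicitly flagged as not supporting them.
		...(tools && info.supportsNativeTools !== false ? { tools } : {}),
	}
}

// Reasoning-capable model with native tools disabled: thinking is injected, tools are dropped.
const extras = buildRequestExtras({ supportsReasoningBinary: true, supportsNativeTools: false }, true, [{}])
```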
1 change: 1 addition & 0 deletions .github/workflows/code-qa.yml
@@ -186,6 +186,7 @@ jobs:
libxkbfile-dev \
pkg-config \
build-essential \
libkrb5-dev \
python3
- name: Turbo cache setup
uses: actions/cache@v4
1 change: 1 addition & 0 deletions .github/workflows/marketplace-publish.yml
@@ -201,6 +201,7 @@ jobs:
libxkbfile-dev \
pkg-config \
build-essential \
libkrb5-dev \
python3
- name: Turbo cache setup
uses: actions/cache@v4
4 changes: 4 additions & 0 deletions AGENTS.md
@@ -14,6 +14,7 @@ This is a pnpm monorepo using Turbo for task orchestration:
- **`apps/`** - E2E tests, Storybook, docs

Key source directories:

- `src/api/providers/` - AI provider implementations (50+ providers)
- `src/core/tools/` - Tool implementations (ReadFile, ApplyDiff, ExecuteCommand, etc.)
- `src/services/` - Services (MCP, browser, checkpoints, code-index)
@@ -67,11 +68,13 @@ Kilo Code is a fork of [Roo Code](https://github.com/RooVetGit/Roo-Code). We per
To minimize merge conflicts when syncing with upstream, mark Kilo Code-specific changes in shared code with `kilocode_change` comments.

**Single line:**

```typescript
const value = 42 // kilocode_change
```

**Multi-line:**

```typescript
// kilocode_change start
const foo = 1
@@ -80,6 +83,7 @@
const bar = 2
```

**New files:**

```typescript
// kilocode_change - new file
```
4 changes: 4 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,9 @@
# kilo-code

## 4.143.21

- Fix JetBrains plugin build by handling missing gradle.properties in CI check script. Thanks @janpaul123, @kevinvandijk, @chrarnoldus, @hassoncs, @markijbema, @catrielmuller! - Fixes #3271

## 4.143.2

### Patch Changes
14 changes: 8 additions & 6 deletions apps/kilocode-docs/docs/advanced-usage/agent-manager.md
@@ -90,20 +90,22 @@ The Agent Manager requires proper authentication for full functionality, includi
### Supported Authentication Methods

1. **Kilo Code Extension (Recommended)**
- Sign in through the extension settings
- Provides seamless authentication for the Agent Manager
- Enables session syncing and cloud features

- Sign in through the extension settings
- Provides seamless authentication for the Agent Manager
- Enables session syncing and cloud features

2. **CLI with Kilo Code Provider**
- Use the CLI configured with `kilocode` as the provider
- Run `kilocode config` to set up authentication
- See [CLI setup](/cli) for details
- Use the CLI configured with `kilocode` as the provider
- Run `kilocode config` to set up authentication
- See [CLI setup](/cli) for details

### BYOK Limitations

**Important:** Bring Your Own Key (BYOK) is not yet supported with the Agent Manager.

If you're using BYOK with providers like Anthropic, OpenAI, or OpenRouter:

- The Agent Manager will not have access to cloud-synced sessions
- Session syncing features will be unavailable
- You must use one of the supported authentication methods above for full functionality
2 changes: 1 addition & 1 deletion apps/kilocode-docs/docs/providers/bedrock.md
@@ -4,7 +4,7 @@ sidebar_label: AWS Bedrock

# Using AWS Bedrock With Kilo Code

Kilo Code supports accessing models through Amazon Bedrock, a fully managed service that makes a selection of high-performing foundation models (FMs) from leading AI companies available via a single API. This provider connects directly to AWS Bedrock and authenticates with the provided credentials.
Kilo Code supports accessing models through Amazon Bedrock, a fully managed service that makes a selection of high-performing foundation models (FMs) from leading AI companies available via a single API. This provider connects directly to AWS Bedrock and authenticates with the provided credentials.

**Website:** [https://aws.amazon.com/bedrock/](https://aws.amazon.com/bedrock/)

30 changes: 17 additions & 13 deletions cli/docs/DEVELOPMENT.md
@@ -7,25 +7,29 @@ We use `pnpm` for package management. Please make sure `pnpm` is installed.
The CLI is currently built by bundling the extension core and replacing the VS Code rendering parts with a CLI rendering engine. To _develop_ on the CLI you need to follow a few steps:

1. Install dependencies from the root workspace folder:
```bash
pnpm install
```

```bash
pnpm install
```

2. Set up your environment file. Copy the sample and configure your API keys:
```bash
cp .env.sample cli/dist/.env
# Edit cli/dist/.env with your API keys
```

```bash
cp .env.sample cli/dist/.env
# Edit cli/dist/.env with your API keys
```

3. Build the extension core from the root workspace folder:
```bash
pnpm cli:bundle
```

```bash
pnpm cli:bundle
```

4. Change into the cli folder:
```bash
cd ./cli
```

```bash
cd ./cli
```

5. Build & run the extension by running `pnpm start:dev`. If you want to use the CLI to work on its own code, you can run `pnpm start:dev -w ../` which will start it within the root workspace folder.

45 changes: 45 additions & 0 deletions docs/context-window-autofill.md
@@ -0,0 +1,45 @@
# Context Window Auto-fill Feature

## Objective

Implement an auto-fill feature for the context window and other model capabilities in the OpenAI Compatible settings.

## Changes

### Backend

1. **`src/shared/WebviewMessage.ts`**:

- Added `requestOpenAiModelInfo` to `WebviewMessage` type.
- This message allows the frontend to request model information based on the selected model ID.

2. **`src/shared/ExtensionMessage.ts`**:

- Added `openAiModelInfo` to `ExtensionMessage` type.
- This property carries the `ModelInfo` payload back to the frontend.

3. **`src/api/providers/openai.ts`**:

- Imported known model maps (`openAiNativeModels`, `anthropicModels`, etc.) from `@roo-code/types`.
- Added `getOpenAiModelInfo(modelId: string)` helper function.
- This function iterates through known model maps to find and return the `ModelInfo` for a given model ID.

4. **`src/core/webview/webviewMessageHandler.ts`**:
- Added a handler for `requestOpenAiModelInfo`.
- It calls `getOpenAiModelInfo` and sends back an `openAiModelInfo` message with the result.
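The backend round trip described above can be sketched end to end; the message shapes are simplified stand-ins for the real `WebviewMessage`/`ExtensionMessage` types, which carry more fields:

```typescript
// Simplified message shapes standing in for the real shared types.
type ModelInfo = { contextWindow: number; supportsImages?: boolean }
type WebviewMessage = { type: "requestOpenAiModelInfo"; modelId: string }
type ExtensionMessage = { type: "openAiModelInfo"; openAiModelInfo?: ModelInfo }

// Stand-in for the lookup performed by getOpenAiModelInfo in openai.ts.
const knownModels: Record<string, ModelInfo> = {
	"gpt-4o": { contextWindow: 128_000, supportsImages: true },
}

// Mirrors the new webviewMessageHandler case: look up, then reply.
function handleRequestOpenAiModelInfo(msg: WebviewMessage): ExtensionMessage {
	return { type: "openAiModelInfo", openAiModelInfo: knownModels[msg.modelId] }
}

const reply = handleRequestOpenAiModelInfo({ type: "requestOpenAiModelInfo", modelId: "gpt-4o" })
```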

### Frontend

1. **`webview-ui/src/i18n/locales/en/settings.json`**:

- Added `"autoFill": "Auto-fill"` translation key.

2. **`webview-ui/src/components/settings/providers/OpenAICompatible.tsx`**:
- Imported `vscode` utility for message passing.
- Implemented `handleAutoFill` function that sends `requestOpenAiModelInfo`.
- Added a listener in `onMessage` to handle `openAiModelInfo` response and update `openAiCustomModelInfo` state.
- Added an "Auto-fill" button in the "Model Capabilities" section header.
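The webview side can be approximated as follows; `vscode.postMessage` is the standard webview bridge, while the state variable and message shapes are simplified assumptions:

```typescript
type ModelInfo = { contextWindow?: number; supportsImages?: boolean }

// Stub for the handle returned by acquireVsCodeApi() in a real webview.
const sent: unknown[] = []
const vscode = { postMessage: (msg: unknown) => sent.push(msg) }

let openAiCustomModelInfo: ModelInfo | undefined

// Fired by the "Auto-fill" button: ask the extension host for model info.
function handleAutoFill(modelId: string) {
	vscode.postMessage({ type: "requestOpenAiModelInfo", modelId })
}

// Listener for the extension's reply; updates the capability state.
function onMessage(msg: { type: string; openAiModelInfo?: ModelInfo }) {
	if (msg.type === "openAiModelInfo" && msg.openAiModelInfo) {
		openAiCustomModelInfo = msg.openAiModelInfo
	}
}

handleAutoFill("gpt-4o")
onMessage({ type: "openAiModelInfo", openAiModelInfo: { contextWindow: 128_000 } })
```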

## Verification

- Ran `pnpm check-types` successfully, confirming type safety across the monorepo.
15 changes: 15 additions & 0 deletions jetbrains/scripts/check-dependencies.js
@@ -309,6 +309,21 @@ function checkBuildSystem() {
const gradlew = path.join(pluginDir, process.platform === "win32" ? "gradlew.bat" : "gradlew")
const buildGradle = path.join(pluginDir, "build.gradle.kts")
const gradleProps = path.join(pluginDir, "gradle.properties")
const gradlePropsTemplate = path.join(pluginDir, "gradle.properties.template")

// Auto-generate gradle.properties from template if missing
if (!fs.existsSync(gradleProps) && fs.existsSync(gradlePropsTemplate)) {
try {
printWarning("gradle.properties is missing, generating from template...")
let content = fs.readFileSync(gradlePropsTemplate, "utf8")
// Use a default version for CI check - strict version sync happens later via sync:version
content = content.replace("{{VERSION}}", "0.0.0-dev")
fs.writeFileSync(gradleProps, content)
printFix("Generated gradle.properties from template")
} catch (error) {
printError(`Failed to generate gradle.properties: ${error.message}`)
}
}

if (fs.existsSync(gradlew) && fs.existsSync(buildGradle) && fs.existsSync(gradleProps)) {
printSuccess("Gradle build system is configured")
129 changes: 109 additions & 20 deletions src/api/providers/openai.ts
@@ -8,6 +8,14 @@ import {
openAiModelInfoSaneDefaults,
DEEP_SEEK_DEFAULT_TEMPERATURE,
OPENAI_AZURE_AI_INFERENCE_PATH,
openAiNativeModels,
anthropicModels,
geminiModels,
mistralModels,
deepSeekModels,
qwenCodeModels,
vertexModels,
bedrockModels,
} from "@roo-code/types"

import type { ApiHandlerOptions } from "../../shared/api"
@@ -164,11 +172,17 @@ export class OpenAiHandler extends BaseProvider implements SingleCompletionHandl
stream: true as const,
...(isGrokXAI ? {} : { stream_options: { include_usage: true } }),
...(reasoning && reasoning),
...(metadata?.tools && { tools: this.convertToolsForOpenAI(metadata.tools) }),
...(metadata?.tool_choice && { tool_choice: metadata.tool_choice }),
...(metadata?.toolProtocol === "native" && {
parallel_tool_calls: metadata.parallelToolCalls ?? false,
}),
...((this.options.enableReasoningEffort && modelInfo.supportsReasoningBinary
? { thinking: { type: "enabled" } }
: {}) as any),
...(metadata?.tools &&
modelInfo.supportsNativeTools !== false && { tools: this.convertToolsForOpenAI(metadata.tools) }),
...(metadata?.tool_choice &&
modelInfo.supportsNativeTools !== false && { tool_choice: metadata.tool_choice }),
...(metadata?.toolProtocol === "native" &&
modelInfo.supportsNativeTools !== false && {
parallel_tool_calls: metadata.parallelToolCalls ?? false,
}),
}

// Add max_tokens if needed
@@ -251,11 +265,17 @@ export class OpenAiHandler extends BaseProvider implements SingleCompletionHandl
: enabledLegacyFormat
? [systemMessage, ...convertToSimpleMessages(messages)]
: [systemMessage, ...convertToOpenAiMessages(messages)],
...(metadata?.tools && { tools: this.convertToolsForOpenAI(metadata.tools) }),
...(metadata?.tool_choice && { tool_choice: metadata.tool_choice }),
...(metadata?.toolProtocol === "native" && {
parallel_tool_calls: metadata.parallelToolCalls ?? false,
}),
...(metadata?.tools &&
modelInfo.supportsNativeTools !== false && { tools: this.convertToolsForOpenAI(metadata.tools) }),
...(metadata?.tool_choice &&
modelInfo.supportsNativeTools !== false && { tool_choice: metadata.tool_choice }),
...(metadata?.toolProtocol === "native" &&
modelInfo.supportsNativeTools !== false && {
parallel_tool_calls: metadata.parallelToolCalls ?? false,
}),
...((this.options.enableReasoningEffort && modelInfo.supportsReasoningBinary
? { thinking: { type: "enabled" } }
: {}) as any),
}

// Add max_tokens if needed
@@ -387,11 +407,17 @@ export class OpenAiHandler extends BaseProvider implements SingleCompletionHandl
...(isGrokXAI ? {} : { stream_options: { include_usage: true } }),
reasoning_effort: modelInfo.reasoningEffort as "low" | "medium" | "high" | undefined,
temperature: undefined,
...(metadata?.tools && { tools: this.convertToolsForOpenAI(metadata.tools) }),
...(metadata?.tool_choice && { tool_choice: metadata.tool_choice }),
...(metadata?.toolProtocol === "native" && {
parallel_tool_calls: metadata.parallelToolCalls ?? false,
}),
...(metadata?.tools &&
modelInfo.supportsNativeTools !== false && { tools: this.convertToolsForOpenAI(metadata.tools) }),
...(metadata?.tool_choice &&
modelInfo.supportsNativeTools !== false && { tool_choice: metadata.tool_choice }),
...(metadata?.toolProtocol === "native" &&
modelInfo.supportsNativeTools !== false && {
parallel_tool_calls: metadata.parallelToolCalls ?? false,
}),
...((this.options.enableReasoningEffort && modelInfo.supportsReasoningBinary
? { thinking: { type: "enabled" } }
: {}) as any),
}

// O3 family models do not support the deprecated max_tokens parameter
@@ -422,11 +448,17 @@ export class OpenAiHandler extends BaseProvider implements SingleCompletionHandl
],
reasoning_effort: modelInfo.reasoningEffort as "low" | "medium" | "high" | undefined,
temperature: undefined,
...(metadata?.tools && { tools: this.convertToolsForOpenAI(metadata.tools) }),
...(metadata?.tool_choice && { tool_choice: metadata.tool_choice }),
...(metadata?.toolProtocol === "native" && {
parallel_tool_calls: metadata.parallelToolCalls ?? false,
}),
...(metadata?.tools &&
modelInfo.supportsNativeTools !== false && { tools: this.convertToolsForOpenAI(metadata.tools) }),
...(metadata?.tool_choice &&
modelInfo.supportsNativeTools !== false && { tool_choice: metadata.tool_choice }),
...(metadata?.toolProtocol === "native" &&
modelInfo.supportsNativeTools !== false && {
parallel_tool_calls: metadata.parallelToolCalls ?? false,
}),
...((this.options.enableReasoningEffort && modelInfo.supportsReasoningBinary
? { thinking: { type: "enabled" } }
: {}) as any),
}

// O3 family models do not support the deprecated max_tokens parameter
@@ -574,3 +606,60 @@ export async function getOpenAiModels(baseUrl?: string, apiKey?: string, openAiH
return []
}
}

export function getOpenAiModelInfo(modelId: string): ModelInfo | undefined {
const models: Record<string, ModelInfo>[] = [
openAiNativeModels,
anthropicModels,
geminiModels,
mistralModels,
deepSeekModels,
qwenCodeModels,
vertexModels,
bedrockModels,
]

// Helper function to sanitize and return model info
const sanitizeModelInfo = (info: ModelInfo): ModelInfo => {
if (info.tiers) {
return {
...info,
tiers: info.tiers.map((tier) => ({
...tier,
// Replace Infinity/null with Number.MAX_SAFE_INTEGER (essentially unlimited)
contextWindow:
tier.contextWindow === Infinity || tier.contextWindow === null
? Number.MAX_SAFE_INTEGER
: tier.contextWindow,
})),
}
}
return info
}

// Try exact match first
for (const modelMap of models) {
if (modelId in modelMap) {
return sanitizeModelInfo(modelMap[modelId])
}
}

// Normalize search ID: remove provider prefix (e.g., "google/", "anthropic/") and convert to lowercase
const normalizedSearchId = modelId.replace(/^[a-z-]+\//i, "").toLowerCase()

// Try fuzzy matching: find models where the key contains the normalized search ID
// or where the normalized search ID contains the model key's base name
for (const modelMap of models) {
const keys = Object.keys(modelMap)
for (const key of keys) {
const normalizedKey = key.toLowerCase()
// Check if key contains search term or search term contains key's base (without date suffix)
const keyBase = normalizedKey.replace(/-\d{8}$/, "") // Remove date suffixes like -20241022
if (normalizedKey.includes(normalizedSearchId) || normalizedSearchId.includes(keyBase)) {
return sanitizeModelInfo(modelMap[key])
}
}
}

return undefined
}