13 changes: 12 additions & 1 deletion .env.example
@@ -4,7 +4,7 @@ DATABASE_POOL_SIZE=10

# LLM Provider
# LLM_BACKEND=nearai # default
# Possible values: nearai, ollama, openai_compatible, openai, anthropic, tinfoil
# Possible values: nearai, ollama, openai_compatible, openai, anthropic, github_copilot, tinfoil
# LLM_REQUEST_TIMEOUT_SECS=120 # Increase for local LLMs (Ollama, vLLM, LM Studio)

# === Anthropic Direct ===
@@ -19,6 +19,17 @@ DATABASE_POOL_SIZE=10
# === OpenAI Direct ===
# OPENAI_API_KEY=sk-...

# === GitHub Copilot ===
# Uses the OAuth token from your Copilot IDE sign-in (for example
# ~/.config/github-copilot/apps.json on Linux/macOS), or run `ironclaw onboard`
# and choose the GitHub device login flow.
# LLM_BACKEND=github_copilot
# GITHUB_COPILOT_TOKEN=gho_...
# GITHUB_COPILOT_MODEL=gpt-4o
# IronClaw injects standard VS Code Copilot headers automatically.
# Optional advanced headers for custom overrides:
# GITHUB_COPILOT_EXTRA_HEADERS=Copilot-Integration-Id:vscode-chat

# === NEAR AI (Chat Completions API) ===
# Two auth modes:
# 1. Session token (default): Uses browser OAuth (GitHub/Google) on first run.
1 change: 1 addition & 0 deletions FEATURE_PARITY.md
@@ -242,6 +242,7 @@ This document tracks feature parity between IronClaw (Rust implementation) and O
| OpenRouter | ✅ | ✅ | - | Via OpenAI-compatible provider (RigAdapter) |
| Tinfoil | ❌ | ✅ | - | Private inference provider (IronClaw-only) |
| OpenAI-compatible | ❌ | ✅ | - | Generic OpenAI-compatible endpoint (RigAdapter) |
| GitHub Copilot | ✅ | ✅ | - | Dedicated provider with OAuth token exchange (`GithubCopilotProvider`) |
| Ollama (local) | ✅ | ✅ | - | via `rig::providers::ollama` (full support) |
| Perplexity | ✅ | ❌ | P3 | Freshness parameter for web_search |
| MiniMax | ✅ | ❌ | P3 | Regional endpoint selection |
2 changes: 1 addition & 1 deletion README.md
@@ -167,7 +167,7 @@ written to `~/.ironclaw/.env` so they are available before the database connects
### Alternative LLM Providers

IronClaw defaults to NEAR AI but works with any OpenAI-compatible endpoint.
Popular options include **OpenRouter** (300+ models), **Together AI**, **Fireworks AI**,
Popular options include **OpenRouter** (300+ models), **Together AI**, **Fireworks AI**, **GitHub Copilot**,
**Ollama** (local), and self-hosted servers like **vLLM** or **LiteLLM**.

Select *"OpenAI-compatible"* in the wizard, or set environment variables directly:
4 changes: 3 additions & 1 deletion README.zh-CN.md
@@ -164,7 +164,9 @@ ironclaw onboard
### Alternative LLM Providers

IronClaw defaults to NEAR AI but works with any OpenAI-compatible endpoint.
Popular options include **OpenRouter** (300+ models), **Together AI**, **Fireworks AI**, **Ollama** (local), and self-hosted servers such as **vLLM** or **LiteLLM**.
Popular options include **OpenRouter** (300+ models), **Together AI**, **Fireworks AI**, **GitHub Copilot**, **Ollama** (local), and self-hosted servers such as **vLLM** or **LiteLLM**.

For GitHub Copilot, `ironclaw onboard` can now obtain and store a token through the GitHub device login flow.

Select *"OpenAI-compatible"* in the wizard, or set environment variables directly:

29 changes: 29 additions & 0 deletions docs/LLM_PROVIDERS.md
@@ -17,6 +17,7 @@ configurations.
| Yandex AI Studio | `yandex` | `YANDEX_API_KEY` | YandexGPT models |
| MiniMax | `minimax` | `MINIMAX_API_KEY` | MiniMax-M2.5 models |
| Cloudflare Workers AI | `cloudflare` | `CLOUDFLARE_API_KEY` | Access to Workers AI |
| GitHub Copilot | `github_copilot` | `GITHUB_COPILOT_TOKEN` | Multiple models (varies by subscription) |
| Ollama | `ollama` | No | Local inference |
| AWS Bedrock | `bedrock` | AWS credentials | Native Converse API |
| OpenRouter | `openai_compatible` | `LLM_API_KEY` | 300+ models |
@@ -61,6 +62,34 @@ Popular models: `gpt-4o`, `gpt-4o-mini`, `o3-mini`

---

## GitHub Copilot

GitHub Copilot exposes a chat completions endpoint at
`https://api.githubcopilot.com`. IronClaw uses that endpoint directly through the
built-in `github_copilot` provider.

```env
LLM_BACKEND=github_copilot
GITHUB_COPILOT_TOKEN=gho_...
GITHUB_COPILOT_MODEL=gpt-4o
# Optional advanced headers if your setup needs them:
# GITHUB_COPILOT_EXTRA_HEADERS=Copilot-Integration-Id:vscode-chat
```

`ironclaw onboard` can acquire this token for you via GitHub device login. If you
have already signed in to Copilot through VS Code or a JetBrains IDE, you can also
reuse the `oauth_token` stored in `~/.config/github-copilot/apps.json`. If you
prefer, `LLM_BACKEND=github-copilot` also works as an alias.
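If you want to script that reuse, here is a naive std-only sketch. It assumes only that `apps.json` contains an `"oauth_token":"..."` field somewhere; a real implementation should parse the JSON properly rather than scan for the key.

```rust
// Naive sketch: pull the first `oauth_token` value out of an apps.json-style
// string. Assumption: the token is stored as `"oauth_token":"..."`; a real
// implementation should use a JSON parser instead of string scanning.
fn extract_oauth_token(apps_json: &str) -> Option<String> {
    let key = "\"oauth_token\"";
    let idx = apps_json.find(key)? + key.len();
    let rest = &apps_json[idx..];
    let open = rest.find('"')? + 1; // opening quote of the value
    let rest = &rest[open..];
    let close = rest.find('"')?; // closing quote of the value
    Some(rest[..close].to_string())
}

fn main() {
    let sample = r#"{"github.com:AppId":{"oauth_token":"gho_example"}}"#;
    assert_eq!(extract_oauth_token(sample).as_deref(), Some("gho_example"));
}
```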

Popular models vary by subscription, but `gpt-4o` is a safe default. Model entry
stays manual for this provider because GitHub Copilot model listing may require
extra integration headers on some clients. IronClaw automatically injects the
standard VS Code identity headers (`User-Agent`, `Editor-Version`,
`Editor-Plugin-Version`, `Copilot-Integration-Id`) and lets you override them with
`GITHUB_COPILOT_EXTRA_HEADERS`.
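The `GITHUB_COPILOT_EXTRA_HEADERS` value uses a comma-separated `Key:Value` format. A simplified sketch of that parsing (the real `parse_extra_headers` in `src/config/llm.rs` also returns a `ConfigError` for malformed pairs, which this sketch silently skips):

```rust
// Simplified sketch of the comma-separated `Key:Value` header format;
// malformed pairs are dropped here, whereas the real parser reports them.
fn parse_headers(raw: &str) -> Vec<(String, String)> {
    raw.split(',')
        .filter_map(|pair| {
            let (key, value) = pair.split_once(':')?;
            Some((key.trim().to_string(), value.trim().to_string()))
        })
        .collect()
}

fn main() {
    let headers = parse_headers("Copilot-Integration-Id:vscode-chat,X-Test:enabled");
    assert_eq!(headers.len(), 2);
    assert_eq!(
        headers[0],
        ("Copilot-Integration-Id".to_string(), "vscode-chat".to_string())
    );
}
```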

---

## Ollama (local)

Install Ollama from [ollama.com](https://ollama.com), pull a model, then:
23 changes: 23 additions & 0 deletions providers.json
@@ -77,6 +77,29 @@
"can_list_models": false
}
},
{
"id": "github_copilot",
"aliases": [
"github-copilot",
"githubcopilot",
"copilot"
],
"protocol": "github_copilot",
"default_base_url": "https://api.githubcopilot.com",
"api_key_env": "GITHUB_COPILOT_TOKEN",
"api_key_required": true,
"model_env": "GITHUB_COPILOT_MODEL",
"default_model": "gpt-4o",
"extra_headers_env": "GITHUB_COPILOT_EXTRA_HEADERS",
"description": "GitHub Copilot Chat API (OAuth token from IDE sign-in)",
"setup": {
"kind": "api_key",
"secret_name": "llm_github_copilot_token",
"key_url": "https://docs.github.com/en/copilot",
"display_name": "GitHub Copilot",
"can_list_models": false
}
},
{
"id": "tinfoil",
"aliases": [],
99 changes: 99 additions & 0 deletions src/config/llm.rs
@@ -298,6 +298,14 @@ impl LlmConfig {
} else {
Vec::new()
};
let extra_headers = if canonical_id == "github_copilot" {
merge_extra_headers(
crate::llm::github_copilot_auth::default_headers(),
extra_headers,
)
} else {
extra_headers
};

// Resolve OAuth token (Anthropic-specific: `claude login` flow).
// Only check for OAuth token when the provider is actually Anthropic.
@@ -379,6 +387,26 @@ fn parse_extra_headers(val: &str) -> Result<Vec<(String, String)>, ConfigError>
Ok(headers)
}

fn merge_extra_headers(
defaults: Vec<(String, String)>,
overrides: Vec<(String, String)>,
) -> Vec<(String, String)> {
let mut merged = Vec::new();
let mut positions = std::collections::HashMap::<String, usize>::new();

for (key, value) in defaults.into_iter().chain(overrides) {
let normalized = key.to_ascii_lowercase();
if let Some(existing_index) = positions.get(&normalized).copied() {
merged[existing_index] = (key, value);
} else {
positions.insert(normalized, merged.len());
merged.push((key, value));
}
}

merged
}

/// Get the default session file path (~/.ironclaw/session.json).
pub fn default_session_path() -> PathBuf {
ironclaw_base_dir().join("session.json")
@@ -510,6 +538,29 @@ mod tests {
);
}

#[test]
fn merge_extra_headers_prefers_overrides_case_insensitively() {
let merged = merge_extra_headers(
vec![
("User-Agent".to_string(), "default-agent".to_string()),
("X-Test".to_string(), "default".to_string()),
],
vec![
("user-agent".to_string(), "override-agent".to_string()),
("X-Extra".to_string(), "present".to_string()),
],
);

assert_eq!(
merged,
vec![
("user-agent".to_string(), "override-agent".to_string()),
("X-Test".to_string(), "default".to_string()),
("X-Extra".to_string(), "present".to_string()),
]
);
}

/// Clear all ollama-related env vars.
fn clear_ollama_env() {
// SAFETY: Only called under ENV_MUTEX in tests.
@@ -662,6 +713,54 @@
assert_eq!(provider.protocol, ProviderProtocol::OpenAiCompletions);
}

#[test]
fn registry_provider_resolves_github_copilot_alias() {
let _guard = ENV_MUTEX.lock().expect("env mutex poisoned");
// SAFETY: Under ENV_MUTEX.
unsafe {
std::env::set_var("LLM_BACKEND", "github-copilot");
std::env::set_var("GITHUB_COPILOT_TOKEN", "gho_test_token");
std::env::set_var(
"GITHUB_COPILOT_EXTRA_HEADERS",
"Copilot-Integration-Id:custom-chat,X-Test:enabled",
);
}

let settings = Settings::default();

let cfg = LlmConfig::resolve(&settings).expect("resolve should succeed");
assert_eq!(cfg.backend, "github_copilot");
let provider = cfg.provider.expect("provider config should be present");
assert_eq!(provider.provider_id, "github_copilot");
assert_eq!(provider.base_url, "https://api.githubcopilot.com");
assert_eq!(provider.model, "gpt-4o");
assert!(
provider
.extra_headers
.iter()
.any(|(key, value)| { key == "Copilot-Integration-Id" && value == "custom-chat" })
);
assert!(
provider
.extra_headers
.iter()
.any(|(key, value)| key == "User-Agent" && value == "GitHubCopilotChat/0.26.7")
);
assert!(
provider
.extra_headers
.iter()
.any(|(key, value)| key == "X-Test" && value == "enabled")
);

// SAFETY: Under ENV_MUTEX.
unsafe {
std::env::remove_var("LLM_BACKEND");
std::env::remove_var("GITHUB_COPILOT_TOKEN");
std::env::remove_var("GITHUB_COPILOT_EXTRA_HEADERS");
}
}

#[test]
fn nearai_backend_has_no_registry_provider() {
let _guard = ENV_MUTEX.lock().expect("env mutex poisoned");
17 changes: 17 additions & 0 deletions src/llm/CLAUDE.md
@@ -30,6 +30,7 @@ Set via `LLM_BACKEND` env var:
| `nearai` (default) | NEAR AI Chat Completions | `NEARAI_SESSION_TOKEN` or `NEARAI_API_KEY` |
| `openai` | OpenAI | `OPENAI_API_KEY` |
| `anthropic` | Anthropic | `ANTHROPIC_API_KEY` |
| `github_copilot` | GitHub Copilot Chat API | `GITHUB_COPILOT_TOKEN`, `GITHUB_COPILOT_MODEL` |
| `ollama` | Ollama local | `OLLAMA_BASE_URL` |
| `openai_compatible` | Any OpenAI-compatible endpoint | `LLM_BASE_URL`, `LLM_API_KEY`, `LLM_MODEL` |
| `tinfoil` | Tinfoil TEE inference | `TINFOIL_API_KEY`, `TINFOIL_MODEL` |
@@ -46,6 +47,22 @@ Uses the native Converse API via `aws-sdk-bedrockruntime` (`bedrock.rs`). Requir
- `BEDROCK_MODEL` — Required model ID (e.g., `anthropic.claude-opus-4-6-v1`)
- `BEDROCK_CROSS_REGION` — Optional cross-region inference prefix (`us`, `eu`, `apac`, `global`)

## GitHub Copilot Provider Notes

`github_copilot` is a declarative registry provider backed by the existing
OpenAI-compatible path. It defaults to `https://api.githubcopilot.com` and expects a
GitHub Copilot OAuth token in `GITHUB_COPILOT_TOKEN` (for example the `oauth_token`
stored by your IDE sign-in flow in `~/.config/github-copilot/apps.json`). The setup
wizard also supports GitHub device login using the VS Code Copilot client ID and then
stores the resulting token in the encrypted secrets store.

Manual model entry is used in the setup wizard (`can_list_models = false`) because
GitHub Copilot model discovery can require extra integration headers on some clients.
IronClaw injects the standard VS Code identity headers automatically:
`User-Agent`, `Editor-Version`, `Editor-Plugin-Version`, and
`Copilot-Integration-Id`. Advanced users can still override or append headers via
`GITHUB_COPILOT_EXTRA_HEADERS`.
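For example, overriding one injected header while appending a new one (header values taken from the registry-provider test in `src/config/llm.rs`; the token value is a placeholder):

```env
LLM_BACKEND=github_copilot
GITHUB_COPILOT_TOKEN=gho_...
GITHUB_COPILOT_EXTRA_HEADERS=Copilot-Integration-Id:custom-chat,X-Test:enabled
```

The override matches the injected `Copilot-Integration-Id` case-insensitively and replaces it in place, while `X-Test` is appended after the defaults.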

## NEAR AI Provider Gotchas

**Dual auth modes:**