Add GitHub Copilot as LLM provider #1202
Conversation
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request introduces native support for GitHub Copilot as an LLM provider. It allows users to leverage their Copilot subscription for chat completions by handling the necessary OAuth token exchange and session management. The integration includes a user-friendly setup wizard for authentication and comprehensive documentation updates across the project.

Highlights
Changelog
Activity
Code Review
This pull request introduces GitHub Copilot as a new LLM provider. The changes are comprehensive, including updates to configuration, documentation, and the setup wizard, along with the core implementation for the provider and its authentication flow. The code is well-structured and includes new tests for the added logic. I have one suggestion to improve the user experience during the device login flow by removing an unnecessary initial delay.
```rust
tokio::time::sleep(Duration::from_secs(poll_interval)).await;

match poll_for_access_token(client, &device.device_code).await {
    Ok(DevicePollingStatus::Pending) => {
        consecutive_failures = 0;
    }
    Ok(DevicePollingStatus::SlowDown) => {
        consecutive_failures = 0;
        poll_interval = poll_interval.saturating_add(5);
    }
    Ok(DevicePollingStatus::Authorized(token)) => {
        return Ok(token);
    }
    // Definitive failures — propagate immediately
    Err(GithubCopilotAuthError::AccessDenied) => {
        return Err(GithubCopilotAuthError::AccessDenied);
    }
    Err(GithubCopilotAuthError::Expired) => {
        return Err(GithubCopilotAuthError::Expired);
    }
    // Transient failures — retry with backoff
    Err(e) => {
        consecutive_failures += 1;
        tracing::warn!(
            error = %e,
            attempt = consecutive_failures,
            max = MAX_POLL_FAILURES,
            "Copilot: transient poll failure, will retry"
        );
        if consecutive_failures >= MAX_POLL_FAILURES {
            tracing::error!(
                error = %e,
                "Copilot: too many consecutive poll failures, giving up"
            );
            return Err(e);
        }
        // Back off on transient errors
        poll_interval = (poll_interval + 2).min(30);
    }
}
```
The current implementation of the polling loop in `wait_for_device_login` introduces an unnecessary delay before the first poll attempt by calling `tokio::time::sleep` at the top of the loop. This can negatively impact the user experience during the device login flow.

According to the OAuth 2.0 Device Authorization Grant specification (RFC 8628), clients should poll at the specified interval, but there is no requirement to wait before the first poll.

I suggest refactoring the loop to poll for the token first and then sleep, which makes the first poll immediate and improves the responsiveness of the login process.
```rust
let status = poll_for_access_token(client, &device.device_code).await;
match status {
    Ok(DevicePollingStatus::Pending) => {
        consecutive_failures = 0;
    }
    Ok(DevicePollingStatus::SlowDown) => {
        consecutive_failures = 0;
        poll_interval = poll_interval.saturating_add(5);
    }
    Ok(DevicePollingStatus::Authorized(token)) => {
        return Ok(token);
    }
    // Definitive failures — propagate immediately
    Err(GithubCopilotAuthError::AccessDenied) => {
        return Err(GithubCopilotAuthError::AccessDenied);
    }
    Err(GithubCopilotAuthError::Expired) => {
        return Err(GithubCopilotAuthError::Expired);
    }
    // Transient failures — retry with backoff
    Err(e) => {
        consecutive_failures += 1;
        tracing::warn!(
            error = %e,
            attempt = consecutive_failures,
            max = MAX_POLL_FAILURES,
            "Copilot: transient poll failure, will retry"
        );
        if consecutive_failures >= MAX_POLL_FAILURES {
            tracing::error!(
                error = %e,
                "Copilot: too many consecutive poll failures, giving up"
            );
            return Err(e);
        }
        // Back off on transient errors
        poll_interval = (poll_interval + 2).min(30);
    }
}
tokio::time::sleep(Duration::from_secs(poll_interval)).await;
```
Pull request overview
Adds first-class GitHub Copilot support as an LLM backend, including onboarding via GitHub device login and a dedicated provider implementation that performs Copilot session-token exchange before calling the Copilot Chat Completions API.
Changes:
- Add `github_copilot` to the provider registry/config resolution (including default Copilot identity headers and backend aliases).
- Implement a dedicated `GithubCopilotProvider` + auth/token-exchange module, and wire it into provider creation.
- Extend the setup wizard + docs/tests to support configuring Copilot (device flow or manual token).
Reviewed changes
Copilot reviewed 16 out of 16 changed files in this pull request and generated 6 comments.
Show a summary per file
| File | Description |
|---|---|
| tests/config_round_trip.rs | Includes github_copilot in backend round-trip coverage. |
| src/setup/wizard.rs | Adds wizard flow to configure Copilot via device login or pasted token; saves token to secrets/env overlay. |
| src/setup/README.md | Documents Copilot secrets/env vars and wizard behavior. |
| src/settings.rs | Updates Settings docs to include github_copilot backend. |
| src/llm/registry.rs | Adds ProviderProtocol::GithubCopilot enum variant for registry protocol parsing. |
| src/llm/mod.rs | Wires GithubCopilotProvider into create_registry_provider; exposes auth module internally. |
| src/llm/github_copilot.rs | New provider implementation for Copilot chat completions with automatic session-token exchange. |
| src/llm/github_copilot_auth.rs | New device login + token exchange + cached session token manager + token validation. |
| src/llm/CLAUDE.md | Documents Copilot backend usage and notes. |
| src/config/llm.rs | Merges default Copilot identity headers with user overrides; adds tests for header merging + alias resolution. |
| README.zh-CN.md | Mentions Copilot as an alternative provider and device login via onboard. |
| README.md | Mentions Copilot as an alternative provider. |
| providers.json | Adds github_copilot provider definition (aliases, env vars, defaults, setup hint). |
| FEATURE_PARITY.md | Notes Copilot provider support in Rust implementation. |
| docs/LLM_PROVIDERS.md | Adds Copilot provider docs and example env configuration. |
| .env.example | Adds example Copilot env configuration and notes. |
```rust
if let ContentPart::ImageUrl { image_url } = part {
    parts.push(OpenAiContentPart::ImageUrl {
        image_url: OpenAiImageUrl { url: image_url.url },
    });
```
src/llm/github_copilot_auth.rs (outdated)
```rust
        now = now,
        "Copilot: cached session token expired or expiring soon, refreshing"
    );
} else {
```
```rust
fn truncate_for_error(body: &str) -> String {
    const LIMIT: usize = 200;
    if body.len() <= LIMIT {
        return body.to_string();
    }

    let mut end = LIMIT;
    while end > 0 && !body.is_char_boundary(end) {
        end -= 1;
    }
    format!("{}...", &body[..end])
}
```
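The `is_char_boundary` walk is what keeps the slice from panicking on multibyte UTF-8. The standalone sketch below mirrors the PR's helper, with the limit passed as a parameter purely so the multibyte path is easy to exercise:

```rust
// Boundary-safe truncation, mirroring the PR's `truncate_for_error` but
// with the limit parameterized for demonstration.
fn truncate(body: &str, limit: usize) -> String {
    if body.len() <= limit {
        return body.to_string();
    }
    // `&body[..limit]` panics if `limit` falls inside a multibyte UTF-8
    // sequence, so walk back to the nearest char boundary first.
    let mut end = limit;
    while end > 0 && !body.is_char_boundary(end) {
        end -= 1;
    }
    format!("{}...", &body[..end])
}

fn main() {
    // Short bodies pass through untouched.
    assert_eq!(truncate("abc", 4), "abc");
    // "aéé" is 5 bytes (1 + 2 + 2); a cut at byte 4 would split the
    // second 'é', so the helper backs up to byte 3.
    assert_eq!(truncate("aéé", 4), "aé...");
    println!("ok");
}
```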
`github_copilot` is a declarative registry provider backed by the existing
OpenAI-compatible path. It defaults to `https://api.githubcopilot.com` and expects a
GitHub Copilot OAuth token in `GITHUB_COPILOT_TOKEN` (for example the `oauth_token`
stored by your IDE sign-in flow in `~/.config/github-copilot/apps.json`). The setup
wizard also supports GitHub device login using the VS Code Copilot client ID and then
stores the resulting token in the encrypted secrets store.

Manual model entry is used in the setup wizard (`can_list_models = false`) because
GitHub Copilot model discovery can require extra integration headers on some clients.
IronClaw injects the standard VS Code identity headers automatically:
`User-Agent`, `Editor-Version`, `Editor-Plugin-Version`, and
`Copilot-Integration-Id`. Advanced users can still override or append headers via
`GITHUB_COPILOT_EXTRA_HEADERS`.
README.md (outdated)
```diff
 IronClaw defaults to NEAR AI but works with any OpenAI-compatible endpoint.
-Popular options include **OpenRouter** (300+ models), **Together AI**, **Fireworks AI**,
+Popular options include **OpenRouter** (300+ models), **Together AI**, **Fireworks AI**, **Github Copilot**
```
README.zh-CN.md (outdated)
```diff
 IronClaw defaults to NEAR AI but is compatible with any OpenAI-compatible endpoint.
-Common options include **OpenRouter** (300+ models), **Together AI**, **Fireworks AI**, **Ollama** (local deployment), and self-hosted servers such as **vLLM** or **LiteLLM**.
+Common options include **OpenRouter** (300+ models), **Together AI**, **Fireworks AI**, **Github Copilot**, **Ollama** (local deployment), and self-hosted servers such as **vLLM** or **LiteLLM**.
```
zmanian left a comment

Review: REQUEST CHANGES
Good first contribution -- well-structured, follows existing LLM provider patterns, no new dependencies. But critical issues need resolution before merge.
Critical
C1: OAuth token stored as plain `String`

`CopilotTokenManager.oauth_token` should use `secrecy::SecretString`, not a plain `String` that persists unprotected in memory. The same applies to `CachedCopilotToken.token`. Call `.expose_secret()` only at the point of use (HTTP header injection).
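The requested pattern can be sketched without the crate itself. The wrapper below is a minimal stand-in for `secrecy::SecretString` (redacted `Debug`, explicit accessor); the `CopilotTokenManager` shape here is hypothetical and only illustrates where `.expose_secret()` would be called:

```rust
use std::fmt;

// Minimal stand-in for secrecy::SecretString: the token never appears in
// Debug output or logs, and callers must opt in explicitly to read it.
struct SecretToken(String);

impl fmt::Debug for SecretToken {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.write_str("SecretToken([REDACTED])")
    }
}

impl SecretToken {
    fn expose_secret(&self) -> &str {
        &self.0
    }
}

// Hypothetical shape of the manager after the change.
struct CopilotTokenManager {
    oauth_token: SecretToken,
}

fn main() {
    let mgr = CopilotTokenManager {
        oauth_token: SecretToken("ghu_example".to_string()),
    };
    // Debug-formatting the field does not leak the token...
    assert_eq!(format!("{:?}", mgr.oauth_token), "SecretToken([REDACTED])");
    // ...and the raw value is only available at the point of use,
    // e.g. when building the Authorization header.
    let header = format!("Bearer {}", mgr.oauth_token.expose_secret());
    assert_eq!(header, "Bearer ghu_example");
    println!("ok");
}
```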
C2: Hardcoded VS Code OAuth Client ID + identity headers

Uses VS Code Copilot's client ID (`Iv1.b507a08c87ecfe98`) and impersonates `GitHubCopilotChat/0.26.7` / `vscode/1.99.3`. This raises ToS concerns -- GitHub could rotate the client ID at any time, breaking all IronClaw users. Options: (a) register IronClaw's own OAuth app, (b) document the risk and get explicit maintainer sign-off, or (c) remove device login and only support the paste-token flow. Needs a project-level decision.
C3: TOCTOU race in `get_token()`

Multiple concurrent callers can all see an expired token under the read lock, drop it, and then all perform parallel token exchanges. After acquiring the write lock, re-check whether the token was already refreshed:
```rust
let mut guard = self.cached.write().await;
if let Some(ref cached) = *guard {
    if cached.expires_at > now + TOKEN_REFRESH_BUFFER_SECS {
        return Ok(cached.token.clone());
    }
}
// Proceed with exchange...
```

Important
I1: Empty `else {}` block in `get_token()` -- dead code, remove it.

I2: 401 mapped to `LlmError::RequestFailed` instead of `AuthFailed`. This causes wasted retries and miscounts toward the circuit-breaker threshold. After invalidating the cached token (correct), the error should be `AuthFailed` so the retry/circuit-breaker chain handles it properly.
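A sketch of the requested mapping, using the variant names from the review; the status code is shown as a bare `u16` rather than a real HTTP client type, and the message formats are illustrative:

```rust
// Hypothetical error type mirroring the variants named in the review.
#[derive(Debug, PartialEq)]
enum LlmError {
    AuthFailed(String),
    RequestFailed(String),
}

// Classify a Copilot API response status. 401 must surface as AuthFailed
// so the retry/circuit-breaker chain re-authenticates instead of counting
// the failure toward the breaker threshold and retrying pointlessly.
fn classify_error(status: u16, body: &str) -> LlmError {
    match status {
        401 => LlmError::AuthFailed(format!("copilot session rejected: {body}")),
        _ => LlmError::RequestFailed(format!("status {status}: {body}")),
    }
}

fn main() {
    assert!(matches!(classify_error(401, "bad token"), LlmError::AuthFailed(_)));
    assert!(matches!(classify_error(500, "oops"), LlmError::RequestFailed(_)));
    println!("ok");
}
```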
I3: `prepare_github_copilot_setup` manually reimplements the `set_llm_backend_preserving_model` logic instead of calling the existing helper. Could drift if the helper's behavior changes.

I4: No unit tests for the token manager, `poll_for_access_token` parsing, or `wait_for_device_login` timeout/retry logic. Given the "risk: high" label and the complexity of the token-exchange flow, more coverage is warranted.
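One low-cost starting point is to pull the refresh decision into a pure function and test it directly. The function name and buffer value below are illustrative, not taken from the PR:

```rust
// Illustrative refresh-decision logic for the token manager, extracted
// as a pure function so it can be unit-tested without any I/O.
// The buffer value is an assumption, not taken from the PR.
const TOKEN_REFRESH_BUFFER_SECS: u64 = 300;

fn needs_refresh(expires_at: u64, now: u64) -> bool {
    // Refresh when the token is within the buffer window of expiry,
    // not only once it has fully expired.
    expires_at <= now + TOKEN_REFRESH_BUFFER_SECS
}

fn main() {
    // In the real crate these would be #[test] functions.
    assert!(!needs_refresh(10_000, 1_000)); // fresh token is kept
    assert!(needs_refresh(1_200, 1_000)); // expires in 200s, inside 300s buffer
    assert!(needs_refresh(500, 1_000)); // already expired
    println!("ok");
}
```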
CI Concern
Only classify/scope checks visible -- no build/test/clippy. Needs full CI pipeline before merge.
C1: Use `secrecy::SecretString` for `oauth_token` and the cached session token
in `CopilotTokenManager`/`CachedCopilotToken`. Expose only at the HTTP
header injection point via `.expose_secret()`.
C2: Document risks of the hardcoded VS Code OAuth client ID and editor
identity headers (ToS, rotation, staleness). Remove the unreliable
paste-token setup path (`setup_github_copilot_manual_token`).
C3: Fix the TOCTOU race in `get_token()` — re-check token validity after
acquiring the write lock so concurrent callers don't all perform
redundant token exchanges.
I1: Remove the dead empty `else {}` block in `get_token()`.
I2: Map 401 responses to `LlmError::AuthFailed` instead of `RequestFailed`
so retry/circuit-breaker logic handles auth failures correctly.
I3: Replace `prepare_github_copilot_setup()` with a call to the existing
`set_llm_backend_preserving_model()` helper to avoid logic drift.
I4: Add unit tests for `CopilotTokenManager` (caching, invalidation,
expiry/buffer behavior), poll response parsing (all OAuth device
flow states), and `DeviceCodeResponse`/`CopilotTokenResponse` deserialization.
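The C3 re-check can be demonstrated with a std-only model of the token manager (the real code uses `tokio::sync::RwLock` and an HTTP token exchange; names and values here are illustrative). With the second check in place, racing callers perform exactly one refresh:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Arc, RwLock};
use std::thread;

// Simplified model: cached holds (token, expires_at) in seconds.
struct TokenManager {
    cached: RwLock<Option<(String, u64)>>,
    refreshes: AtomicUsize,
    now: u64,
    buffer: u64,
}

impl TokenManager {
    fn get_token(&self) -> String {
        // Fast path: read lock only.
        let cached = self.cached.read().unwrap().clone();
        if let Some((tok, exp)) = cached {
            if exp > self.now + self.buffer {
                return tok;
            }
        }
        // Slow path: write lock, then RE-CHECK — another caller may have
        // refreshed the token while we waited for the lock.
        let mut guard = self.cached.write().unwrap();
        if let Some((tok, exp)) = (*guard).clone() {
            if exp > self.now + self.buffer {
                return tok;
            }
        }
        // Only one caller reaches the actual exchange.
        self.refreshes.fetch_add(1, Ordering::SeqCst);
        let fresh = "fresh-token".to_string();
        *guard = Some((fresh.clone(), self.now + 3600));
        fresh
    }
}

fn main() {
    let mgr = Arc::new(TokenManager {
        cached: RwLock::new(Some(("stale".to_string(), 0))), // already expired
        refreshes: AtomicUsize::new(0),
        now: 1_000,
        buffer: 300,
    });
    let handles: Vec<_> = (0..8)
        .map(|_| {
            let m = Arc::clone(&mgr);
            thread::spawn(move || m.get_token())
        })
        .collect();
    for h in handles {
        assert_eq!(h.join().unwrap(), "fresh-token");
    }
    // Without the re-check, up to 8 exchanges could run; with it, exactly 1.
    assert_eq!(mgr.refreshes.load(Ordering::SeqCst), 1);
    println!("ok");
}
```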
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Pull request overview
Adds first-class GitHub Copilot support as an LLM backend, including a dedicated provider that performs the required OAuth→session-token exchange and wizard-based device-login onboarding.
Changes:
- Introduces a new `github_copilot` provider protocol and implementation (`GithubCopilotProvider`) with token exchange and default Copilot identity headers.
- Extends the onboarding wizard + config resolution to support GitHub Copilot tokens via secrets/env overlay, plus related tests.
- Updates the provider registry (`providers.json`) and documentation to list Copilot as a supported backend.
Reviewed changes
Copilot reviewed 16 out of 16 changed files in this pull request and generated 5 comments.
Show a summary per file
| File | Description |
|---|---|
| tests/config_round_trip.rs | Adds github_copilot to backend round-trip coverage. |
| src/setup/wizard.rs | Adds GitHub Copilot onboarding flow (device login) + preservation tests. |
| src/setup/README.md | Documents Copilot secrets/env wiring and setup behavior. |
| src/settings.rs | Updates llm_backend docs to include github_copilot. |
| src/llm/registry.rs | Adds ProviderProtocol::GithubCopilot. |
| src/llm/mod.rs | Wires registry protocol dispatch to the new Copilot provider. |
| src/llm/github_copilot.rs | Implements GithubCopilotProvider (token exchange + OpenAI chat/tool request shaping). |
| src/llm/github_copilot_auth.rs | Implements device login + token exchange + cached session-token manager. |
| src/llm/CLAUDE.md | Adds Copilot provider notes for maintainers/users. |
| src/config/llm.rs | Merges default Copilot identity headers with user overrides; adds tests. |
| providers.json | Registers github_copilot provider definition, aliases, env vars, defaults. |
| FEATURE_PARITY.md | Adds Copilot entry in parity table. |
| docs/LLM_PROVIDERS.md | Documents configuring Copilot via env vars and onboarding. |
| .env.example | Adds Copilot env var examples. |
| README.md | Lists Copilot among built-in providers. |
| README.zh-CN.md | Lists Copilot among built-in providers (Chinese README). |
switching to a different backend

**GitHub Copilot** (`setup_github_copilot`):
- Offers **GitHub device login** (recommended) or manual token paste
`github_copilot` is a declarative registry provider backed by the existing
OpenAI-compatible path. It defaults to `https://api.githubcopilot.com` and expects a
GitHub Copilot OAuth token in `GITHUB_COPILOT_TOKEN` (for example the `oauth_token`
stored by your IDE sign-in flow in `~/.config/github-copilot/apps.json`). The setup
wizard also supports GitHub device login using the VS Code Copilot client ID and then
stores the resulting token in the encrypted secrets store.
```rust
fn truncate_for_error(body: &str) -> String {
    const LIMIT: usize = 200;
    if body.len() <= LIMIT {
        return body.to_string();
    }

    let mut end = LIMIT;
    while end > 0 && !body.is_char_boundary(end) {
        end -= 1;
    }
    format!("{}...", &body[..end])
}
```
```diff
 IronClaw defaults to NEAR AI but supports many LLM providers out of the box.
 Built-in providers include **Anthropic**, **OpenAI**, **Google Gemini**, **MiniMax**,
-**Mistral**, and **Ollama** (local). OpenAI-compatible services like **OpenRouter**
+**Mistral**, **Github Copilot**, and **Ollama** (local). OpenAI-compatible services like **OpenRouter**
```
```diff
 IronClaw defaults to NEAR AI but supports many LLM providers out of the box.
-Built-in providers include **Anthropic**, **OpenAI**, **Google Gemini**, **MiniMax**, **Mistral**, and **Ollama** (local deployment). It also supports OpenAI-compatible services such as **OpenRouter** (300+ models), **Together AI**, **Fireworks AI**, and self-hosted servers (**vLLM**, **LiteLLM**).
+Built-in providers include **Anthropic**, **OpenAI**, **Google Gemini**, **MiniMax**, **Mistral**, **Github Copilot**, and **Ollama** (local deployment). It also supports OpenAI-compatible services such as **OpenRouter** (300+ models), **Together AI**, **Fireworks AI**, and self-hosted servers (**vLLM**, **LiteLLM**).
```
@zmanian Thanks for reviewing.

I tested copying tokens today and found that it does not work. As a result, if GitHub Copilot auth is kept, it needs device login.

Important
- fixed
- fixed
- should be fixed
- more tests added

CI Concern
It looks like running the CI pipeline requires a maintainer's approval.
Summary
Add GitHub Copilot as provider
The code was vibe-coded with GitHub Copilot; it worked in my local environment.
Change Type
Linked Issue
#80
Validation
- `cargo fmt`
- `cargo clippy --all --benches --tests --examples --all-features`
- [ ] Relevant tests pass:
Manual testing:
See the screenshot: I used the device login scenario and was able to talk to GitHub Copilot with gpt-5-mini in the REPL.
Security Impact
A new secret, the GitHub Copilot token, is added.
Database Impact
None
Blast Radius
Rollback Plan
Review track: