
Add github copilot as LLM provider. #1202

Open

fallenwood wants to merge 4 commits into nearai:staging from fallenwood:github_copilot_provider

Conversation

@fallenwood

Summary

  • Add GitHub Copilot as a provider

  • The code was vibe-coded by GitHub Copilot; it works in my local environment

[screenshot]

Change Type

  • Bug fix
  • New feature
  • Refactor
  • Documentation
  • CI/Infrastructure
  • Security
  • Dependencies

Linked Issue

#80

Validation

  • cargo fmt

  • cargo clippy --all --benches --tests --examples --all-features

  • [ ] Relevant tests pass:

  • Manual testing: see the screenshot; I used the device login flow and was able to talk to GitHub Copilot with gpt-5-mini in the REPL.

Security Impact

A new secret (the GitHub Copilot token) is added.

Database Impact

None

Blast Radius

Rollback Plan


Review track:

Copilot AI review requested due to automatic review settings March 15, 2026 11:50
@github-actions bot added labels: `scope: llm` (LLM integration), `scope: config` (Configuration), `scope: setup` (Onboarding / setup), `scope: docs` (Documentation), `size: XL` (500+ changed lines), `risk: high` (Safety, secrets, auth, or critical infrastructure), `contributor: new` (First-time contributor) on Mar 15, 2026
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces native support for GitHub Copilot as an LLM provider. It allows users to leverage their Copilot subscription for chat completions by handling the necessary OAuth token exchange and session management. The integration includes a user-friendly setup wizard for authentication and comprehensive documentation updates across the project.

Highlights

  • GitHub Copilot Integration: Added GitHub Copilot as a new LLM provider, enabling direct interaction with its chat API.
  • Authentication Flow: Implemented a robust authentication mechanism for GitHub Copilot, supporting both device login and manual token entry, including automatic token exchange and refresh.
  • Configuration and Documentation: Updated configuration files, documentation, and the setup wizard to reflect the new GitHub Copilot provider and guide users through its setup.
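The device-login flow in the highlights boils down to a small decision loop: poll, back off on `slow_down`, stop on approval. A std-only sketch (the enum name `DevicePollingStatus` matches the PR's auth module; `next_poll_interval` is a hypothetical helper, not code from the PR):

```rust
// Sketch of the GitHub device-flow polling decisions. `DevicePollingStatus`
// mirrors the PR's enum; the helper below is illustrative only.

enum DevicePollingStatus {
    Pending,            // user has not approved yet: keep polling
    SlowDown,           // server asked us to widen the interval
    Authorized(String), // approval complete: carries the OAuth token
}

/// Given the last poll result, return the next poll interval in seconds,
/// or None when polling should stop because a token was obtained.
fn next_poll_interval(status: &DevicePollingStatus, current: u64) -> Option<u64> {
    match status {
        DevicePollingStatus::Pending => Some(current),
        // RFC 8628 says to add 5 seconds to the interval on `slow_down`
        DevicePollingStatus::SlowDown => Some(current.saturating_add(5)),
        DevicePollingStatus::Authorized(_) => None,
    }
}

fn main() {
    assert_eq!(next_poll_interval(&DevicePollingStatus::Pending, 5), Some(5));
    assert_eq!(next_poll_interval(&DevicePollingStatus::SlowDown, 5), Some(10));
    let done = DevicePollingStatus::Authorized("gho_example".into());
    assert_eq!(next_poll_interval(&done, 5), None);
}
```

The full loop in the PR additionally tracks consecutive transient failures and caps the interval, as shown in the review excerpts below in this thread.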
Changelog
  • .env.example
    • Added 'github_copilot' as a possible value for LLM_BACKEND.
    • Included a new section detailing GitHub Copilot environment variables and configuration.
  • FEATURE_PARITY.md
    • Updated the feature parity table to include GitHub Copilot as a dedicated provider.
  • README.md
    • Added GitHub Copilot to the list of alternative LLM providers in the English README.
  • README.zh-CN.md
    • Added GitHub Copilot to the list of alternative LLM providers and noted the ironclaw onboard functionality for token acquisition in the Chinese README.
  • docs/LLM_PROVIDERS.md
    • Added GitHub Copilot to the LLM provider table.
    • Included a new, detailed section explaining GitHub Copilot provider notes, token exchange, and header injection.
  • providers.json
    • Added a new JSON entry for the 'github_copilot' provider, defining its aliases, protocol, default base URL, API key, model environment variables, and setup hints.
  • src/config/llm.rs
    • Implemented logic to merge default GitHub Copilot headers with any user-provided extra headers.
    • Added a merge_extra_headers utility function to handle header merging with case-insensitive key comparison.
    • Included a unit test for the merge_extra_headers function.
  • src/llm/CLAUDE.md
    • Added github_copilot to the list of supported LLM backends.
    • Included a new section detailing GitHub Copilot provider notes and authentication specifics.
  • src/llm/github_copilot.rs
    • Added a new module implementing the GithubCopilotProvider trait for GitHub Copilot API interactions.
    • Implemented methods for complete and complete_with_tools requests, handling token exchange and OpenAI-compatible message conversion.
    • Included internal structs for OpenAI API request/response serialization and deserialization.
    • Added unit tests for message conversion and content extraction.
  • src/llm/github_copilot_auth.rs
    • Added a new module for GitHub Copilot authentication, including constants for API endpoints and client IDs.
    • Implemented functions for requesting device codes, polling for access tokens, and waiting for device login completion.
    • Provided a CopilotTokenManager to handle caching and automatic refreshing of Copilot session tokens.
    • Included a validate_token function to verify GitHub OAuth tokens.
    • Added utility functions for error truncation and detailed reqwest error formatting.
    • Included unit tests for default headers and UTF-8 boundary handling in truncation.
  • src/llm/mod.rs
    • Declared and imported the new github_copilot and github_copilot_auth modules.
    • Extended the create_registry_provider function to instantiate GithubCopilotProvider when ProviderProtocol::GithubCopilot is selected.
  • src/llm/registry.rs
    • Added GithubCopilot to the ProviderProtocol enum, defining its protocol for the registry.
  • src/settings.rs
    • Updated the llm_backend documentation to include github_copilot as a supported option.
  • src/setup/README.md
    • Added GitHub Copilot to the table of LLM providers with its secret name and environment variable.
    • Included a new section detailing the setup process for GitHub Copilot, covering device login and manual token entry.
  • src/setup/wizard.rs
    • Updated the setup wizard's introductory text to include GitHub Copilot as an inference provider option.
    • Added a new setup_github_copilot function to guide users through authentication choices (device login or manual token).
    • Implemented setup_github_copilot_device_login for GitHub OAuth device flow.
    • Implemented setup_github_copilot_manual_token for pasting existing Copilot tokens.
    • Added prepare_github_copilot_setup to manage LLM backend and model selection during setup.
    • Implemented save_github_copilot_token to validate, save, and inject the Copilot token.
    • Added unit tests for prepare_github_copilot_setup scenarios.
  • tests/config_round_trip.rs
    • Added github_copilot to the list of LLM backends tested for configuration round-tripping.
Activity
  • The author developed this feature, noting it was 'vibe coded by Github Copilot' and successfully tested locally.
  • Manual testing was performed, specifically using the device login scenario to interact with GitHub Copilot via gpt-5-mini on a repl environment, and a screenshot was provided as evidence.

Contributor

@gemini-code-assist bot left a comment

Code Review

This pull request introduces GitHub Copilot as a new LLM provider. The changes are comprehensive, including updates to configuration, documentation, and the setup wizard, along with the core implementation for the provider and its authentication flow. The code is well-structured and includes new tests for the added logic. I have one suggestion to improve the user experience during the device login flow by removing an unnecessary initial delay.

Comment on lines +222 to +261
        tokio::time::sleep(Duration::from_secs(poll_interval)).await;

        match poll_for_access_token(client, &device.device_code).await {
            Ok(DevicePollingStatus::Pending) => {
                consecutive_failures = 0;
            }
            Ok(DevicePollingStatus::SlowDown) => {
                consecutive_failures = 0;
                poll_interval = poll_interval.saturating_add(5);
            }
            Ok(DevicePollingStatus::Authorized(token)) => {
                return Ok(token);
            }
            // Definitive failures — propagate immediately
            Err(GithubCopilotAuthError::AccessDenied) => {
                return Err(GithubCopilotAuthError::AccessDenied);
            }
            Err(GithubCopilotAuthError::Expired) => {
                return Err(GithubCopilotAuthError::Expired);
            }
            // Transient failures — retry with backoff
            Err(e) => {
                consecutive_failures += 1;
                tracing::warn!(
                    error = %e,
                    attempt = consecutive_failures,
                    max = MAX_POLL_FAILURES,
                    "Copilot: transient poll failure, will retry"
                );
                if consecutive_failures >= MAX_POLL_FAILURES {
                    tracing::error!(
                        error = %e,
                        "Copilot: too many consecutive poll failures, giving up"
                    );
                    return Err(e);
                }
                // Back off on transient errors
                poll_interval = (poll_interval + 2).min(30);
            }
        }
Contributor

medium

The current implementation of the polling loop in wait_for_device_login introduces an unnecessary delay before the first poll attempt by calling tokio::time::sleep at the beginning of the loop. This can negatively impact the user experience during the device login flow.

According to the OAuth 2.0 Device Authorization Grant specification, clients should poll at the specified interval, but there's no requirement to wait before the first poll.

I suggest refactoring the loop to poll for the token first and then sleep, which will make the first poll immediate and improve the responsiveness of the login process.

        let status = poll_for_access_token(client, &device.device_code).await;

        match status {
            Ok(DevicePollingStatus::Pending) => {
                consecutive_failures = 0;
            }
            Ok(DevicePollingStatus::SlowDown) => {
                consecutive_failures = 0;
                poll_interval = poll_interval.saturating_add(5);
            }
            Ok(DevicePollingStatus::Authorized(token)) => {
                return Ok(token);
            }
            // Definitive failures — propagate immediately
            Err(GithubCopilotAuthError::AccessDenied) => {
                return Err(GithubCopilotAuthError::AccessDenied);
            }
            Err(GithubCopilotAuthError::Expired) => {
                return Err(GithubCopilotAuthError::Expired);
            }
            // Transient failures — retry with backoff
            Err(e) => {
                consecutive_failures += 1;
                tracing::warn!(
                    error = %e,
                    attempt = consecutive_failures,
                    max = MAX_POLL_FAILURES,
                    "Copilot: transient poll failure, will retry"
                );
                if consecutive_failures >= MAX_POLL_FAILURES {
                    tracing::error!(
                        error = %e,
                        "Copilot: too many consecutive poll failures, giving up"
                    );
                    return Err(e);
                }
                // Back off on transient errors
                poll_interval = (poll_interval + 2).min(30);
            }
        }

        tokio::time::sleep(Duration::from_secs(poll_interval)).await;

Contributor

Copilot AI left a comment

Pull request overview

Adds first-class GitHub Copilot support as an LLM backend, including onboarding via GitHub device login and a dedicated provider implementation that performs Copilot session-token exchange before calling the Copilot Chat Completions API.

Changes:

  • Add github_copilot to the provider registry/config resolution (including default Copilot identity headers and backend aliases).
  • Implement a dedicated GithubCopilotProvider + auth/token-exchange module, and wire it into provider creation.
  • Extend the setup wizard + docs/tests to support configuring Copilot (device flow or manual token).

Reviewed changes

Copilot reviewed 16 out of 16 changed files in this pull request and generated 6 comments.

Show a summary per file

| File | Description |
| --- | --- |
| tests/config_round_trip.rs | Includes github_copilot in backend round-trip coverage. |
| src/setup/wizard.rs | Adds wizard flow to configure Copilot via device login or pasted token; saves token to secrets/env overlay. |
| src/setup/README.md | Documents Copilot secrets/env vars and wizard behavior. |
| src/settings.rs | Updates Settings docs to include github_copilot backend. |
| src/llm/registry.rs | Adds ProviderProtocol::GithubCopilot enum variant for registry protocol parsing. |
| src/llm/mod.rs | Wires GithubCopilotProvider into create_registry_provider; exposes auth module internally. |
| src/llm/github_copilot.rs | New provider implementation for Copilot chat completions with automatic session-token exchange. |
| src/llm/github_copilot_auth.rs | New device login + token exchange + cached session token manager + token validation. |
| src/llm/CLAUDE.md | Documents Copilot backend usage and notes. |
| src/config/llm.rs | Merges default Copilot identity headers with user overrides; adds tests for header merging + alias resolution. |
| README.zh-CN.md | Mentions Copilot as an alternative provider and device login via onboard. |
| README.md | Mentions Copilot as an alternative provider. |
| providers.json | Adds github_copilot provider definition (aliases, env vars, defaults, setup hint). |
| FEATURE_PARITY.md | Notes Copilot provider support in Rust implementation. |
| docs/LLM_PROVIDERS.md | Adds Copilot provider docs and example env configuration. |
| .env.example | Adds example Copilot env configuration and notes. |


Comment on lines +510 to +513

    if let ContentPart::ImageUrl { image_url } = part {
        parts.push(OpenAiContentPart::ImageUrl {
            image_url: OpenAiImageUrl { url: image_url.url },
        });
Comment on lines +466 to +477
fn truncate_for_error(body: &str) -> String {
    const LIMIT: usize = 200;
    if body.len() <= LIMIT {
        return body.to_string();
    }

    let mut end = LIMIT;
    while end > 0 && !body.is_char_boundary(end) {
        end -= 1;
    }
    format!("{}...", &body[..end])
}
Comment on lines +52 to +64
`github_copilot` is a declarative registry provider backed by the existing
OpenAI-compatible path. It defaults to `https://api.githubcopilot.com` and expects a
GitHub Copilot OAuth token in `GITHUB_COPILOT_TOKEN` (for example the `oauth_token`
stored by your IDE sign-in flow in `~/.config/github-copilot/apps.json`). The setup
wizard also supports GitHub device login using the VS Code Copilot client ID and then
stores the resulting token in the encrypted secrets store.

Manual model entry is used in the setup wizard (`can_list_models = false`) because
GitHub Copilot model discovery can require extra integration headers on some clients.
IronClaw injects the standard VS Code identity headers automatically:
`User-Agent`, `Editor-Version`, `Editor-Plugin-Version`, and
`Copilot-Integration-Id`. Advanced users can still override or append headers via
`GITHUB_COPILOT_EXTRA_HEADERS`.
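The override-or-append behavior described in these notes can be sketched std-only. This is a hypothetical simplification: the PR's `merge_extra_headers` in `src/config/llm.rs` does use case-insensitive key comparison per its changelog, but the exact signature below is assumed:

```rust
use std::collections::HashMap;

// Sketch of case-insensitive header merging: user-provided headers win over
// defaults when the key matches, ignoring ASCII case; untouched defaults are
// appended. Hypothetical simplification of the PR's `merge_extra_headers`.
fn merge_extra_headers(
    defaults: &[(&str, &str)],
    user: &HashMap<String, String>,
) -> HashMap<String, String> {
    let mut merged: HashMap<String, String> = user.clone();
    for (key, value) in defaults {
        // Only add the default if no user header matches case-insensitively.
        let exists = merged.keys().any(|k| k.eq_ignore_ascii_case(key));
        if !exists {
            merged.insert((*key).to_string(), (*value).to_string());
        }
    }
    merged
}

fn main() {
    let defaults = [
        ("Editor-Version", "vscode/1.99.3"),
        ("User-Agent", "GitHubCopilotChat/0.26.7"),
    ];
    let mut user = HashMap::new();
    user.insert("editor-version".to_string(), "custom/1.0".to_string());

    let merged = merge_extra_headers(&defaults, &user);
    // The user override wins despite different casing...
    assert_eq!(merged.get("editor-version").map(String::as_str), Some("custom/1.0"));
    assert!(!merged.contains_key("Editor-Version"));
    // ...while defaults with no user counterpart are appended.
    assert_eq!(merged.get("User-Agent").map(String::as_str), Some("GitHubCopilotChat/0.26.7"));
}
```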
README.md Outdated

IronClaw defaults to NEAR AI but works with any OpenAI-compatible endpoint.
Popular options include **OpenRouter** (300+ models), **Together AI**, **Fireworks AI**,
Popular options include **OpenRouter** (300+ models), **Together AI**, **Fireworks AI**, **Github Copilot**
README.zh-CN.md Outdated

IronClaw 默认使用 NEAR AI,但兼容任何 OpenAI 兼容的端点。
常用选项包括 **OpenRouter**(300+ 模型)、**Together AI**、**Fireworks AI**、**Ollama**(本地部署)以及自托管服务器如 **vLLM** 或 **LiteLLM**。
常用选项包括 **OpenRouter**(300+ 模型)、**Together AI**、**Fireworks AI**、**Github Copilot**、**Ollama**(本地部署)以及自托管服务器如 **vLLM** 或 **LiteLLM**。
Collaborator

@zmanian left a comment
The reason will be displayed to describe this comment to others. Learn more.

Review: REQUEST CHANGES

Good first contribution -- well-structured, follows existing LLM provider patterns, no new dependencies. But critical issues need resolution before merge.

Critical

C1: OAuth token stored as plain String
CopilotTokenManager.oauth_token should use secrecy::SecretString, not a plain String that persists unprotected in memory. Same applies to CachedCopilotToken.token. Call .expose_secret() only at point of use (HTTP header injection).
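The pattern the `secrecy` crate enforces can be illustrated with a hand-rolled, std-only stand-in (this is NOT the crate's actual implementation, just the idea):

```rust
use std::fmt;

// Minimal stand-in for the secrecy::SecretString pattern: the token never
// leaks through Debug, and reading it requires an explicit, greppable call.
struct SecretString(String);

impl SecretString {
    fn new(s: impl Into<String>) -> Self {
        SecretString(s.into())
    }
    /// Deliberately loud accessor: call only at the point of use,
    /// e.g. when building the Authorization header.
    fn expose_secret(&self) -> &str {
        &self.0
    }
}

impl fmt::Debug for SecretString {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.write_str("SecretString([REDACTED])")
    }
}

fn main() {
    let token = SecretString::new("gho_example_token");
    // Debug output leaks nothing even if the struct lands in a log line.
    assert_eq!(format!("{:?}", token), "SecretString([REDACTED])");
    // The raw value is only visible through the explicit accessor.
    assert_eq!(token.expose_secret(), "gho_example_token");
}
```

The real crate additionally zeroizes the buffer on drop, which a hand-rolled wrapper like this does not.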

C2: Hardcoded VS Code OAuth Client ID + identity headers
Uses VS Code Copilot's client ID (Iv1.b507a08c87ecfe98) and impersonates GitHubCopilotChat/0.26.7 / vscode/1.99.3. This raises ToS concerns -- GitHub could rotate the client ID at any time, breaking all IronClaw users. Options: (a) register IronClaw's own OAuth app, (b) document the risk and get explicit maintainer sign-off, or (c) remove device login and only support paste-token flow. Needs a project-level decision.

C3: TOCTOU race in get_token()
Multiple concurrent callers can all see an expired token under the read lock, drop it, and all perform parallel token exchanges. After acquiring the write lock, re-check if the token was already refreshed:

let mut guard = self.cached.write().await;
if let Some(ref cached) = *guard {
    if cached.expires_at > now + TOKEN_REFRESH_BUFFER_SECS {
        return Ok(cached.token.clone());
    }
}
// Proceed with exchange...
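
This re-check under the write lock is the classic double-checked pattern. A synchronous, std-only sketch (the PR uses tokio's async RwLock; the struct, field, and constant names below are illustrative, and the exchange is simulated):

```rust
use std::sync::RwLock;

const TOKEN_REFRESH_BUFFER_SECS: u64 = 300; // assumed buffer, for illustration

struct Cached {
    token: String,
    expires_at: u64,
}

struct TokenManager {
    cached: RwLock<Option<Cached>>,
    exchanges: RwLock<u32>, // counts how many real exchanges happened
}

impl TokenManager {
    fn get_token(&self, now: u64) -> String {
        // Fast path: read lock only.
        {
            let read = self.cached.read().unwrap();
            if let Some(c) = read.as_ref() {
                if c.expires_at > now + TOKEN_REFRESH_BUFFER_SECS {
                    return c.token.clone();
                }
            }
        } // read lock released here

        // Slow path: take the write lock, then re-check, because another
        // caller may have refreshed the token while we waited for the lock.
        let mut guard = self.cached.write().unwrap();
        if let Some(c) = guard.as_ref() {
            if c.expires_at > now + TOKEN_REFRESH_BUFFER_SECS {
                return c.token.clone();
            }
        }

        // Still stale: perform the (simulated) token exchange exactly once.
        *self.exchanges.write().unwrap() += 1;
        let fresh = Cached { token: "fresh".into(), expires_at: now + 3600 };
        let token = fresh.token.clone();
        *guard = Some(fresh);
        token
    }
}

fn main() {
    let mgr = TokenManager { cached: RwLock::new(None), exchanges: RwLock::new(0) };
    assert_eq!(mgr.get_token(1000), "fresh"); // first call performs the exchange
    assert_eq!(mgr.get_token(1001), "fresh"); // second call hits the cache
    assert_eq!(*mgr.exchanges.read().unwrap(), 1); // only one exchange happened
}
```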

Important

I1: Empty else {} block in get_token() -- dead code, remove it.

I2: 401 mapped to LlmError::RequestFailed instead of AuthFailed. This causes wasted retries and miscounts toward circuit breaker threshold. After invalidating the cached token (correct), the error should be AuthFailed so the retry/circuit breaker chain handles it properly.
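
The suggested mapping can be sketched as follows (the variant names AuthFailed and RequestFailed come from this review; the function and message formats are hypothetical):

```rust
// Sketch: classify Copilot API responses so the retry/circuit-breaker chain
// treats auth failures differently from transient request failures.
#[derive(Debug)]
enum LlmError {
    AuthFailed(String),
    RequestFailed(String),
}

fn classify_error(status: u16, body: &str) -> LlmError {
    match status {
        // 401: the cached session token is no longer valid. Surfacing this as
        // AuthFailed lets the caller invalidate the cache and re-authenticate
        // instead of burning retries against the circuit breaker.
        401 => LlmError::AuthFailed(format!("copilot returned 401: {body}")),
        _ => LlmError::RequestFailed(format!("copilot returned {status}: {body}")),
    }
}

fn main() {
    assert!(matches!(classify_error(401, "bad token"), LlmError::AuthFailed(_)));
    assert!(matches!(classify_error(500, "oops"), LlmError::RequestFailed(_)));
}
```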

I3: prepare_github_copilot_setup manually reimplements set_llm_backend_preserving_model logic instead of calling the existing helper. Could drift if the helper's behavior changes.

I4: No unit tests for token manager, poll_for_access_token parsing, or wait_for_device_login timeout/retry logic. Given the "risk: high" label and complexity of the token exchange flow, more coverage is warranted.

CI Concern

Only classify/scope checks visible -- no build/test/clippy. Needs full CI pipeline before merge.

fallenwood and others added 2 commits March 16, 2026 16:30
C1: Use secrecy::SecretString for oauth_token and cached session token
    in CopilotTokenManager/CachedCopilotToken. Expose only at HTTP
    header injection point via .expose_secret().

C2: Document risks of hardcoded VS Code OAuth client ID and editor
    identity headers (ToS, rotation, staleness). Remove the unreliable
    paste-token setup path (setup_github_copilot_manual_token).

C3: Fix TOCTOU race in get_token() — re-check token validity after
    acquiring write lock so concurrent callers don't all perform
    redundant token exchanges.

I1: Remove dead empty else {} block in get_token().

I2: Map 401 responses to LlmError::AuthFailed instead of RequestFailed
    so retry/circuit-breaker logic handles auth failures correctly.

I3: Replace prepare_github_copilot_setup() with call to existing
    set_llm_backend_preserving_model() helper to avoid logic drift.

I4: Add unit tests for CopilotTokenManager (caching, invalidation,
    expiry/buffer behavior), poll response parsing (all OAuth device
    flow states), and DeviceCodeResponse/CopilotTokenResponse deserialization.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Copilot AI review requested due to automatic review settings March 16, 2026 12:30
Contributor

Copilot AI left a comment

Pull request overview

Adds first-class GitHub Copilot support as an LLM backend, including a dedicated provider that performs the required OAuth→session-token exchange and wizard-based device-login onboarding.

Changes:

  • Introduces a new github_copilot provider protocol and implementation (GithubCopilotProvider) with token exchange and default Copilot identity headers.
  • Extends onboarding wizard + config resolution to support GitHub Copilot tokens via secrets/env overlay, plus related tests.
  • Updates provider registry (providers.json) and documentation to list Copilot as a supported backend.

Reviewed changes

Copilot reviewed 16 out of 16 changed files in this pull request and generated 5 comments.

Show a summary per file

| File | Description |
| --- | --- |
| tests/config_round_trip.rs | Adds github_copilot to backend round-trip coverage. |
| src/setup/wizard.rs | Adds GitHub Copilot onboarding flow (device login) + preservation tests. |
| src/setup/README.md | Documents Copilot secrets/env wiring and setup behavior. |
| src/settings.rs | Updates llm_backend docs to include github_copilot. |
| src/llm/registry.rs | Adds ProviderProtocol::GithubCopilot. |
| src/llm/mod.rs | Wires registry protocol dispatch to the new Copilot provider. |
| src/llm/github_copilot.rs | Implements GithubCopilotProvider (token exchange + OpenAI chat/tool request shaping). |
| src/llm/github_copilot_auth.rs | Implements device login + token exchange + cached session-token manager. |
| src/llm/CLAUDE.md | Adds Copilot provider notes for maintainers/users. |
| src/config/llm.rs | Merges default Copilot identity headers with user overrides; adds tests. |
| providers.json | Registers github_copilot provider definition, aliases, env vars, defaults. |
| FEATURE_PARITY.md | Adds Copilot entry in parity table. |
| docs/LLM_PROVIDERS.md | Documents configuring Copilot via env vars and onboarding. |
| .env.example | Adds Copilot env var examples. |
| README.md | Lists Copilot among built-in providers. |
| README.zh-CN.md | Lists Copilot among built-in providers (Chinese README). |

switching to a different backend

**GitHub Copilot** (`setup_github_copilot`):
- Offers **GitHub device login** (recommended) or manual token paste
Comment on lines +62 to +67
`github_copilot` is a declarative registry provider backed by the existing
OpenAI-compatible path. It defaults to `https://api.githubcopilot.com` and expects a
GitHub Copilot OAuth token in `GITHUB_COPILOT_TOKEN` (for example the `oauth_token`
stored by your IDE sign-in flow in `~/.config/github-copilot/apps.json`). The setup
wizard also supports GitHub device login using the VS Code Copilot client ID and then
stores the resulting token in the encrypted secrets store.
Comment on lines +500 to +511
fn truncate_for_error(body: &str) -> String {
    const LIMIT: usize = 200;
    if body.len() <= LIMIT {
        return body.to_string();
    }

    let mut end = LIMIT;
    while end > 0 && !body.is_char_boundary(end) {
        end -= 1;
    }
    format!("{}...", &body[..end])
}
IronClaw defaults to NEAR AI but supports many LLM providers out of the box.
Built-in providers include **Anthropic**, **OpenAI**, **Google Gemini**, **MiniMax**,
**Mistral**, and **Ollama** (local). OpenAI-compatible services like **OpenRouter**
**Mistral**, **Github Copilot**, and **Ollama** (local). OpenAI-compatible services like **OpenRouter**

IronClaw 默认使用 NEAR AI,但开箱即用地支持多种 LLM 提供商。
内置提供商包括 **Anthropic**、**OpenAI**、**Google Gemini**、**MiniMax**、**Mistral** 和 **Ollama**(本地部署)。同时也支持 OpenAI 兼容服务,如 **OpenRouter**(300+ 模型)、**Together AI**、**Fireworks AI** 以及自托管服务器(**vLLM**、**LiteLLM**)。
内置提供商包括 **Anthropic**、**OpenAI**、**Google Gemini**、**MiniMax**、**Mistral**、**Github Copilot** 和 **Ollama**(本地部署)。同时也支持 OpenAI 兼容服务,如 **OpenRouter**(300+ 模型)、**Together AI**、**Fireworks AI** 以及自托管服务器(**vLLM**、**LiteLLM**)。
@fallenwood
Author

@zmanian Thanks for reviewing,

C1: OAuth token stored as plain String
Fixed

C2: Hardcoded VS Code OAuth Client ID + identity headers

I tested copying tokens today and found that it does not work, so if GitHub Copilot auth is kept it needs device login.
For either (a) or (b), I am not a maintainer so I don't have the permissions; it depends on your feedback.

C3: TOCTOU race in get_token()
Fixed

Important

I1: Empty else {} block in get_token() -- dead code, remove it.

Fixed

I2: 401 mapped to LlmError::RequestFailed instead of AuthFailed. This causes wasted retries and miscounts toward circuit breaker threshold. After invalidating the cached token (correct), the error should be AuthFailed so the retry/circuit breaker chain handles it properly.

Fixed

I3: prepare_github_copilot_setup manually reimplements set_llm_backend_preserving_model logic instead of calling the existing helper. Could drift if the helper's behavior changes.

Should be fixed

I4: No unit tests for token manager, poll_for_access_token parsing, or wait_for_device_login timeout/retry logic. Given the "risk: high" label and complexity of the token exchange flow, more coverage is warranted.

More tests added.

CI Concern

Only classify/scope checks visible -- no build/test/clippy. Needs full CI pipeline before merge.

It looks like a maintainer's approval is needed to run the CI pipeline.

