feat: surface GLM-backed Codex and Claude runtimes #1823
Marve10s wants to merge 7 commits into pingdotgg:main from
Conversation
Add GLM as a Codex-backed provider that routes through a local Responses-to-Chat-Completions bridge. GLM sessions reuse the Codex app-server runtime while presenting as a separate provider in the UI.

- Contracts: add `"glm"` to `ProviderKind` and `ModelSelection`, add `GlmSettings`, and update all `Record<ProviderKind, ...>` exhaustiveness sites.
- Server: `GlmAdapter` (thin `CodexAdapter` delegate), `GlmProvider` (snapshot service checking `GLM_API_KEY`), GLM bridge (loopback HTTP translating Responses <-> Chat Completions), shared `codexLaunchConfig` builder, and text generation routing for GLM.
- Web: GLM in the provider picker, a settings panel with an env var hint, a composer registry entry, model selection config, and `GlmIcon`.
- Tests: 40 new tests covering bridge translation (Responses -> Chat Completions, and Chat Completions streaming -> Responses SSE), the launch config builder, and updates to existing tests for the new provider.
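The request-side half of the bridge translation described above could look roughly like the sketch below. This is an illustrative assumption, not the PR's actual implementation: type shapes are reduced to the minimum, and the function name `toChatCompletions` is hypothetical.

```typescript
// Hypothetical sketch: map a (simplified) Responses API payload onto a
// Chat Completions payload before forwarding it to the GLM endpoint.
// Field coverage and names are assumptions for illustration only.

type ResponsesItem = {
  role: "user" | "assistant" | "system";
  content: { type: string; text?: string }[];
};

type ResponsesRequest = {
  model: string;
  instructions?: string;
  input: ResponsesItem[];
};

type ChatMessage = { role: string; content: string };

type ChatCompletionsRequest = {
  model: string;
  messages: ChatMessage[];
  stream?: boolean;
};

export function toChatCompletions(req: ResponsesRequest): ChatCompletionsRequest {
  const messages: ChatMessage[] = [];

  // Responses "instructions" roughly corresponds to a system message.
  if (req.instructions) {
    messages.push({ role: "system", content: req.instructions });
  }

  for (const item of req.input) {
    // Flatten content parts into the single string Chat Completions expects.
    const text = item.content
      .filter((part) => typeof part.text === "string")
      .map((part) => part.text)
      .join("");
    messages.push({ role: item.role, content: text });
  }

  return { model: req.model, messages, stream: true };
}
```

The streaming response direction (Chat Completions chunks back into Responses SSE events) is the harder half, which is presumably why the PR dedicates most of its 40 new tests to the bridge.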
Yes, I have a lot of motivation to integrate GLM plans into Codex (they work with CC as well, btw).
- `makeGlmAdapterLive` now accepts and plumbs an `options` parameter
- Documented that GLM runtime events flow through the Codex adapter stream with `provider="codex"`; event re-attribution by `ProviderService` based on session directory is a follow-up
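The re-attribution follow-up mentioned above could be sketched like this. Everything here is an assumption for illustration: the event shape, the `reattribute` function, and the idea that GLM-backed sessions live under a dedicated directory segment are not taken from the PR.

```typescript
// Hypothetical sketch: ProviderService could re-tag Codex adapter events as
// "glm" when the session directory marks the session as GLM-backed.
// All names and the directory convention are assumptions for illustration.

type ProviderKind = "codex" | "claude" | "glm";

interface RuntimeEvent {
  provider: ProviderKind;
  sessionDir: string;
  payload: unknown;
}

// Assumed convention: GLM-backed sessions are created under a /glm/ root.
function isGlmSession(sessionDir: string): boolean {
  return sessionDir.includes("/glm/");
}

export function reattribute(event: RuntimeEvent): RuntimeEvent {
  if (event.provider === "codex" && isGlmSession(event.sessionDir)) {
    return { ...event, provider: "glm" };
  }
  return event;
}
```

Keeping this in one place means the Codex adapter stream stays untouched and only the service that fans events out to the UI needs to know GLM exists.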
Addressed both review comments in aaeaf5b. The correct follow-up is to add event re-attribution in `ProviderService`.
I've been using my GLM coding plan with Claude in t3code. |
Summary
What changed
- `CodexProvider` now reads `model_provider` from Codex config, shows `Codex / GLM` when applicable, skips OpenAI login checks for custom backends, and swaps built-in suggestions to GLM models
- `ClaudeProvider` now detects Z.ai's Anthropic-compatible Claude setup from env or `~/.claude/settings.json`, shows `Claude / GLM`, and maps Claude tiers to configured GLM models
- Providers expose a runtime `displayName`, and the web picker, banners, chat labels, and settings cards use that live runtime/backend label
- Removed the separate `glm` provider, GLM bridge, GLM adapter, GLM settings surface, and related routing/contracts that implied a third runtime

User impact
- Codex sessions show `Codex / GLM` and GLM model suggestions instead of generic Codex/OpenAI labeling
- Claude sessions show `Claude / GLM` and the mapped GLM models instead of a misleading Claude-only surface

Validation
- `bun fmt`
- `bun lint`
- `bun typecheck`
- `cd apps/server && bun run test src/provider/Layers/ProviderRegistry.test.ts src/provider/Layers/ProviderAdapterRegistry.test.ts`
- Tests stub `HOME`, `CODEX_HOME`, and `T3CODE_HOME` plus fake GLM-backed configs for Codex and Claude
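The Claude-side detection described under "What changed" could be sketched as follows. The settings-file shape, the env var name, and the `z.ai` host check are assumptions for illustration, not the PR's actual code (Claude Code does support `ANTHROPIC_BASE_URL` and an `env` block in `~/.claude/settings.json`, but the exact fields this PR inspects are not shown in the conversation).

```typescript
// Hypothetical sketch: decide whether a Claude install is GLM-backed by
// checking for a Z.ai Anthropic-compatible base URL, first in the process
// environment, then in ~/.claude/settings.json.

import { readFileSync } from "node:fs";
import { join } from "node:path";
import { homedir } from "node:os";

// Assumed marker: Z.ai's Anthropic-compatible endpoint lives under z.ai.
function looksLikeGlmBackend(baseUrl: string | undefined): boolean {
  return typeof baseUrl === "string" && baseUrl.includes("z.ai");
}

export function detectGlmBackedClaude(
  env: Record<string, string | undefined> = process.env,
): boolean {
  // 1. Environment override takes precedence.
  if (looksLikeGlmBackend(env.ANTHROPIC_BASE_URL)) return true;

  // 2. Fall back to the user's Claude settings file.
  try {
    const raw = readFileSync(join(homedir(), ".claude", "settings.json"), "utf8");
    const settings = JSON.parse(raw) as { env?: Record<string, string> };
    return looksLikeGlmBackend(settings.env?.ANTHROPIC_BASE_URL);
  } catch {
    return false; // missing or unreadable settings: assume plain Claude
  }
}
```

Checking env before the settings file matches the precedence Claude Code itself uses, which is presumably why the tests above stub `HOME` alongside fake GLM-backed configs.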