test: model registry migration and inference routing validation #50
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: e36f8f9098
```js
safety_check: { candidates: ["gpt-5.2", "gpt-5-mini"], maxTokens: 4096, ceilingCents: 10 },
summarization: { candidates: ["gpt-5.2", "gpt-5-mini"], maxTokens: 4096, ceilingCents: 10 },
planning: { candidates: ["gpt-5.2", "gpt-5-mini"], maxTokens: 4096, ceilingCents: -1 },
agent_turn: { candidates: ["glm-5"], maxTokens: 4096, ceilingCents: -1 },
```
Keep a non-BYOK candidate in default routing
Switching every `DEFAULT_ROUTING_MATRIX` candidate to `glm-5` makes the router pick a `zai` model even in non-BYOK setups. However, `resolveInferenceBackend` only treats `openai`/`anthropic`/`ollama`/`other` as routable and otherwise falls back to `byok`, and `chat()` then throws `BYOK inference requires inferenceBaseUrl` when no BYOK URL is configured. In a standard OpenAI-key deployment (no `inferenceBaseUrl`), agent turns will repeatedly fail instead of completing inference.
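The failure mode described above can be sketched in a few lines. This is a minimal, hypothetical reconstruction (the type shapes and routable-provider list are assumptions, not the project's actual code); it only illustrates why a provider outside the routable set falls through to `byok` and then throws when no `inferenceBaseUrl` is set:

```typescript
// Hypothetical sketch of the routing fallback described in the review comment.
type Provider = "openai" | "anthropic" | "ollama" | "other" | "zai";

const ROUTABLE: Provider[] = ["openai", "anthropic", "ollama", "other"];

function resolveInferenceBackend(provider: Provider): "native" | "byok" {
  // Anything outside the routable set is treated as BYOK.
  return ROUTABLE.includes(provider) ? "native" : "byok";
}

function chat(provider: Provider, inferenceBaseUrl?: string): string {
  const backend = resolveInferenceBackend(provider);
  if (backend === "byok" && !inferenceBaseUrl) {
    // This is the error the review says agent turns will hit repeatedly.
    throw new Error("BYOK inference requires inferenceBaseUrl");
  }
  return backend;
}

// glm-5 resolves to the "zai" provider, so in a standard OpenAI-key
// deployment (no inferenceBaseUrl) every agent turn throws, while the
// same call succeeds once a BYOK URL is configured.
```

Keeping at least one routable (non-BYOK) candidate in `agent_turn` avoids this, since the router can then fall back to a provider that `resolveInferenceBackend` accepts.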
```diff
  supportsVision: true,
  parameterStyle: "max_completion_tokens",
- enabled: true,
+ enabled: false,
```
Make baseline disable flags effective at startup
These `enabled: false` flips do not actually disable the models during runtime registry initialization, because `ModelRegistry.initialize()` writes `enabled: existing?.enabled ?? true` and ignores the baseline `enabled` values for fresh databases. As a result, newly initialized registries will still keep these OpenAI models enabled and selectable, so the commit's baseline disablement and its tests around enabled-provider filtering diverge from production behavior.
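The initialization bug above comes down to one merge expression. The sketch below is a hypothetical reduction (the `ModelEntry` shape and `mergeBaseline` helper are illustrative names, not the project's API) showing how `existing?.enabled ?? true` drops the baseline flag on a fresh database, and what honoring the baseline would look like:

```typescript
// Hypothetical reduction of the initialize() merge logic from the review.
interface ModelEntry {
  id: string;
  enabled: boolean;
}

function mergeBaseline(baseline: ModelEntry, existing?: ModelEntry): ModelEntry {
  return {
    id: baseline.id,
    // Buggy form: existing?.enabled ?? true
    //   On a fresh database `existing` is undefined, so every model comes
    //   up enabled regardless of the baseline's enabled: false.
    // Fixed form: fall back to the baseline flag instead of `true`.
    enabled: existing?.enabled ?? baseline.enabled,
  };
}
```

With the fixed fallback, a fresh registry inherits `enabled: false` from the baseline, while an existing database still keeps whatever the user previously toggled.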
- Add v11 migration tests for legacy goal mapping and project backfilling
- Update BYOK inference tests for current model registry state
- Update inference router tests for provider validation
- Update model registry configuration and type definitions
- Validate changes from prior inference routing fix commits

These tests ensure schema evolution is correct and that inference routing properly validates available providers after the gpt-5-mini and model registry updates.
Force-pushed e36f8f9 to 705955e
Validation tests for model registry updates and v11 migration
Status: ✅ Code review PASSED
Blockers: NONE