
Conversation

@heathdutton

Fixes #8240

When users define a model provider in config.toml with the same name as a built-in provider (e.g., ollama), the user's configuration should override the built-in defaults. Previously, or_insert was used, which only inserted the provider if the key didn't already exist, so user-defined base_url values were ignored for built-in provider names.

Changed to insert so user-defined providers properly override built-in ones.
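
For illustration, a minimal standalone sketch of the map behavior in question, using plain string values rather than the real codex provider types (the remote address is just an example):

use std::collections::HashMap;

fn main() {
    // Built-in provider registered first.
    let mut providers: HashMap<String, String> = HashMap::new();
    providers.insert("ollama".into(), "http://localhost:11434/v1".into());

    // User-defined entry from config.toml for the same provider name.
    let user_base_url = String::from("http://192.168.1.50:11434/v1");

    // Before: or_insert keeps the existing built-in value, so the user's
    // base_url is silently ignored.
    providers.entry("ollama".into()).or_insert(user_base_url.clone());
    assert_eq!(providers["ollama"], "http://localhost:11434/v1");

    // After: insert overwrites the built-in entry with the user's value.
    providers.insert("ollama".into(), user_base_url);
    assert_eq!(providers["ollama"], "http://192.168.1.50:11434/v1");
}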

Added a regression test to verify the fix.

@github-actions
Contributor

github-actions bot commented Jan 2, 2026

All contributors have signed the CLA ✍️ ✅
Posted by the CLA Assistant Lite bot.

@heathdutton
Author

I have read the CLA Document and I hereby sign the CLA

github-actions bot added a commit that referenced this pull request Jan 2, 2026
@etraut-openai
Collaborator

@codex review

Contributor

@chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 80b117cfeb

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

@bolinfest self-requested a review January 9, 2026 20:45
@bolinfest
Collaborator

For the record, my main apprehension with this change is that while the built-in definition for ollama is quite simple and straightforward to override:

pub fn create_oss_provider_with_base_url(base_url: &str, wire_api: WireApi) -> ModelProviderInfo {
    ModelProviderInfo {
        name: "gpt-oss".into(),
        base_url: Some(base_url.into()),
        env_key: None,
        env_key_instructions: None,
        experimental_bearer_token: None,
        wire_api,
        query_params: None,
        http_headers: None,
        env_http_headers: None,
        request_max_retries: None,
        stream_max_retries: None,
        stream_idle_timeout_ms: None,
        requires_openai_auth: false,
    }
}

The built-in definition for the openai model provider is much more complex:

pub fn create_openai_provider() -> ModelProviderInfo {
    ModelProviderInfo {
        name: OPENAI_PROVIDER_NAME.into(),
        // Allow users to override the default OpenAI endpoint by
        // exporting `OPENAI_BASE_URL`. This is useful when pointing
        // Codex at a proxy, mock server, or Azure-style deployment
        // without requiring a full TOML override for the built-in
        // OpenAI provider.
        base_url: std::env::var("OPENAI_BASE_URL")
            .ok()
            .filter(|v| !v.trim().is_empty()),
        env_key: None,
        env_key_instructions: None,
        experimental_bearer_token: None,
        wire_api: WireApi::Responses,
        query_params: None,
        http_headers: Some(
            [("version".to_string(), env!("CARGO_PKG_VERSION").to_string())]
                .into_iter()
                .collect(),
        ),
        env_http_headers: Some(
            [
                (
                    "OpenAI-Organization".to_string(),
                    "OPENAI_ORGANIZATION".to_string(),
                ),
                ("OpenAI-Project".to_string(), "OPENAI_PROJECT".to_string()),
            ]
            .into_iter()
            .collect(),
        ),
        // Use global defaults for retry/timeout unless overridden in config.toml.
        request_max_retries: None,
        stream_max_retries: None,
        stream_idle_timeout_ms: None,
        requires_openai_auth: true,
    }
}

For ollama, it seems that the main thing one needs to override is the base_url, which can also be controlled by the CODEX_OSS_BASE_URL and CODEX_OSS_PORT environment variables.

Similarly, we currently honor the OPENAI_BASE_URL environment variable so you can modify that one field of the openai model provider without accidentally messing up the other fields.

That said, I don't love the environment variables.

Would it be more attractive to add support for openai_base_url and ollama_base_url config options instead?
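
For comparison, a hypothetical sketch of what that could look like in config.toml; the openai_base_url and ollama_base_url option names come from the suggestion above and are not an existing feature, and the URLs are made-up examples:

# Hypothetical top-level options (per the proposal above; not currently supported):
ollama_base_url = "http://192.168.1.50:11434/v1"
openai_base_url = "https://proxy.example.com/v1"

# Versus a full provider table, which with this PR's behavior replaces the
# entire built-in definition; keys follow the ModelProviderInfo fields quoted above.
[model_providers.ollama]
name = "Ollama"
base_url = "http://192.168.1.50:11434/v1"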

/cc @etraut-openai who I think was also hoping to eliminate support for OPENAI_BASE_URL

@heathdutton
Author

@bolinfest your concern about the openai provider's complexity makes sense. For the ollama case in #8240, users just need to point at a remote server; they shouldn't have to know about all the internal fields.

Thoughts:

  • The current fix matches the documented behavior ("override or extend"), but I agree it's a footgun for openai
  • ollama_base_url would solve #8240 (Ollama support assumes localhost, ignoring config.toml) cleanly, though it's narrower: it wouldn't help if someone also wanted to override wire_api, for example
  • A middle ground could be merging user-specified fields into the built-in defaults (so [model_providers.ollama] with just base_url keeps the other defaults), but that's more complex; a rough sketch follows this list
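
As a rough sketch of that merge idea (a hypothetical helper, not something in the codebase, assuming the ModelProviderInfo type quoted above and that the user's [model_providers.ollama] table deserializes into the same struct):

// Hypothetical merge: start from the built-in provider definition and let any
// field the user actually set in config.toml take precedence.
fn merge_provider(user: ModelProviderInfo, builtin: ModelProviderInfo) -> ModelProviderInfo {
    ModelProviderInfo {
        // Non-Option fields: a real implementation would need a way to tell
        // "unset" from "set"; the built-in values are kept here for simplicity.
        name: builtin.name,
        wire_api: builtin.wire_api,
        requires_openai_auth: builtin.requires_openai_auth,
        // Option fields: prefer the user's value, fall back to the built-in.
        base_url: user.base_url.or(builtin.base_url),
        env_key: user.env_key.or(builtin.env_key),
        env_key_instructions: user.env_key_instructions.or(builtin.env_key_instructions),
        experimental_bearer_token: user
            .experimental_bearer_token
            .or(builtin.experimental_bearer_token),
        query_params: user.query_params.or(builtin.query_params),
        http_headers: user.http_headers.or(builtin.http_headers),
        env_http_headers: user.env_http_headers.or(builtin.env_http_headers),
        request_max_retries: user.request_max_retries.or(builtin.request_max_retries),
        stream_max_retries: user.stream_max_retries.or(builtin.stream_max_retries),
        stream_idle_timeout_ms: user.stream_idle_timeout_ms.or(builtin.stream_idle_timeout_ms),
    }
}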

Happy to go whichever direction you prefer. The *_base_url config options seem like the simplest path if the goal is just enabling remote servers without exposing the full provider complexity.
