
Conversation

@aibrahim-oai (Collaborator) commented Dec 9, 2025

  • Make Config.model optional and centralize default-selection logic in ModelsManager, including a default_model helper (with codex-auto-balanced when available) so sessions now carry an explicit chosen model separate from the base config.
  • Resolve the model once in core and tui from config, then store the resolved value on the structs that need it.
  • Refresh the model list before resolving the default model.
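The first bullet can be sketched roughly as below. Only `Config.model`, `ModelsManager`, `default_model`, and `codex-auto-balanced` come from the PR description; the constructor, field names, and the fixed fallback are illustrative placeholders.

```rust
/// Hypothetical sketch: `Config.model` becomes optional and
/// `ModelsManager::default_model` centralizes the fallback choice.
#[derive(Clone)]
pub struct Config {
    pub model: Option<String>, // no longer a required field
}

pub struct ModelsManager {
    available: Vec<String>, // models the manager currently knows about
}

impl ModelsManager {
    pub fn new(available: Vec<String>) -> Self {
        Self { available }
    }

    /// Resolve the session's explicit model: a configured value wins,
    /// otherwise prefer `codex-auto-balanced` when it is available.
    pub fn default_model(&self, config: &Config) -> String {
        if let Some(model) = &config.model {
            return model.clone();
        }
        if self.available.iter().any(|m| m == "codex-auto-balanced") {
            "codex-auto-balanced".to_string()
        } else {
            // placeholder fallback; the real default is not shown in this PR
            "gpt-5".to_string()
        }
    }
}
```

The session then carries the returned value as its chosen model, separate from the base config.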

Contributor

@chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

@aibrahim-oai (Collaborator, Author)

@codex review this

Contributor

@chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

fn build_per_turn_config(session_configuration: &SessionConfiguration) -> Config {
let config = session_configuration.original_config_do_not_use.clone();
let mut per_turn_config = (*config).clone();
per_turn_config.model = session_configuration.model.clone();
Collaborator

are we confident nobody is reading this?

Collaborator Author

Supposedly, yeah. I replaced it with the turn context.


let model_family = models_manager
-    .construct_model_family(&config.model, &config)
+    .construct_model_family(&session_configuration.model, &config)
Collaborator

I'd love it if everything model-related happened through model_family and we didn't flow the raw model string anywhere.

Collaborator Author

I'll do that
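The suggestion above could look roughly like this. The `ModelFamily` field set and helper are invented for the example; only the idea of passing the family value through instead of a raw string comes from the review comment.

```rust
/// Illustrative sketch of flowing a `ModelFamily` value through the code
/// instead of a raw model string; the fields here are invented.
#[derive(Clone)]
pub struct ModelFamily {
    slug: String,
    pub supports_reasoning_summaries: bool, // example capability flag
}

impl ModelFamily {
    pub fn new(slug: &str, supports_reasoning_summaries: bool) -> Self {
        Self {
            slug: slug.to_string(),
            supports_reasoning_summaries,
        }
    }

    /// The raw slug is only read out at the edges (e.g. the wire format).
    pub fn get_model_slug(&self) -> &str {
        &self.slug
    }
}

/// A caller that needs the wire-level model name asks the family for it,
/// rather than threading a bare `String` through every layer.
pub fn request_model_header(family: &ModelFamily) -> String {
    family.get_model_slug().to_string()
}
```

Intermediate layers then only see `ModelFamily` and can consult capability flags without ever matching on a string.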


-// Assert the request body model equals the configured review model
+// Assert the review request currently uses the primary chat model.
+// TODO: switch back to asserting the custom review model once the runtime plumbs it through.
Collaborator

Is there a way to avoid merging a broken default review model?

Some(&rate_display),
None,
captured_at,
config_model(&config),
Collaborator

very roundabout way to pass the model into this method. Does it have to be set on config for some other reason?

@pakrym-oai (Collaborator) left a comment

I think having a separate default_model method on the models manager is a footgun.

The models manager should have two methods:
async list_models
async get_model(requested_model: Option<...>)

Both should block asynchronously on the manager until the models are loaded.

We can use async list_models as a way to trigger warm-up of the models manager on start.

I'm also a bit worried about all the call sites that read model override fields from config directly, with no systematic way of getting rid of them.
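The proposed surface could look roughly like this synchronous stand-in. In the real manager both methods would be async and await the initial load; the model names and fallback policy here are placeholders, not from the PR.

```rust
use std::sync::OnceLock;

/// Synchronous stand-in for the proposed two-method surface; the real
/// manager would make both methods async and await the first load.
pub struct ModelsManager {
    models: OnceLock<Vec<String>>,
}

impl ModelsManager {
    pub fn new() -> Self {
        Self {
            models: OnceLock::new(),
        }
    }

    fn load(&self) -> &Vec<String> {
        // stand-in for the real network/cache fetch, performed once
        self.models
            .get_or_init(|| vec!["codex-auto-balanced".to_string(), "gpt-5".to_string()])
    }

    /// Doubles as the warm-up trigger on startup.
    pub fn list_models(&self) -> Vec<String> {
        self.load().clone()
    }

    /// Resolve a requested model, falling back to the first known model,
    /// so callers never need a separate `default_model` entry point.
    pub fn get_model(&self, requested_model: Option<&str>) -> String {
        let models = self.load();
        match requested_model {
            Some(m) if models.iter().any(|known| known == m) => m.to_string(),
            _ => models.first().cloned().unwrap_or_default(),
        }
    }
}
```

Folding the default into `get_model(None)` removes the footgun of callers bypassing the fallback logic.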

@aibrahim-oai (Collaborator, Author)

@codex review this

Contributor

@chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

@aibrahim-oai (Collaborator, Author)

@codex review this. Something different than "Remote model refresh ignores configured provider".

Contributor

@chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Comment on lines 1956 to 1960

.client
.get_otel_event_manager()
.with_model(
-    per_turn_config.model.as_str(),
+    parent_turn_context.client.get_model().as_str(),
+    review_model_family.slug.as_str(),
Contributor

P2: Tag review telemetry with the review model

Review threads now create the OTEL event manager with parent_turn_context.client.get_model() even though the review task actually runs against review_model_family (the dedicated review model). This sets the telemetry model attribute to the conversation’s main model while the slug is the review model, so /review traffic will be misattributed in monitoring and usage reporting. Use the review model slug for both parameters to keep telemetry consistent with the model that is actually invoked.

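The suggested fix can be sketched with a minimal stand-in for the event manager. The two-field shape and the `(model, slug)` parameter pair for `with_model` are inferred from the diff above, not confirmed against the real codebase.

```rust
/// Minimal stand-in for the OTEL event manager; the real type lives in
/// the codex codebase, and this two-field shape is assumed for the example.
#[derive(Clone)]
pub struct OtelEventManager {
    pub model: String,
    pub slug: String,
}

impl OtelEventManager {
    /// Mirrors the `with_model(model, slug)` call seen in the diff.
    pub fn with_model(&self, model: &str, slug: &str) -> OtelEventManager {
        OtelEventManager {
            model: model.to_string(),
            slug: slug.to_string(),
        }
    }
}

/// Tag review-thread telemetry with the review model for both attributes,
/// so monitoring matches the model the review thread actually invokes.
pub fn tag_review_thread(manager: &OtelEventManager, review_slug: &str) -> OtelEventManager {
    manager.with_model(review_slug, review_slug)
}
```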

use std::path::Path;

/// Convert a ModelPreset to ModelInfo for cache storage.
fn preset_to_info(preset: &ModelPreset, priority: i32) -> ModelInfo {
Collaborator

surprised to find this file in app-server tests

#[tokio::test]
async fn list_models_returns_all_models_with_large_limit() -> Result<()> {
let codex_home = TempDir::new()?;
write_models_cache(codex_home.path())?;
Collaborator

do we enable remote models in this test?

Collaborator

Maybe we should have a default mock for the model list call? It's pretty strange to have to work around this by writing a cache in an unrelated test.

-ModelsManager::construct_model_family_offline(&config.model, &config)
+ModelsManager::construct_model_family_offline(&default_model, &config)
.reasoning_summary_format;
let cell = new_reasoning_summary_block(
Collaborator

Not for this PR, but we can just hardcode the reasoning format here.

} = common;
let model_slug = model_family.get_model_slug().to_string();
let mut config = config;
config.model = Some(model_slug.clone());
Collaborator

The fact that we set the model here, and it looks like something cares about it but we never change it again, makes me nervous. But that's just a feeling.

Collaborator

Who cares about this value?

) {
let conv = new_conv.conversation;
let session_configured = new_conv.session_configured;
let model_family = self.chat_widget.get_model_family();
Collaborator

roundabout way to pass model family in but fine I guess.

skills: self.skills.clone(),
is_first_run: false,
-model_family,
+model_family: model_family.clone(),
Collaborator

Will this bite us? It feels like ChatWidgetInit needs the model family way too early.

migration_config: migration_config_key.to_string(),
});
-config.model = target_model.to_string();
+config.model = Some(target_model.clone());
Collaborator

At this point maybe we should just return the new model, but that's for the future.

let conversation_manager = ConversationManager::new(auth_manager.clone(), SessionSource::Exec);
let default_model = conversation_manager
.get_models_manager()
.get_model(config.model.clone().as_deref(), &config)
Collaborator

nit: for later, can this take &Option so we avoid all these clones?
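The nit could land as a signature like the following. The helper name is invented; the point is that taking `Option<&str>` lets callers pass `config.model.as_deref()` without cloning the `Option<String>` first.

```rust
/// Hypothetical signature taking `Option<&str>` so callers can borrow
/// the configured model instead of cloning it at every call site.
pub fn resolve_model(requested_model: Option<&str>, fallback: &str) -> String {
    requested_model.unwrap_or(fallback).to_string()
}
```

A caller would then write `manager.get_model(config.model.as_deref(), &config)` rather than `config.model.clone().as_deref()`.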

let all_requests = server
.received_requests()
.await
.unwrap_or_else(|| panic!("received requests"));
@pakrym-oai (Collaborator) commented Dec 10, 2025

are you trying to hide .expect?

// Chat Completions assertion: the second POST should include a tool role message
// with an array `content` containing an item with the expected data URL.
-let requests = server.received_requests().await.expect("requests captured");
+let all_requests = server.received_requests().await.expect("requests captured");
Collaborator

Is this because we mock the models endpoint? Why are remote models enabled in these tests?

// Assert the request body model equals the configured review model
-let request = &server.received_requests().await.unwrap()[0];
+let requests = get_responses_requests(&server).await;
+let request = requests
Collaborator

Same here, can we pick a smaller subset of tests to enable remote_models for?

@aibrahim-oai changed the title from "Config model" to "make model optional in config" on Dec 10, 2025
@aibrahim-oai (Collaborator, Author)

@codex review this. Something different than "Remote model refresh ignores configured provider".

Contributor

@chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Comment on lines 1958 to 1960
.with_model(
-    per_turn_config.model.as_str(),
+    parent_turn_context.client.get_model().as_str(),
+    review_model_family.slug.as_str(),
Contributor

P2: Use review model when tagging OTEL metadata

Review threads are instrumented under the main session model even though the client uses review_model_family. In spawn_review_thread the OTEL event manager is built with parent_turn_context.client.get_model() instead of the review model, so whenever the review model differs from the conversation model the telemetry/logs will attribute review activity to the wrong model even though the actual requests use the review model.

