nielstron

TLDR

When an OpenAI base URL and API key are specified via the CLI flags --openai-base-url and --openai-api-key, the CLI silently falls back to the default model, regardless of what model is specified with --model. This PR fixes that by respecting the model given on the command line.

Dive Deeper

Apparently, it is possible to specify all OpenAI API parameters via environment variables, but this is undocumented. --help shows that --openai-api-key and --openai-base-url are supported as CLI parameters, so it would seem natural that the model can be specified via --model. It turns out that it is currently impossible to specify the model when passing --openai-api-key etc.; instead, the code tries to infer the model from the OPENAI_MODEL environment variable and otherwise falls back to the default model.
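The resolution order this PR aims for can be sketched roughly as below. Note this is an illustrative sketch only: the names `resolveModel` and `DEFAULT_MODEL` are hypothetical and do not correspond to the actual qwen-code internals.

```typescript
// Hypothetical sketch of the intended model-resolution order.
// DEFAULT_MODEL is a stand-in for whatever default the CLI ships with.
const DEFAULT_MODEL = "default-model";

function resolveModel(
  cliModel: string | undefined,
  env: Record<string, string | undefined>,
): string {
  // 1. An explicit CLI flag always wins.
  if (cliModel) return cliModel;
  // 2. Otherwise fall back to the OPENAI_MODEL environment variable.
  const envModel = env["OPENAI_MODEL"];
  if (envModel) return envModel;
  // 3. Only then use the built-in default.
  //    (The pre-fix behavior effectively skipped step 1.)
  return DEFAULT_MODEL;
}
```

The key design point is that an explicit command-line argument should take precedence over environment variables, which in turn take precedence over the hard-coded default.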

Another way to fix this would be to drop the CLI arguments for controlling OpenAI API usage entirely, control it only through environment variables, and document that option adequately.

Reviewer Test Plan

Check that the following command works after the change:

qwen --openai-api-key <openrouter key> --openai-base-url https://openrouter.ai/api/v1  --openai-model qwen/qwen3-coder-30b-a3b-instruct --openai-logging true -p "Write a hello world python program"

Testing Matrix

|          | 🍏 | 🪟 | 🐧 |
|----------|----|----|----|
| npm run  |    |    |    |
| npx      |    |    |    |
| Docker   |    |    |    |
| Podman   | -  | -  |    |
| Seatbelt |    | -  | -  |

Linked issues / bugs

The github-actions bot added the `bug` ("Something isn't working") and `status/need-information` ("More information is needed to resolve this issue.") labels on Sep 8, 2025.
@pomelo-nwu requested a review from Mingholy on September 11, 2025 at 09:30.