

@minrk (Contributor) commented Nov 17, 2025

This was the missing piece in jupyterlab/jupyter-ai#1478 to get `api_base` into the config UI.

`get_supported_openai_params` doesn't include client parameters like `api_base`, so `api_base` was never included in the fields returned by the REST API.
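A minimal sketch of the idea behind the fix (the names `CLIENT_PARAMS` and `all_config_fields` are illustrative, not the actual jupyter-ai code): since `get_supported_openai_params` only reports model-level parameters, client-level ones like `api_base` have to be appended explicitly before the field list is handed to the config UI.

```python
# Client-level parameters that get_supported_openai_params does not
# report (assumed set, for illustration).
CLIENT_PARAMS = ["api_base", "api_key"]

def all_config_fields(supported_params):
    """Merge model params with client params, deduplicated, order kept."""
    seen = set()
    fields = []
    for param in list(supported_params) + CLIENT_PARAMS:
        if param not in seen:
            seen.add(param)
            fields.append(param)
    return fields

# Example: api_base now appears even though the model-param helper
# never returned it.
print(all_config_fields(["temperature", "max_tokens"]))
```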


@srdas (Collaborator) left a comment

@minrk The code looks good. Thanks for making this update as it will help users in v3, as requested here: jupyterlab/jupyter-ai#1010 (comment)

I have also tested this with the Ollama GPT-OSS model as follows:

  1. Run Ollama on port 10000 and, now that we can set the `api_base` parameter as required, point it at that port and try the model, which works:
     [screenshot: api_10000]
  2. Next, change the port back to the default 11434 and see that it fails, since Ollama is still running on port 10000 (there is no response in the chat interface, but the error is visible in the log):
     [screenshot: api_11434_fail]
  3. Then restart Ollama on its default port and retry:
     [screenshot: api_11434_works]

It works again, so the `api_base` updates are working correctly.
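The failure mode in step 2 (no chat response, error only in the log) can be checked independently of the chat UI. A small sketch, assuming only that Ollama answers plain HTTP on its base URL; the function name is illustrative:

```python
import urllib.request
import urllib.error

def is_listening(base_url, timeout=2):
    """Return True if an HTTP server answers at base_url at all."""
    try:
        urllib.request.urlopen(base_url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # server answered, just with an error status code
    except (urllib.error.URLError, OSError):
        return False  # nothing listening: refused, unreachable, or timed out

# Probe the candidate api_base before configuring it in the UI;
# with Ollama moved to port 10000, the default port reports False.
print(is_listening("http://localhost:11434"))
print(is_listening("http://localhost:10000"))
```

This distinguishes "wrong `api_base`" from a model-level error before any request ever reaches the chat interface.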

@ellisonbg

Thanks Min!

@dlqqq (Contributor) left a comment

@minrk Thank you for fixing this for us so quickly! The changes look great; thank you @srdas for testing this locally as well.

Merging and releasing now 🎉

@dlqqq dlqqq added the bug Something isn't working label Nov 24, 2025
@dlqqq dlqqq merged commit e379031 into jupyter-ai-contrib:main Nov 24, 2025
5 of 6 checks passed
