Works well in Jupyternaut chat, but fails in inline completion #1238

Open
songqiangchina opened this issue Feb 10, 2025 · 5 comments
Labels
bug Something isn't working

Comments


songqiangchina commented Feb 10, 2025

Issue

I searched the issues but couldn't find a similar answer. Here is my issue: I set up the language model and the inline completions model with my local LLM config. Jupyternaut chat works well, but I get an error in inline completion (Inline completion failed: APIConnectionError).

Description

Traceback (most recent call last):
  File "/usr/lib/python3.10/site-packages/httpx/_transports/default.py", line 72, in map_httpcore_exceptions
    yield
  File "/usr/lib/python3.10/site-packages/httpx/_transports/default.py", line 377, in handle_async_request
    resp = await self._pool.handle_async_request(req)
  File "/usr/lib/python3.10/site-packages/httpcore/_async/connection_pool.py", line 256, in handle_async_request
    raise exc from None
  File "/usr/lib/python3.10/site-packages/httpcore/_async/connection_pool.py", line 236, in handle_async_request
    response = await connection.handle_async_request(
  File "/usr/lib/python3.10/site-packages/httpcore/_async/connection.py", line 101, in handle_async_request
    raise exc
  File "/usr/lib/python3.10/site-packages/httpcore/_async/connection.py", line 78, in handle_async_request
    stream = await self._connect(request)
  File "/usr/lib/python3.10/site-packages/httpcore/_async/connection.py", line 124, in _connect
    stream = await self._network_backend.connect_tcp(**kwargs)
  File "/usr/lib/python3.10/site-packages/httpcore/_backends/auto.py", line 31, in connect_tcp
    return await self._backend.connect_tcp(
  File "/usr/lib/python3.10/site-packages/httpcore/_backends/anyio.py", line 113, in connect_tcp
    with map_exceptions(exc_map):
  File "/usr/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/usr/lib/python3.10/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc
httpcore.ConnectError: All connection attempts failed

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3.10/site-packages/openai/_base_client.py", line 1582, in _request
    response = await self._client.send(
  File "/usr/lib/python3.10/site-packages/httpx/_client.py", line 1674, in send
    response = await self._send_handling_auth(
  File "/usr/lib/python3.10/site-packages/httpx/_client.py", line 1702, in _send_handling_auth
    response = await self._send_handling_redirects(
  File "/usr/lib/python3.10/site-packages/httpx/_client.py", line 1739, in _send_handling_redirects
    response = await self._send_single_request(request)
  File "/usr/lib/python3.10/site-packages/httpx/_client.py", line 1776, in _send_single_request
    response = await transport.handle_async_request(request)
  File "/usr/lib/python3.10/site-packages/httpx/_transports/default.py", line 376, in handle_async_request
    with map_httpcore_exceptions():
  File "/usr/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/usr/lib/python3.10/site-packages/httpx/_transports/default.py", line 89, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.ConnectError: All connection attempts failed

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3.10/site-packages/jupyter_ai/completions/handlers/base.py", line 120, in handle_request_and_catch
    await handle_request
  File "/usr/lib/python3.10/site-packages/jupyter_ai/completions/handlers/base.py", line 148, in _handle_request
    await self.handle_request(request)
  File "/usr/lib/python3.10/site-packages/jupyter_ai/completions/handlers/default.py", line 15, in handle_request
    reply = await llm.generate_inline_completions(request)
  File "/usr/lib/python3.10/site-packages/jupyter_ai_magics/providers.py", line 486, in generate_inline_completions
    suggestion = await chain.ainvoke(input=model_arguments)
  File "/usr/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2923, in ainvoke
    input = await asyncio.create_task(part())
  File "/usr/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 298, in ainvoke
    llm_result = await self.agenerate_prompt(
  File "/usr/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 787, in agenerate_prompt
    return await self.agenerate(
  File "/usr/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 747, in agenerate
    raise exceptions[0]
  File "/usr/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 923, in _agenerate_with_cache
    result = await self._agenerate(
  File "/usr/lib/python3.10/site-packages/langchain_openai/chat_models/base.py", line 843, in _agenerate
    response = await self.async_client.create(**payload)
  File "/usr/lib/python3.10/site-packages/openai/resources/chat/completions.py", line 1727, in create
    return await self._post(
  File "/usr/lib/python3.10/site-packages/openai/_base_client.py", line 1849, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
  File "/usr/lib/python3.10/site-packages/openai/_base_client.py", line 1543, in request
    return await self._request(
  File "/usr/lib/python3.10/site-packages/openai/_base_client.py", line 1606, in _request
    return await self._retry_request(
  File "/usr/lib/python3.10/site-packages/openai/_base_client.py", line 1676, in _retry_request
    return await self._request(
  File "/usr/lib/python3.10/site-packages/openai/_base_client.py", line 1606, in _request
    return await self._retry_request(
  File "/usr/lib/python3.10/site-packages/openai/_base_client.py", line 1676, in _retry_request
    return await self._request(
  File "/usr/lib/python3.10/site-packages/openai/_base_client.py", line 1616, in _request
    raise APIConnectionError(request=request) from err
openai.APIConnectionError: Connection error.

  1. Open a Jupyter ipynb file
  2. Edit some code
  3. Inline completion fails: APIConnectionError
  4. See error '...'
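The `All connection attempts failed` at the bottom of the traceback is raised inside httpcore's TCP connect, so a quick sanity check is to probe the configured base URL from the same environment where JupyterLab runs. A minimal stdlib sketch; `http://localhost:8000/v1` is a hypothetical placeholder, substitute the base URL from your own config:

```python
import socket
from urllib.parse import urlparse

def probe_base_url(base_url: str, timeout: float = 3.0) -> bool:
    """Return True if a plain TCP connection to the base URL's
    host:port succeeds, mirroring what httpcore attempts first."""
    parts = urlparse(base_url)
    host = parts.hostname
    port = parts.port or (443 if parts.scheme == "https" else 80)
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical local endpoint; replace with your actual base URL.
print(probe_base_url("http://localhost:8000/v1"))
```

If this returns False inside the container but the same URL works from your desktop, the problem is the container's network view (e.g. `localhost` inside a k8s pod is not the host machine), not Jupyter AI.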

Expected behavior

Since the Language model and the inline completions model use the same local LLM config, `openai.APIConnectionError: Connection error.` should not occur in one but not the other.

Context

  • Operating System and version: Linux CentOS 7.9 (launched in a Docker container via k8s)

IPython : 8.13.2
ipykernel : 6.29.5
ipywidgets : not installed
jupyter_client : 8.6.1
jupyter_core : 5.7.2
jupyter_server : 2.15.0
jupyterlab : 4.2.4
nbclient : 0.10.0
nbconvert : 7.16.4
nbformat : 5.10.4
notebook : 7.2.3
qtconsole : not installed
traitlets : 5.14.3

songqiangchina added the bug label on Feb 10, 2025
@krassowski (Member) commented:

  1. What version of jupyter_ai and jupyter_ai_magics do you have installed?
  2. Can you share your config after sanitizing it by removing the API keys?

JupyterLab 4.2.4 is a bit outdated, by the way.


dlqqq commented Feb 10, 2025

@songqiangchina It's possible that this is not an issue with Jupyter AI. This could be the OpenAI servers rate-limiting/throttling requests issued by the inline completer. Are you able to see any code completions before the error surfaces?


songqiangchina commented Feb 11, 2025

> @songqiangchina It's possible that this is not an issue with Jupyter AI. This could be the OpenAI servers rate-limiting/throttling requests issued by the inline completer. Are you able to see any code completions before the error surfaces?

I can't see any code completions in the JupyterLab notebook. I also use VS Code Continue, and the same local LLM works well for code completions there.

Is it trying to connect to the OpenAI server address instead of my local base URL? Is the local base URL not passed to the OpenAI request parameters during completion?
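That question can be narrowed down without touching Jupyter AI at all: the openai Python SDK falls back to the official endpoint when no base URL is supplied, so if the completer's provider drops the base URL, requests go to `api.openai.com`, which an air-gapped pod cannot reach. A minimal sketch of that fallback chain; the env var names (`OPENAI_BASE_URL`, `OPENAI_API_BASE`) are the ones the SDK and LangChain conventionally read, but treat them as assumptions for your versions:

```python
import os

OPENAI_DEFAULT = "https://api.openai.com/v1"

def resolve_base_url(env: dict) -> str:
    """Mimic the usual fallback chain: explicit base-URL env vars first,
    otherwise the official OpenAI endpoint. If this resolves to the
    default while chat uses your local URL, it would produce exactly
    the symptom described: chat works, completion hits api.openai.com."""
    return env.get("OPENAI_BASE_URL") or env.get("OPENAI_API_BASE") or OPENAI_DEFAULT

print(resolve_base_url(os.environ.copy()))
```

Running this inside the JupyterLab server's environment shows which endpoint a client constructed without an explicit `base_url` argument would target.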


songqiangchina commented Feb 11, 2025

>   1. What version of jupyter_ai and jupyter_ai_magics do you have installed?
>   2. Can you share your config after sanitizing it by removing the API keys?
>
> JupyterLab 4.2.4 is a bit outdated btw.

jupyter_ai==2.28.5
jupyter_ai_magics==2.28.5

I use OpenRouter; my config looks like:

Completion model:

  • OpenRouter::*

Local model ID

  • deepseek-code-7b-instruct

API Base URL (optional)
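To rule out the endpoint itself, one can issue a chat-completion request straight at the base URL with the stdlib, bypassing Jupyter AI and LangChain entirely. A sketch, assuming a hypothetical OpenAI-compatible server at `http://localhost:8000/v1` and the model ID from the config above; the bearer token is a placeholder:

```python
import json
from urllib import request

def build_completion_request(base_url: str, model: str, prompt: str) -> request.Request:
    """Build a POST to the OpenAI-compatible /chat/completions route
    under the given base URL."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 16,
    }).encode()
    return request.Request(
        base_url.rstrip("/") + "/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer sk-placeholder",  # placeholder key
        },
        method="POST",
    )

req = build_completion_request(
    "http://localhost:8000/v1",          # hypothetical; use your base URL
    "deepseek-code-7b-instruct",
    "def add(a, b):",
)
print(req.full_url)
# Send with request.urlopen(req): a connection error here would confirm
# the problem is network-level rather than in Jupyter AI.
```

If this request succeeds from the pod while inline completion still fails, that points back at how the completer passes the base URL; if it fails the same way, the pod simply cannot reach the endpoint.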


dlqqq commented Feb 11, 2025

@srdas Can you help test OpenRouter completions when using the deepseek-code-7b-instruct model?
