
[Question]: Agent around a Query Pipeline #17971

Open
1 task done
rama-bin opened this issue Mar 1, 2025 · 3 comments
Labels
question Further information is requested

Comments


rama-bin commented Mar 1, 2025

Question Validation

  • I have searched both the documentation and discord for an answer.

Question

Hi - I am trying to follow the steps outlined here - https://docs.llamaindex.ai/en/stable/examples/agent/agent_runner/query_pipeline_agent/ using AzureOpenAI.

agent.run_step(task.task_id) is failing with the error below:

venv\Lib\site-packages\openai\_base_client.py:1037, in SyncAPIClient._request(self, cast_to, options, retries_taken, stream, stream_cls)
   1027     return self._retry_request(
   1028         input_options,
   1029         cast_to,
   (...)
   1033         response_headers=None,
   1034     )
   1036 log.debug("Raising connection error")
-> 1037 raise APIConnectionError(request=request) from err
   1039 log.debug(
   1040     'HTTP Response: %s %s "%i %s" %s',
   1041     request.method,
   (...)
   1045     response.headers,
   1046 )
   1047 log.debug("request_id: %s", response.headers.get("x-request-id"))

APIConnectionError: Connection error.

Upon checking, the chat completion call is for some reason hitting the public OpenAI endpoint instead of Azure. Any idea how to force it to use AzureOpenAI?


Thanks.

@rama-bin rama-bin added the question Further information is requested label Mar 1, 2025

dosubot bot commented Mar 1, 2025

Hello, @rama-bin! I'm here to help you with your software issues, answer your questions, and guide you on your coding journey. Let's tackle this bug together!

To resolve the APIConnectionError and ensure that the OpenAI client uses the AzureOpenAI endpoint, you need to configure the client to point to your Azure endpoint. Here's how you can do it:

  1. Set the Azure OpenAI Endpoint: Ensure that the AZURE_OPENAI_ENDPOINT environment variable is set to your Azure endpoint, formatted as https://YOUR_RESOURCE_NAME.openai.azure.com/.

  2. Set the API Version: Make sure the OPENAI_API_VERSION is set to 2023-07-01-preview.

  3. Configure the AzureOpenAI Client: Use the AzureOpenAI class from the LlamaIndex integration to initialize the client with the necessary parameters. Here's an example:

import os
from llama_index.llms.azure_openai import AzureOpenAI
from azure.identity import DefaultAzureCredential, get_bearer_token_provider

# Set the Azure OpenAI endpoint
os.environ["AZURE_OPENAI_ENDPOINT"] = "https://YOUR_RESOURCE_NAME.openai.azure.com/"

# Set the API version for Azure OpenAI
os.environ["OPENAI_API_VERSION"] = "2023-07-01-preview"

# Set your API key
os.environ["AZURE_OPENAI_API_KEY"] = "<your-api-key>"

# Use DefaultAzureCredential for Azure AD token
credential = DefaultAzureCredential()
token_provider = get_bearer_token_provider(credential, "https://cognitiveservices.azure.com/.default")

# Initialize the AzureOpenAI client with the necessary parameters
llm = AzureOpenAI(
    engine="YOUR_AZURE_OPENAI_DEPLOYMENT_NAME",  # Your deployment name
    model="YOUR_AZURE_OPENAI_COMPLETION_MODEL_NAME",  # Your model name
    azure_ad_token_provider=token_provider,
    use_azure_ad=True,
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_version=os.environ["OPENAI_API_VERSION"],
)

# Use the client as needed

Replace YOUR_RESOURCE_NAME, <your-api-key>, YOUR_AZURE_OPENAI_DEPLOYMENT_NAME, and YOUR_AZURE_OPENAI_COMPLETION_MODEL_NAME with your actual Azure resource name, API key, deployment name, and model name, respectively. This setup should direct the requests to the Azure endpoint and resolve the connection error [1][2][3].
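If it is unclear which endpoint the client is actually resolving, the fallback behaviour can be illustrated with a small stand-in. This is a hypothetical helper, not LlamaIndex or openai-python code: it only sketches the pattern that when no Azure endpoint and API version are supplied, an OpenAI-style client silently defaults to the public api.openai.com base URL, which matches the connection error reported above.

```python
# Hypothetical sketch of endpoint resolution -- NOT the actual LlamaIndex or
# openai-python implementation, just an illustration of the default fallback.
DEFAULT_OPENAI_BASE = "https://api.openai.com/v1"


def resolve_base_url(azure_endpoint, api_version):
    """Return the base URL a client would target.

    If both an Azure endpoint and an API version are provided, requests go
    to the Azure deployment; otherwise the client falls back to the public
    OpenAI endpoint.
    """
    if azure_endpoint and api_version:
        return f"{azure_endpoint.rstrip('/')}/openai"
    return DEFAULT_OPENAI_BASE


# With no Azure settings, the public endpoint is used -- the symptom in this
# issue, where chat completions unexpectedly hit openai.com.
print(resolve_base_url(None, None))
print(resolve_base_url("https://my-resource.openai.azure.com/", "2023-07-01-preview"))
```

The practical takeaway is that both azure_endpoint and api_version must reach the client constructor for requests to be routed to Azure.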

To continue talking to Dosu, mention @dosu.




rama-bin commented Mar 1, 2025

@dosu - This is not a configuration bug on my end. The issue seems to be with the Query Pipeline itself, since Settings.llm is already pointed to an AzureOpenAI model.
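One common reason a globally configured Settings.llm appears to be ignored is construction-time capture: a pipeline component may read the global default when it is built, so reassigning Settings.llm afterwards does not affect already-built components. The sketch below is plain Python with made-up FakeLLM/Settings/AgentComponent classes (not LlamaIndex internals), purely to illustrate that failure mode and why passing llm explicitly is the safest option.

```python
# Hypothetical illustration of construction-time capture of a default LLM.
# FakeLLM, Settings, and AgentComponent are stand-ins, not LlamaIndex classes.

class FakeLLM:
    def __init__(self, name):
        self.name = name


class Settings:
    # Global default, analogous to Settings.llm in LlamaIndex.
    llm = FakeLLM("openai-default")


class AgentComponent:
    def __init__(self, llm=None):
        # Captures whatever Settings.llm is at construction time.
        self.llm = llm or Settings.llm


component_built_early = AgentComponent()      # built before Azure config
Settings.llm = FakeLLM("azure-openai")        # global reassigned later
component_built_late = AgentComponent()       # built after Azure config
explicit = AgentComponent(llm=Settings.llm)   # safest: pass llm explicitly

print(component_built_early.llm.name)  # still "openai-default"
print(component_built_late.llm.name)   # "azure-openai"
print(explicit.llm.name)               # "azure-openai"
```

If the real components behave like this, setting Settings.llm before building the query pipeline, or passing the AzureOpenAI instance directly to each component that accepts an llm argument, would keep every request on the Azure endpoint.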


dosubot bot commented Mar 1, 2025

To resolve the issue with the Query Pipeline when Settings.llm is pointed to the AzureOpenAI model, update your installation by running:

pip install -U llama-index-llms-openai

This should address the 'Unknown reasoning_effort parameter' error [1].

To continue talking to Dosu, mention @dosu.


