[Question]: Agent around a Query Pipeline #17971
Comments
Hello, @rama-bin! I'm here to help you with your software issues, answer your questions, and guide you on your coding journey. Let's tackle this bug together! To resolve the issue, configure the AzureOpenAI client explicitly:
import os
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from llama_index.llms.azure_openai import AzureOpenAI

# Set the Azure OpenAI endpoint
os.environ["AZURE_OPENAI_ENDPOINT"] = "https://YOUR_RESOURCE_NAME.openai.azure.com/"
# Set the API version for Azure OpenAI
os.environ["OPENAI_API_VERSION"] = "2023-07-01-preview"
# Set your API key
os.environ["AZURE_OPENAI_API_KEY"] = "<your-api-key>"

# Use DefaultAzureCredential to obtain an Azure AD token
credential = DefaultAzureCredential()
token_provider = get_bearer_token_provider(credential, "https://cognitiveservices.azure.com/.default")

# Initialize the AzureOpenAI client with the necessary parameters
llm = AzureOpenAI(
    engine="YOUR_AZURE_OPENAI_DEPLOYMENT_NAME",  # Your deployment name
    model="YOUR_AZURE_OPENAI_COMPLETION_MODEL_NAME",  # Your model name
    azure_ad_token_provider=token_provider,
    use_azure_ad=True,
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_version=os.environ["OPENAI_API_VERSION"],
)
# Use the client as needed. Replace the placeholder values with your actual deployment details.
To continue talking to Dosu, mention @dosu.
@dosu - This is not a software issue. The problem seems to be with the Query Pipeline, since Settings.llm is pointed to the AzureOpenAI model.
To resolve the issue with the Query Pipeline, upgrade the OpenAI LLM integration: pip install -U llama-index-llms-openai. This should address the 'Unknown reasoning_effort parameter' error [1]. To continue talking to Dosu, mention @dosu.
Question Validation
Question
Hi - I am trying to follow the steps outlined here - https://docs.llamaindex.ai/en/stable/examples/agent/agent_runner/query_pipeline_agent/ using AzureOpenAI.
'agent.run_step(task.task_id)' is failing with the error below:
venv\Lib\site-packages\openai\_base_client.py:1037, in SyncAPIClient._request(self, cast_to, options, retries_taken, stream, stream_cls)
1027 return self._retry_request(
1028 input_options,
1029 cast_to,
(...)
1033 response_headers=None,
1034 )
1036 log.debug("Raising connection error")
-> 1037 raise APIConnectionError(request=request) from err
1039 log.debug(
1040 'HTTP Response: %s %s "%i %s" %s',
1041 request.method,
(...)
1045 response.headers,
1046 )
1047 log.debug("request_id: %s", response.headers.get("x-request-id"))
APIConnectionError: Connection error.
Upon checking, for some reason, chat completion is using the OpenAI endpoint. Any idea how to force it to use AzureOpenAI?
Thanks.