added AzureOpenAI API call #11
Conversation
Uses the existing OpenAI library, but adds the appropriate function calls for AzureOpenAI endpoints.
Added AzureOpenAI env template.
**Walkthrough**
The changes introduce support for Azure OpenAI within the LLM module of the project. A new `AZUREOPENAI` provider is added to the `LLMProvider` enum, along with an `AzureOpenAILLM` class and a new `azureopenai_llm` provider module.
Actionable comments posted: 9
🧹 Outside diff range and nitpick comments (6)
SimplerLLM/language/llm_providers/azureopenai_llm.py (4)
`1-1`: **Remove misleading or outdated comment.**
The comment `# add streaming` at the top of the file doesn't reflect the current content, as the code doesn't include streaming functionality. Consider removing or updating the comment to accurately describe the file.
`124-125`: **Clarify the exception message for missing `user_input`.**
The current exception message states `"user_input must be provided."` Consider specifying which function it pertains to for clearer debugging.

Suggested change:

```diff
 if not user_input:
-    raise ValueError("user_input must be provided.")
+    raise ValueError("generate_embeddings: 'user_input' must be provided.")
```
`177-178`: **Clarify the exception message for missing `user_input`.**
For better debugging, specify which function the exception is raised from.

Suggested change:

```diff
 if not user_input:
-    raise ValueError("user_input must be provided.")
+    raise ValueError("generate_embeddings_async: 'user_input' must be provided.")
```
`17-211`: **Refactor to reduce code duplication across functions.**
The functions `generate_response`, `generate_response_async`, `generate_embeddings`, and `generate_embeddings_async` share similar structures and logic. Consider refactoring common code into helper functions or creating a base class to improve maintainability and reduce redundancy.

SimplerLLM/language/llm.py (2)
`78-132`: **Refactor to eliminate code duplication with `OpenAILLM`.**
The `AzureOpenAILLM` class shares significant code with the `OpenAILLM` class, particularly in the `generate_response`, `generate_response_async`, and `append_messages` methods. Refactoring common logic into a shared base class or utility functions would enhance maintainability and reduce redundancy.

Consider applying this refactor:

```diff
+class BaseOpenAILLM(LLM):
+    def __init__(self, provider, model_name, temperature, top_p, api_key_env_var, api_key):
+        super().__init__(provider, model_name, temperature, top_p, api_key)
+        self.api_key = api_key or os.getenv(api_key_env_var, "")
+
+    def append_messages(self, system_prompt: str, messages: list):
+        model_messages = [{"role": "system", "content": system_prompt}]
+        if messages:
+            model_messages.extend(messages)
+        return model_messages
+
+    def _validate_and_prepare_messages(self, prompt, messages, system_prompt):
+        if prompt and messages:
+            raise ValueError("Only one of 'prompt' or 'messages' should be provided.")
+        if not prompt and not messages:
+            raise ValueError("Either 'prompt' or 'messages' must be provided.")
+        if prompt:
+            return [
+                {"role": "system", "content": system_prompt},
+                {"role": "user", "content": prompt},
+            ]
+        else:
+            return self.append_messages(system_prompt, messages)
+
+    def _prepare_params(self, model_name, temperature, top_p, max_tokens, full_response):
+        params = self.prepare_params(model_name, temperature, top_p)
+        params.update({
+            "api_key": self.api_key,
+            "max_tokens": max_tokens,
+            "full_response": full_response,
+        })
+        return params

-class AzureOpenAILLM(LLM):
+class AzureOpenAILLM(BaseOpenAILLM):
     def __init__(self, provider, model_name, temperature, top_p, api_key):
-        super().__init__(provider, model_name, temperature, top_p, api_key)
-        self.api_key = api_key or os.getenv("AZUREOPENAI_API_KEY", "")
+        super().__init__(provider, model_name, temperature, top_p, "AZUREOPENAI_API_KEY", api_key)

     def generate_response(
         self,
         model_name: str = None,
         prompt: str = None,
         messages: list = None,
         system_prompt: str = "You are a helpful AI Assistant",
         temperature: float = 0.7,
         max_tokens: int = 300,
         top_p: float = 1.0,
         full_response: bool = False,
     ):
-        params = self.prepare_params(model_name, temperature, top_p)
-        # Validate inputs
-        if prompt and messages:
-            raise ValueError("Only one of 'prompt' or 'messages' should be provided.")
-        if not prompt and not messages:
-            raise ValueError("Either 'prompt' or 'messages' must be provided.")
-        # Prepare messages based on input type
-        if prompt:
-            model_messages = [
-                {"role": "system", "content": system_prompt},
-                {"role": "user", "content": prompt},
-            ]
-        elif messages:
-            model_messages = self.append_messages(system_prompt, messages)
-        params.update(
-            {
-                "api_key": self.api_key,
-                "messages": model_messages,
-                "max_tokens": max_tokens,
-                "full_response": full_response,
-            }
-        )
+        model_messages = self._validate_and_prepare_messages(prompt, messages, system_prompt)
+        params = self._prepare_params(model_name, temperature, top_p, max_tokens, full_response)
+        params["messages"] = model_messages
         return azureopenai_llm.generate_response(**params)

     async def generate_response_async(
         self,
         model_name: str = None,
         prompt: str = None,
         messages: list = None,
         system_prompt: str = "You are a helpful AI Assistant",
         temperature: float = 0.7,
         max_tokens: int = 300,
         top_p: float = 1.0,
         full_response: bool = False,
     ):
-        params = self.prepare_params(model_name, temperature, top_p)
-        # Validate inputs
-        if prompt and messages:
-            raise ValueError("Only one of 'prompt' or 'messages' should be provided.")
-        if not prompt and not messages:
-            raise ValueError("Either 'prompt' or 'messages' must be provided.")
-        # Prepare messages based on input type
-        if prompt:
-            model_messages = [
-                {"role": "system", "content": system_prompt},
-                {"role": "user", "content": prompt},
-            ]
-        elif messages:
-            model_messages = self.append_messages(system_prompt, messages)
-        params.update(
-            {
-                "api_key": self.api_key,
-                "messages": model_messages,
-                "max_tokens": max_tokens,
-                "full_response": full_response,
-            }
-        )
+        model_messages = self._validate_and_prepare_messages(prompt, messages, system_prompt)
+        params = self._prepare_params(model_name, temperature, top_p, max_tokens, full_response)
+        params["messages"] = model_messages
         return await azureopenai_llm.generate_response_async(**params)
```
`92-131`: **Extract common logic in `generate_response` methods.**
The `generate_response` and `generate_response_async` methods contain repeated code for input validation and message preparation. To improve code clarity and reduce duplication, consider extracting this logic into a private helper method.

Example refactor:

```diff
+    def _prepare_request(
+        self,
+        model_name,
+        prompt,
+        messages,
+        system_prompt,
+        temperature,
+        max_tokens,
+        top_p,
+        full_response,
+    ):
+        params = self.prepare_params(model_name, temperature, top_p)
+        # Validate inputs
+        if prompt and messages:
+            raise ValueError("Only one of 'prompt' or 'messages' should be provided.")
+        if not prompt and not messages:
+            raise ValueError("Either 'prompt' or 'messages' must be provided.")
+        # Prepare messages
+        if prompt:
+            model_messages = [
+                {"role": "system", "content": system_prompt},
+                {"role": "user", "content": prompt},
+            ]
+        else:
+            model_messages = self.append_messages(system_prompt, messages)
+        params.update(
+            {
+                "api_key": self.api_key,
+                "messages": model_messages,
+                "max_tokens": max_tokens,
+                "full_response": full_response,
+            }
+        )
+        return params

     def generate_response(
         self,
         model_name: str = None,
         prompt: str = None,
         messages: list = None,
         system_prompt: str = "You are a helpful AI Assistant",
         temperature: float = 0.7,
         max_tokens: int = 300,
         top_p: float = 1.0,
         full_response: bool = False,
     ):
-        params = self.prepare_params(model_name, temperature, top_p)
-        # Validate inputs
-        # ...
-        # Prepare messages
-        # ...
-        params.update(
-            {
-                "api_key": self.api_key,
-                "messages": model_messages,
-                "max_tokens": max_tokens,
-                "full_response": full_response,
-            }
-        )
+        params = self._prepare_request(
+            model_name,
+            prompt,
+            messages,
+            system_prompt,
+            temperature,
+            max_tokens,
+            top_p,
+            full_response,
+        )
         return azureopenai_llm.generate_response(**params)

     async def generate_response_async(
         self,
         model_name: str = None,
         prompt: str = None,
         messages: list = None,
         system_prompt: str = "You are a helpful AI Assistant",
         temperature: float = 0.7,
         max_tokens: int = 300,
         top_p: float = 1.0,
         full_response: bool = False,
     ):
-        params = self.prepare_params(model_name, temperature, top_p)
-        # Validate inputs
-        # ...
-        # Prepare messages
-        # ...
-        params.update(
-            {
-                "api_key": self.api_key,
-                "messages": model_messages,
-                "max_tokens": max_tokens,
-                "full_response": full_response,
-            }
-        )
+        params = self._prepare_request(
+            model_name,
+            prompt,
+            messages,
+            system_prompt,
+            temperature,
+            max_tokens,
+            top_p,
+            full_response,
+        )
         return await azureopenai_llm.generate_response_async(**params)
```

Also applies to: 133-171
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (3)

- SimplerLLM/language/llm.py (5 hunks)
- SimplerLLM/language/llm_providers/azureopenai_llm.py (1 hunks)
- readme.md (1 hunks)
🔇 Additional comments (6)
SimplerLLM/language/llm_providers/azureopenai_llm.py (2)
`141-142`: **Ensure correct access to embedding data.**
The embeddings are assigned using `generate_embeddings = response.data`. Typically, embeddings are accessed via `response.data[0].embedding`. Verify that you are extracting the embeddings correctly.
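For reference, a minimal sketch of the typical access pattern in the v1 `openai` SDK (the deployment name is a placeholder; client arguments mirror the PR's own code):

```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.getenv("AZUREOPENAI_API_KEY"),
    api_version="2024-07-01-preview",
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
)
response = client.embeddings.create(model="your-embedding-deployment", input="hello world")
embedding = response.data[0].embedding  # a list of floats for the first input
```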
`2-3`: **Verify the imports of `AsyncAzureOpenAI` and `AzureOpenAI`.**
The classes `AsyncAzureOpenAI` and `AzureOpenAI` imported from the `openai` library may not exist. Ensure that these classes are available in the `openai` package you're using, or adjust the imports accordingly.

Run the following script to check if these classes are available in the `openai` module:
✅ Verification successful
The imports are correct for the OpenAI SDK version being used.
The codebase is using `openai==1.25.0` as specified in requirements.txt, and both `AsyncAzureOpenAI` and `AzureOpenAI` classes are correctly imported and used throughout the codebase. These classes are part of the official OpenAI v1 Python SDK, which introduced a new Azure-specific client implementation. The usage is consistent with the API version "2024-07-01-preview" specified in the code.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```bash
#!/bin/bash
# Description: Check for 'AsyncAzureOpenAI' and 'AzureOpenAI' in the openai module.

# List all attributes of the openai module and search for the classes
python -c "import openai; print(dir(openai))" | grep -E 'AzureOpenAI|AsyncAzureOpenAI'
```
Length of output: 185
Script:

```bash
#!/bin/bash
# Check the requirements.txt or setup.py files to see what version of openai is being used
fd "requirements.txt|setup.py" --exec cat {}

# Also check for any direct references to these classes in the codebase
rg "AsyncAzureOpenAI|AzureOpenAI" -A 2 -B 2
```
Length of output: 5093
SimplerLLM/language/llm.py (4)
`6-6`: **Import statement added correctly.**
The import of the `azureopenai_llm` module is properly added, ensuring that Azure OpenAI functions are accessible.
`19-19`: **Enum updated with new provider.**
The `LLMProvider` enum now includes `AZUREOPENAI`, allowing the selection of Azure OpenAI as a provider.
`59-61`: **Factory method updated to support Azure OpenAI.**
The `create` method correctly handles the `AZUREOPENAI` provider by returning an instance of `AzureOpenAILLM`. A minimal usage sketch follows.
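A hedged illustration of the factory path (the deployment name is a placeholder, the import path follows this repository's layout, and the enum member name follows the review text above):

```python
from SimplerLLM.language.llm import LLM, LLMProvider

# Assumes AZUREOPENAI_API_KEY and AZURE_OPENAI_ENDPOINT are set in the environment.
llm_instance = LLM.create(LLMProvider.AZUREOPENAI, model_name="your-deployment-name")
response = llm_instance.generate_response(prompt="generate a 5 words sentence")
print(response)
```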
`666-666`: **Ensure correct parameter passing in `LwhLLM`.**
The line appears to be part of the `LwhLLM` class. Verify that all required parameters are correctly passed to `lwh_llm.generate_response_async`.

Run the following script to check for parameter consistency:
✅ Verification successful
Parameter passing to `generate_response_async` is correct.
All required parameters from the `generate_response_async` function signature are correctly included in the `params` dictionary being passed, including `user_id`, `messages`, `system_prompt`, `max_tokens`, `temperature`, `top_p`, and `full_response`. The parameters match both in name and type with the function definition.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```bash
#!/bin/bash
# Description: Check that all calls to `lwh_llm.generate_response_async` pass the correct parameters.

# Search for the function definition
rg -A 10 'def generate_response_async' SimplerLLM/language/llm_providers/lwh_llm.py
```

Length of output: 392

Script:

```bash
#!/bin/bash
# Check the context of the line in question and the parameters being passed
rg -B 10 -A 2 'return await lwh_llm\.generate_response_async\(\*\*params\)' SimplerLLM/language/llm.py
```
Length of output: 534
```
AZUREOPENAI_API_KEY="your_azureopenai_api_key_here"
AZURE_OPENAI_ENDPOINT='your_azureopenai_endpoint_url_here'
```
🛠️ Refactor suggestion

**Standardize environment variable naming and add deployment name.**
The Azure OpenAI configuration needs some adjustments:

- Standardize the naming convention: `AZUREOPENAI_API_KEY` should be `AZURE_OPENAI_API_KEY` to match the endpoint variable style.
- Add the required `AZURE_OPENAI_DEPLOYMENT_NAME` environment variable, which is typically needed for Azure OpenAI API calls.
```diff
-AZUREOPENAI_API_KEY="your_azureopenai_api_key_here"
+AZURE_OPENAI_API_KEY="your_azure_openai_api_key_here"
 AZURE_OPENAI_ENDPOINT='your_azureopenai_endpoint_url_here'
+AZURE_OPENAI_DEPLOYMENT_NAME='your_azure_openai_deployment_name_here'
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```
AZURE_OPENAI_API_KEY="your_azure_openai_api_key_here"
AZURE_OPENAI_ENDPOINT='your_azureopenai_endpoint_url_here'
AZURE_OPENAI_DEPLOYMENT_NAME='your_azure_openai_deployment_name_here'
```
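For anyone wiring these up locally, a hedged sketch of loading the suggested variables (assumes the `python-dotenv` package; variable names follow the suggestion above):

```python
import os
from dotenv import load_dotenv

load_dotenv()  # reads a local .env file into the process environment
api_key = os.getenv("AZURE_OPENAI_API_KEY")
endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
deployment = os.getenv("AZURE_OPENAI_DEPLOYMENT_NAME")
```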
🛠️ Refactor suggestion

**Add Azure OpenAI usage example in code samples.**
The environment variables are added, but there's no example showing how to use Azure OpenAI in the "Creating an LLM Instance" section.

Add this example to the code samples section:
```diff
 # For Anthropic Claude
 #llm_instance = LLM.create(LLMProvider.ANTHROPIC, model_name="claude-3-opus-20240229")
+
+# For Azure OpenAI
+#llm_instance = LLM.create(provider=LLMProvider.AZUREOPENAI, model_name="your-deployment-model")

 response = llm_instance.generate_response(prompt="generate a 5 words sentence")
```
📝 Committable suggestion
```
AZUREOPENAI_API_KEY="your_azureopenai_api_key_here"
AZURE_OPENAI_ENDPOINT='your_azureopenai_endpoint_url_here'
```

```python
# For Anthropic Claude
#llm_instance = LLM.create(LLMProvider.ANTHROPIC, model_name="claude-3-opus-20240229")

# For Azure OpenAI
#llm_instance = LLM.create(provider=LLMProvider.AZUREOPENAI, model_name="your-deployment-model")

response = llm_instance.generate_response(prompt="generate a 5 words sentence")
```
```python
async def generate_response_async(
    model_name,
    messages=None,
    temperature=0.7,
    max_tokens=300,
    top_p=1.0,
    full_response=False,
    api_key=None,
):
    start_time = time.time() if full_response else None
    async_openai_client = AsyncAzureOpenAI(
        api_key=api_key,
        api_version="2024-07-01-preview",
        azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    )

    for attempt in range(MAX_RETRIES):
        try:
            completion = await async_openai_client.chat.completions.create(
                model=model_name,
                messages=messages,
                temperature=temperature,
                max_tokens=max_tokens,
                top_p=top_p,
            )
            generated_text = completion.choices[0].message.content

            if full_response:
                end_time = time.time()
                process_time = end_time - start_time
                return LLMFullResponse(
                    generated_text=generated_text,
                    model=model_name,
                    process_time=process_time,
                    llm_provider_response=completion,
                )
            return generated_text

        except Exception as e:
            if attempt < MAX_RETRIES - 1:
                await asyncio.sleep(RETRY_DELAY * (2**attempt))
            else:
                error_msg = f"Failed after {MAX_RETRIES} attempts"
                if full_response:
                    end_time = time.time()
                    process_time = end_time - start_time
                    error_msg += f" and {process_time} seconds"
                error_msg += f" due to: {e}"
                print(error_msg)
                return None
```
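For context, a hedged sketch of calling this coroutine (the deployment name is a placeholder, and the function is assumed to be importable alongside this module's constants):

```python
import asyncio
import os

async def main():
    text = await generate_response_async(
        model_name="your-deployment-name",
        messages=[{"role": "user", "content": "Say hello in five words."}],
        api_key=os.getenv("AZUREOPENAI_API_KEY"),
    )
    print(text)

asyncio.run(main())
```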
🛠️ Refactor suggestion

**Enhance error handling in `generate_response_async`.**
The asynchronous function `generate_response_async` uses `print` statements for error messages and returns `None` on failure. Align the error handling with the synchronous version by using the `logging` module and raising exceptions.

Apply similar changes as suggested for `generate_response`:

- Use `logging.error(error_msg)` for error messages.
- Raise an exception instead of returning `None`, as in the sketch below.
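A minimal, self-contained sketch of the suggested pattern (hedged: `call_with_retries` is a hypothetical helper, and `RuntimeError` is one reasonable exception choice, not mandated by the review):

```python
import asyncio
import logging

logger = logging.getLogger(__name__)

async def call_with_retries(coro_factory, max_retries=3, retry_delay=1.0):
    """Retry an async call with exponential backoff, logging and raising on final failure."""
    for attempt in range(max_retries):
        try:
            return await coro_factory()
        except Exception as e:
            if attempt < max_retries - 1:
                await asyncio.sleep(retry_delay * (2 ** attempt))
            else:
                error_msg = f"Failed after {max_retries} attempts due to: {e}"
                logger.error(error_msg)
                raise RuntimeError(error_msg) from e
```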
```python
        except Exception as e:
            if attempt < MAX_RETRIES - 1:
                await asyncio.sleep(RETRY_DELAY * (2**attempt))
            else:
                error_msg = f"Failed after {MAX_RETRIES} attempts"
                if full_response:
                    end_time = time.time()
                    process_time = end_time - start_time
                    error_msg += f" and {process_time} seconds"
                error_msg += f" due to: {e}"
                print(error_msg)
                return None
```
🛠️ Refactor suggestion

**Enhance error handling in `generate_embeddings_async`.**
Align the asynchronous embeddings function with the error handling practices used elsewhere:

- Use `logging.error(error_msg)` for error messages.
- Raise exceptions instead of returning `None`.

Apply the following changes:
Apply the following changes:
+import logging
...
200 except Exception as e:
201 if attempt < MAX_RETRIES - 1:
202 await asyncio.sleep(RETRY_DELAY * (2**attempt))
203 else:
204 error_msg = f"Failed after {MAX_RETRIES} attempts"
205 if full_response:
206 end_time = time.time()
207 process_time = end_time - start_time
208 error_msg += f" and {process_time} seconds"
209 error_msg += f" due to: {e}"
-210 print(error_msg)
-211 return None
+210 logging.error(error_msg)
+211 raise Exception(error_msg)
Committable suggestion skipped: line range outside the PR's diff.
```python
async def generate_embeddings_async(
    model_name,
    user_input=None,
    full_response=False,
    api_key=None,
):
```
**Add input validation for `model_name` in `generate_embeddings_async`.**
Similar to the synchronous version, ensure that `model_name` is provided and is a non-empty string.

Apply this diff:

```diff
 async def generate_embeddings_async(
     model_name,
     user_input=None,
     full_response=False,
     api_key=None,
 ):
+    if not model_name:
+        raise ValueError("model_name must be provided and cannot be empty.")
```
📝 Committable suggestion
```python
async def generate_embeddings_async(
    model_name,
    user_input=None,
    full_response=False,
    api_key=None,
):
    if not model_name:
        raise ValueError("model_name must be provided and cannot be empty.")
```
```python
def generate_embeddings(
    model_name,
    user_input=None,
    full_response=False,
    api_key=None
):
```
**Add input validation for `model_name` in `generate_embeddings`.**
The `model_name` parameter is critical for the API call but lacks validation. Ensure that `model_name` is provided and is a non-empty string.

Apply this diff to add validation:

```diff
 def generate_embeddings(
     model_name,
     user_input=None,
     full_response=False,
     api_key=None
 ):
+    if not model_name:
+        raise ValueError("model_name must be provided and cannot be empty.")
```
📝 Committable suggestion
```python
def generate_embeddings(
    model_name,
    user_input=None,
    full_response=False,
    api_key=None
):
    if not model_name:
        raise ValueError("model_name must be provided and cannot be empty.")
```
```python
        except Exception as e:
            if attempt < MAX_RETRIES - 1:
                time.sleep(RETRY_DELAY * (2**attempt))
            else:
                error_msg = f"Failed after {MAX_RETRIES} attempts"
                if full_response:
                    end_time = time.time()
                    process_time = end_time - start_time
                    error_msg += f" and {process_time} seconds"
                error_msg += f" due to: {e}"
                print(error_msg)
                return None
```
🛠️ Refactor suggestion

**Improve exception handling and use proper logging in `generate_embeddings`.**
Similar to previous suggestions, replace `print(error_msg)` with `logging.error(error_msg)` and raise exceptions instead of returning `None`.

Apply the following changes:
Apply the following changes:
+import logging
...
154 except Exception as e:
155 if attempt < MAX_RETRIES - 1:
156 time.sleep(RETRY_DELAY * (2**attempt))
157 else:
158 error_msg = f"Failed after {MAX_RETRIES} attempts"
159 if full_response:
160 end_time = time.time()
161 process_time = end_time - start_time
162 error_msg += f" and {process_time} seconds"
163 error_msg += f" due to: {e}"
-164 print(error_msg)
-165 return None
+164 logging.error(error_msg)
+165 raise Exception(error_msg)
Committable suggestion skipped: line range outside the PR's diff.
```python
            api_version="2024-07-01-preview",
            azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT")
```
💡 Codebase verification

**Azure OpenAI API version should be reviewed.**
The code currently uses the preview version "2024-07-01-preview". Prefer a stable version such as "2023-05-15", or check the Azure OpenAI documentation for the latest stable version.

- In `SimplerLLM/language/llm_providers/azureopenai_llm.py`, update the `api_version` in all four instances where it's set to "2024-07-01-preview". A hedged sketch follows.
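For illustration, a sketch of pinning a stable version when constructing the client (constructor arguments mirror the PR's own code; the exact GA version string is an assumption and should be confirmed against Azure's documentation):

```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.getenv("AZUREOPENAI_API_KEY"),
    api_version="2023-05-15",  # assumption: the GA version named in the review; verify before use
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
)
```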
🔗 Analysis chain

**Verify the correctness of the `api_version`.**
The `api_version` is set to "2024-07-01-preview". Please confirm that this is the correct API version for Azure OpenAI. The latest available version might be different, which could affect compatibility.

You can run the following script to list available API versions:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```bash
#!/bin/bash
# Description: List available API versions for Azure OpenAI.
# Note: Replace '<resource-group>' and '<resource-name>' with your actual resource group and resource name.
az openai account deployment list --resource-group <resource-group> --resource-name <resource-name> --query '[].apiVersion'
```

Length of output: 181

Script:

```bash
#!/bin/bash
# Check the Azure OpenAI documentation or configuration files in the codebase
rg -i "api[_-]version.*azure.*openai" -g "*.{py,json,yaml,yml,md,txt}" -B 2 -A 2
```

Length of output: 82

Script:

```bash
#!/bin/bash
# Search for any Azure OpenAI related configuration or constants
ast-grep --pattern 'api_version = "$_"'
```

Length of output: 41

Script:

```bash
#!/bin/bash
# Look for any Azure OpenAI related documentation or README files
fd -e md -e txt -e rst | xargs rg -i "azure.*openai" -B 2 -A 2
```

Length of output: 2224

Script:

```bash
#!/bin/bash
# Check the specific file content to understand the context better
cat SimplerLLM/language/llm_providers/azureopenai_llm.py
```

Length of output: 7566

Script:

```bash
#!/bin/bash
# Search for any version-related imports or configurations
rg -i "2024-07-01-preview" -g "*.{py,json,yaml,yml,md,txt}" -B 2 -A 2
```

Length of output: 2219
```python
        except Exception as e:
            if attempt < MAX_RETRIES - 1:
                time.sleep(RETRY_DELAY * (2**attempt))
            else:
                error_msg = f"Failed after {MAX_RETRIES} attempts"
                if full_response:
                    end_time = time.time()
                    process_time = end_time - start_time
                    error_msg += f" and {process_time} seconds"
                error_msg += f" due to: {e}"
                print(error_msg)
                return None
```
🛠️ Refactor suggestion

**Improve exception handling and use proper logging.**
Currently, errors are being printed using `print(error_msg)`, and the function returns `None` on failure. Consider using the `logging` module for error messages and raising exceptions to inform the caller of failures.

Apply the following changes:

- Import the `logging` module and configure it as needed.
- Replace `print(error_msg)` with `logging.error(error_msg)`.
- Instead of returning `None`, raise an exception with the error message.

Example diff:
```diff
+import logging
 ...
         except Exception as e:
             if attempt < MAX_RETRIES - 1:
                 time.sleep(RETRY_DELAY * (2**attempt))
             else:
                 error_msg = f"Failed after {MAX_RETRIES} attempts"
                 if full_response:
                     end_time = time.time()
                     process_time = end_time - start_time
                     error_msg += f" and {process_time} seconds"
                 error_msg += f" due to: {e}"
-                print(error_msg)
-                return None
+                logging.error(error_msg)
+                raise Exception(error_msg)
```
Committable suggestion skipped: line range outside the PR's diff.
Added AzureOpenAI API call functions that still use the existing dependencies (the openai library) but allow usage of deployed Azure endpoints. Updated README.md with an env template example.
Summary by CodeRabbit

Release Notes

New Features

- Added support for Azure OpenAI as an LLM provider, configured via the `AZUREOPENAI_API_KEY` and `AZURE_OPENAI_ENDPOINT` environment variables.

Documentation

- Updated readme.md with an Azure OpenAI environment variable template.