Python: [BREAKING] Observability updates #2782
base: main
Conversation
…milar to AzureOpenAIChatClient Fixes microsoft#2186
449f8e1 to 7f77071
Copilot encountered an error and was unable to review this pull request. You can try again by re-requesting a review.
    async def setup_azure_ai_observability(self, enable_sensitive_data: bool | None = None) -> None:
        """Use this method to setup tracing in your Azure AI Project.
This should be setup_observability_with_azure_monitor for more clarity.
    actual_traces_endpoint = traces_endpoint or endpoint
    actual_metrics_endpoint = metrics_endpoint or endpoint
    actual_logs_endpoint = logs_endpoint or endpoint
For completeness, we should check whether these result in None. If they all do, this function will essentially be a no-op.
I think overall we are moving in a better direction. Just so I understand correctly: with these changes, setup_observability will no longer take care of setting up the exporters for Azure Monitor. Is that correct? If so, I have the following questions:
- How would users set up two backends? Say Aspire and Azure Monitor. It looks like they can only do it manually now.
- What would be your thoughts on separating instrumentation and exporting? We are currently mixing both.
    @pytest.mark.skipif(
        True,
Should we create a test env for testing? Otherwise, these tests will never get run.
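One common pattern is to gate the skip on an environment variable, so the tests run whenever a test environment is configured instead of never. A sketch; the variable name AZURE_AI_PROJECT_ENDPOINT is assumed here for illustration:

```python
import os

import pytest

# Skip unless a test environment is configured, rather than skipif(True, ...).
requires_azure_ai = pytest.mark.skipif(
    not os.environ.get("AZURE_AI_PROJECT_ENDPOINT"),
    reason="AZURE_AI_PROJECT_ENDPOINT not set; skipping Azure AI integration tests.",
)

@requires_azure_ai
def test_setup_azure_ai_observability():
    ...
```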
Do we want to include this file in the repo?
This one too. Maybe we should maintain a change log instead?
        enable_performance_counters=False,
    )
    print("Configured Azure Monitor for Application Insights.")
    setup_observability(enable_sensitive_data=True, disable_exporter_creation=True)
This is what I am referring to in the second question of the previous comment: https://github.com/microsoft/agent-framework/pull/2782/files#r2611570436
Calling setup_observability after configure_azure_monitor seems very confusing, especially since all setup_observability does here is enable sensitive data.
Perhaps we should do the following:
- Create a separate method instrument(enable_sensitive_data=False) whose sole purpose is to instrument the code. This is similar to what other OTel instrumentation packages do, for example: https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation-genai/opentelemetry-instrumentation-openai-v2. In our case, we don't need to inject instrumentation code by monkeypatching; we just need to configure the two env variables: ENABLE_OBSERVABILITY and ENABLE_SENSITIVE_DATA.
- A dedicated method for setting up the backends.
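The split being proposed could look roughly like this. All names are illustrative, not the actual API; only the two env variables come from the discussion above:

```python
import os

def instrument(enable_sensitive_data: bool = False) -> None:
    """Instrumentation only: flip the switches that make the framework emit
    telemetry, without touching exporters. Env variable names are from the
    review discussion; the function name is hypothetical."""
    os.environ["ENABLE_OBSERVABILITY"] = "true"
    os.environ["ENABLE_SENSITIVE_DATA"] = "true" if enable_sensitive_data else "false"

def setup_exporters(*backends: str) -> list[str]:
    """Backend wiring only: a placeholder for the dedicated exporter-setup
    method (e.g. OTLP for Aspire, Azure Monitor)."""
    return list(backends)

# Users who want two backends call each concern explicitly:
instrument(enable_sensitive_data=True)
configured = setup_exporters("aspire-otlp", "azure-monitor")
```

Keeping the two concerns in separate entry points avoids the confusing "configure exporter, then call setup_observability anyway" flow shown in the diff above.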
Motivation and Context
Made the setup a lot simpler, and more compatible:
configure_azure_monitor): you run that first, and then only call setup_observability when you want to programmatically ensure that the code paths that emit traces, metrics, etc. are enabled. If you set the right env variables, then that is not needed.

Description
Also closes: #2649
Contribution Checklist