Conversation


@eavanvalkenburg commented Dec 11, 2025

Motivation and Context

Made the setup a lot simpler and more compatible:

  • simplified the env variables in ObservabilitySettings
  • adjusted the logic for the enabled and sensitive-data flags
  • uses the OTEL standard env variables to create exporters fully dynamically
  • removed the gRPC exporter dependency; instead, install the exporter(s) you need (gRPC, HTTP, or others such as Azure Monitor), and the setup tries to import them, raising a clear message when they are missing
  • adjusted the sample and readme to reflect the changes. The idea is now: if you need a complex setup (for example with configure_azure_monitor), run that first, and then call setup_observability only when you want to programmatically ensure that the code paths emitting traces, metrics, etc. are enabled. If you set the right env variables, that call is not needed (see the sketch below).
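
A minimal sketch of the intended flow, assuming setup_observability is exported from agent_framework.observability (the module this PR touches) and that ENABLE_OBSERVABILITY / ENABLE_SENSITIVE_DATA are the simplified settings names; the endpoint variable is the OTEL standard one:

```python
# Install the exporter you actually want first, e.g.:
#   pip install opentelemetry-exporter-otlp-proto-grpc
#
# Configure via env variables (names are assumptions based on this PR's description):
#   OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317   # standard OTEL variable
#   ENABLE_OBSERVABILITY=true
#   ENABLE_SENSITIVE_DATA=false
from agent_framework.observability import setup_observability

# With the env variables set, this call only makes sure the instrumented code
# paths are enabled; the exporters are created dynamically from the env variables.
setup_observability()
```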

Description

Also closes: #2649

Contribution Checklist

  • The code builds clean without any errors or warnings
  • The PR follows the Contribution Guidelines
  • All unit tests pass, and I have added new tests where possible
  • Is this a breaking change? If yes, add "[BREAKING]" prefix to the title of the PR.

Copilot AI review requested due to automatic review settings December 11, 2025 11:34
@markwallace-microsoft added the documentation (Improvements or additions to documentation), python, and lab (Agent Framework Lab) labels Dec 11, 2025

markwallace-microsoft commented Dec 11, 2025

Python Test Coverage

Python Test Coverage Report

| File | Stmts | Miss | Cover | Missing |
| --- | ---: | ---: | ---: | --- |
| packages/a2a/agent_framework_a2a | | | | |
| _agent.py | 139 | 7 | 94% | 354–355, 392–393, 422–424 |
| packages/azure-ai/agent_framework_azure_ai | | | | |
| _client.py | 171 | 37 | 78% | 220–223, 228, 231–234, 239, 242–243, 246, 253, 292, 294–297, 299, 436–439, 443, 445–446, 448–456, 458 |
| packages/core/agent_framework | | | | |
| _agents.py | 289 | 52 | 82% | 329, 390–392, 438, 492, 510, 672, 853, 856–858, 986–989, 991, 994–996, 1084, 1125, 1127, 1136–1141, 1147, 1149, 1159–1160, 1167, 1169–1170, 1178–1182, 1190–1191, 1193, 1198, 1200, 1234, 1279–1280, 1282, 1284, 1295 |
| _clients.py | 100 | 9 | 91% | 267, 383, 431–434, 478, 799, 801 |
| _memory.py | 69 | 15 | 78% | 119, 140, 158, 168, 185, 255, 259, 287–288, 291–292, 294, 310–312 |
| _serialization.py | 105 | 10 | 90% | 335, 347–348, 357, 516, 532, 542, 554, 610, 613 |
| _types.py | 952 | 101 | 89% | 130–131, 149–150, 287, 289, 296, 315, 355, 401–402, 438, 588, 702–703, 705, 730, 737, 754–756, 829, 834–835, 837, 844–845, 847, 869, 876, 879–881, 886–887, 893–895, 1019, 1106–1109, 1117–1118, 1209, 1390, 1396, 1640–1642, 1648–1649, 1951, 1956, 1960, 1964, 2142–2144, 2156, 2207–2211, 2221, 2226, 2685, 2771–2773, 2846, 2857–2858, 3032, 3036, 3048–3050, 3151–3153, 3155–3157, 3160, 3164, 3167, 3172, 3217–3218, 3225–3226, 3260–3262, 3277, 3293, 3321, 3328 |
| observability.py | 647 | 154 | 76% | 244, 312–317, 319, 321, 323–325, 328–330, 335–336, 342–343, 349–350, 357, 359–361, 364–366, 371–372, 378–379, 385–386, 393, 430, 433, 436–438, 441, 444–445, 448–450, 452–454, 457, 550, 552, 634, 652–653, 655, 658, 666–667, 670–673, 675, 678–680, 683–684, 697–703, 705–714, 717–721, 724–727, 729–732, 735–736, 744, 845, 847, 872–874, 996, 998, 1002–1007, 1009, 1012–1016, 1018, 1288, 1368–1370, 1442–1444, 1617, 1625, 1629, 1633, 1639, 1641, 1643, 1651, 1661, 1689–1690, 1703–1706, 1720, 1722, 1729, 1745, 1748, 1808, 1824, 1828, 1962, 1964 |
| TOTAL | 16640 | 2631 | 84% | |

Python Unit Test Overview

| Tests | Skipped | Failures | Errors | Time |
| ---: | ---: | ---: | ---: | ---: |
| 2410 | 144 💤 | 0 ❌ | 0 🔥 | 57.011s ⏱️ |


Copilot AI left a comment


Copilot encountered an error and was unable to review this pull request. You can try again by re-requesting a review.


async def setup_azure_ai_observability(self, enable_sensitive_data: bool | None = None) -> None:
    """Use this method to setup tracing in your Azure AI Project."""
Contributor

This should be setup_observability_with_azure_monitor for more clarity.

Comment on lines +311 to +313
actual_traces_endpoint = traces_endpoint or endpoint
actual_metrics_endpoint = metrics_endpoint or endpoint
actual_logs_endpoint = logs_endpoint or endpoint
Contributor

For completeness, we should check if these result in None. If so, this function will essentially be a noop.
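
For illustration, a sketch of such a guard as it might sit inside the function under review (logger is assumed to exist in the module; the variable names mirror the snippet above):

```python
actual_traces_endpoint = traces_endpoint or endpoint
actual_metrics_endpoint = metrics_endpoint or endpoint
actual_logs_endpoint = logs_endpoint or endpoint
if not (actual_traces_endpoint or actual_metrics_endpoint or actual_logs_endpoint):
    # No endpoint configured anywhere: warn and bail out instead of a silent no-op.
    logger.warning("No OTLP endpoint configured; no exporters will be created.")
    return
```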

Contributor

I think overall we are moving towards a better direction. Just so I understand correctly, with these changes, setup_observability will no longer take care of setting up the exporters for Azure Monitor. Is that correct? If so, I have the following questions:

  1. How would users set up two backends, say Aspire and Azure Monitor? It looks like they can only do it manually now (see the sketch after this list).
  2. What would be your thoughts on separating instrumentation and exporting? We are currently mixing both.
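
For the first question, one way to wire two backends today is to build the tracer provider manually with the OpenTelemetry SDK. This is illustrative only (not an API from this PR); the Aspire endpoint and the connection string are placeholders:

```python
from azure.monitor.opentelemetry.exporter import AzureMonitorTraceExporter
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

provider = TracerProvider()
# Aspire dashboard listening on the default OTLP/gRPC port.
provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317")))
# Azure Monitor as a second backend on the same provider.
provider.add_span_processor(
    BatchSpanProcessor(AzureMonitorTraceExporter.from_connection_string("InstrumentationKey=..."))
)
trace.set_tracer_provider(provider)
```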



@pytest.mark.skipif(
True,
Contributor

Should we create a test env for testing? Otherwise, these tests will never get run.
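
A possible alternative, assuming the tests only need a reachable OTLP endpoint (the test name is hypothetical):

```python
import os

import pytest


@pytest.mark.skipif(
    not os.environ.get("OTEL_EXPORTER_OTLP_ENDPOINT"),
    reason="requires a running OTLP collector; set OTEL_EXPORTER_OTLP_ENDPOINT to enable",
)
def test_exporters_created_from_env() -> None:
    # Hypothetical integration test body; runs only when the endpoint is set.
    ...
```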

Contributor

Do we want to include this file in the repo?

Contributor

This one too. Maybe we should maintain a change log instead?

enable_performance_counters=False,
)
print("Configured Azure Monitor for Application Insights.")
setup_observability(enable_sensitive_data=True, disable_exporter_creation=True)
Contributor

This is what I am referring to in the second question of the previous comment: https://github.com/microsoft/agent-framework/pull/2782/files#r2611570436

Calling setup_observability after configure_azure_monitor seems very confusing, especially since all setup_observability does here is enable sensitive data.

Perhaps we should do the following:

  1. Create a separate method instrument(enable_sensitive_data=False) whose sole purpose is to instrument the code. This is similar to what other OTel instrumentation packages do, for example: https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation-genai/opentelemetry-instrumentation-openai-v2. In our case, we don't need to inject instrumentation code by monkeypatching; we just need to configure the two env variables: ENABLE_OBSERVABILITY and ENABLE_SENSITIVE_DATA.
  2. A dedicated method for setting up the backends (sketched below).
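
A purely hypothetical sketch of that split (neither function exists in this PR; ENABLE_OBSERVABILITY and ENABLE_SENSITIVE_DATA are the env variables mentioned above):

```python
import os


def instrument(enable_sensitive_data: bool = False) -> None:
    """Only enable the instrumented code paths by setting the two env variables."""
    os.environ["ENABLE_OBSERVABILITY"] = "true"
    os.environ["ENABLE_SENSITIVE_DATA"] = str(enable_sensitive_data).lower()


def setup_exporters() -> None:
    """Separately wire up the backends, e.g. from the OTEL standard env variables."""
    ...
```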


Development

Successfully merging this pull request may close these issues.

Python: setup_observability does not support Application Insights QuickPulse aka Live Metrics
