
Added tracing metadata for LiteLLM provider #86

Merged
tysonthomas9 merged 1 commit into main from feat/litellm-tracing on Dec 29, 2025

Conversation

@Daedaelius (Contributor) commented Dec 26, 2025

Summary by CodeRabbit

Release Notes

  • New Features

    • Added comprehensive tracing and observability support for LLM calls and agent evaluations, enabling enhanced session grouping and request tracking.
    • Implemented automatic tracing metadata propagation throughout the API request and evaluation pipeline.
  • Improvements

    • Enhanced configuration resilience with automatic fallbacks for model endpoints and credentials.


@chatgpt-codex-connector

Codex usage limits have been reached for code reviews. Please ask the admins of this repo to increase the limits by adding credits; credits are required to enable repository-wide code reviews.

@claude

claude bot commented Dec 26, 2025

Claude finished @Daedaelius's task


PR Code Review

Reviewing PR #86: "Added tracing metadata for LiteLLM provider"

Review Checklist:

  • Read repository guidelines (CLAUDE.md)
  • Analyze the 14 modified files to understand changes
  • Review code quality and best practices
  • Check for potential bugs or issues
  • Evaluate performance considerations
  • Assess security concerns
  • Review test coverage implications
  • Provide comprehensive feedback

Summary

This PR implements tracing metadata support for the LiteLLM provider to enable Langfuse integration. The changes establish a metadata flow from the evaluation framework through the API server, browser agent server, and frontend LLM client to the LiteLLM provider.

✅ Positive Aspects

1. Clean Architecture

  • Good separation of concerns with tracing metadata flowing through proper channels
  • Consistent interface design across the stack
  • Proper fallback mechanisms in LLMClient.ts:216-232

2. Robust Implementation

  • Graceful handling of missing tracing metadata with fallbacks
  • Good logging for debugging tracing flow
  • Proper TypeScript interfaces in LLMTypes.ts

3. Environment Variable Support

  • Smart endpoint resolution in api-server.js:596-601 with proper fallback chain
  • Automatic API key fallback for LiteLLM provider

🔍 Issues Found

1. Code Quality Issues

agent-server/nodejs/src/api-server.js:596-601 - Complex nested logic:

const getEndpoint = (tierConfig) => {
  const explicitEndpoint = tierConfig?.endpoint || requestBody.model.endpoint;
  if (explicitEndpoint) return explicitEndpoint;
  // Use env var default for litellm provider
  if (tierConfig?.provider === 'litellm') return defaultLiteLLMEndpoint;
  return undefined;
};

Recommendation: Extract this logic into a separate method resolveEndpointForTier() for better readability and testability.
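As a rough illustration of that extraction (the tier and request shapes here are assumptions inferred from the snippet above, not the server's actual types):

```typescript
// Sketch of the suggested resolveEndpointForTier() helper; TierConfig and
// ModelBody are assumed shapes based on the reviewed snippet, not real types.
interface TierConfig { provider?: string; endpoint?: string; }
interface ModelBody { endpoint?: string; }

function resolveEndpointForTier(
  tierConfig: TierConfig | undefined,
  model: ModelBody,
  defaultLiteLLMEndpoint?: string,
): string | undefined {
  // Explicit endpoints (tier-level first, then model-level) always win.
  const explicit = tierConfig?.endpoint ?? model.endpoint;
  if (explicit) return explicit;
  // Only the litellm provider falls back to the env-var default.
  if (tierConfig?.provider === 'litellm') return defaultLiteLLMEndpoint;
  return undefined;
}
```

Being a free function, this resolution order (tier endpoint, model endpoint, env default) becomes straightforward to unit-test in isolation.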

2. Potential Bugs

front_end/panels/ai_chat/LLM/LLMClient.ts:214 - Async context loss risk:
The fallback to global tracing context could fail in complex async scenarios. Consider passing tracing metadata explicitly through all call chains rather than relying on global state.
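A minimal sketch of why the global fallback is risky under interleaved requests (all names here are hypothetical, not the actual LLMClient code):

```typescript
// Hypothetical illustration: a mutable global context can be overwritten by
// an interleaved request before the awaited call reads it, while explicitly
// passed metadata is bound at call time and stays stable.
let globalTracingContext: { trace_id?: string } | undefined;

async function callWithGlobal(): Promise<string | undefined> {
  await Promise.resolve();               // another request may run here...
  return globalTracingContext?.trace_id; // ...so this may read its metadata
}

async function callWithExplicit(
  meta: { trace_id?: string },
): Promise<string | undefined> {
  await Promise.resolve();
  return meta.trace_id;                  // captured at call time, interleaving-safe
}
```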

3. Security Concerns

front_end/panels/ai_chat/LLM/LiteLLMProvider.ts:214-216 - Metadata exposure:

if (options?.tracingMetadata) {
  payloadBody.metadata = options.tracingMetadata;
}

Concern: Tracing metadata is passed directly to external LiteLLM service without validation or sanitization. Could potentially expose sensitive information.
Recommendation: Implement metadata filtering to only pass whitelisted fields.
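A sketch of such a whitelist filter; the allowed field names are taken from the tracing fields discussed in this PR, and the helper itself is hypothetical, not part of the provider's code:

```typescript
// Only forward whitelisted tracing fields to the external LiteLLM service.
// The field list mirrors the tracing metadata shape used in this PR.
const ALLOWED_TRACING_FIELDS = ['session_id', 'trace_id', 'generation_name', 'tags'] as const;

function sanitizeTracingMetadata(
  metadata: Record<string, unknown> | undefined,
): Record<string, unknown> | undefined {
  if (!metadata || typeof metadata !== 'object') return undefined;
  const out: Record<string, unknown> = {};
  for (const key of ALLOWED_TRACING_FIELDS) {
    if (key in metadata) out[key] = metadata[key];
  }
  // Omit the payload field entirely when nothing whitelisted survives.
  return Object.keys(out).length > 0 ? out : undefined;
}
```

The provider could then do `payloadBody.metadata = sanitizeTracingMetadata(options?.tracingMetadata)` so that accidentally attached secrets never leave the process.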

4. Missing Error Handling

agent-server/nodejs/src/lib/BrowserAgentServer.js:739-744 - Debug logging without error handling:
The tracing metadata is accessed without checking for potential errors in the object structure.

🏗️ Architecture Considerations

1. Performance

  • ✅ Good: Tracing metadata is only processed when present
  • ✅ Good: Minimal overhead added to existing request flow
  • ⚠️ Consider: Additional object cloning could impact performance in high-throughput scenarios

2. Maintainability

  • ⚠️ The tracing metadata flow spans many files, making it harder to track
  • ⚠️ Mixed patterns: some places use explicit metadata passing, others rely on global context

🧪 Test Coverage

Missing Test Cases:

  1. Tracing metadata validation and sanitization
  2. Fallback behavior when global tracing context is unavailable
  3. Error scenarios when tracing metadata is malformed
  4. Integration tests for end-to-end metadata flow

📋 Recommendations

High Priority:

  1. Add metadata sanitization in LiteLLMProvider.ts before external API calls
  2. Add error handling for malformed tracing objects
  3. Extract complex endpoint resolution logic into separate methods

Medium Priority:

  1. Add comprehensive tests for tracing metadata flow
  2. Consider making tracing metadata passing more consistent (either all explicit or all context-based)
  3. Add validation for required tracing fields

Low Priority:

  1. Add JSDoc documentation for new tracing-related interfaces
  2. Consider adding metrics for tracing metadata usage

🎯 Overall Assessment

This is a well-architected addition that properly implements tracing metadata support. The code follows existing patterns and handles edge cases reasonably well. The main concerns are around security (metadata sanitization) and consistency (mixed explicit/context-based patterns).

Recommendation: Approve with minor changes - Address the metadata sanitization security concern before merging.


@coderabbitai

coderabbitai bot commented Dec 26, 2025

📝 Walkthrough

Walkthrough

The changes introduce end-to-end tracing metadata propagation from server-side API request handling through client-side LLM calls and agent evaluation flows. Server-side enhancements add environment-driven fallback mechanisms for LiteLLM configuration (endpoint and API key). Tracing metadata is extracted, forwarded, and preserved across protocol layers, evaluation requests, and tool invocations.

Changes

Cohort / File(s) Summary
Server-side request handling
agent-server/nodejs/src/api-server.js, agent-server/nodejs/src/lib/BrowserAgentServer.js
Added tracing metadata extraction and forwarding through dynamic request creation. Introduced environment-based fallbacks for LiteLLM endpoint and API key (LITELLM_ENDPOINT, LITELLM_API_KEY). Extended createDynamicRequestNested signature to accept tracingMetadata parameter and include it in returned EvaluationRequest.
LLM client and provider integration
front_end/panels/ai_chat/LLM/LLMClient.ts, front_end/panels/ai_chat/LLM/LLMTypes.ts, front_end/panels/ai_chat/LLM/LiteLLMProvider.ts
Added tracingMetadata field to LLMCallRequest and LLMCallOptions interfaces. Updated LLMClient.call to resolve tracingMetadata from request or global context, then pass to provider. Modified LiteLLMProvider to include metadata in callWithMessages payload when provided.
Agent execution framework
front_end/panels/ai_chat/agent_framework/AgentRunner.ts, front_end/panels/ai_chat/core/AgentNodes.ts, front_end/panels/ai_chat/core/AgentService.ts
Propagated tracing metadata through agent LLM calls. Modified AgentService to allow missing API key in AUTOMATED_MODE and normalize it to empty string. Updated AgentService to forward existing evaluation metadata within tracingContext.
Evaluation framework
front_end/panels/ai_chat/evaluation/EvaluationAgent.ts, front_end/panels/ai_chat/evaluation/EvaluationProtocol.ts, front_end/panels/ai_chat/evaluation/remote/EvaluationAgent.ts, front_end/panels/ai_chat/evaluation/remote/EvaluationProtocol.ts
Added optional tracing field to EvaluationParams. Updated both local and remote EvaluationAgent to read tracing metadata from requests, use provided trace/session IDs with fallbacks, and propagate metadata through evaluation lifecycle and tool execution. Enhanced logging for tracing metadata receipt.
Tracing infrastructure
front_end/panels/ai_chat/tools/LLMTracingWrapper.ts, front_end/panels/ai_chat/tracing/TracingProvider.ts
Added metadata field to TracingContext interface. Updated LLMTracingWrapper to forward tracingContext metadata as tracingMetadata in LLM client calls.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant APIServer
    participant LLMClient
    participant LLMProvider
    participant TracingService

    Client->>APIServer: POST with tracing metadata<br/>(requestBody.metadata)
    Note over APIServer: Extract tracingMetadata<br/>from requestBody
    
    APIServer->>APIServer: createDynamicRequestNested<br/>(input, config, tracingMetadata)
    Note over APIServer: Read fallback endpoints<br/>from LITELLM_ENDPOINT<br/>Read fallback API key<br/>from LITELLM_API_KEY
    
    APIServer->>APIServer: Create EvaluationRequest<br/>with tracing field
    
    APIServer->>LLMClient: call(request with<br/>tracingMetadata)
    
    alt Explicit tracingMetadata
        LLMClient->>LLMClient: Use request.tracingMetadata
    else No explicit metadata
        LLMClient->>TracingService: getCurrentTracingContext()
        TracingService-->>LLMClient: Return global context
    end
    
    Note over LLMClient: Merge metadata into<br/>options.tracingMetadata
    
    LLMClient->>LLMProvider: callWithMessages<br/>(payload with metadata)
    
    Note over LLMProvider: Include metadata in<br/>provider payload
    
    LLMProvider-->>LLMClient: Response
    LLMClient-->>APIServer: Result with tracing
    APIServer-->>Client: Response

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes


Suggested reviewers

  • olesho
  • tysonthomas9

Poem

🐰 Hops through the traces
Metadata dances from queue to LLM,
Each hop leaves a trail,
Sessions grouped in the moon-light glow,
Langfuse smiles as whispers flow.

Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)

  • Docstring Coverage (⚠️ Warning): Docstring coverage is 50.00%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.
  • Title check (❓ Inconclusive): The title mentions only the LiteLLM provider, but the changeset covers much broader tracing integration across the codebase (frontend, backend, agent framework, evaluation systems). Consider a title such as 'Add comprehensive tracing metadata integration across LLM calls' to reflect the full scope.

✅ Passed checks (1 passed)

  • Description Check (✅ Passed): Check skipped because CodeRabbit's high-level summary is enabled.

@coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (4)
agent-server/nodejs/src/api-server.js (2)

551-556: Consider guarding requestBody.metadata shape before treating it as tracing metadata

Deriving const tracingMetadata = requestBody.metadata || {} works when metadata is an object, but if callers accidentally send a non-object (string/number), that value will be forwarded into tracing and down into evaluation/tracing code. A lightweight guard such as typeof requestBody.metadata === 'object' && requestBody.metadata !== null would keep the protocol robust against malformed inputs.
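The suggested guard can be sketched as follows (the request-body shape is an assumption based on the comment above, and the helper name is hypothetical):

```typescript
// Treat metadata as tracing metadata only when it is a plain non-null object;
// strings, numbers, and arrays sent by misbehaving callers collapse to {}.
function extractTracingMetadata(body: { metadata?: unknown }): Record<string, unknown> {
  const m = body.metadata;
  return typeof m === 'object' && m !== null && !Array.isArray(m)
      ? (m as Record<string, unknown>)
      : {};
}
```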


585-617: LiteLLM endpoint/API key fallbacks mostly correct; a couple of edge cases worth tightening

The new defaultLiteLLMEndpoint and getEndpoint helper correctly prefer tier- or model-level endpoints and then fall back to LITELLM_ENDPOINT when the tier config explicitly has provider: 'litellm'. Likewise, extractModelTierConfig now sensibly falls back to LITELLM_API_KEY for litellm tier objects, and createDefaultModelConfig adds an endpoint for litellm based on defaults or LITELLM_ENDPOINT.

Two small follow‑ups to consider:

  • For nested configs where provider is only specified at the top level (model.provider === 'litellm') and tier entries are plain strings, getEndpoint(tierConfig) sees only the string and will never look at the provider, so litellm tiers won’t get the LITELLM_ENDPOINT fallback unless an endpoint is explicitly provided. You could extend getEndpoint to also consider requestBody.model.provider when tierConfig lacks a provider field.
  • Similarly, the string-tier path in extractModelTierConfig always uses OPENAI_API_KEY, even if the effective provider for that tier is litellm via defaults. If you expect string tiers under litellm, it might be worth branching on the resolved provider and using LITELLM_API_KEY in that case.

These are edge cases; the main happy paths for object-based litellm configs are handled well.

Also applies to: 637-654, 689-692
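The first follow-up above can be sketched like this; the tier and model shapes are assumptions based on the review comment, not the server's actual types:

```typescript
// Extended endpoint resolution: when a tier entry is a plain string and thus
// carries no provider of its own, fall back to the top-level model provider
// before deciding whether the LITELLM_ENDPOINT default applies.
type Tier = string | { provider?: string; endpoint?: string };

function getEndpointWithModelFallback(
  tier: Tier | undefined,
  model: { provider?: string; endpoint?: string },
  defaultLiteLLMEndpoint?: string,
): string | undefined {
  const tierObj = typeof tier === 'object' ? tier : undefined;
  const explicit = tierObj?.endpoint ?? model.endpoint;
  if (explicit) return explicit;
  // Tier-level provider wins; otherwise consult the top-level model provider.
  const provider = tierObj?.provider ?? model.provider;
  if (provider === 'litellm') return defaultLiteLLMEndpoint;
  return undefined;
}
```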

front_end/panels/ai_chat/tracing/TracingProvider.ts (1)

21-30: LGTM! Metadata structure aligns with evaluation protocol.

The metadata field properly extends TracingContext for Langfuse integration. The structure mirrors EvaluationParams.tracing, enabling seamless propagation through the evaluation pipeline.

Consider extracting this type to a shared location to avoid duplication with EvaluationProtocol.ts, but this is a minor cleanup for future work.

front_end/panels/ai_chat/LLM/LLMClient.ts (1)

212-242: Consider using DEBUG level for verbose tracing logs.

The tracing metadata propagation logic is correct. However, the logging at lines 219-225, 230-234, and 239 outputs detailed tracing information at INFO level. In production, this may generate excessive log volume.

🔎 Suggested change to use DEBUG level
-      logger.info('LLMClient.call() - Checking tracing context (fallback):', {
+      logger.debug('LLMClient.call() - Checking tracing context (fallback):', {
         hasContext: !!tracingContext,
         hasMetadata: !!tracingContext?.metadata,
         metadataKeys: tracingContext?.metadata ? Object.keys(tracingContext.metadata) : [],
         sessionId: tracingContext?.metadata?.session_id,
         traceId: tracingContext?.metadata?.trace_id
       });

Apply similar changes to lines 230-234 and 239.

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 197f88f and d6bf633.

📒 Files selected for processing (14)
  • agent-server/nodejs/src/api-server.js
  • agent-server/nodejs/src/lib/BrowserAgentServer.js
  • front_end/panels/ai_chat/LLM/LLMClient.ts
  • front_end/panels/ai_chat/LLM/LLMTypes.ts
  • front_end/panels/ai_chat/LLM/LiteLLMProvider.ts
  • front_end/panels/ai_chat/agent_framework/AgentRunner.ts
  • front_end/panels/ai_chat/core/AgentNodes.ts
  • front_end/panels/ai_chat/core/AgentService.ts
  • front_end/panels/ai_chat/evaluation/EvaluationAgent.ts
  • front_end/panels/ai_chat/evaluation/EvaluationProtocol.ts
  • front_end/panels/ai_chat/evaluation/remote/EvaluationAgent.ts
  • front_end/panels/ai_chat/evaluation/remote/EvaluationProtocol.ts
  • front_end/panels/ai_chat/tools/LLMTracingWrapper.ts
  • front_end/panels/ai_chat/tracing/TracingProvider.ts
🧰 Additional context used
📓 Path-based instructions (1)
agent-server/nodejs/src/api-server.js

📄 CodeRabbit inference engine (agent-server/nodejs/CLAUDE.md)

agent-server/nodejs/src/api-server.js: Expose REST endpoint POST /v1/responses that accepts task input, URL, timeout, and model configuration, and returns OpenAI-compatible response with metadata
Use formatResponse() method to convert agent responses to OpenAI-compatible format and include metadata with clientId and tabId for screenshot capture
Model configuration must use canonical nested format with main_model, mini_model, and nano_model tiers, each containing provider, model, and api_key fields
POST /page/screenshot endpoint must accept clientId and tabId, use CDP Page.captureScreenshot, and return base64-encoded PNG with metadata and timestamp
POST /page/content endpoint must accept clientId, tabId, format (html or text), and includeIframes parameters; recursively capture iframe content when includeIframes is true
POST /page/execute endpoint must accept clientId, tabId, expression, returnByValue, and awaitPromise; use CDP Runtime.evaluate and return result with type and value
Accept POST /v1/responses input as either string format (simple message) or conversation array format with role and content fields; enforce at least one user message and max 100 messages/10,000 characters per message

Files:

  • agent-server/nodejs/src/api-server.js
🧠 Learnings (9)

All learnings below were recorded 2025-12-07 from agent-server/nodejs/CLAUDE.md (repo BrowserOperator/browser-operator-core):

  • EvalServer.js: Use Chrome DevTools Protocol (CDP) for direct browser communication, including screenshot capture via Page.captureScreenshot, page content via Runtime.evaluate, and tab management via Target.createTarget/closeTarget. (Applied to: evaluation/remote/EvaluationAgent.ts, lib/BrowserAgentServer.js, evaluation/EvaluationAgent.ts)
  • api-server.js: POST /page/execute endpoint must accept clientId, tabId, expression, returnByValue, and awaitPromise; use CDP Runtime.evaluate and return result with type and value. (Applied to: evaluation/remote/EvaluationAgent.ts, evaluation/EvaluationAgent.ts)
  • rpc-client.js: Implement JSON-RPC 2.0 protocol for bidirectional communication with request/response correlation using unique IDs, timeout handling, and error conditions. (Applied to: lib/BrowserAgentServer.js)
  • EvalServer.js: Implement WebSocket server for browser agent connections with client lifecycle management (connect, ready, disconnect). (Applied to: lib/BrowserAgentServer.js)
  • api-server.js: Use formatResponse() method to convert agent responses to OpenAI-compatible format and include metadata with clientId and tabId for screenshot capture. (Applied to: lib/BrowserAgentServer.js)
  • client-manager.js: Track WebSocket client connections and manage tab-level connections using composite clientId:tabId identifiers. (Applied to: lib/BrowserAgentServer.js)
  • logger.js: Implement structured logging using Winston with separate log files for combined, error, and API events in JSON format for easy parsing. (Applied to: lib/BrowserAgentServer.js)
  • api-server.js: Model configuration must use canonical nested format with main_model, mini_model, and nano_model tiers, each containing provider, model, and api_key fields. (Applied to: api-server.js)
  • api-server.js: Expose REST endpoint POST /v1/responses that accepts task input, URL, timeout, and model configuration, and returns OpenAI-compatible response with metadata. (Applied to: api-server.js)
🧬 Code graph analysis (2)
front_end/panels/ai_chat/LLM/LLMClient.ts (1)
front_end/panels/ai_chat/tracing/TracingConfig.ts (1)
  • getCurrentTracingContext (299-301)
agent-server/nodejs/src/api-server.js (2)
front_end/panels/ai_chat/LLM/LiteLLMProvider.ts (1)
  • getEndpoint (48-64)
front_end/panels/ai_chat/core/LLMConfigurationManager.ts (1)
  • getEndpoint (222-229)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: claude-review
🔇 Additional comments (14)
agent-server/nodejs/src/lib/BrowserAgentServer.js (1)

730-745: Tracing metadata propagation in RPC params looks consistent and safe

Forwarding request.tracing || {} into params.tracing and logging only presence/keys cleanly aligns this server with the updated EvaluationParams.tracing protocol, without impacting existing callers that omit tracing.

agent-server/nodejs/src/api-server.js (1)

763-815: Dynamic request tracing field is wired correctly into EvaluationRequest

Extending createDynamicRequestNested with a tracingMetadata parameter (default {}) and surfacing it as tracing on the returned evaluation object keeps the nested-model flow in sync with the updated EvaluationParams.tracing schema, while leaving existing metadata/tags behavior unchanged.

front_end/panels/ai_chat/agent_framework/AgentRunner.ts (1)

744-754: AgentRunner LLM calls now correctly forward tracing metadata

Passing tracingMetadata: tracingContext?.metadata into llm.call aligns AgentRunner generations with the broader tracing pipeline so Langfuse sees consistent session/trace identifiers without affecting behavior when no tracing context is present.

front_end/panels/ai_chat/tools/LLMTracingWrapper.ts (1)

88-91: Tracing wrapper correctly injects context metadata while allowing overrides

Including tracingMetadata: tracingContext?.metadata in the call, followed by ...llmCallConfig.options, means tools get Langfuse metadata by default, while callers that explicitly set options.tracingMetadata can still override it if needed. This is a good balance between convention and flexibility.

front_end/panels/ai_chat/LLM/LLMTypes.ts (1)

205-212: LLMCallOptions.tracingMetadata type matches the tracing payload being propagated

Adding the structured tracingMetadata field (session_id/trace_id/generation_name/tags plus an open map) to LLMCallOptions cleanly documents what providers can expect and lines up with the metadata coming from TracingContext, without impacting existing callers.
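For reference, the shape described here can be sketched as follows (field names follow the comment above; the actual interface lives in LLMTypes.ts, and the example values are hypothetical):

```typescript
// Sketch of the tracingMetadata shape on LLMCallOptions: the named Langfuse
// fields plus an open index signature for additional metadata.
interface TracingMetadata {
  session_id?: string;
  trace_id?: string;
  generation_name?: string;
  tags?: string[];
  [key: string]: unknown;   // open map for extra Langfuse metadata
}

const exampleMetadata: TracingMetadata = {
  session_id: 'eval-session-123',   // hypothetical values
  trace_id: 'trace-abc',
  generation_name: 'agent-step',
  tags: ['evaluation'],
  custom_field: 42,                 // permitted by the index signature
};
```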

front_end/panels/ai_chat/LLM/LiteLLMProvider.ts (1)

212-216: LiteLLM payload metadata wiring aligns with Langfuse expectations

Conditionally assigning payloadBody.metadata = options.tracingMetadata is a minimal, backward‑compatible way to forward tracing data to LiteLLM/Langfuse without affecting calls that don’t opt in to tracing.

front_end/panels/ai_chat/evaluation/remote/EvaluationProtocol.ts (1)

107-116: EvaluationParams.tracing extension cleanly matches the new RPC payload shape

The optional tracing block on EvaluationParams matches what the server now sends (session_id/trace_id/eval_id/etc.), letting TypeScript track tracing metadata through remote evaluation calls without changing existing fields.

front_end/panels/ai_chat/core/AgentNodes.ts (1)

242-244: Stategraph AgentNode now participates in the shared tracing metadata flow

Forwarding state.context?.tracingContext?.metadata as tracingMetadata in the LLM call keeps AgentNode generations aligned with AgentRunner and tool-level tracing, giving Langfuse end‑to‑end visibility for these orchestrated calls.

front_end/panels/ai_chat/evaluation/EvaluationProtocol.ts (1)

94-103: LGTM! Tracing metadata extension is well-structured.

The optional tracing field properly supports Langfuse session grouping with appropriate optional fields. The snake_case naming aligns with Langfuse API conventions, and the index signature provides extensibility for future metadata needs.

front_end/panels/ai_chat/LLM/LLMClient.ts (1)

52-52: LGTM!

The optional tracingMetadata field extends LLMCallRequest correctly, enabling explicit tracing metadata to be passed through LLM calls.

front_end/panels/ai_chat/evaluation/EvaluationAgent.ts (1)

324-341: LGTM! Tracing context creation properly handles incoming metadata.

The implementation correctly:

  • Extracts tracing metadata from request params with safe default
  • Generates fallback IDs when not provided
  • Includes the full metadata object in the TracingContext for downstream propagation
front_end/panels/ai_chat/core/AgentService.ts (2)

279-288: LGTM! AUTOMATED_MODE bypass for API key validation is correct.

The change appropriately allows missing API keys in AUTOMATED_MODE where credentials come dynamically from request bodies. The empty string fallback at line 287 ensures the provider config structure remains valid.


600-603: LGTM! Metadata propagation through tracingContext.

Forwarding existingContext?.metadata enables Langfuse session grouping metadata to flow through the agent execution path, connecting evaluation framework tracing with downstream LLM calls.

front_end/panels/ai_chat/evaluation/remote/EvaluationAgent.ts (1)

429-448: LGTM! Tracing metadata handling is consistent with EvaluationAgent.ts.

The implementation correctly extracts tracing metadata from incoming requests and constructs a TracingContext with the metadata field for Langfuse integration. The fallback ID generation ensures evaluation tracing works even without upstream tracing data.

The pattern mirrors front_end/panels/ai_chat/evaluation/EvaluationAgent.ts (lines 324-341), which maintains consistency across evaluation paths.

@tysonthomas9 tysonthomas9 merged commit 74d3fc1 into main Dec 29, 2025
8 checks passed
This was referenced Dec 30, 2025