Cursor Bugbot has reviewed your changes and found 2 potential issues.
Bugbot Autofix is OFF. To automatically fix reported issues with Cloud Agents, enable Autofix in the Cursor dashboard.
```ts
  );
} else {
  throw new Error('Invalid prompt options');
}
```
**Invalid `from` value silently ignored with `content`**

Medium Severity

When `from` is set to an invalid value (not `"latest"`, `"explicit"`, or a valid 64-char hash) but `content` is also provided, the invalid `from` value is silently ignored and the code falls through to the auto-tune branch. The error-throwing branch `else if (fromMode)` at line 120 is only reached when `content` is falsy. This contradicts the documented behavior that "If `from` is specified, it controls version behavior" with only three valid values.
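A minimal sketch of one fix, assuming the names from the review (`fromMode`, `content`) and a hex shape for the 64-char hash; this is a reconstruction for illustration, not the SDK's actual code:

```ts
// Assumed hash shape: 64 hex chars (the review only says "64-char hash").
const isHash = (v: string) => /^[0-9a-f]{64}$/i.test(v);

function resolvePrompt(fromMode?: string, content?: string) {
  // Validate `from` before branching on `content`, so an invalid value
  // can no longer be masked by the auto-tune path.
  if (
    fromMode &&
    fromMode !== 'latest' &&
    fromMode !== 'explicit' &&
    !isHash(fromMode)
  ) {
    throw new Error('Invalid prompt options');
  }
  if (fromMode === 'latest') {
    // ...fetch the latest optimized prompt
  } else if (content) {
    // ...auto-tune branch (previously reached even with an invalid `from`)
  }
}
```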
```diff
  try {
-   const result = await originalMethod(...args);
+   const result = await originalMethod(modifiedParams);
```
**Span never ended on streaming error in OpenAI wrapper**

Medium Severity

In the `wrapStream` function, when a streaming error occurs, the span is never properly closed. The `catch` block sets `errorOccurred = true` and re-throws, but the `finally` block only calls `tracer.endSpan(span)` when `errorOccurred` is false. This means streaming errors leave spans unclosed, causing incomplete trace data and potential resource issues. The Vercel AI wrapper's equivalent function correctly ends the span on both success and error paths.
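One way to close the gap, sketched with `tracer` and `span` as stand-ins for the wrapper's internals: end the span in an unconditional `finally`, which runs on success, on error, and when the consumer abandons the stream early, matching the behavior the review attributes to the Vercel AI wrapper.

```ts
// Hedged sketch: forwards chunks and guarantees the span is ended.
async function* wrapStream<T>(
  stream: AsyncIterable<T>,
  tracer: { endSpan(span: unknown): void },
  span: unknown,
): AsyncGenerator<T> {
  try {
    for await (const chunk of stream) {
      yield chunk; // forward each chunk to the caller
    }
  } finally {
    // Runs whether iteration completes, throws, or is returned early,
    // so no span is left unclosed on streaming errors.
    tracer.endSpan(span);
  }
}
```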


Note
Medium Risk
Adds new backend-facing prompt/feedback flows and changes OpenAI/Vercel AI wrappers to rewrite prompts/messages (metadata stripping, variable interpolation, and model patching), which could affect runtime behavior and tracing if metadata is malformed or APIs are unavailable.
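To make the rewrite step concrete, here is a hypothetical sketch of `{{variable}}` interpolation combined with `<zeroeval>` tag stripping across a message list; the function names and message shape are illustrative assumptions, not the SDK's actual API:

```ts
type Message = { role: string; content: string };

// Strip <zeroeval>...</zeroeval> metadata blocks from a message body.
// (Illustrative: the real wrapper also parses the metadata it removes.)
const stripZeroeval = (text: string) =>
  text.replace(/<zeroeval>[\s\S]*?<\/zeroeval>/g, '').trim();

// Replace {{name}} placeholders; unknown keys are left intact.
const interpolate = (text: string, vars: Record<string, string>) =>
  text.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in vars ? vars[key] : match,
  );

function rewriteMessages(
  messages: Message[],
  vars: Record<string, string>,
): Message[] {
  return messages.map((m) => ({
    ...m,
    content: interpolate(stripZeroeval(m.content), vars),
  }));
}

// Usage:
// rewriteMessages(
//   [{ role: 'system', content: '<zeroeval>{"task":"t"}</zeroeval>Hi {{name}}' }],
//   { name: 'Ada' },
// ) // -> [{ role: 'system', content: 'Hi Ada' }]
```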
Overview
Adds prompt optimization support via `ze.prompt()`: introduces a version-aware prompt helper that can auto-fetch the latest optimized prompt, force `explicit` content, or pin by content-hash, and returns prompts decorated with `<zeroeval>` metadata for task/version tracking.

Extends integrations to understand prompt metadata: the OpenAI and Vercel AI wrappers now extract/strip `<zeroeval>` tags, interpolate `{{variables}}` into all messages, attach prompt metadata to span attributes, and (for OpenAI) optionally patch the requested model based on the bound model for a `prompt_version_id`.

Also adds a prompt API client with TTL caching, a `sendFeedback` API for completion feedback, and shared `getApiUrl`/`getApiKey` helpers; includes `kind` in span payloads, expands public exports/types, adds a new `example:prompt`, updates dependencies (dev `openai@6` and peer range), and adds unit tests for the new utilities.

Written by Cursor Bugbot for commit 0eddef1. This will update automatically on new commits. Configure here.