
Prompt optimization support #7

Merged

sebastiancrossa merged 4 commits into main from prompt-optimization-support on Feb 3, 2026

Conversation

sebastiancrossa (Member) commented on Feb 3, 2026

Note

Medium Risk
Adds new backend-facing prompt/feedback flows and changes OpenAI/Vercel AI wrappers to rewrite prompts/messages (metadata stripping, variable interpolation, and model patching), which could affect runtime behavior and tracing if metadata is malformed or APIs are unavailable.

Overview
Adds prompt optimization support via ze.prompt(): introduces a version-aware prompt helper that can auto-fetch the latest optimized prompt, force explicit content, or pin by content-hash, and returns prompts decorated with <zeroeval> metadata for task/version tracking.
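As a rough illustration of the three modes described here, a hypothetical call site could look like the sketch below; the import path and any option names beyond `from` and `content` are assumptions, not the SDK's confirmed API.

```ts
// Hypothetical usage sketch; option names other than `from`/`content`
// and the import path are assumed for illustration only.
import * as ze from "zeroeval";

// Auto-fetch the latest optimized prompt for a task
const latest = await ze.prompt({ task: "support-triage", from: "latest" });

// Force the given content, skipping any fetched version
const explicit = await ze.prompt({
  task: "support-triage",
  from: "explicit",
  content: "Classify the ticket. Respond in {{language}}.",
});

// Pin to a specific version by its 64-character content hash
const pinned = await ze.prompt({
  task: "support-triage",
  from: "<64-char-content-hash>", // placeholder, not a real hash
});
```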

Extends integrations to understand prompt metadata: OpenAI and Vercel AI wrappers now extract/strip <zeroeval> tags, interpolate {{variables}} into all messages, attach prompt metadata to span attributes, and (for OpenAI) optionally patch the requested model based on the bound model for a prompt_version_id.
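To make the rewriting concrete, here is a minimal sketch of tag extraction and variable interpolation; the regexes and helper names are assumptions, not the wrappers' actual code.

```ts
// Sketch only: split a <zeroeval>…</zeroeval> metadata block out of message
// content and substitute {{variable}} placeholders. Names/regexes assumed.
function extractZeroevalMetadata(content: string): { text: string; meta: string | null } {
  const match = content.match(/<zeroeval>([\s\S]*?)<\/zeroeval>/);
  if (!match) return { text: content, meta: null };
  return { text: content.replace(match[0], "").trim(), meta: match[1] };
}

function interpolateVariables(text: string, vars: Record<string, string>): string {
  // Leave unknown placeholders intact rather than throwing
  return text.replace(/\{\{\s*(\w+)\s*\}\}/g, (whole, name) => vars[name] ?? whole);
}
```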

Also adds a prompt API client with TTL caching, a sendFeedback API for completion feedback, and shared getApiUrl/getApiKey helpers; includes kind in span payloads; expands the public exports and types; adds a new example:prompt script; updates dependencies (dev openai@6 and the peer range); and adds unit tests for the new utilities.
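The TTL cache mentioned above is typically just a map of values with expiry timestamps; a generic sketch follows (not the client's actual implementation).

```ts
// Generic TTL cache sketch; the prompt client's real shape may differ.
class TtlCache<V> {
  private entries = new Map<string, { value: V; expiresAt: number }>();
  constructor(private ttlMs: number) {}

  get(key: string): V | undefined {
    const hit = this.entries.get(key);
    if (!hit) return undefined;
    if (Date.now() > hit.expiresAt) {
      this.entries.delete(key); // lazily evict stale entries
      return undefined;
    }
    return hit.value;
  }

  set(key: string, value: V): void {
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
}
```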

Written by Cursor Bugbot for commit 0eddef1. This will update automatically on new commits.

sebastiancrossa merged commit 5739074 into main on Feb 3, 2026
3 of 9 checks passed
sebastiancrossa deleted the prompt-optimization-support branch on February 3, 2026 at 20:46

cursor bot left a comment


Cursor Bugbot has reviewed your changes and found 2 potential issues.

Bugbot Autofix is OFF. To automatically fix reported issues with Cloud Agents, enable Autofix in the Cursor dashboard.

```ts
  );
} else {
  throw new Error('Invalid prompt options');
}
```

Invalid from value silently ignored with content

Medium Severity

When from is set to an invalid value (not "latest", "explicit", or a valid 64-char hash) but content is also provided, the invalid from value is silently ignored and the code falls through to the auto-tune branch. The error-throwing branch else if (fromMode) at line 120 is only reached when content is falsy. This contradicts the documented behavior that "If from is specified, it controls version behavior" with only three valid values.
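One possible shape of the fix, sketched with assumed names (the actual option parsing may differ): validate from up front, so an invalid value fails even when content is present.

```ts
// Sketch of a possible fix: reject a bad `from` before `content` can route
// execution into the auto-tune branch. `fromMode` is an assumed name.
function assertValidFrom(fromMode: string | undefined): void {
  const contentHash = /^[0-9a-f]{64}$/i;
  if (fromMode && fromMode !== "latest" && fromMode !== "explicit" && !contentHash.test(fromMode)) {
    throw new Error(`Invalid prompt options: unrecognized 'from' value "${fromMode}"`);
  }
}
```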



```diff
  try {
-   const result = await originalMethod(...args);
+   const result = await originalMethod(modifiedParams);
```


Span never ended on streaming error in OpenAI wrapper

Medium Severity

In the wrapStream function, when a streaming error occurs, the span is never properly closed. The catch block sets errorOccurred = true and re-throws, but the finally block only calls tracer.endSpan(span) when !errorOccurred is true. This means streaming errors leave spans unclosed, causing incomplete trace data and potential resource issues. The Vercel AI wrapper's equivalent function correctly ends the span in both success and error paths.
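A sketch of the shape of fix the review implies, closing the span on the error path as the Vercel AI wrapper does; all identifiers here are stand-ins for the wrapper's own names.

```ts
// Sketch only: end the span on errors too, instead of gating endSpan on
// !errorOccurred. `tracer`, `span`, and `originalMethod` are assumed names.
async function invokeWithSpan(
  tracer: { endSpan(span: unknown): void },
  span: unknown,
  originalMethod: (params: unknown) => Promise<unknown>,
  modifiedParams: unknown
): Promise<unknown> {
  try {
    // On success, the stream wrapper stays responsible for ending the span
    // once the stream is fully consumed.
    return await originalMethod(modifiedParams);
  } catch (err) {
    tracer.endSpan(span); // previously skipped on error, leaving the span open
    throw err;
  }
}
```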


