feat: RSPEED-2538 add optional verbose metadata to /v1/infer endpoint #1305
Merged: tisnik merged 5 commits into lightspeed-core:main from Lifto:feat/verbose-infer-metadata on Mar 18, 2026
Changes from 4 commits

Commits (5):
- 4c6b955 feat: add optional verbose metadata to /v1/infer endpoint (Lifto)
- 8f3f247 test: add unit tests for /v1/infer standard and verbose metadata resp… (Lifto)
- 4c82847 test: add dual-opt-in negative path for infer metadata (CodeRabbit) (Lifto)
- e9b6df8 fix: pylint and pyright for /v1/infer; add test for minimal response … (Lifto)
- c111131 fix(test): black-format infer minimal response assertion for CI (Lifto)
Conversations
Verbose path may skip token usage metrics recording.

The non-verbose path calls retrieve_simple_response(), which invokes extract_token_usage() to record token usage metrics (via _increment_llm_call_metric()). The verbose path calls client.responses.create() directly but does not call extract_token_usage(), so token usage metrics may not be recorded for verbose requests. Consider calling extract_token_usage(response.usage, model_id) in the verbose path to ensure consistent metrics tracking.

Proposed fix:

```diff
         response = cast(OpenAIResponseObject, response)
         response_text = extract_text_from_response_items(response.output)
+        extract_token_usage(response.usage, model_id)
     else:
```
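For context, here is a minimal sketch of how the two branches could look with the suggested call in place. The helper names (retrieve_simple_response, extract_token_usage, extract_text_from_response_items, OpenAIResponseObject) are taken from the comment above and assumed to be importable from the project's existing modules; the function signature, the verbose flag, and the use of async/await are illustrative assumptions, not the project's actual code.

```python
# Illustrative sketch only: helper signatures and the `verbose` flag are assumed.
from typing import cast


async def _infer(client, model_id: str, query: str, verbose: bool) -> str:
    if verbose:
        # Verbose path: call the Responses API directly so the full
        # response object (and its metadata) stays available to the caller.
        response = await client.responses.create(model=model_id, input=query)
        response = cast(OpenAIResponseObject, response)  # project type, assumed imported
        response_text = extract_text_from_response_items(response.output)
        # Suggested addition: record token usage here as well, so verbose
        # requests increment the same LLM-call metrics as the simple path.
        extract_token_usage(response.usage, model_id)
    else:
        # Non-verbose path: retrieve_simple_response() already calls
        # extract_token_usage() internally.
        response_text = await retrieve_simple_response(client, model_id, query)
    return response_text
```

The point of the sketch is only to show where extract_token_usage(response.usage, model_id) would sit relative to the existing cast and text-extraction calls quoted in the proposed fix.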