Route LLM calls through AI Gateway for per-request cost tracking#2803
tim-inkeep merged 40 commits into main
Conversation
Add append-only usage_events table for tracking LLM generation usage across all call sites. Includes token counts (input, output, reasoning, cached), dynamic pricing cost estimate, generation type classification, and OTel correlation fields. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Two-tier dynamic pricing: gateway getAvailableModels() as primary (when AI_GATEWAY_API_KEY is set), models.dev API as universal fallback. In-memory cache with periodic refresh (1h gateway, 6h models.dev). Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
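The two-tier lookup with per-source refresh intervals described in this commit can be sketched as a small cache that refetches stale entries on access. The `Source` shape, field names, and the `PricingCache` class here are illustrative assumptions, not the PR's actual API:

```typescript
// A price table maps a model ID to its per-million-token rates.
type PriceTable = Record<string, { inputPerMTok: number; outputPerMTok: number }>;

// Each source (gateway, models.dev) has its own TTL, mirroring the
// 1h gateway / 6h models.dev refresh cadence in the commit message.
interface Source {
  name: string;
  ttlMs: number;
  fetchTable(): Promise<PriceTable | null>; // null = source unavailable
}

class PricingCache {
  private tables = new Map<string, { table: PriceTable; fetchedAt: number }>();

  constructor(
    private sources: Source[],
    private now: () => number = Date.now
  ) {}

  // Walk sources in priority order: gateway first, models.dev as fallback.
  async getPricing(model: string) {
    for (const source of this.sources) {
      let entry = this.tables.get(source.name);
      // Refetch only when the cached table is missing or older than the TTL.
      if (!entry || this.now() - entry.fetchedAt > source.ttlMs) {
        const table = await source.fetchTable();
        if (table) {
          entry = { table, fetchedAt: this.now() };
          this.tables.set(source.name, entry);
        }
      }
      const price = entry?.table[model];
      if (price) return price;
    }
    return null; // no source knows this model
  }
}
```

The priority order means a model listed by both sources always resolves to the gateway's price.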
Insert, query (paginated), and summary aggregation functions for usage_events table. Supports groupBy model/agent/day/generation_type. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
recordUsage() extracts tokens from AI SDK responses, looks up pricing, sets OTel span attributes, and fire-and-forgets a usage_event insert. New SPAN_KEYS: total_tokens, reasoning_tokens, cached_read_tokens, response.model, cost.estimated_usd, generation.step_count, generation.type. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add usage, totalUsage, and response fields to ResolvedGenerationResponse. resolveGenerationResponse now resolves these Promise-based getters from the AI SDK alongside steps/text/finishReason/output. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Call recordUsage() after resolveGenerationResponse in runGenerate(), capturing tenant/project/agent/subAgent context, model, streaming status, and finish reason. Fire-and-forget, non-blocking. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add recordUsage() calls for status_update and artifact_metadata generation types in AgentSession. Compression call sites deferred (need context threading through function signatures). Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Consolidate estimateTokens() and AssembleResult into packages/agents-core/src/utils/token-estimator.ts. Update all 10 import sites in agents-api to use @inkeep/agents-core. Removes duplicate code and prepares for usage tracker integration. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace recordUsage() with trackedGenerate() — wraps generateText/streamText calls to automatically record usage on success AND failure. Failed calls check error type: 429/network = 0 tokens, other errors = estimated input tokens from prompt. All call sites (generate.ts, AgentSession status updates + artifact metadata, EvaluationService simulation) now use the wrapper consistently. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
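The wrapper described in this commit can be sketched as below. The names (`trackedGenerate`, `estimateTokens`, the error classification) follow the commit message, but the signatures and the 4-chars-per-token fallback are assumptions, not the repo's implementation:

```typescript
interface UsageRecord {
  inputTokens: number;
  outputTokens: number;
  status: 'success' | 'error';
}

// Hypothetical fallback heuristic: roughly 4 characters per token.
function estimateTokens(prompt: string): number {
  return Math.ceil(prompt.length / 4);
}

// 429s and network failures never reached the model, so they record 0 tokens.
function isZeroTokenError(err: unknown): boolean {
  const msg = err instanceof Error ? err.message : String(err);
  return msg.includes('429') || msg.includes('ECONNREFUSED') || msg.includes('fetch failed');
}

async function trackedGenerate<T extends { usage: { inputTokens: number; outputTokens: number } }>(
  prompt: string,
  generate: () => Promise<T>,
  record: (u: UsageRecord) => void
): Promise<T> {
  try {
    const result = await generate();
    record({
      inputTokens: result.usage.inputTokens,
      outputTokens: result.usage.outputTokens,
      status: 'success',
    });
    return result;
  } catch (err) {
    // Failure path: rate-limit/network errors record 0 tokens; anything
    // else records an estimate of the input that was likely consumed.
    record({
      inputTokens: isZeroTokenError(err) ? 0 : estimateTokens(prompt),
      outputTokens: 0,
      status: 'error',
    });
    throw err; // recording is observational; the caller still sees the error
  }
}
```

Centralizing this in one wrapper is what lets every call site get consistent success/failure accounting without repeating the classification logic.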
GET /manage/v1/usage/summary — aggregated usage by model/agent/day/generation_type with optional projectId filter. GET /manage/v1/usage/events — paginated individual usage events with filters for project, agent, model, generation type. Both enforce tenant auth with project-level access checks. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Tenant-level usage dashboard at /{tenantId}/usage with:
- Summary stats: total tokens, estimated cost, generation count, models
- Token usage over time chart (daily buckets via AreaChartCard)
- Breakdown tables by model and generation type
- Project filter and date range picker
- Nav item added to sidebar
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Extract UsageDashboard, UsageStatCards, UsageBreakdownTable into
reusable component. Both tenant-level (/{tenantId}/usage) and
project-level (/{tenantId}/projects/{projectId}/usage) pages import
the shared component. Register Usage tag in OpenAPI spec + docs.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Route handlers use c.get('tenantId') from middleware context
- Client fetches through /api/usage Next.js proxy (forwards cookies)
- Initialize PricingService at server startup for cost estimation
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
resolvedModel from the AI SDK doesn't include provider prefix (e.g. 'claude-sonnet-4-6' not 'anthropic/claude-sonnet-4-6'). Parse requestedModel once at the top and use the extracted modelName for pricing lookup, falling back to resolvedModel when available. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…cking

Data layer:
- Add steps JSONB column for per-step token breakdown
- Populate traceId/spanId from active OTel span
- Add conversation/message groupBy + conversationId filter
- Thread agentId/conversationId through compression call chain
- Wrap compression generateText calls with trackedGenerate

Traces integration:
- Conversation detail route fetches usage events and merges cost into activities by spanId (with parentSpanId fallback)
- Cost shows on timeline items and span detail panels
- Usage Cost card on conversation detail page

UI:
- Events table with pagination, trace links, agent/sub-agent columns
- 50/50 chart + events layout
- conversationId filter in usage API client

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Write path:
- Remove Postgres insert from persistEvent, keep OTel span attributes
- Add all schema fields as span attributes (requested_model, provider, status, streamed, byok, finish_reason, duration_ms, error_code, message_id)
- Add UsageCostSpanProcessor that enriches doGenerate/doStream spans with per-step cost from PricingService before export
- Standardize experimental_telemetry.metadata.generationType across all call sites (status_update, artifact_metadata, compression)
- Extract USAGE_GENERATION_TYPES constant for shared filter

Read path:
- Add getUsageCostSummary (SigNoz aggregation by model/agent/type/conversation)
- Add getUsageEventsList (SigNoz LIST query for individual spans)
- Dashboard fetches from SigNoz via existing signoz-stats client
- Events table and chart derived from SigNoz span data
- Filter to only spans with known generationType to reduce noise

trackedGenerate no longer takes a db parameter.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
🦋 Changeset detected — latest commit: 1df0a2a. The changes in this PR will be included in the next version bump. This PR includes changesets to release 10 packages.
@pullfrog, please review my PR
Solid PR — the shift from heuristic token estimation to actual AI SDK usage data is well-motivated and the implementation is clean. The wrapLanguageModel approach to fix pricing lookups is elegant. Two medium-severity issues (potential PricingService interval leak, v3 middleware spec version risk) and a few minor items below.
```ts
if (this.initialized) return;
this.initialized = true;
// ...
await Promise.allSettled([this.refreshGateway(), this.refreshModelsDev()]);
```
Medium: initialize() is not idempotent against concurrent callers. The initialized guard prevents re-entry but uses a synchronous boolean — if two callers race past the guard before the first sets this.initialized = true, both will set up duplicate intervals. Consider storing the init promise:
```ts
private initPromise: Promise<void> | null = null;

async initialize(): Promise<void> {
  if (!this.initPromise) {
    this.initPromise = this.doInitialize();
  }
  return this.initPromise;
}
```

Alternatively, guard the interval creation behind `this.gatewayInterval == null` to be safe.
```ts
if (this.modelsDevInterval) clearInterval(this.modelsDevInterval);
this.gatewayInterval = null;
this.modelsDevInterval = null;
this.initialized = false;
```
Minor: destroy() does not clear initPromise / caches. If someone calls destroy() then initialize() again, this.initialized is false but the caches still contain stale data from the previous lifecycle. Not blocking — the singletons are long-lived in practice — but worth noting for test hygiene.
```ts
export const usageCostMiddleware: LanguageModelMiddleware = {
  specificationVersion: 'v3',
```
Medium: specificationVersion: 'v3' ties this to an unreleased/experimental middleware API version. If the AI SDK ships a breaking change to the v3 spec (usage shape, callback signatures), this will silently break cost tracking. Confirm this version is stable in the ai package version pinned in your lockfile. If not, add a comment noting the version dependency.
```ts
const result = await doGenerate();

try {
  const inputTokens = result.usage.inputTokens.total ?? 0;
```
Minor: result.usage.inputTokens.total assumes a nested .total property. This matches the v3 spec's structured usage shape, but the old v1/v2 shape used flat inputTokens: number. If any codepath bypasses wrapLanguageModel and hits this middleware with the old shape, it will throw. The try/catch on line 77 guards against this, so it's safe — just noting the implicit contract.
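A later review in this thread notes the fix shipped as an `extractUsageTokens()` helper that handles both shapes. A minimal sketch of that idea (the exact field handling in the PR is not shown here, so this is an assumption):

```typescript
// The AI SDK v3 spec nests counts ({ inputTokens: { total } }) while the
// older v1/v2 shape used flat numbers ({ inputTokens: number }).
type StructuredCount = { total?: number };
type Usage = {
  inputTokens?: number | StructuredCount;
  outputTokens?: number | StructuredCount;
};

// Accept either shape and default to 0 rather than throwing.
function countOf(value: number | StructuredCount | undefined): number {
  if (typeof value === 'number') return value;
  if (value && typeof value.total === 'number') return value.total;
  return 0;
}

function extractUsageTokens(usage: Usage): { inputTokens: number; outputTokens: number } {
  return {
    inputTokens: countOf(usage.inputTokens),
    outputTokens: countOf(usage.outputTokens),
  };
}
```

Normalizing at one boundary removes the implicit contract the reviewer flags: call sites no longer need to know which spec version produced the usage object.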
```ts
    `To access other models, use OpenRouter (openrouter/model-id), Vercel AI Gateway (gateway/model-id), NVIDIA NIM (nim/model-id), or Custom OpenAI-compatible (custom/model-id).`
  );
}
return wrapLanguageModel({
```
The modelId: modelString here passes the full provider/model-name string (e.g. anthropic/claude-sonnet-4). This is what calculateAndSetCost receives as modelId, and then it splits on / to extract the model name when providerId is present (line 29 of usage-cost-middleware.ts). This works correctly — just confirming the data flow is intentional since the middleware does its own parsing.
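The split-on-`/` parsing the comment describes can be sketched as follows (the function name is hypothetical; only the behavior of splitting `anthropic/claude-sonnet-4` into provider and model name comes from the comment):

```typescript
// Split a full model string on the FIRST "/" so that provider IDs with
// nested model paths (e.g. "gateway/org/model") keep the remainder intact.
function parseModelString(modelString: string): { providerId: string | null; modelName: string } {
  const slash = modelString.indexOf('/');
  if (slash === -1) {
    // Bare model name, as the AI SDK's resolvedModel returns it.
    return { providerId: null, modelName: modelString };
  }
  return {
    providerId: modelString.slice(0, slash),
    modelName: modelString.slice(slash + 1),
  };
}
```

Parsing once at the top and reusing the extracted `modelName` avoids the mismatch where `resolvedModel` lacks the provider prefix that the pricing table keys on.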
```ts
if (hasReliableUsage) {
  // Use actual token counts from the last completed step
  // Next step's context ≈ last step's input + last step's output (assistant response appended)
  totalTokens = actualInputTokens + (actualOutputTokens ?? 0);
```
Correctness check: totalTokens = actualInputTokens + (actualOutputTokens ?? 0) approximates the next step's context size as "last input + last output". This is a good heuristic but slightly oversimplifies — the output gets appended as a new assistant message, so the actual input for the next step includes the original context plus the output tokens, which is what inputTokens already captures for the current step. So the formula effectively double-counts the prior context. In practice this is conservative (triggers compression earlier), which is arguably safer. Worth documenting the rationale.
```ts
safetyBuffer,
triggerAt,
remaining: hardLimit - totalTokens,
source: steps.length > 0 ? 'actual_sdk_usage' : 'estimated',
```
Nit: source: steps.length > 0 ? 'actual_sdk_usage' : 'estimated' — at this point in the code, we're inside the compressionNeeded branch. The source was already determined above, but this ternary re-derives it from steps.length which doesn't account for the hasReliableUsage check (e.g. steps.length > 0 but inputTokens was 0 → fell back to estimate). Consider using a local source variable set at the decision point.
```ts
// USAGE GENERATION TYPES (table removed — usage now tracked via OTel/SigNoz)
// ============================================================================

import { USAGE_GENERATION_TYPES } from '../../constants/otel-attributes';
```
Importing from ../../constants/otel-attributes inside a schema file is a bit unusual — it creates a dependency from the DB schema layer to the telemetry constants layer. Since this is just a type re-export and the comment says "table removed — usage now tracked via OTel/SigNoz", it makes sense, but consider whether USAGE_GENERATION_TYPES + GenerationType belong in otel-attributes.ts or in a shared usage-types.ts to keep the schema file focused on DB concerns.
```ts
  }),
};

const result = await generateText(genConfig as Parameters<typeof generateText>[0]);
```
The as Parameters<typeof generateText>[0] cast here and in several other places (AgentSession.ts, EvaluationService.ts) suggests the config object doesn't naturally satisfy the generateText parameter type. This is a known pattern when building configs incrementally, but the number of casts in this PR is growing. Not blocking — just flagging for awareness.
```ts
const MODEL_ALIASES: Record<string, string[]> = {
  'claude-sonnet-4': ['claude-sonnet-4'],
  'claude-opus-4': ['claude-opus-4'],
  'claude-haiku-3.5': ['claude-3-5-haiku', 'claude-3.5-haiku'],
  'claude-sonnet-3.5': ['claude-3-5-sonnet', 'claude-3.5-sonnet'],
  'claude-opus-3': ['claude-3-opus'],
  'claude-haiku-3': ['claude-3-haiku'],
};
```
The alias map is Anthropic-only right now. OpenAI, Google, and other providers have similar aliasing needs (e.g. gpt-4o vs gpt-4o-2024-08-06). This is fine as a starting point — the stripDateSuffix regex handles the most common case — but the map will need expansion as users hit pricing misses for other providers.
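The `stripDateSuffix` regex mentioned here is not shown in the excerpt, so the pattern below is an assumption; it illustrates how date-suffix stripping plus the alias map combine for the `gpt-4o` vs `gpt-4o-2024-08-06` case the comment raises:

```typescript
const MODEL_ALIASES: Record<string, string[]> = {
  // Subset of the map above, used to resolve Anthropic's shifted naming.
  'claude-haiku-3.5': ['claude-3-5-haiku', 'claude-3.5-haiku'],
};

// "gpt-4o-2024-08-06" -> "gpt-4o"; "claude-3-5-haiku-20241022" -> "claude-3-5-haiku".
// Handles both compact (YYYYMMDD) and dashed (YYYY-MM-DD) date suffixes.
function stripDateSuffix(modelName: string): string {
  return modelName.replace(/-\d{8}$/, '').replace(/-\d{4}-\d{2}-\d{2}$/, '');
}

// Strip the date, then fold known aliases onto a canonical pricing key.
function canonicalModelName(modelName: string): string {
  const stripped = stripDateSuffix(modelName);
  for (const [canonical, aliases] of Object.entries(MODEL_ALIASES)) {
    if (canonical === stripped || aliases.includes(stripped)) return canonical;
  }
  return stripped;
}
```

Date stripping covers most providers generically, which is why the alias map only needs entries where the naming scheme itself changed.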
…ent render events on this dashboard
PR Review Summary
(0) Total Issues | Risk: Low
This is a delta review covering 5 commits since the last automated review (2140a3678).
✅ All Prior Issues Resolved
All Critical, Major, and Minor issues from the 11 prior automated review cycles have been addressed:
| Issue | Status |
|---|---|
| 🔴 External HTTP call without timeout | ✅ Fixed — AbortSignal.timeout(10_000) added to models.dev fetch |
| 🟠 Nested property access may throw TypeError | ✅ Fixed — extractUsageTokens() helper handles both nested and flat usage shapes |
| 🟠 initialize() not idempotent | ✅ Fixed — initPromise pattern ensures single initialization |
| 🟠 Serverless optimization | ✅ Fixed — On-access refresh strategy, no setInterval timers |
| 🟡 Project filter state not persisted in URL | ✅ Fixed — Now uses useQueryState('projectId', parseAsString) |
| 🟡 Unused SPAN_KEYS constants | ✅ Fixed — Trimmed to only 4 used GEN_AI_* constants |
| 🟡 Hardcoded gen_ai.cost.pricing_unavailable string | ✅ Fixed — Uses SPAN_KEYS.GEN_AI_COST_PRICING_UNAVAILABLE |
Delta Changes Reviewed
| Commit | Summary | Assessment |
|---|---|---|
| 026772a45 | Removed inkeep-agents-manage-ui from service name filter | ✅ Intentional — component render events won't be tracked in cost dashboard |
| 57519243a | Trimmed unused SPAN_KEYS, updated projectId persistence | ✅ Addresses @shagun-singh-inkeep's feedback |
| f29e54b72 | Major signoz-stats refactoring | ✅ Cleaner query construction |
| 7308c05cb, a2994de20 | Merge commits from main | ✅ Unrelated to usage-tracker feature |
Implementation Quality ✅
The PR implements end-to-end LLM cost tracking with solid patterns:
- **PricingService**: Dual-source lookup (AI Gateway + models.dev) with lazy initialization and on-access refresh (serverless-optimized)
- **usageCostMiddleware**: AI SDK v3 middleware with `extractUsageTokens()` handling both nested and flat usage shapes
- **Cost Dashboard**: URL-persisted filters (`useQueryState`), time range presets, project filtering
- **Test Coverage**: Comprehensive tests for `pricing-service.ts` (285 lines) and `usage-cost-middleware.ts` (296 lines)
- **OTEL Attributes**: Clean constant organization with only used keys exported
Test Coverage Verified
Both new utility files have comprehensive test coverage:
| File | Tests | Coverage |
|---|---|---|
| pricing-service.test.ts | 15 tests | calculateCost, getModelPricing, initialize idempotency, date suffix stripping, error handling, destroy lifecycle |
| usage-cost-middleware.test.ts | 14 tests | v3 structured usage, flat usage, pricing_unavailable attribute, no-span scenarios, model ID parsing |
✅ APPROVE
Summary: After 11 review iterations and comprehensive human reviewer feedback from @shagun-singh-inkeep, this PR is production-ready. All Critical, Major, and Minor issues have been properly addressed. The implementation follows best practices for:
- Serverless optimization — Lazy init with stale-while-revalidate refresh pattern
- Error handling — Graceful degradation when pricing unavailable
- Type safety — Handles both AI SDK v3 structured and flat usage shapes
- Observability — Proper OTEL span attributes for cost tracking
- UI/UX — URL-persisted filters for shareable dashboard state
Ship it! 🚀
Note: Unable to submit formal GitHub approval due to permission constraints — this is a COMMENT review with approval recommendation.
Reviewers (1)
| Reviewer | Returned | Main Findings | Consider | While You're Here | Inline Comments | Pending Recs | Discarded |
|---|---|---|---|---|---|---|---|
| orchestrator (delta) | 5 | 0 | 0 | 0 | 0 | 0 | 0 |
| Total | 5 | 0 | 0 | 0 | 0 | 0 | 0 |
Note: Delta review verified all prior findings addressed. No new issues found.
Ito Test Report ✅ — 15 test cases ran, 15 passed.

The unified QA run passed all 15/15 test cases with zero failures, confirming stable behavior across cost, AI-calls, conversation traces, and component render-generation flows in local non-production testing. Key findings: access controls and deep-link login gating worked correctly; filtering/time-range interactions (including empty and future ranges plus rapid toggling) remained coherent without crashes; usage/cost totals and timeline estimated costs were accurate; mobile cost pages and trace navigation were usable; and security checks (cross-project tampering, malformed IDs, and query-parameter XSS payloads) produced safe denied/inert outcomes with no data leakage or backend-internals exposure.
PR Review Summary
(0) Total Issues | Risk: Low
This is a delta review covering 2 commits since the last automated review (026772a45f0d).
✅ Clean Architectural Simplification
The delta removes the PricingService entirely (573 lines) and simplifies cost tracking to use the gateway's own response metadata — exactly what was needed for serverless optimization.
| Change | Impact |
|---|---|
| Deleted pricing-service.ts (288 lines) | ✅ No more interval-based refresh, no dual-source lookups |
| Deleted pricing-service.test.ts (285 lines) | ✅ Tests no longer needed |
| Simplified usage-cost-middleware.ts | ✅ Now extracts cost from providerMetadata.gateway.cost directly |
| Added GATEWAY_ROUTABLE_PROVIDERS | ✅ Auto-routes anthropic/openai/google through gateway when AI_GATEWAY_API_KEY set |
| Removed GEN_AI_COST_PRICING_UNAVAILABLE | ✅ No longer needed — gateway provides cost or we log a warning |
| Updated tests (293 lines) | ✅ Comprehensive coverage of new approach |
| Documentation note | ✅ Added note about automatic gateway routing |
Implementation Quality ✅
1. Gateway routing logic — Clean conditional routing:
```ts
const shouldRouteViaGateway =
  !!process.env.AI_GATEWAY_API_KEY &&
  (GATEWAY_ROUTABLE_PROVIDERS as readonly string[]).includes(provider) &&
  Object.keys(providerConfig).length === 0;
```

- Only activates when the gateway API key is set
- Only for supported providers (anthropic, openai, google)
- Preserves direct connections for custom configurations
2. Cost extraction — Simple priority chain:
```ts
// gateway.cost (actual billing) → gateway.marketCost (BYOK estimate) → 0
const cost = parseFloat(gw.cost as string);
if (!Number.isNaN(cost) && cost > 0) return cost;
const marketCost = parseFloat(gw.marketCost as string);
if (!Number.isNaN(marketCost) && marketCost > 0) return marketCost;
return 0;
```

- Logs a warning when routed through the gateway but no cost data is present (lines 53–58)
3. Test coverage — Comprehensive:
- Cost extraction from `gateway.cost` and `marketCost` fallback
- BYOK scenario (cost=0 with marketCost available)
- Non-numeric strings handled gracefully
- No active span scenario
- `extractUsageTokens` for v3 structured and flat formats
Prior Issues Status
All prior issues from 12+ automated review cycles have been addressed. This delta directly resolves the serverless optimization concern from @robert-inkeep:
"@claude there's a network call being made for the pricing, can you check how this should be optimized, given we are running this with serverless vercel"
✅ Resolved — No more external network calls for pricing. Cost comes directly from the gateway response metadata that's already part of the LLM call result.
✅ APPROVE
Summary: Excellent simplification. The PR removes 573 lines of complex pricing service code and replaces it with a ~15-line cost extraction function that uses the gateway's own response metadata. This is the right architectural choice — the gateway knows the actual cost, so there's no need for fragile rate card lookups. Tests are comprehensive, documentation is updated, and serverless compatibility is now guaranteed. Ship it! 🚀
Note: Unable to submit formal GitHub approval due to permission constraints — this is a COMMENT review with approval recommendation.
Reviewers (1)
| Reviewer | Returned | Main Findings | Consider | While You're Here | Inline Comments | Pending Recs | Discarded |
|---|---|---|---|---|---|---|---|
| orchestrator | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Total | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Note: Delta review of a clean simplification — no sub-reviewers dispatched as changes are straightforward architectural cleanup.
PR Review Summary
(0) Total Issues | Risk: Low
This is a delta review covering 12 commits since the last automated review (df5ec748210c).
✅ Delta Analysis
The delta contains no changes to the usage-tracker feature. All commits are merge commits from main bringing in unrelated changes:
| Commit | Description | Impact on PR |
|---|---|---|
| 1df0a2af7 | Merge branch 'main' | N/A — merge commit |
| 1199d452f | PDF URL attachments (#2804) | Unrelated — file upload feature |
| 96aee7243 | upgrade create-agents-template (#2832) | Unrelated — template upgrade |
| 0ec1c5c25 | Update deploy-signoz-nango.mdx (#2831) | Unrelated — docs update |
| 4c57d6b8c | Version Packages (#2812) | Unrelated — version bumps |
| ef7987f69, 1b37b2966, 82bd424e4 | UI component unification (#2829, #2830, #2810) | Unrelated — shadcn refactor |
| 249183068, 8a0ab005c, 6b9498616 | Skills updates (#2824, #2825, #2826) | Unrelated — AI tooling |
| 296c79aeb | Sidebar tooltip fix (#2811) | Unrelated — UI fix |
✅ Core Feature Unchanged
The usage-tracker implementation files remain stable:
- `usage-cost-middleware.ts` — No changes
- `model-factory.ts` — No changes
- `otel-attributes.ts` — No changes
- `ai-sdk-callbacks.ts` — No changes
- `cost-dashboard.tsx` — No changes
- `signoz-stats.ts` — No changes
✅ Prior Issues Status
All Critical, Major, and Minor issues from the 14 prior automated review cycles have been addressed:
| Category | Status |
|---|---|
| 🔴 Critical issues | ✅ All resolved |
| 🟠 Major issues | ✅ All resolved |
| 🟡 Minor issues | ✅ All resolved |
| 👤 Human feedback (@shagun-singh-inkeep) | ✅ All addressed |
Implementation Summary
The PR successfully implements end-to-end LLM cost tracking:
- **Gateway Routing** — When `AI_GATEWAY_API_KEY` is set, anthropic/openai/google models are automatically routed through Vercel AI Gateway
- **Cost Extraction** — `gatewayCostMiddleware` extracts cost from `providerMetadata.gateway.cost` (actual billing) → `marketCost` (BYOK estimate) → `0`
- **PricingService Removal** — Deleted 573 lines of fragile rate card lookup code in favor of gateway response metadata
- **Comprehensive Tests** — 293 lines of test coverage for the middleware
- **Cost Dashboard** — New UI for cost analytics with project filtering and time range presets
✅ APPROVE
Summary: Clean merge from main with no changes to the usage-tracker feature. After 14 review cycles with all issues addressed, this PR is ready to ship. The implementation elegantly routes LLM calls through Vercel AI Gateway for per-request cost tracking, removing the need for fragile rate card lookups. 🚀
Note: Unable to submit formal GitHub approval due to permission constraints — this is a COMMENT review with approval recommendation.
Reviewers (1)
| Reviewer | Returned | Main Findings | Consider | While You're Here | Inline Comments | Pending Recs | Discarded |
|---|---|---|---|---|---|---|---|
orchestrator |
0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Total | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Note: Delta review — no usage-tracker files changed since last review. No sub-reviewers dispatched.
Ito Test Report ❌ — 22 test cases ran: 1 failed, 21 passed.

Overall, 21 of 22 tests passed, showing strong coverage and stable behavior across cost dashboard routing and filter/query-state handling, rapid-interaction resilience, XSS-inert query rendering, auth sentinel redirects, back/forward restoration, cost empty/formatting/status edge cases, conversation detail cost/timeline fallbacks, AI-calls query/date-range safeguards, and mobile usability at iPhone 13 size. The single important failure was a medium-severity, pre-existing API defect: generate-render endpoints return HTTP 500 instead of the expected HTTP 404 for unknown data/artifact component IDs, which can mislead clients and monitoring by classifying normal not-found cases as server errors.

❌ Failed (1)
🟠 Generate-render API invalid ID handling
Relevant code:

```ts
const dataComponent = await fetchDataComponent(tenantId, projectId, dataComponentId);
if (!dataComponent) {
  return new Response('Data component not found', { status: 404 });
}
```

```ts
  if (!response.ok) {
    let errorData: any;
    try {
      const text = await response.text();
      errorData = text ? JSON.parse(text) : null;
    } catch {
      errorData = null;
    }
    throw new ApiError(
      {
        code: errorCode,
        message: errorMessage,
      },
      response.status
    );
  }
} catch (error) {
  console.error('Error generating artifact component render:', error);
  return new Response(error instanceof Error ? error.message : 'Internal server error', {
    status: 500,
  });
}
```

✅ Passed (21)
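One way to address the 500-vs-404 finding is to let client-class errors keep their status in the catch-all instead of collapsing everything to 500. This is a hypothetical sketch, not the repo's code; the `ApiError` shape is assumed from the excerpt above:

```typescript
// Minimal stand-in for the ApiError thrown in the excerpt: carries the
// upstream HTTP status alongside a code/message body.
class ApiError extends Error {
  constructor(
    public body: { code: string; message: string },
    public status: number
  ) {
    super(body.message);
  }
}

// Map a caught error to a Response. Client-class errors (404 for unknown
// component IDs, 400 for malformed input, etc.) keep their original status;
// only genuinely unexpected failures become 500s.
function toResponse(error: unknown): Response {
  if (error instanceof ApiError && error.status >= 400 && error.status < 500) {
    return new Response(error.body.message, { status: error.status });
  }
  return new Response(error instanceof Error ? error.message : 'Internal server error', {
    status: 500,
  });
}
```

With this shape, the test's expectation holds: an unknown data/artifact component ID surfaces as a 404 to clients and monitoring, while real server faults still report 500.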
Usage Tracking
- `ai-sdk-callbacks` with `onFinish`/`onStepFinish` handlers that capture `usage.promptTokens`, `usage.completionTokens`, finish reason, and cost data onto OTel spans
- `usage_events` table added to runtime schema for future persistent usage tracking
- `token-estimator` moved from `agents-api` to `agents-core`; heuristic estimation deprecated in favor of actual AI SDK usage data

Gateway Routing & Cost Tracking
- When `AI_GATEWAY_API_KEY` is set, Anthropic, OpenAI, and Google models are automatically routed through the Vercel AI Gateway via `ModelFactory`
- Cost is read from `providerMetadata.gateway.cost` (actual credits debited), falling back to `providerMetadata.gateway.marketCost` (market rate estimate), then `0`
- Removed `PricingService` entirely — no more rate card lookups from models.dev or gateway catalog, no alias maps, no silent lookup failures
- Calls with no gateway cost data report `$0`

Manage UI
- Cost pages (`/cost` and `/projects/:id/cost`) with the `CostDashboard` component

Docs
- Note in `models.mdx` about automatic gateway routing for cost tracking when `AI_GATEWAY_API_KEY` is set

Key Technical Decisions
- Gateway routing happens at the `ModelFactory.createModel()` level
- Cost priority: `gateway.cost` > `gateway.marketCost` > `0` — `cost` = actual billing, `marketCost` = estimate (used for BYOK where credits aren't debited), `0` = no gateway
- Token counts normalized via `extractUsageTokens()` in middleware
- Deprecated the `estimateTokens()` heuristic

Files Changed

Core (`packages/agents-core`)
- `model-factory.ts` — gateway routing logic, `GATEWAY_ROUTABLE_PROVIDERS`, `wrapLanguageModel` with `gatewayCostMiddleware`
- `usage-cost-middleware.ts` — new file: `extractGatewayCost()` reads from `providerMetadata.gateway`, `extractUsageTokens()` normalizes token counts
- `otel-attributes.ts` — added `GEN_AI_COST_ESTIMATED_USD`, generation type constants, scoping attributes
- `token-estimator.ts` — moved from agents-api, deprecated
- `usage-tracker.ts` — type export for `GenerationType`
- `runtime-schema.ts` — `usage_events` table definition
- `index.ts` — new exports
- Deleted: `pricing-service.ts`, `pricing-service.test.ts`

API (`agents-api`)
- `ai-sdk-callbacks.ts` — `onFinish`/`onStepFinish` callbacks that write usage + cost to OTel spans
- `generate.ts` — passes callbacks and generation context to AI SDK calls
- `AgentSession.ts` — passes usage context to status update and artifact metadata generations
- `distill-utils.ts` — passes usage context to distillation/compression calls
- `EvaluationService.ts` — passes usage context to eval simulation and scoring calls
- `BaseCompressor.ts`, `ConversationCompressor.ts`, `MidGenerationCompressor.ts` — pass usage context to compression calls
- `agent-types.ts` — extended generation response types with usage fields

Manage UI (`agents-manage-ui`)
- `cost-dashboard.tsx` — new cost analytics dashboard component
- `cost/page.tsx` — org-level cost page
- `projects/[projectId]/cost/page.tsx` — project-level cost page
- `signoz-stats.ts` — updated OTel queries for cost and usage data
- conversation trace routes — enriched with per-step cost/token data
- sidebar-nav — added cost navigation