
ArtifactService binary sanitization and child artifacts #2745

Open
mike-inkeep wants to merge 6 commits into stack/artifact_binary_sanitizer from stack/artifact_service_binary_refs

Conversation

@mike-inkeep
Contributor

No description provided.

@changeset-bot

changeset-bot bot commented Mar 18, 2026

⚠️ No Changeset found

Latest commit: 3880141

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.

This PR includes no changesets

When changesets are added to this PR, you'll see the packages that this PR includes changesets for and the associated semver types


@vercel

vercel bot commented Mar 18, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

| Project | Deployment | Actions | Updated (UTC) |
| --- | --- | --- | --- |
| agents-api | Ready | Preview, Comment | Mar 27, 2026 9:05pm |
| agents-docs | Ready | Preview, Comment | Mar 27, 2026 9:05pm |
| agents-manage-ui | Ready | Preview, Comment | Mar 27, 2026 9:05pm |


@claude claude bot left a comment


PR Review Summary

(5) Total Issues | Risk: Medium

🟠⚠️ Major (3) 🟠⚠️

🟠 1) ArtifactService.ts:1007-1059 Partial batch failure leaves inconsistent state

Issue: The for-loop in createBinaryChildArtifacts iterates through binary parts and calls upsertLedgerArtifact for each one sequentially. If the database insert fails mid-way through the batch (e.g., on the 3rd of 5 parts), some child artifacts are persisted while others are not. The error propagates up, but the refs Map will be incomplete.

Why: Consider an artifact with 5 images: if child artifact creation succeeds for images 1-2 but fails on image 3, the error propagates and saveArtifact fails. However, child artifacts 1-2 remain in the database as orphaned records. If retried, deduplication may behave unexpectedly. The parent artifact's data structure would be inconsistent - some binary parts with artifactRef, others without.

Fix: Options to consider:

  1. Wrap in a transaction for atomicity (if supported by the upsert function)
  2. Use Promise.all to fail atomically if any part fails:

const insertPromises = binaryParts.map(async (part) => {
  const hash = this.extractContentHashFromBlobUri(part.data) || this.fallbackHash(part.data);
  // ... build childArtifactId, check dedupe ...
  await upsertLedgerArtifact(runDbClient)({ ... });
  return { blobUri: part.data, ref: { artifactId, toolCallId } };
});
const results = await Promise.all(insertPromises);

  3. Catch per-part errors and continue with partial success plus explicit tracking

🟠 2) ArtifactService.ts:906-911 Binary child artifacts only created for fullData, not summaryData

Issue: Binary child artifacts are created by traversing sanitizedData only. If artifact.summaryData contains blob-backed binary parts that are NOT present in artifact.data, those binaries will not have child artifacts created. The attachBinaryArtifactRefs call on summaryData will then fail to find refs for those parts, leaving them without artifactRef.

Why: If a caller provides different binary content in summaryData vs data (e.g., a thumbnail in summary, full image in data), the summary's thumbnail would have an orphaned blob URI with no child artifact record.

Fix: Consider whether summaryData can contain different binary parts than data. If yes, collect binary parts from both:

const binaryReferences = await this.createBinaryChildArtifacts({
  parentArtifactId: artifact.artifactId,
  parentArtifactType: artifact.type,
  toolCallId: artifact.toolCallId,
  value: sanitizedSummaryData ? [sanitizedData, sanitizedSummaryData] : sanitizedData,
});

Or document that summaryData must be a subset of data for binary content.

🟠 3) system Implicit contract change: artifact data now includes artifactRef for binary parts

Issue: The PR introduces an implicit data contract change. Artifact data structures (summaryData and fullData) will now contain artifactRef objects embedded within binary parts:

Before: { type: 'image', data: 'blob://...', mimeType: 'image/png' }
After: { type: 'image', data: 'blob://...', mimeType: 'image/png', artifactRef: { artifactId: 'bin_...', toolCallId: '...' } }

Why: This affects consumers retrieving artifacts via:

  • Chat API (Vercel AI SDK Data Stream)
  • getContextArtifacts, getArtifactSummary, getArtifactFull
  • Streaming via data-artifact events
  • UI components rendering artifact data

While additive fields are generally backward-compatible, consumers doing strict validation or field enumeration may be affected.

Fix: Consider documenting this contract change. If artifactRef is intended to be a stable API for clients to resolve binary artifacts, add it to validation schemas and document the shape.
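If artifactRef is promoted to a stable field, the documented shape might look like the following sketch (interface and helper names here are illustrative, not the repo's actual identifiers):

```typescript
// Hypothetical TypeScript shape for the new field; the real validation
// schema lives in the repo and may differ.
interface ArtifactRef {
  artifactId: string;   // e.g. a bin_-prefixed child artifact ID
  toolCallId: string;
}

interface BinaryPart {
  type: 'image' | 'file';
  data: string;              // blob:// URI after sanitization
  mimeType?: string;
  artifactRef?: ArtifactRef; // new optional field: additive for old consumers
}

// A consumer doing strict field enumeration must now allow artifactRef:
function isKnownBinaryPartField(key: string): boolean {
  return ['type', 'data', 'mimeType', 'artifactRef'].includes(key);
}
```

Because the field is optional, consumers that ignore unknown keys are unaffected; only strict validators need the schema update.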

Inline Comments:

  • 🟠 Major: ArtifactService.ts:997 Silent fallback when context is missing
  • 🟠 Major: ArtifactService.ts:1163-1165 Regex requires literal . after hash

🟡 Minor (1) 🟡

Inline Comments:

  • 🟡 Minor: ArtifactService.ts:1148-1160 Type guard returns type: string instead of literal union

💭 Consider (1) 💭

💭 1) ArtifactService.ts:1044-1051 Child artifacts use metadata.parentArtifactId instead of existing derivedFrom column

Issue: The ledger_artifacts table already has a derivedFrom column designed for parent-child artifact relationships. This PR stores the relationship in metadata.parentArtifactId instead, creating two mechanisms for the same concept.

Why: The derivedFrom column is already wired into the DAL (maps from metadata.derivedFrom). Having two ways to express parent-child relationships may lead to inconsistent queries.

Fix: Consider using derivedFrom in metadata (or directly) for consistency:

metadata: {
  derivedFrom: params.parentArtifactId, // Uses existing column
  parentArtifactType: params.parentArtifactType,
  // ...
}

Inline Comments:

  • 💭 Consider: ArtifactService.test.ts:1052 Additional test coverage for edge cases

💡 APPROVE WITH SUGGESTIONS

Summary: This PR introduces a useful pattern for managing binary content in artifacts by extracting binaries into child artifacts with deduplicated storage. The main areas for improvement are: (1) partial batch failures can leave inconsistent state with orphaned child artifacts, (2) missing logging for silent fallbacks makes debugging difficult, and (3) the implicit contract change should be documented. The test coverage is a good start but could be expanded to cover edge cases. None of these are blockers, but addressing the error handling and logging would improve production reliability.

Discarded (9)

| Location | Issue | Reason Discarded |
| --- | --- | --- |
| ArtifactService.ts:1007 | Sequential upserts for child artifacts | Intentional for deduplication logic; parallelization would break the dedupe Map accumulation |
| ArtifactService.ts:1008-1014 | Deduplication scope limited to tool call | Working as designed - per-tool-call isolation provides clearer provenance |
| ArtifactService.ts:1021 | Synthetic toolCallId pattern is new | The :binary suffix is actually good practice - distinguishes from real tool call IDs |
| ArtifactService.ts:1030 | Child artifact type naming uses hyphens | Consistent with existing test data showing binary-child type |
| ArtifactService.ts:1037 | blobUri vs data field name | Internal storage structure, not a consistency issue |
| ArtifactService.ts:989-1061 | Extract named interfaces for types | Valid suggestion but INFO level - design preference |
| ArtifactService.ts:1064-1096 | Return type looser than guard | Follows naturally from type guard fix - INFO level |
| ArtifactService.ts:1168-1170 | fallbackHash uses hex encoding not SHA | Fallback path for edge cases, collision risk is minimal |
| ArtifactService.ts:1172-1179 | bin_ prefix is new naming pattern | Acceptable new pattern - clearly distinguishes binary children |
Reviewers (6)

| Reviewer | Returned | Main Findings | Consider | While You're Here | Inline Comments | Pending Recs | Discarded |
| --- | --- | --- | --- | --- | --- | --- | --- |
| pr-review-errors | 4 | 1 | 0 | 0 | 1 | 0 | 2 |
| pr-review-standards | 3 | 1 | 0 | 0 | 1 | 0 | 1 |
| pr-review-architecture | 5 | 1 | 1 | 0 | 0 | 0 | 3 |
| pr-review-tests | 7 | 0 | 0 | 0 | 1 | 0 | 6 |
| pr-review-types | 3 | 0 | 0 | 0 | 1 | 0 | 2 |
| pr-review-consistency | 6 | 0 | 0 | 0 | 0 | 0 | 6 |
| Total | 28 | 3 | 1 | 0 | 4 | 0 | 20 |

}): Promise<Map<string, { artifactId: string; toolCallId: string }>> {
  if (!this.context.taskId || !this.context.contextId) {
    return new Map();
  }

🟠 MAJOR: Silent fallback when context is missing

Issue: When taskId or contextId is missing, createBinaryChildArtifacts silently returns an empty Map with no logging. The caller has no indication that binary child artifact creation was skipped.

Why: This creates orphaned blob URIs in parent artifacts that have no corresponding child artifact records. Debugging this in production would require correlating blob storage contents with artifact records - difficult to diagnose without logs indicating the skip occurred.

Fix:

Suggested change

if (!this.context.taskId || !this.context.contextId) {
  logger.warn(
    {
      parentArtifactId: params.parentArtifactId,
      hasTaskId: !!this.context.taskId,
      hasContextId: !!this.context.contextId,
    },
    'Skipping binary child artifact creation due to missing context'
  );
  return new Map();
}

Comment on lines +1163 to +1165
private extractContentHashFromBlobUri(blobUri: string): string | null {
  const match = blobUri.match(/sha256-([a-f0-9]{16,64})\./i);
  return match?.[1] || null;

🟠 MAJOR: Regex requires literal . after hash - fragile for URIs without extensions

Issue: The regex /sha256-([a-f0-9]{16,64})\./i requires a literal . after the hash. Blob URIs without an extension (e.g., blob://v1/.../sha256-abc123 with no trailing .ext) will fail to match, causing fallback to hex-encoding the entire URI.

Why: This could cause deduplication failures if the same binary is referenced via URIs with and without extensions. The fallback hash would differ from the extracted hash.

Fix:

Suggested change

Before:
private extractContentHashFromBlobUri(blobUri: string): string | null {
  const match = blobUri.match(/sha256-([a-f0-9]{16,64})\./i);
  return match?.[1] || null;
}

After:
private extractContentHashFromBlobUri(blobUri: string): string | null {
  const match = blobUri.match(/sha256-([a-f0-9]{16,64})(?:\.|$)/i);
  return match?.[1] || null;
}
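The behavioral difference is easy to check in isolation (the hash below is made up for illustration):

```typescript
// The original pattern requires a literal '.' after the hash; the suggested
// pattern also accepts end-of-string, so extension-less URIs still match.
const strict = /sha256-([a-f0-9]{16,64})\./i;
const relaxed = /sha256-([a-f0-9]{16,64})(?:\.|$)/i;

const withExt = 'blob://v1/ctx/sha256-0123456789abcdef.png';
const withoutExt = 'blob://v1/ctx/sha256-0123456789abcdef';

console.log(strict.exec(withExt)?.[1]);     // "0123456789abcdef"
console.log(strict.exec(withoutExt)?.[1]);  // undefined (no match)
console.log(relaxed.exec(withoutExt)?.[1]); // "0123456789abcdef"
```

With the strict pattern, the extension-less URI falls through to the fallback hash, so the same binary can get two different dedupe keys.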

Comment on lines +1148 to +1160
private isBlobBackedBinaryPart(
  value: unknown
): value is { type: string; data: string; mimeType?: string } {
  if (!value || typeof value !== 'object' || Array.isArray(value)) {
    return false;
  }

  const maybePart = value as Record<string, unknown>;
  return (
    (maybePart.type === 'image' || maybePart.type === 'file') &&
    typeof maybePart.data === 'string' &&
    isBlobUri(maybePart.data)
  );

🟡 Minor: Type guard returns type: string instead of literal union

Issue: The type guard validates only 'image' | 'file' for the type field but returns a predicate with type: string. This loses type information that could enable exhaustive checking downstream.

Why: Downstream code consuming the narrowed type cannot safely narrow further since TypeScript sees type: string instead of the literal union.

Fix:

Suggested change (only the predicate's type annotation changes, from type: string to the literal union):

private isBlobBackedBinaryPart(
  value: unknown
): value is { type: 'image' | 'file'; data: string; mimeType?: string } {
  if (!value || typeof value !== 'object' || Array.isArray(value)) {
    return false;
  }
  const maybePart = value as Record<string, unknown>;
  return (
    (maybePart.type === 'image' || maybePart.type === 'file') &&
    typeof maybePart.data === 'string' &&
    isBlobUri(maybePart.data)
  );
}

expect(upsertFn).toHaveBeenCalledTimes(2);
});
});
});

💭 Consider: Additional test coverage for edge cases

Issue: The new tests cover the happy path well but miss several edge cases:

  1. Missing context - No test for when taskId/contextId is missing (early return path)
  2. No binary parts - No test confirming correct behavior when artifact has no blob-backed binaries
  3. summaryData path - No test when both data and summaryData are provided
  4. type: 'file' - All tests use type: 'image'; the file type path is untested
  5. Deeply nested parts - Tests only cover flat structures, not { outer: { inner: { images: [...] } } }

Why: These paths exist in the implementation and should have test coverage to prevent regressions.

Fix: Consider adding test cases for these scenarios. Example for missing context:

it('skips binary child artifact creation when taskId is missing', async () => {
  const serviceWithoutTask = new ArtifactService({
    ...mockContext,
    taskId: undefined,
  });
  // ... verify only parent artifact created
});

@itoqa

itoqa bot commented Mar 18, 2026

Ito Test Report ✅

14 test cases ran. 14 passed.

🔍 Verification focused on artifact persistence, dedupe/normalization behavior, edge-case handling, and adversarial route protections. The included cases all met expected behavior with representative evidence timestamps and screenshots.

✅ Passed (14)
| Test Case | Summary | Timestamp | Screenshot |
| --- | --- | --- | --- |
| ROUTE-3 | Conversation retrieval succeeded in both default and vercel-formatted checks, with stable role ordering and artifact messages represented as data-artifact without message-type regression. | 49:23 | ROUTE-3_49-23.png |
| LOGIC-1 | ArtifactService dedupe test passed, confirming repeated identical hash entries are deduplicated within a single tool-call scope. | 20:52 | LOGIC-1_20-52.png |
| LOGIC-2 | Source-backed validation confirmed dedupe scope includes toolCallId/parent scope and produces distinct dedupe keys and child IDs across calls for the same hash. | 20:52 | LOGIC-2_20-52.png |
| LOGIC-3 | Normalization rule check passed and generated child artifact ID matched expected sanitized-scope plus hash24 format for special-character toolCallId input. | 20:52 | LOGIC-3_20-52.png |
| LOGIC-4 | Conversation artifact replacement test passed, confirming artifact references are retained through formatting/replacement flow when multiple refs share one toolCallId. | 20:52 | LOGIC-4_20-52.png |
| EDGE-1 | Short inline binary-like content remained inline and did not enter blob-sanitization replacement flow in targeted sanitizer validation. | 27:26 | EDGE-1_27-26.png |
| EDGE-2 | HTTP and data URI inputs stayed unchanged and were excluded from binary-child extraction paths during sanitizer validation. | 27:26 | EDGE-2_27-26.png |
| EDGE-3 | Missing task/context fallback completed without crash in targeted ArtifactService path validation. | 27:26 | EDGE-3_27-26.png |
| EDGE-4 | Navigation churn validation completed with successful post-stream conversation retrieval showing stable message count and preserved data-artifact payload, indicating recoverable integrity after refresh/back-forward actions. | 49:23 | EDGE-4_49-23.png |
| EDGE-5 | Rapid run-route submissions remained stable and source-level rapid-delta parser stress checks passed without service collapse. | 27:27 | EDGE-5_27-27.png |
| ADV-1 | Unauthorized and malformed-auth requests were denied without exposing artifact metadata or conversation data. | 37:32 | ADV-1_37-32.png |
| ADV-2 | Cross-session replay attempt from Session B to Session A conversation was denied and did not expose foreign conversation content. | 37:32 | ADV-2_37-32.png |
| ADV-3 | Malformed pseudo-base64 payload was rejected with controlled validation errors and no server-crash response pattern. | 37:32 | ADV-3_37-32.png |
| ADV-5 | Identifier-injection payload was rejected at request validation and source-level artifact utility tests confirmed safe normalization behavior for identifier handling. | 37:32 | ADV-5_37-32.png |
📋 View Recording

Screen Recording

mike-inkeep added a commit that referenced this pull request Mar 26, 2026
This function is intentionally exported for use in artifact persistence (see PR #2745).
The observability-only stripping path is `stripBinaryDataForObservability`.
export async function sanitizeArtifactBinaryData()
…ack, bulk child insert

- Add ARTIFACT_SAVE_MAX_RETRIES (1) and ARTIFACT_SAVE_RETRY_DELAY_MS (3s) constants,
  distinct from ARTIFACT_GENERATION_MAX_RETRIES which covers LLM retries
- recordEvent .catch handler now actively retries processArtifact once after delay;
  logs on exhaustion so setSpanWithError fires on the process_artifact span
- Remove try/catch inline fallback in uploadInlinePart — blob upload errors now
  propagate out of sanitizeArtifactBinaryData instead of silently storing raw base64
- Replace per-row upsertLedgerArtifact loop in createBinaryChildArtifacts with a
  single bulkInsertLedgerArtifacts call (INSERT ... ON CONFLICT DO NOTHING) for
  single-statement atomicity
- Fix ArtifactParser.typeSchema test mock missing getTracer (pre-existing breakage)
Add ARTIFACT_BINARY_CHILDREN span name and ARTIFACT_BINARY_CHILD_COUNT /
ARTIFACT_BINARY_CHILD_IDS / ARTIFACT_BINARY_CHILD_HASHES span keys to
otel-attributes. Parse and surface them in the traces timeline so binary
child artifact count and IDs are visible when inspecting an artifact span.
@pullfrog
Contributor

pullfrog bot commented Mar 27, 2026

TL;DR — Extends ArtifactService to extract blob-backed binary parts (images, files) from artifact payloads into separate child artifact records with deterministic IDs and back-references. Also wires up actual retry logic for artifact save failures and surfaces binary child metadata through OTel tracing and the Manage UI traces timeline.

Key changes

  • Extract binary parts into child artifacts in ArtifactService — after sanitization, blob-backed binary parts are collected from the artifact tree, deduplicated by content hash, bulk-inserted as child artifacts, and back-referenced via artifactRef in the parent data.
  • Make blob upload failures fail-loud in artifact-binary-sanitizer — removes the silent fallback that kept inline base64 on upload failure; errors now propagate so the session-level retry can handle them.
  • Wire up artifact save retry in AgentSession — the retry path that previously only logged now actually re-invokes processArtifact with a configurable delay, using new ARTIFACT_SAVE_MAX_RETRIES / ARTIFACT_SAVE_RETRY_DELAY_MS constants.
  • Add bulkInsertLedgerArtifacts to data access layer — batch insert with onConflictDoNothing for efficient, idempotent child artifact creation.
  • Surface binary child metadata in OTel and Manage UI — new span attributes (artifact.binary_child_count, artifact.binary_child_ids) flow through the traces API to the timeline panel where they render as badges.
  • Pass API URL env vars through Turbo — adds INKEEP_AGENTS_API_URL, PUBLIC_INKEEP_AGENTS_API_URL, NEXT_PUBLIC_INKEEP_AGENTS_API_URL, and INKEEP_AGENTS_MANAGE_UI_URL to turbo.json global env.

Summary | 14 files | 6 commits | base: stack/artifact_binary_sanitizer · head: stack/artifact_service_binary_refs


Binary child artifact extraction

Before: Artifact binary data was sanitized to blob:// URIs inline but remained embedded in the parent artifact payload with no separate records.
After: Each blob-backed binary part is extracted into its own child artifact record (visibility: 'internal', derivedFrom linking to parent) with a deterministic ID (bin_<scope>_<hash>), and an artifactRef is injected back into the parent's data tree.

The extraction uses a two-pass recursive tree walk with WeakSet-based circular reference protection: first collectBlobBackedBinaryParts finds all candidates, then attachBinaryArtifactRefs injects references. Deduplication within a tool call is handled by content hash extracted from the blob URI.
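A minimal sketch of the WeakSet-guarded collection pass might look like this (function and helper names are illustrative; the real implementation lives in ArtifactService.ts):

```typescript
// Sketch of the first pass: walk the artifact tree, collect blob-backed
// binary parts, and guard against circular references with a WeakSet.
type BinaryPart = { type: 'image' | 'file'; data: string; mimeType?: string };

const isBlobUri = (s: string): boolean => s.startsWith('blob://'); // simplified stand-in

function collectBlobBackedBinaryParts(
  value: unknown,
  seen: WeakSet<object> = new WeakSet()
): BinaryPart[] {
  if (!value || typeof value !== 'object') return [];
  if (seen.has(value as object)) return []; // already visited: circular reference
  seen.add(value as object);

  const part = value as Record<string, unknown>;
  if (
    (part.type === 'image' || part.type === 'file') &&
    typeof part.data === 'string' &&
    isBlobUri(part.data)
  ) {
    return [part as BinaryPart];
  }
  // Arrays and plain objects are walked the same way via Object.values.
  return Object.values(part).flatMap((v) => collectBlobBackedBinaryParts(v, seen));
}
```

The second pass (attachBinaryArtifactRefs in the PR) can reuse the same guard while injecting artifactRef into each collected part.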

How is deduplication handled?

Content hashes are extracted from blob URIs (sha256-<hash>) and used to build deterministic child artifact IDs. Within a single tool call, duplicate hashes map to the same child artifact. The bulk insert uses onConflictDoNothing as a database-level safety net.
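Under the bin_&lt;scope&gt;_&lt;hash&gt; naming described above, deterministic ID construction might look like this sketch (the exact scope-sanitization rule is an assumption; the shape matches the "sanitized-scope plus hash24" format noted in the Ito report):

```typescript
import { createHash } from 'node:crypto';

// Hypothetical sketch of deterministic child-ID construction: same toolCallId
// scope plus same content hash always yields the same ID, which is what makes
// the bulk insert's onConflictDoNothing an effective dedupe safety net.
function buildBinaryChildId(toolCallId: string, contentHash: string): string {
  const scope = toolCallId.replace(/[^a-zA-Z0-9_-]/g, '_'); // sanitize special chars
  return `bin_${scope}_${contentHash.slice(0, 24)}`;
}

// Hypothetical fallback when the URI carries no sha256-<hash> segment:
// hash the whole URI so repeats of the same URI still deduplicate.
function fallbackHash(blobUri: string): string {
  return createHash('sha256').update(blobUri).digest('hex');
}
```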

ArtifactService.ts · ledgerArtifacts.ts · ArtifactService.test.ts


Fail-loud binary sanitizer with session-level retry

Before: Blob upload failures were silently caught — the original base64 data was persisted inline, potentially bloating the database. The retry path in AgentSession logged but never actually retried.
After: Upload failures throw, propagating to AgentSession which catches them and schedules a retry via setTimeout (1 retry, 3-second delay) before giving up.

New execution limit constants ARTIFACT_SAVE_MAX_RETRIES and ARTIFACT_SAVE_RETRY_DELAY_MS are deliberately separate from the LLM generation retry budget, allowing independent tuning for infrastructure vs. generation failures.
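The constant values come from the commit message; the retry wrapper below is a hypothetical sketch of one delayed retry before giving up, not the actual AgentSession code:

```typescript
// Values per the commit message; deliberately separate from the LLM
// generation retry budget (ARTIFACT_GENERATION_MAX_RETRIES).
const ARTIFACT_SAVE_MAX_RETRIES = 1;       // infra retries
const ARTIFACT_SAVE_RETRY_DELAY_MS = 3000; // delay before the retry

// Hypothetical wrapper: retry the save once after a delay, then rethrow.
async function saveWithRetry(
  save: () => Promise<void>,
  delayMs: number = ARTIFACT_SAVE_RETRY_DELAY_MS
): Promise<void> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await save();
    } catch (err) {
      if (attempt >= ARTIFACT_SAVE_MAX_RETRIES) throw err; // retries exhausted
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

With MAX_RETRIES = 1, a save that fails once and then succeeds completes after two attempts; a save that fails twice surfaces the error to the caller.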

artifact-binary-sanitizer.ts · AgentSession.ts · defaults.ts


Binary child metadata in OTel and traces UI

Before: No observability into binary content extraction from artifacts.
After: OTel spans carry artifact.binary_child_count, artifact.binary_child_ids, and artifact.binary_child_hashes. The Manage UI traces timeline displays child count and IDs as code badges in the artifact detail panel.

otel-attributes.ts · render-panel-content.tsx · route.ts

Pullfrog | View workflow run | Triggered by Pullfrog | Using Claude Opus


@claude claude bot left a comment


PR Review Summary

(1) Total Issues | Risk: Low

RE-REVIEW: This review covers the delta since the prior review. Key improvements address the batch failure atomicity and error handling concerns.

✅ Prior Issues Addressed

| Prior Issue | Resolution |
| --- | --- |
| 🟠 Partial batch failure leaves inconsistent state | Fixed — Now uses bulkInsertLedgerArtifacts with onConflictDoNothing() for atomic batch insert (ledgerArtifacts.ts:241) |
| 💭 Uses metadata.parentArtifactId instead of derivedFrom | Fixed — Now correctly uses derivedFrom in metadata (ArtifactService.ts:1080) |
| 🟠 Silent fallback masks upload failures | Fixed — artifact-binary-sanitizer.ts now throws on upload failure, enabling higher-level retry logic (artifact-binary-sanitizer.ts:54) |

🟡 Minor (1) 🟡

Inline Comments:

  • 🟡 Minor: AgentSession.ts:464 Off-by-one semantics in retry logic naming

💭 Consider (1) 💭

💭 1) ArtifactService.ts:1206 Type guard returns type: string instead of literal union

Issue: The type guard isBlobBackedBinaryPart returns value is { type: string; ... } but the implementation checks for specific literals 'image' | 'file'.

Why: The loose type signature means TypeScript won't catch if code accidentally uses an invalid type value. While functionally correct, a stricter return type would provide better compile-time safety.

Fix: Update the return type to use the literal union:

private isBlobBackedBinaryPart(
  value: unknown
): value is { type: 'image' | 'file'; data: string; mimeType?: string } {

🕐 Pending Recommendations (2)

  • 🟠 ArtifactService.ts:906-911 Binary child artifacts only created for fullData, not summaryData — acceptable if summaryData is always a subset of fullData
  • 🟠 ArtifactService.ts Implicit contract change with artifactRef injection — consumers need to be aware of the new field

✅ APPROVE

Summary: The delta commits significantly improve reliability by switching to atomic bulk inserts and implementing active retry with explicit failure propagation. The prior review's major concerns about partial batch failures have been well addressed. The remaining items are minor (retry naming semantics) or pre-existing considerations from the prior review. Good work on the reliability improvements! 🎉

Discarded (0)

No findings discarded.

Reviewers (1)
| Reviewer | Returned | Main Findings | Consider | While You're Here | Inline Comments | Pending Recs | Discarded |
| --- | --- | --- | --- | --- | --- | --- | --- |
| orchestrator | 4 | 0 | 1 | 0 | 1 | 2 | 0 |
| Total | 4 | 0 | 1 | 0 | 1 | 2 | 0 |

Note: Delta review — focused on changes since prior automated review. Sub-reviewers not dispatched due to small, focused delta scope.

this.artifactProcessingErrors.set(artifactId, errorCount);

- if (errorCount >= this.MAX_ARTIFACT_RETRIES) {
+ if (errorCount > this.MAX_SAVE_RETRIES) {

🟡 Minor: Off-by-one in retry logic

Issue: The condition errorCount > this.MAX_SAVE_RETRIES is checked after incrementing errorCount, so with MAX_SAVE_RETRIES = 1, this allows 2 attempts (initial + 1 retry) which is correct, but the naming/semantics are confusing.

Why: With MAX_SAVE_RETRIES = 1:

  • First failure: errorCount = 1, condition 1 > 1 is false → retry scheduled
  • Second failure: errorCount = 2, condition 2 > 1 is true → give up

This is actually correct behavior (1 retry = 2 total attempts), but the variable name MAX_SAVE_RETRIES suggests "max number of retries" not "max error count before giving up". Consider renaming to MAX_SAVE_ATTEMPTS or adjusting the condition to errorCount >= this.MAX_SAVE_RETRIES.
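The counting described above can be simulated directly (a standalone sketch of the post-increment check, not the actual AgentSession code):

```typescript
// With MAX_SAVE_RETRIES = 1 and a post-increment check, the first failure
// schedules a retry and the second gives up: 2 total attempts.
const MAX_SAVE_RETRIES = 1;
let errorCount = 0;
const outcomes: string[] = [];

for (let failure = 1; failure <= 3; failure++) {
  errorCount++;
  if (errorCount > MAX_SAVE_RETRIES) {
    outcomes.push('give up');
    break;
  }
  outcomes.push('retry');
}

console.log(outcomes); // [ 'retry', 'give up' ]
```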

@github-actions github-actions bot deleted a comment from claude bot Mar 27, 2026
@itoqa

itoqa bot commented Mar 27, 2026

Ito Test Report ✅

20 test cases ran. 20 passed.

The unified QA run passed all 20 of 20 test cases with zero failures, confirming end-to-end correctness and stability of conversation trace artifact handling across API responses and timeline/detail-panel rendering. Binary child metadata is correctly mapped and displayed (including count/IDs, array normalization, and duplicate-payload deduplication), safely omitted when absent, and resilient under malformed input, large payloads, mobile viewports, and rapid interaction/refresh/cross-tab scenarios. It is also protected against XSS, auth bypass, and tenant/project/path tampering leakage. SigNoz failure modes are surfaced with the expected 503 (unavailable), 501 (missing API key), and 429 (rate limit) responses without crashes or internal data exposure.

✅ Passed (20)
| Category | Summary | Screenshot |
| --- | --- | --- |
| Adversarial | XSS-like binary child IDs rendered as plain text with no script execution signal. | ADV-1 |
| Adversarial | Tampered tenant/project and fuzzed conversation-path requests stayed safely denied/error-only without cross-tenant data or internal leakage. | ADV-2 |
| Adversarial | SQL-like and control-character conversationId fuzzing returned controlled errors and did not expose stack traces, SQL fragments, or internals. | ADV-3 |
| Adversarial | After clearing auth state, traces API remained safely denied and did not return protected activities or stale data. | ADV-4 |
| Edge | Malformed telemetry returned non-500 JSON and artifact details stayed stable without client crash. | EDGE-1 |
| Edge | Array-form telemetry normalized to string IDs and rendered matching badges in artifact details. | EDGE-2 |
| Edge | 50 long binary child ID badges remained usable without horizontal overflow and row switching stayed responsive. | EDGE-3 |
| Edge | Deterministic local fixture confirmed repeated identical binary payloads dedupe to one child artifact reference in API output and UI detail panel. | EDGE-4 |
| Edge | Two-tab navigation and refresh maintained matching binary child count and IDs. | EDGE-5 |
| Logic | Deterministic local fixture confirmed artifact_processing includes binary child count > 0 and bin_* child IDs in both API payload and trace details UI. | LOGIC-1 |
| Mobile | On 390x844 viewport, binary metadata stayed readable with wrapping and remained reachable via scrolling. | MOBILE-1 |
| Rapid | Rapid multi-row clicking preserved final selection and kept details synchronized to the last clicked row. | RAPID-1 |
| Rapid | Refresh/back-forward loops preserved activity count/order and binary metadata consistency. | RAPID-2 |
| Rapid | A 10-refresh burst kept IDs unique, ordering stable, and binary child mapping correct. | RAPID-3 |
| Happy-path | Authenticated conversation traces API returned artifact binary child count and IDs on artifact activity payloads. | ROUTE-1 |
| Happy-path | Deterministic unavailable-mode request returned expected 503 service-unavailable response while request handling remained stable. | ROUTE-2 |
| Happy-path | Missing-key scenario returned expected 501 response with explicit SIGNOZ_API_KEY configuration messaging. | ROUTE-3 |
| Happy-path | Rate-limit simulation returned expected 429 response with explicit SigNoz throttling context. | ROUTE-4 |
| Ui | Artifact detail panel displayed binary child count and matching binary child ID badges without mismatch. | UI-1 |
| Ui | Binary child sections stayed hidden when count and IDs were absent, while core artifact fields still rendered. | UI-2 |

Commit: 3880141

View Full Run


