
F035.4: Context Visibility — What the Model Actually Sees #249

Merged
tfatykhov merged 1 commit into main from spec/F035.4-context-visibility on Apr 4, 2026

Conversation

@tfatykhov
Owner

F035.4 — Context Visibility

Fourth layer in the F035 Observability stack. While F035.1-3 trace what the system did, F035.4 traces what the model was told.

What is in the spec

  • ContextLogEntry — structured metadata for every API call: token breakdown by system prompt section, memory loading inventory, tools included, message counts
  • Token breakdown — identity, user_profile, censors, frame_instructions, working_memory, related_decisions, relevant_facts, execution_ledger, tools, messages — each measured independently
  • Full payload ring buffer — optional capture of complete API payloads (bounded, auto-pruned)
  • REST API — /context/log, /context/log/:id, /context/log/:id/payload, /context/diff
  • Telegram — /context command + compact line in /status
  • Dashboard — Context Inspector panel with timeline, section breakdown, diff view
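The metadata record at the heart of the spec could be sketched roughly as follows. This is a hypothetical illustration assembled from the bullet list above — field names like `token_breakdown` and `memories_loaded` are assumptions, not the actual spec's schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of a ContextLogEntry: one structured metadata
# record per API call, as described in the spec summary above.
@dataclass
class ContextLogEntry:
    entry_id: str
    timestamp: datetime
    # Estimated token count per system-prompt section
    # (identity, user_profile, censors, working_memory, ...)
    token_breakdown: dict[str, int] = field(default_factory=dict)
    memories_loaded: list[str] = field(default_factory=list)
    tools_included: list[str] = field(default_factory=list)
    message_count: int = 0

    @property
    def total_tokens(self) -> int:
        # Sum of all section estimates for this call
        return sum(self.token_breakdown.values())

entry = ContextLogEntry(
    entry_id="abc123",
    timestamp=datetime.now(timezone.utc),
    token_breakdown={"identity": 120, "working_memory": 800, "tools": 450},
)
print(entry.total_tokens)  # 1370
```

At ~500 bytes per record, a dataclass like this serializes cheaply enough to log on every call.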

Architecture hook

Single insertion point: _build_api_payload() in runner.py (Issue #168 unified all LLM calls through AnthropicClient). Async DB write — zero latency impact on API calls.
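The shape of that hook might look like the following minimal sketch. All names here (`build_api_payload`, `log_context_entry`, `CONTEXT_LOG`) are illustrative stand-ins, not the project's actual API; the point is that the log write is scheduled as a fire-and-forget task so the LLM call itself is never delayed.

```python
import asyncio

# In-memory stand-in for the context-log database table.
CONTEXT_LOG: list[dict] = []

async def log_context_entry(metadata: dict) -> None:
    await asyncio.sleep(0)       # stand-in for the async DB write
    CONTEXT_LOG.append(metadata)

def build_api_payload(sections: dict[str, str]) -> dict:
    payload = {"system": "\n".join(sections.values())}
    metadata = {name: len(text) // 4 for name, text in sections.items()}
    # Schedule the log write without awaiting it: zero added latency
    # on the API-call path.
    asyncio.get_running_loop().create_task(log_context_entry(metadata))
    return payload

async def main() -> dict:
    payload = build_api_payload(
        {"identity": "You are Nous.", "tools": "search, recall"}
    )
    await asyncio.sleep(0.001)   # yield so the background task completes
    return payload

asyncio.run(main())
print(len(CONTEXT_LOG))  # 1
```

In a real runner the task handle would be retained (or handed to a task group) so failed writes can at least be logged rather than silently dropped.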

Key design decisions

  • Metadata always captured (~500 bytes/call), full payloads optional (50-200KB/call)
  • len(text)/4 for fast token estimation, API usage response for ground truth
  • 30-day auto-retention, ring buffer for full payloads
  • Independent of F035.1-3 — hooks into runner pipeline, not event bus. Can be built in parallel.
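The bounded, auto-pruned payload buffer can be sketched with a `deque(maxlen=...)`, which evicts the oldest entry automatically on overflow. The capacity value and class name are assumptions — the spec only says "bounded, auto-pruned".

```python
from collections import deque

class PayloadRingBuffer:
    """Hypothetical ring buffer for optional full-payload capture."""

    def __init__(self, capacity: int = 100):
        # maxlen makes deque drop the oldest entry when full
        self._buf: deque = deque(maxlen=capacity)

    def append(self, entry_id: str, payload: bytes) -> None:
        self._buf.append((entry_id, payload))

    def get(self, entry_id: str):
        # Linear scan is fine at ring-buffer sizes (~100s of entries)
        return next((p for i, p in self._buf if i == entry_id), None)

buf = PayloadRingBuffer(capacity=2)
buf.append("a", b"{...}")
buf.append("b", b"{...}")
buf.append("c", b"{...}")    # capacity exceeded: "a" is evicted
print(buf.get("a") is None)  # True
```

With 50-200 KB per payload, a capacity of a few hundred keeps the buffer bounded to tens of megabytes regardless of call volume.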

The question it answers

Why did Nous not know X? → Check if the fact was loaded into context or got ranked out.
Why did Nous repeat that tool call? → Check if the execution ledger was in context.
Where are all the tokens going? → Token breakdown by section, every turn.
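The per-section breakdown follows directly from the `len(text)/4` heuristic mentioned in the design decisions — roughly four characters per token for English text, with the API's usage field as ground truth. A minimal sketch (function names are illustrative):

```python
def estimate_tokens(text: str) -> int:
    # ~4 chars/token is a rough heuristic for English text; the
    # authoritative count comes from the API usage response.
    return len(text) // 4

def section_breakdown(sections: dict[str, str]) -> dict[str, int]:
    return {name: estimate_tokens(text) for name, text in sections.items()}

breakdown = section_breakdown({
    "identity": "x" * 400,         # -> 100 estimated tokens
    "working_memory": "y" * 2000,  # -> 500 estimated tokens
})
print(breakdown)  # {'identity': 100, 'working_memory': 500}
```

Because the estimate is pure arithmetic on strings already in hand, it adds negligible cost per turn.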

New observability sub-spec for full LLM context window transparency:
- ContextLogEntry with token breakdown by system prompt section
- Full payload ring buffer (optional, bounded)
- REST endpoints: /context/log, /context/diff
- Telegram /context command
- Dashboard Context Inspector panel
- Hooks into _build_api_payload() single choke point

Updated F035 umbrella spec:
- Added Layer 4 description
- Added F035.4 to sub-specs table
- Updated sequencing rationale (F035.4 is independent, parallelizable)
- Added success criterion #6 for context visibility

tfatykhov merged commit 8ae5172 into main on Apr 4, 2026
1 of 2 checks passed