Public hub for Richard Porter’s free work on safe, sovereign human–AI collaboration.
This is the front door to the ecosystem.
If you are new here, do not try to read everything in order. Each repository has a different job. Start with the one that matches the question you are actually trying to answer.
Everything here is free, voluntary, and intended to be usable without technical background.
This work is organized into seven distinct layers:
- Safety architecture — how to constrain AI behavior before drift begins
- Practical collaboration — how to work with AI without losing judgment or control
- Authorship and sovereignty research — how human governance survives or fails inside collaboration
- Measurement and scorecards — how to test whether safeguards are structurally present
- Trust and provenance — how delegation, custody, and verification work across agents or systems
- Concrete tools — operational utilities that support sovereign thinking in practice
- Developmental tools — judgment-building protocols for practitioners working toward expertise
Frozen Kernel is the foundational architecture repo.
This is the place to start if your question is:
- How do you set hard boundaries before an AI session begins?
- Why are prompts and preferences not enough for safety-critical work?
- What does deterministic governance look like in human–AI collaboration?
Read this first if you care about safety floors, behavioral failure modes, or constraint-based governance.
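To make "hard boundaries before a session begins" concrete, here is a minimal sketch of the idea, not the repo's actual implementation or API: constraints are fixed and immutable before anything runs, and each proposed action is checked deterministically against all of them. Every name below is invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative sketch only: a hard boundary is an immutable, deterministic
# pass/fail rule fixed before the session starts, not a preference the
# model can weigh against other goals.
@dataclass(frozen=True)  # frozen: the boundary cannot be mutated mid-session
class Boundary:
    name: str
    check: Callable[[dict], bool]  # predicate over a proposed action

def enforce(boundaries, action: dict) -> bool:
    """Every boundary must pass; any single failure halts the action."""
    return all(b.check(action) for b in boundaries)

# Hypothetical boundaries for the example.
kernel = (
    Boundary("no_external_sends", lambda a: not a.get("sends_data", False)),
    Boundary("human_confirms_writes",
             lambda a: bool(a.get("confirmed")) or not a.get("writes", False)),
)

print(enforce(kernel, {"writes": True, "confirmed": True}))  # True
print(enforce(kernel, {"sends_data": True}))                 # False
```

The point of the sketch is the shape of the check: binary, order-independent, and decided before the interaction rather than negotiated during it.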
The AI Collaboration Field Guide is the operational playbook.
This is the place to start if your question is:
- How do I collaborate with AI without getting subtly steered?
- What are the actual failure modes users experience in practice?
- What tools help me stay oriented, skeptical, and sovereign while working?
Read this first if you want usable methods, diagnostic language, and practical techniques (includes the 48 Sovereign Thinking Tools).
Dimensional Authorship is the research and case-study repo.
This is the place to start if your question is:
- What does human authorship look like under AI collaboration?
- How do you preserve voice instead of flattening it?
- Why is AI detection a shrinking window, and what replaces it?
- How do sovereignty, provenance, and durable human signals fit together?
- How does voice degrade under collaboration pressure, emotional weight, or fatigue — and how do you detect it?
- How much authenticated text does it take to establish a reliable voice fingerprint?
Read this first if you want the Taller Shell case, voice-preservation work, provenance problem maps, permanent tells, voice degradation taxonomy, and authorship-sovereignty framework development.
Key documents in this repo:
- `analysis/permanent-tells.md` — seven architectural properties of human authorship that AI cannot patch
- `analysis/voice-degradation-taxonomy.md` — four degradation conditions (classification only): AI collaboration drift, emotional weight, authorial fatigue, collaborative contamination
- `experiments/dimensional-voice-stamp-v01.md` — asymmetric verifiability architecture for human voice attestation
- `experiments/rhythm-signature-rebuild-v02.md` — anomaly-distribution detection specification for the Dimensional Fidelity Scorer
- `experiments/minimum-viable-corpus-protocol.md` — empirical methodology for establishing the minimum authenticated text needed for reliable voice discrimination
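To ground the "voice fingerprint" question, here is a heavily simplified sketch of the underlying idea, not the protocol in these documents: reduce a text to a small vector of stylometric features and compare a candidate against an authenticated corpus. The two features below are illustrative placeholders; the actual methodology is richer.

```python
import re
from statistics import mean

# Hypothetical two-feature "fingerprint": mean sentence length and
# type-token ratio. Real voice discrimination needs far more signal.
def features(text: str) -> tuple[float, float]:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    words = text.lower().split()
    ttr = len(set(words)) / len(words)  # type-token ratio
    return (mean(lengths), ttr)

def distance(a: tuple, b: tuple) -> float:
    """Smaller distance = closer stylistic match."""
    return sum(abs(x - y) for x, y in zip(a, b))

authenticated = "Short. Then a much longer sentence that wanders a little. Short again."
candidate = "Short. Then a much longer sentence that wanders somewhat. Short again."
print(distance(features(authenticated), features(candidate)))
```

The minimum-viable-corpus question then becomes empirical: how much authenticated text is needed before such distances separate the author from imitators reliably.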
Safety Ledgers is the measurement repo.
This is the place to start if your question is:
- How do we evaluate whether a safeguard is actually present?
- What does a binary architectural test look like?
- How do we measure drift, sovereignty, or high-risk conversational features?
Read this first if you care about scorecards, indices, pass/fail safety criteria, or evaluation frameworks.
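A binary architectural test can be sketched in a few lines. This is an illustration of the pattern, not the repo's actual scorecard format: each safeguard is checked for structural presence in the system's configuration, scores strictly True or False with no partial credit, and the system passes only if every safeguard does. The safeguard names are assumptions for the example.

```python
# Hypothetical safeguard names, for illustration only.
REQUIRED_SAFEGUARDS = ("hard_stop_defined", "audit_log_enabled", "human_signoff_required")

def evaluate(config: dict) -> dict:
    """Each safeguard scores True or False; the system passes only if all do."""
    results = {s: bool(config.get(s)) for s in REQUIRED_SAFEGUARDS}
    results["overall"] = all(results[s] for s in REQUIRED_SAFEGUARDS)
    return results

# A missing safeguard fails the whole evaluation; there is no 2-out-of-3.
report = evaluate({"hard_stop_defined": True, "audit_log_enabled": True})
print(report)  # human_signoff_required absent, so "overall" is False
```

Binary tests like this trade nuance for auditability: anyone can re-run the check and get the same answer.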
Trust Chain Protocol is the delegation and verification repo.
This is the place to start if your question is:
- How do agents prove what they were authorized to do?
- How should permissions decay across handoffs?
- What does chain of custody look like for agentic systems?
- How do we make delegation legible and verifiable instead of implied?
Read this first if you are thinking about multi-agent systems, custody, scope, authorization, or provenance chains.
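One way to picture decaying permissions across handoffs: each delegation records who granted what to whom, and a delegate's scope is the intersection of the parent's scope with what was requested, so permissions can only narrow. This is a conceptual sketch, not the protocol's actual record format; all names are invented.

```python
def delegate(chain: list, parent_scope: frozenset, grantee: str,
             requested: frozenset) -> frozenset:
    """Grant at most what the parent holds; append a legible record."""
    granted = parent_scope & requested  # scope decays: it never widens
    chain.append({"to": grantee, "scope": sorted(granted)})
    return granted

chain: list = []
root = frozenset({"read", "write", "send"})
scope_a = delegate(chain, root, "agent-A", frozenset({"read", "write"}))
# agent-A never held "send", so agent-B cannot receive it:
scope_b = delegate(chain, scope_a, "agent-B", frozenset({"write", "send"}))
print(chain)  # each handoff is explicit and auditable after the fact
```

The chain itself is the chain of custody: delegation is made legible by recording each grant rather than leaving authority implied.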
The Negative Space Mapper is a standalone tool implementation.
This is the place to start if your question is:
- How do I identify what is missing without having the AI fill it in for me?
- How do I surface absences while keeping the human in charge of interpretation?
- What does a sovereignty-preserving diagnostic tool look like in practice?
- How do I check whether a voice-critical document shows signs of degradation?
Read this first if you want a usable example of the tool layer.
Note for authorship and voice work: The Mapper now includes a Voice Degradation Domain extension — a detection domain that activates on voice-critical documents (long-form fiction, memoir, AI-collaborative work under emotional pressure) and flags the absence of characteristic anomalies, register variance, and productive incoherence signals. If you are working in the dimensional-authorship space, the Mapper is a companion instrument to the Voice Degradation Taxonomy.
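The sovereignty-preserving pattern behind the Mapper can be sketched simply: the tool names what is absent but never drafts the missing content, so interpretation stays with the human. This is an illustration of the pattern only; the checklist items below are assumptions, not the Mapper's actual detection domains.

```python
# Hypothetical checklist of elements a document of this kind usually contains.
EXPECTED = ("counterargument", "limitations", "sources")

def map_negative_space(document: str) -> list[str]:
    """Return the names of expected elements the document never mentions.

    Deliberately stops at naming the absence: it does not generate the
    missing section, so the human decides whether the gap matters.
    """
    text = document.lower()
    return [item for item in EXPECTED if item not in text]

doc = "This argument cites sources and considers a counterargument."
print(map_negative_space(doc))  # ['limitations'] -- flagged, not filled in
```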
Sovereign Thinking Tools is the developmental tools layer.
This is the place to start if your question is:
- How do I develop the judgment that experience would normally provide?
- How do I simulate consequence paths before a consequential decision?
- How do I build the cognitive architecture that senior practitioners have — without waiting decades?
- What structured protocols exist for pre-decision diagnostics, cascade analysis, and consequence tracing?
Read this first if you want tools that build the practitioner, not just tools that support a single task. The 48-tool index spans cognitive bypass, verification, routing, decision closure, recovery, author profiles, nonprofit governance, logic extension, structural diagnostics, and developmental protocols.
Key tools in this layer:
- `tool-47-cascade-failure-detector.md` — pre-decision structural diagnostic for systems assumed to be resilient
- `tool-48-conrad.md` — consequence simulator for building judgment before experience provides it; includes Black Swan inoculation and the Rehearsal Stop Condition
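The core move in pre-decision cascade analysis can be sketched mechanically: declare what depends on what, then walk the graph breadth-first to see everything a single failure can reach before committing to the decision. This is a toy illustration of consequence tracing, not the tools' actual protocols; the graph and node names are invented.

```python
from collections import deque

def trace_cascade(depends_on: dict, start: str) -> set:
    """Breadth-first walk: every node a failure at `start` can reach."""
    reached: set = set()
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for downstream in depends_on.get(node, ()):
            if downstream not in reached:
                reached.add(downstream)
                queue.append(downstream)
    return reached

# Hypothetical system: auth failure cascades through the API to billing,
# the dashboard, and finally outbound email.
graph = {"auth": ["api"], "api": ["billing", "dashboard"], "billing": ["email"]}
print(sorted(trace_cascade(graph, "auth")))  # ['api', 'billing', 'dashboard', 'email']
```

A system "assumed to be resilient" often looks different once its full reachable failure set is written down.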
Start with:
- AI Collaboration Field Guide
- Frozen Kernel
- Negative Space Mapper
Start with:
- Frozen Kernel
- Safety Ledgers
- Trust Chain Protocol
Start with:
- Dimensional Authorship
- Negative Space Mapper (Voice Degradation Domain)
- Safety Ledgers
Start with:
- Trust Chain Protocol
- Frozen Kernel
- Safety Ledgers
Start with:
- Sovereign Thinking Tools (Tool 48: Conrad)
- AI Collaboration Field Guide
- Frozen Kernel
Read in this order:
- Frozen Kernel
- AI Collaboration Field Guide
- Dimensional Authorship
- Safety Ledgers
- Trust Chain Protocol
- Negative Space Mapper
- Sovereign Thinking Tools
- Frozen Kernel — deterministic safety architecture for human–AI collaboration
- AI Collaboration Field Guide — practical operating manual for staying sovereign while using AI (includes 48 Sovereign Thinking Tools)
- Dimensional Authorship — research home for voice preservation, degradation taxonomy, authorship sovereignty, and provenance under human–AI collaboration
- Safety Ledgers — scorecards, indices, and binary tests for AI safeguards
- Trust Chain Protocol — delegation, custody, and verification architecture for multi-agent systems
- Negative Space Mapper — a tool that identifies meaningful absences without taking over interpretation
- Sovereign Thinking Tools — 48 judgment-building protocols spanning cognitive bypass, structural diagnostics, and developmental training
This ecosystem is built around one central problem:
As AI becomes more capable, the main question is no longer just whether outputs are useful.
The deeper questions are:
- Who is actually governing the interaction?
- What safety boundaries exist before the model begins to drift?
- How does a human remain sovereign during collaboration?
- How do we preserve authorship instead of flattening it?
- How do we make trust, delegation, and provenance legible afterward?
- How do we build the human judgment that AI collaboration is quietly replacing?
Each repository addresses one part of that larger problem.
- For safety: Frozen Kernel
- For practical use: AI Collaboration Field Guide
- For authorship and provenance: Dimensional Authorship
- For evaluation: Safety Ledgers
- For multi-agent trust: Trust Chain Protocol
- For a concrete tool: Negative Space Mapper
- For building judgment: Sovereign Thinking Tools
Everything here is public, free, and intended to be useful.
You do not need technical expertise to read it. You do not need permission to learn from it. You do not need to agree with all of it to find something usable.