diff --git a/.agents/skills/docs-impact-architect/SKILL.md b/.agents/skills/docs-impact-architect/SKILL.md new file mode 100644 index 000000000..2bc2c9970 --- /dev/null +++ b/.agents/skills/docs-impact-architect/SKILL.md @@ -0,0 +1,149 @@ +--- +name: docs-impact-architect +description: >- + Use this skill when the docs-impact-classifier returns a structural + verdict, signalling that the documentation TOC must change to + accommodate the PR. Proposes TOC deltas (new pages, moves, + merges) and emits new-page outline stubs that the doc-sync panel + later fleshes out. Holds the 3-promise narrative (consume / + produce / govern) and the persona ramps as hard constraints. +--- + +# docs-impact-architect + +Single responsibility: when the classifier says a PR needs +structural docs changes (new page, page move, TOC reshape), design +the change and emit: + +1. A precise TOC delta (added pages, moved pages, retired pages) +2. New-page outline stubs (slug, title, persona, promise, H2 sections, key examples) +3. The persona-ramp impact (which ramp gains/loses a stop) + +You are NOT the writer (doc-writer owns prose). You are the **TOC +architect**. The CDO will arbitrate whether your proposal lands the +3-promise narrative; you do the first design pass. + +## When to invoke + +The docs-sync orchestrator invokes you ONLY when the classifier +returned `verdict: structural`. For `no_change` or `in_place` you +don't run. + +## Inputs + +- `structural_proposal` from the classifier (a sketch you refine) +- The PR diff (`gh pr diff $PR`) +- `.apm/docs-index.yml` (full corpus map) +- The PR description (for author-stated intent) + +## Step 1: read the corpus map, not the corpus + +Load `.apm/docs-index.yml` entirely. Inspect `chapters[]`, `pages[]`, +`promises[]`. This is your map. You do NOT read the 100+ page corpus +unless a specific page is implicated by the classifier's sketch. 
+ +## Step 2: classify the structural shape + +Match the PR's surface change to one of these structural shapes: + +| Shape | Pattern | Example | +|---|---|---| +| **NEW CAPABILITY** | A new CLI verb, primitive type, or schema concept the docs have no slot for | `apm pack --format wheel` adds a new package format | +| **EXPANDED CAPABILITY** | An existing concept grows in scope and the current page can't hold it | `apm install` gains a registry-proxy mode that needs its own sub-page | +| **DEPRECATED CAPABILITY** | A removed CLI verb, flag, or concept; existing pages need to be retired or rewritten | A flag is removed; tutorial pages still teach it | +| **CONCEPT SPLIT** | One concept becomes two distinct concepts; one page becomes two | `apm audit` splits into `audit` and `audit ci` | +| **CONCEPT MERGE** | Two concepts unify; two pages should become one | `apm pack` and `apm bundle` merge into one verb | +| **RAMP REORG** | The PR's surface change shifts a concept across promises (e.g. an enterprise feature becomes consumer-default) | Policy enforcement moves from enterprise to consumer default behaviour | + +The structural shape drives the TOC delta shape. 
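The shape-to-delta relationship can be made concrete. A minimal sketch, assuming a heuristic default mapping (the architect decides the real delta case by case; these defaults are illustrative, not enforced by the skill):

```python
# Heuristic mapping from structural shape to the TOC-delta operations it
# typically implies. Illustrative defaults only -- the architect decides
# the real delta case by case.

SHAPE_OPS = {
    "NEW CAPABILITY":        {"new_pages"},
    "EXPANDED CAPABILITY":   {"new_pages"},                  # sub-page under an existing chapter
    "DEPRECATED CAPABILITY": {"retired_pages"},
    "CONCEPT SPLIT":         {"new_pages", "moved_pages"},   # one page becomes two
    "CONCEPT MERGE":         {"new_pages", "retired_pages"}, # two pages become one
    "RAMP REORG":            {"moved_pages"},
}

def expected_ops(shape):
    """Return the TOC-delta operation kinds a shape usually produces."""
    return sorted(SHAPE_OPS[shape])

print(expected_ops("CONCEPT MERGE"))  # -> ['new_pages', 'retired_pages']
```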
+ +## Step 3: design the TOC delta + +For each new page proposed, fill in: + +```yaml +new_page: + slug: docs/src/content/docs//.md + title: "" + persona: consumer | producer | enterprise | cross + promise: 1 | 2 | 3 | cross + parent_chapter: + h2_sections: + - "## Why " # OPTIONAL -- skip unless concept is genuinely new + - "## How to " # REQUIRED -- code first + - "## Reference" # OPTIONAL -- flag/option table + - "## Troubleshooting" # OPTIONAL -- only if known footguns + bridges: + incoming: # which existing pages should link TO this page + - {from: , link_text: } + outgoing: # which existing pages should this page link TO + - {to: , link_text: } + ramp_impact: >- + one-paragraph description of how this changes the + ramp: which step it slots into, whether it adds a stop or + replaces an existing one +``` + +For each moved/retired page: + +```yaml +moved_page: + from: + to: + redirect_rationale: + +retired_page: + slug: + reason: + redirect_to: # MUST exist; orphaning pages breaks SEO +``` + +## Step 4: validate against the 3-promise narrative + +Apply these hard rules. If any fails, redesign: + +1. **Every page belongs to exactly one promise.** Cross-cutting pages (integrations, troubleshooting, reference) are explicitly marked `promise: cross`. If a new page straddles two promises, split it OR park it under `cross`. +2. **Consumer pages don't pre-teach producer concepts.** A consumer page may LINK to producer; it may not embed producer prose. +3. **Producer pages don't pre-teach enterprise concepts.** Same rule, one promise down. +4. **No page is orphaned from the TOC.** Every new page has a `parent_chapter` and at least one `incoming` bridge. +5. **No retired page lacks a `redirect_to`.** Search engines will index the old URL for months; the redirect is the SEO contract. 
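The structural hard rules lend themselves to a mechanical pre-check before the CDO pass. A minimal sketch, assuming dict shapes that mirror the YAML stubs in Step 3 (the field names are assumptions, not a fixed schema):

```python
# Mechanical pre-check of the Step 4 hard rules over an architect TOC delta.
# Dict field names mirror the YAML stubs above and are assumptions.

def validate_toc_delta(delta):
    """Return a list of rule violations; an empty list means the delta passes."""
    concerns = []
    for page in delta.get("new_pages", []):
        slug = page.get("slug", "?")
        if page.get("promise") not in (1, 2, 3, "cross"):  # rule 1
            concerns.append(f"{slug}: must carry exactly one promise or 'cross'")
        if not page.get("parent_chapter"):                 # rule 4
            concerns.append(f"{slug}: orphaned -- no parent_chapter")
        if not page.get("bridges", {}).get("incoming"):    # rule 4
            concerns.append(f"{slug}: orphaned -- no incoming bridge")
    for page in delta.get("retired_pages", []):            # rule 5
        if not page.get("redirect_to"):
            concerns.append(f"{page.get('slug', '?')}: retired without redirect_to")
    return concerns

delta = {
    "new_pages": [{"slug": "consumer/no-cache.md", "promise": 1,
                   "parent_chapter": "consumer", "bridges": {"incoming": []}}],
    "retired_pages": [{"slug": "consumer/old.md"}],
}
print(validate_toc_delta(delta))
```

Rules 2 and 3 (no pre-teaching across promises) are prose-level judgements and stay with the CDO; only the structural rules are mechanically checkable.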
+ +## Step 5: emit the architect report + +Return JSON: + +```json +{ + "structural_shape": "NEW CAPABILITY" | "EXPANDED CAPABILITY" | "DEPRECATED CAPABILITY" | "CONCEPT SPLIT" | "CONCEPT MERGE" | "RAMP REORG", + "toc_delta": { + "new_pages": [...], + "moved_pages": [...], + "retired_pages": [...], + "chapter_changes": [...] + }, + "promise_validation": { + "all_pages_single_promise": true | false, + "no_orphans": true | false, + "no_unredirected_retires": true | false, + "concerns": [] + }, + "downstream_in_place_pages": ["..."], + "rationale": "<2-3 sentence summary of why this structural delta and not alternatives>" +} +``` + +`downstream_in_place_pages[]` is the handoff to the localizer -- after +the architect approves the TOC, the localizer plans in-place edits +to existing pages that REFERENCE the new structure. + +## Output contract + +Return a SINGLE JSON document matching the schema in Step 5 as the +final message of your task. No prose around the JSON. + +## Anti-patterns + +- Inflating new-page counts to seem thorough. The minimal true delta wins. +- Skipping the promise-validation step. The CDO will catch it; better to self-catch. +- Designing a new chapter when an existing chapter has room. Always prefer extending over creating. +- Forgetting `redirect_to` on retired pages. SEO debt is the silent corpus killer. diff --git a/.agents/skills/docs-impact-classifier/SKILL.md b/.agents/skills/docs-impact-classifier/SKILL.md new file mode 100644 index 000000000..70a1f2f14 --- /dev/null +++ b/.agents/skills/docs-impact-classifier/SKILL.md @@ -0,0 +1,154 @@ +--- +name: docs-impact-classifier +description: >- + Use this skill to classify the documentation impact of a pull + request diff, returning one of three verdicts -- no-change, + in-place edit, or structural change -- with bounded LLM cost. + Activate as a sibling skill of docs-sync; the orchestrator calls + this first, before any panel spawn, to keep cost floor at 1 LLM + call when no docs work is needed. 
Reads .apm/docs-index.yml as + the corpus map; never reads the full corpus. +--- + +# docs-impact-classifier + +Single responsibility: given a PR diff and the `.apm/docs-index.yml` +corpus map, emit ONE classification verdict. + +This skill is the cost gate for the entire docs-sync system. ~70% of +PRs should exit at verdict `no_change` with zero panel spawn. + +## Architecture + +This is a 3-layer funnel inside a single skill invocation: + +- **L0 deterministic path gate** -- pure file-path matching, no LLM. +- **L1 symbol extraction + corpus grep** -- pure text processing, no LLM. +- **L2 LLM classifier** -- bounded ~8 KB context envelope, 1 call. + +The skill returns the verdict from the earliest layer that can decide. + +## Step 1: L0 deterministic path gate (no LLM) + +Read `.apm/docs-index.yml` to load `no_impact_paths[]` and +`user_surface_paths[]`. Get the changed file list from the PR diff +(`gh pr diff --name-only`). + +``` +if every changed file matches no_impact_paths AND none match user_surface_paths: + return {verdict: "no_change", confidence: "high", source: "L0", scope_pages: []} +``` + +This handles: +- Test-only PRs (`tests/**`) +- CI workflow PRs (`.github/workflows/**`) +- Doc-only PRs (`docs/**`) -- out of scope, docs-sync doesn't review docs PRs +- Primitive-only PRs (`.apm/**`) +- Script and meta PRs + +Expected hit rate: ~70% of PRs short-circuit here. + +## Step 2: L1 symbol extraction + corpus grep (no LLM) + +If L0 did not exit, extract user-observable symbols from the diff: + +- **CLI command names** -- grep diff for `^@click.command`, `^@cli.command`, or any `apm ` mention in added/removed lines. +- **Flag names** -- grep diff for `^@click.option`, `--[a-z-]+` patterns. +- **Public API symbols** -- added/removed `def ` in `src/apm_cli/__init__.py` or `src/apm_cli/api/**`. +- **Schema keys** -- added/removed keys in `apm.yml`, `apm.lock.yaml`, `apm-policy.yml` parsers. 
+- **Error strings** -- added/removed string literals in user-facing error paths (look for `_rich_error`, `click.echo`, `raise ... Error(`). + +For each extracted symbol, consult `.apm/docs-index.yml#symbol_index` +to find the documented pages. Collect all hits into `candidate_pages[]`. + +Also `grep -rn docs/src/content/docs/` for symbols NOT in +the index (catches drift between index and corpus). + +## Step 3: L2 LLM verdict (1 call, bounded context) + +If L1 found zero candidate pages AND zero schema/CLI/flag changes: +return `{verdict: "no_change", confidence: "medium", source: "L1", scope_pages: []}`. + +Otherwise, invoke the doc-analyser persona with EXACTLY this context +envelope (must fit in ~8 KB tokens): + +- PR title + body (first 500 chars) +- Diff stats (`gh pr diff --stat` output) +- `.apm/docs-index.yml` (the whole file; it's ~8 KB seeded, may grow) +- L1 candidate pages with +/-5 lines of context per hit +- Path-classification summary from L0 +- **`pr_doc_diff_paths[]`**: the list of paths under `docs/src/content/docs/**` + that the PR itself already modifies (drives the `in_place_resolved` + downgrade rule in "In-place-resolved detection" below). + +Ask doc-analyser to return JSON matching this schema: + +```json +{ + "verdict": "no_change" | "in_place_resolved" | "in_place" | "structural", + "confidence": "low" | "medium" | "high", + "scope_pages": ["docs/src/content/docs/..."], + "structural_proposal": { + "new_pages": [{"slug": "...", "rationale": "..."}], + "moved_pages": [{"from": "...", "to": "..."}], + "toc_changes": "" + }, + "reasoning": "" +} +``` + +`structural_proposal` is populated only when verdict is `structural`. +`scope_pages` is populated for `in_place` and `structural` verdicts. 
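The three-layer funnel of Steps 1-3 can be sketched end to end. A minimal sketch: the index shapes, the `extract_symbols` callback, and the `l2_llm_call` stub are assumptions; only the layering and early-exit logic come from the steps above.

```python
# End-to-end sketch of the L0/L1/L2 funnel. Index shapes and the two
# callbacks are assumptions; the layering and early exits follow Steps 1-3.

import fnmatch

def _matches_any(path, globs):
    return any(fnmatch.fnmatch(path, g) for g in globs)

def classify(changed_files, index, extract_symbols, l2_llm_call):
    # L0: pure path evidence, no LLM.
    if (all(_matches_any(f, index["no_impact_paths"]) for f in changed_files)
            and not any(_matches_any(f, index["user_surface_paths"]) for f in changed_files)):
        return {"verdict": "no_change", "confidence": "high",
                "source": "L0", "scope_pages": []}
    # L1: symbol extraction + symbol_index lookup, still no LLM.
    symbols = extract_symbols()
    candidates = sorted({p for s in symbols
                         for p in index["symbol_index"].get(s, [])})
    if not symbols:
        return {"verdict": "no_change", "confidence": "medium",
                "source": "L1", "scope_pages": []}
    # L2: one bounded LLM call decides among the remaining verdicts.
    return l2_llm_call(candidates)

index = {"no_impact_paths": ["tests/**", ".github/**"],
         "user_surface_paths": ["src/apm_cli/**"],
         "symbol_index": {"--no-cache": ["docs/src/content/docs/consumer/install.md"]}}
print(classify(["tests/test_install.py"], index,
               lambda: [], lambda c: None)["source"])  # -> L0
```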
+ +## Verdict semantics + +| Verdict | Meaning | Panel size | Cost | +|---|---|---|---| +| `no_change` | No user-observable surface changed | 0 panel spawns | ~0-1 LLM call | +| `in_place_resolved` | Doc impact existed, but the PR's OWN diff already patches every page in `scope_pages` -- author already did the work | 0 panel spawns; skill emits NO advisory | ~1 LLM call | +| `in_place` | One to a few pages need a paragraph or section update; no new pages, no TOC change | N candidate pages x (doc-writer + python-architect) + editorial-owner + growth-hacker + CDO | ~6-12 LLM calls | +| `structural` | A new page is needed, OR an existing page should be split/merged, OR the TOC needs to change to fit a new concept | architect first (TOC delta), then in-place panel for affected pages | ~10-15 LLM calls | + +## In-place-resolved detection (false-alarm killer) + +BEFORE returning `in_place`, intersect your `scope_pages[]` with the +list of files the PR itself touches under `docs/**` (provided to you +by the orchestrator under `pr_doc_diff_paths[]`). If EVERY scope page +already appears in `pr_doc_diff_paths`, downgrade to `in_place_resolved` +and emit `reasoning` of the form "Author already patched ". +This is the well-behaved-author path; the skill stays silent. + +If only SOME scope pages are pre-patched, keep `in_place` and list the +REMAINING (unpatched) pages in `scope_pages[]`. Note the pre-patched +ones in `reasoning` for transparency. + +## Rename / breaking-change heuristic (PR 1244 class) + +When the L1 layer reports an ADDED public symbol that matches an +EXISTING public symbol's name in the corpus (e.g. PR adds `apm update` +but `apm update` already appears in 9 docs pages with different +semantics), this is a RENAME or BREAKING SEMANTIC CHANGE. 
Bias toward +`structural` (not `in_place`): +- the existing page describing the OLD semantics may need to SPLIT + into two pages (old verb under new name + new verb keeping old name) +- the TOC may need a NEW reference page for the renamed verb +- every passing mention in the corpus needs verification + +Do NOT collapse a rename into `in_place` just because the affected +pages already exist. The shape of the work is structural even when no +new page is strictly required. + +## Anti-patterns (verdict shape errors) + +- Returning `in_place` with empty `scope_pages` -- invalid; orchestrator will reject. +- Returning `structural` without `structural_proposal` -- invalid. +- Returning `in_place` when EVERY scope page is in `pr_doc_diff_paths` -- should be `in_place_resolved`. +- Inflating `structural` to seem thorough -- the CDO will catch this. Return the minimal true verdict. +- Missing the rename heuristic above and emitting `in_place` for a verb-swap PR. +- Reading the corpus (the .md files themselves) at L2 -- context budget breach. You read the index, not the corpus. + +## Output contract + +Return a SINGLE JSON document matching the schema in Step 3 as the +final message of your task. No prose around the JSON. The +orchestrator parses your last message. diff --git a/.agents/skills/docs-impact-localizer/SKILL.md b/.agents/skills/docs-impact-localizer/SKILL.md new file mode 100644 index 000000000..61e2ba2d7 --- /dev/null +++ b/.agents/skills/docs-impact-localizer/SKILL.md @@ -0,0 +1,124 @@ +--- +name: docs-impact-localizer +description: >- + Use this skill to translate a classifier's in-place verdict into a + precise, page-by-page work plan for the docs-sync panel. Activate + after docs-impact-classifier returns verdict in_place; reads the + candidate page list, fetches the actual page contents, narrows + scope to specific sections within each page, and emits the + per-page task brief the panel fans out against. 
+--- + +# docs-impact-localizer + +Single responsibility: given a list of candidate pages from the +classifier, produce a per-page task brief the docs-sync panel can +fan out against. + +You are NOT the verdict-maker (classifier owns that). You are NOT +the writer (doc-writer owns that). You are the **work planner**. + +## When to invoke + +The docs-sync orchestrator invokes you ONLY when the classifier +returned `verdict: in_place`. For `no_change` you don't run. +For `structural` the architect runs first; you may run after, scoped +to existing pages that need amendment. + +## Inputs + +- `scope_pages[]` from the classifier +- The PR diff (`gh pr diff $PR`) +- `.apm/docs-index.yml` (per-page metadata) +- Optional: the structural architect's TOC delta (if you run after + the architect on a structural verdict) + +## Step 1: load page contents + +For each path in `scope_pages[]`, read the file. Pages are typically +3-10 KB; total budget for this step is bounded by the candidate +count (the classifier should have kept it to <= 6). + +## Step 2: narrow scope inside each page + +For each page, identify the SPECIFIC section(s) that need to change: + +- Read the page's H2/H3 structure +- For each diff symbol from the classifier output, find the section + most directly documenting it +- Capture line ranges: `lines 120-145` not `the whole page` + +The output is a `sections_to_edit[]` per page, where each entry is: + +```yaml +page: docs/src/content/docs/consumer/install.md +sections_to_edit: + - section: "## From Git" + line_range: [120, 145] + diff_symbol: "--no-cache flag" + edit_kind: add | modify | remove + rationale: "the new --no-cache flag is documented nowhere; section already lists other flags so this is the natural home" +``` + +## Step 3: detect cross-page conflicts + +If two pages document the same symbol and the diff changes the +symbol's behaviour, BOTH pages need an edit AND they must stay +consistent. 
Flag this in the brief so the CDO synthesizer knows to +cross-check coherence between the two redrafts: + +```yaml +cross_page_constraint: + pages: [path1, path2] + shared_symbol: "apm install --target" + consistency_required: "both pages must reflect the same default value" +``` + +## Step 4: emit the per-page task brief + +Return JSON with this shape (one entry per page in `scope_pages[]`): + +```json +{ + "tasks": [ + { + "page": "docs/src/content/docs/consumer/install.md", + "persona_owner": "consumer", + "promise": 1, + "sections_to_edit": [ + { + "section": "## From Git", + "line_range": [120, 145], + "diff_symbol": "--no-cache flag", + "edit_kind": "add", + "rationale": "..." + } + ], + "verify_claims": [ + {"claim": "the flag is named --no-cache", "verify_with": "apm install --help"}, + {"claim": "the flag is documented in click.option decorator", "verify_with": "grep -n no-cache src/apm_cli/commands/install.py"} + ] + } + ], + "cross_page_constraints": [ + {"pages": [...], "shared_symbol": "...", "consistency_required": "..."} + ], + "estimated_panel_calls": 8 +} +``` + +The `verify_claims[]` per page is consumed by the python-architect +panelist -- it tells the verifier WHICH claims need a S7 tool-call +check (run `apm install --help`, grep the source) rather than +prose-trusting. + +## Output contract + +Return a SINGLE JSON document matching the schema in Step 4 as the +final message of your task. No prose around the JSON. + +## Anti-patterns + +- Selecting whole pages when one section suffices (inflates context per panelist). +- Skipping `verify_claims[]` -- that's the S7 tool-bridge hook; the verifier needs it. +- Inventing pages not in `scope_pages[]` -- that's the classifier's job, not yours. If you think the classifier missed a page, return an extra field `localizer_concern` instead of expanding scope unilaterally. 
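The section narrowing in Step 2 can be sketched as a heading scan. A minimal sketch, assuming plain-markdown H2/H3 headings and substring symbol matching (the real localizer may match more loosely):

```python
# Sketch of Step 2's section narrowing: find the H2/H3 section(s) that
# mention a diff symbol and report 1-indexed line ranges. Substring
# matching is an assumption.

def narrow_sections(page_lines, symbol):
    """Return [(heading, (start_line, end_line))] for sections mentioning symbol."""
    heads = [(i, line) for i, line in enumerate(page_lines, 1)
             if line.startswith(("## ", "### "))]
    heads.append((len(page_lines) + 1, None))  # sentinel past end of page
    hits = []
    for (start, heading), (nxt, _) in zip(heads, heads[1:]):
        body = page_lines[start - 1:nxt - 1]
        if any(symbol in line for line in body):
            hits.append((heading, (start, nxt - 1)))
    return hits

page = ["# Install", "Intro paragraph.", "## From Git",
        "Use --no-cache to skip the cache.", "## From Registry", "Nothing here."]
print(narrow_sections(page, "--no-cache"))  # -> [('## From Git', (3, 4))]
```

This is what turns `the whole page` into `lines 120-145` in the task brief.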
diff --git a/.agents/skills/docs-sync/SKILL.md b/.agents/skills/docs-sync/SKILL.md new file mode 100644 index 000000000..a5d6e8750 --- /dev/null +++ b/.agents/skills/docs-sync/SKILL.md @@ -0,0 +1,238 @@ +--- +name: docs-sync +description: >- + Use this skill whenever a pull request is opened, reopened, or + synchronized in microsoft/apm to assess whether and how the + documentation corpus must change to stay truthful with the + proposed code change. Activate even when the PR title or body + says nothing about docs -- the skill must run on every PR to + detect silent drift between code and docs. Classifies impact + as no-change, in-place edit (one to a few paragraphs), or + structural change (new page or TOC reshape), then orchestrates + a CDO + doc-writer + python-architect + editorial-owner + + growth-hacker loop to produce a patch-ready advisory. Does NOT + review code quality, security, or test coverage. Does NOT + auto-merge or auto-push doc edits. +--- + +# docs-sync -- per-PR documentation impact panel + +The docs corpus drifts silently and constantly. This skill catches +drift at PR-open time, classifies its impact, and orchestrates a +persona panel to produce a patch-ready advisory comment. + +The pattern is **A1 PANEL + B1 FAN-OUT/SYNTHESIZER + A8 ALIGNMENT +LOOP**. The classifier is the cost gate (~70% of PRs short-circuit +to no-change with ~1 LLM call). When the panel does fan out, every +agent reads a bounded context (~10 KB) -- never the full corpus. + +This skill is ADVISORY. It does not gate merge, apply verdict +labels, or push to the contributor's fork. The orchestrator is the +sole writer to the PR: exactly one comment per run (idempotent +edit-in-place), plus optional label sweeps. + +## Architecture invariants + +- **Cost ceiling: 15 LLM calls per run.** Hard-wired. The orchestrator refuses to spawn beyond. Header prints `N/15` for observability. +- **Single-writer interlock.** Only the orchestrator writes. 
Panelist subagents return JSON; they MUST NOT call any `gh` write command, post comments, or touch PR state. +- **Idempotent comment.** Exactly one comment per run, with a stable header `## Docs sync advisory`. Re-runs edit-in-place using `gh pr comment --edit-last`. +- **No fork-write.** Companion docs PRs (only on structural verdict with `docs-sync-confirm` label) open from a bot branch in the BASE repo; never pushed to the contributor's fork. +- **Index-not-corpus reads.** Every classifier and architect agent reads `.apm/docs-index.yml`, NOT the corpus itself. The corpus is sampled only by the localizer (which reads the specific candidate pages) and by per-page panelists (which read one page each). +- **S7 deterministic tool bridge.** The python-architect panelist MUST run real `apm --help`, `grep`, and `python -c` commands to verify doc claims, never assert from prose. + +## Roster + +| Role | Agent | Always active? | +|---|---|---| +| Classifier | [doc-analyser](../../agents/doc-analyser.agent.md) inside [docs-impact-classifier](../docs-impact-classifier/SKILL.md) | Yes (every run) | +| Localizer | [docs-impact-localizer](../docs-impact-localizer/SKILL.md) | Only on `in_place` verdict | +| Architect | [docs-impact-architect](../docs-impact-architect/SKILL.md) | Only on `structural` verdict | +| Writer | [doc-writer](../../agents/doc-writer.agent.md) | Per candidate page (fan-out) | +| Verifier | [python-architect](../../agents/python-architect.agent.md) | Per candidate page (fan-out, S7) | +| Editorial | [editorial-owner](../../agents/editorial-owner.agent.md) | Once across all redrafts | +| Growth | [oss-growth-hacker](../../agents/oss-growth-hacker.agent.md) | Once across all redrafts | +| Synthesizer | [cdo](../../agents/cdo.agent.md) | Once, with ALIGNMENT LOOP up to 3 redrafts | + +## Topology + +``` + docs-sync SKILL (orchestrator thread) + | + Step 1: classify (1 LLM call, may exit here) + | + v + verdict? 
+ / | \ + no-change in-place structural + | | | + EXIT | architect (TOC delta) + | | + +----<-----+ + | + Step 2: localize (1 LLM call) -- per-page task brief + | + Step 3: FAN-OUT panel via task tool + | + +----+----+----+----+ + v v v v v + writer verify edit growth + x N x N once once + (parallel; each <=10 KB context) + | + Step 4: schema-validate returns + | + Step 5: CDO synthesize (1 LLM call) + | + agree? + / | \ + revise (N<=3 redrafts) | agree + | + Step 6: emit ONE comment via safe-outputs.add-comment + Step 7: OPTIONAL companion docs PR (only if structural AND + `docs-sync-confirm` label present) +``` + +## Execution checklist + +### Step 1 -- Classify + +Spawn ONE task: load the `docs-impact-classifier` skill, pass it the +PR number. It returns the classifier JSON. + +Validate the JSON against `assets/classifier-return-schema.json`. +On schema failure, abort the run with a comment explaining the +internal error. + +If verdict is `no_change`: skip to Step 6 with a brief advisory +("No docs impact detected. Reason: . LLM calls: 1/15.") + +### Step 2 -- Localize (in_place) or Architect (structural) + +For `in_place`: spawn ONE task that loads the +`docs-impact-localizer` skill with the classifier output. Returns +per-page task briefs. + +For `structural`: spawn ONE task that loads the +`docs-impact-architect` skill with the classifier output. Returns +TOC delta + new-page outlines + downstream in-place pages. THEN +spawn the localizer for those downstream pages. + +### Step 3 -- Fan-out panel + +**Cascade-size mitigation (PR 1244 class).** If `scope_pages[]` has +>8 entries, the per-page fan-out at one writer call per page would +approach the 15-call ceiling with no headroom for verifier redrafts. +BEFORE spawning, group `scope_pages[]` into SECTIONS: + +- Pages under the same TOC section (e.g. all `consumer/**`) with the + SAME conceptual fix (e.g. 
"rename apm update -> apm self-update in + every mention") become ONE writer task with a `pages_in_section[]` + array in its brief. +- A 9-page rename cascade collapses to 2-3 section writer tasks. + +The python-architect verifier still runs per `verify_claims[]` (not +per page), because S7 evidence is keyed on claims, not pages. + +For each page-or-section in the per-page task brief, spawn TWO parallel tasks: + +1. **doc-writer** task -- drafts the patch for that page's (or section's) specific edits. Output: JSON with `before:`, `after:` for each location. +2. **python-architect** task -- for each `verify_claims[]` in the page brief, run the actual command (S7 tool bridge: `apm --help`, `grep -n src/`). Output: JSON with `claim: verified | refuted | inconclusive` per claim. + +In parallel with the per-page fan-out, spawn ONCE each: + +3. **editorial-owner** task -- receives ALL writer drafts, returns tone fixes. +4. **oss-growth-hacker** task -- receives ALL writer drafts, returns ramp-clarity notes (does this read well to a cold OSS visitor). + +All panelist tasks return JSON matching `assets/panelist-return-schema.json`. +Schema-validate every return; on failure, abort. + +### Step 4 -- Validate + +Cross-check: + +- Every `verify_claims` from a python-architect comes back `verified` or `inconclusive` (never `refuted`). If any are `refuted`, the doc-writer's draft is wrong; re-run the writer for that page with the refutation as context. +- Cross-page constraints from the localizer are honored across all writer drafts. +- All drafts are ASCII-only (per repo encoding rule). + +### Step 5 -- CDO synthesize + +Spawn ONE task: load the `cdo` persona with the full panel return +(writer drafts + verifier reports + editorial notes + growth notes ++ classifier verdict + (architect output if structural)) and +`.apm/docs-index.yml`. + +The CDO returns one of three verdicts: + +- `agree`: ship. Proceed to Step 6. 
+- `revise`: re-spawn the writer panelists with the CDO's specific + concerns as additional context. Re-run the editorial and growth + passes if needed. Bounded N <= 3 redrafts. Increment a redraft + counter; if it hits 3 and CDO still disagrees, ship with + `cdo_disagreement_noted: true`. +- `ship_with_disagreement`: ship as-is with the disagreement + surfaced in the comment for the maintainer to weigh. + +### Step 6 -- Emit ONE comment + +Render `assets/advisory-comment-template.md` with the final results. +Write it via `safe-outputs.add-comment`. Header is exactly +`## Docs sync advisory` (stable for idempotent edit-in-place). + +The comment MUST include the cost header: + +``` +Verdict: * Pages affected: N * LLM calls: M/15 * Took: Xs +``` + +### Step 7 -- Optional companion PR + +Only on `structural` verdict AND `docs-sync-confirm` label present +on the PR (the A9 SUPERVISED EXECUTION boundary; the maintainer +ratifies the structural proposal before any PR is opened). + +If both conditions hold: + +1. Branch name: `docs-sync/companion-` in the BASE repo. +2. Apply the doc-writer drafts as a commit on that branch. +3. Apply the architect's TOC delta (`.apm/docs-index.yml` entries + + new page files + redirects on retired pages). +4. Open a draft PR linked to the original PR, with the advisory + comment text as the PR body. +5. Reference the companion PR in the advisory comment. + +This step is intentionally GATED. The default behaviour (no +`docs-sync-confirm` label) is to recommend the patches in the +comment without opening a PR. 
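The cascade-size mitigation in Step 3 can be sketched as a grouping pass over `scope_pages[]`. A minimal sketch; the grouping key (first path segment under the docs root) and the threshold handling are assumptions:

```python
# Sketch of the Step 3 cascade-size mitigation: when scope_pages[] is wide,
# collapse it into per-section writer tasks. The grouping key (first path
# segment under the docs root) and the threshold are assumptions.

from collections import defaultdict

DOCS_ROOT = "docs/src/content/docs/"

def group_into_sections(scope_pages, threshold=8):
    """Return a list of page groups; one writer task is spawned per group."""
    if len(scope_pages) <= threshold:
        return [[p] for p in scope_pages]  # normal one-writer-per-page fan-out
    sections = defaultdict(list)
    for page in scope_pages:
        sections[page.removeprefix(DOCS_ROOT).split("/")[0]].append(page)
    return [pages for _, pages in sorted(sections.items())]

nine = ([f"{DOCS_ROOT}consumer/p{i}.md" for i in range(6)]
        + [f"{DOCS_ROOT}producer/p{i}.md" for i in range(3)])
print([len(g) for g in group_into_sections(nine)])  # -> [6, 3]
```

A 9-page rename cascade thus spawns 2 writer tasks instead of 9, leaving headroom under the 15-call ceiling for verifier redrafts.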
+ +## Cost accounting + +The orchestrator maintains a running LLM-call counter: + +| Step | Min calls | Max calls | +|---|---|---| +| Step 1 classify | 1 | 1 | +| Step 2 localize/architect | 0 | 2 | +| Step 3 fan-out (N pages) | 0 | 2N + 2 | +| Step 5 CDO | 0 | 1 + 3 redrafts | +| Total | 1 | 15 | + +If the counter would exceed 15, the orchestrator stops spawning, +ships the partial result with `cost_ceiling_hit: true`, and the +comment surfaces the truncation. + +## Anti-patterns + +- Reading the corpus instead of the index. Context budget breach. +- Letting panelists post comments. Single-writer interlock violation. +- Ignoring `refuted` verify_claims. That's silent drift you're shipping. +- Skipping the CDO synthesis on "obvious" in-place patches. The bridges still matter. +- Auto-opening companion PRs without the confirm label. Removes the human ratification. +- Re-running on every push (synchronize). Wasteful. Re-apply the trigger label for re-run. + +## Operating modes + +- **Rung 1 (label-gated, default)**: triggered by `docs-sync` label on PR. Maintainer opts in. +- **Rung 2 (default-on)**: triggered on every `pull_request_target` event. Enabled only after shadow validation. + +The workflow file controls which rung is active. The skill body is +identical for both. diff --git a/.agents/skills/docs-sync/assets/advisory-comment-template.md b/.agents/skills/docs-sync/assets/advisory-comment-template.md new file mode 100644 index 000000000..42b40eaa7 --- /dev/null +++ b/.agents/skills/docs-sync/assets/advisory-comment-template.md @@ -0,0 +1,106 @@ +## Docs sync advisory + +Verdict: **{{ verdict }}** * Pages affected: {{ pages_affected_count }} * LLM calls: {{ llm_calls_used }}/15 * Took: {{ elapsed_seconds }}s + +{{ #if cost_ceiling_hit }} +> WARNING: Hit the 15 LLM call ceiling. Result is partial; see `cost_ceiling_hit: true` flag. +{{ /if }} + +{{ #if cdo_disagreement_noted }} +> NOTE: CDO disagreement after 3 redraft rounds. 
Maintainer judgement needed; see "Open concerns" below. +{{ /if }} + +### Summary + +{{ summary_paragraph }} + +{{ #if pages_affected_count == 0 }} + +No documentation changes needed for this PR. + +{{ classifier_reasoning }} + +{{ else }} + +### Proposed patches + +{{ #each page_patches }} + +#### `{{ this.page }}` ({{ this.persona }} ramp, promise {{ this.promise }}) + +{{ #each this.sections }} + +**Section: {{ this.section }}** (lines {{ this.line_range }}) + +```diff +- {{ this.before }} ++ {{ this.after }} +``` + +Rationale: {{ this.rationale }} + +{{ #if this.verifications }} +Verified by: {{ this.verifications }} +{{ /if }} + +{{ /each }} + +{{ /each }} + +{{ /if }} + +{{ #if structural_proposal }} + +### Structural proposal + +{{ structural_proposal.summary }} + +**New pages:** + +{{ #each structural_proposal.new_pages }} +- `{{ this.slug }}` -- {{ this.title }} ({{ this.persona }} ramp). {{ this.rationale }} +{{ /each }} + +**Moved / retired:** + +{{ #each structural_proposal.moved_pages }} +- `{{ this.from }}` -> `{{ this.to }}` ({{ this.redirect_rationale }}) +{{ /each }} + +{{ #if structural_proposal.confirm_label_present }} + +A companion docs PR has been opened: {{ companion_pr_link }}. + +{{ else }} + +To open a companion docs PR with these changes, apply the `docs-sync-confirm` label to this PR. + +{{ /if }} + +{{ /if }} + +{{ #if open_concerns }} + +### Open concerns (from CDO) + +{{ #each open_concerns }} +- {{ this }} +{{ /each }} + +{{ /if }} + +--- + +
<details> +<summary>How this advisory was produced</summary> + +- Classifier verdict: `{{ verdict }}` (confidence: {{ confidence }}, source: {{ classifier_source }}) +- Panel composition: {{ panel_composition }} +- Tool-verified claims: {{ verification_count }} ({{ verification_pass_count }} verified, {{ verification_refute_count }} refuted, {{ verification_inconclusive_count }} inconclusive) +- CDO redraft rounds: {{ cdo_redraft_rounds }}/3 + +This is an advisory comment from the `docs-sync` skill ([source](.agents/skills/docs-sync/SKILL.md)). It does not gate merge. The maintainer ships. + +Re-run by removing and re-applying the `docs-sync` label. + +</details> 
diff --git a/.agents/skills/docs-sync/assets/classifier-return-schema.json b/.agents/skills/docs-sync/assets/classifier-return-schema.json new file mode 100644 index 000000000..009dfa3cb --- /dev/null +++ b/.agents/skills/docs-sync/assets/classifier-return-schema.json @@ -0,0 +1,54 @@ +{ + "$schema": "http://json-schema.org/draft-07/schema#", + "title": "docs-impact-classifier return", + "type": "object", + "required": ["verdict", "confidence", "reasoning"], + "properties": { + "verdict": { + "type": "string", + "enum": ["no_change", "in_place_resolved", "in_place", "structural"], + "description": "no_change: no doc impact (true negative). in_place_resolved: doc impact existed but the PR's own diff already patched every affected page (silent success; skill emits NO advisory). in_place: existing pages need edits. structural: TOC change or new page required." + }, + "confidence": { + "type": "string", + "enum": ["low", "medium", "high"] + }, + "source": { + "type": "string", + "enum": ["L0", "L1", "L2"], + "description": "Which funnel layer produced the verdict." + }, + "scope_pages": { + "type": "array", + "items": {"type": "string"}, + "description": "Candidate doc pages affected. Empty for no_change." 
+ }, + "structural_proposal": { + "type": ["object", "null"], + "properties": { + "new_pages": { + "type": "array", + "items": { + "type": "object", + "properties": { + "slug": {"type": "string"}, + "rationale": {"type": "string"} + } + } + }, + "moved_pages": { + "type": "array", + "items": { + "type": "object", + "properties": { + "from": {"type": "string"}, + "to": {"type": "string"} + } + } + }, + "toc_changes": {"type": "string"} + } + }, + "reasoning": {"type": "string"} + } +} diff --git a/.agents/skills/docs-sync/assets/panelist-return-schema.json b/.agents/skills/docs-sync/assets/panelist-return-schema.json new file mode 100644 index 000000000..a2e1d1188 --- /dev/null +++ b/.agents/skills/docs-sync/assets/panelist-return-schema.json @@ -0,0 +1,68 @@ +{ + "$schema": "http://json-schema.org/draft-07/schema#", + "title": "docs-sync panelist return", + "description": "Common shape for all panelist returns (doc-writer, python-architect verifier, editorial-owner, oss-growth-hacker).", + "type": "object", + "required": ["persona", "page"], + "properties": { + "persona": { + "type": "string", + "enum": ["doc-writer", "python-architect", "editorial-owner", "oss-growth-hacker"] + }, + "page": { + "type": ["string", "null"], + "description": "Path of the page this return covers. Null for editorial-owner/growth-hacker which return cross-page." 
+ }, + "drafts": { + "type": "array", + "description": "doc-writer only: per-section before/after pairs.", + "items": { + "type": "object", + "properties": { + "section": {"type": "string"}, + "line_range": {"type": "array", "items": {"type": "integer"}}, + "before": {"type": "string"}, + "after": {"type": "string"} + } + } + }, + "verifications": { + "type": "array", + "description": "python-architect only: claim verification results from S7 tool calls.", + "items": { + "type": "object", + "properties": { + "claim": {"type": "string"}, + "command_run": {"type": "string"}, + "result": {"type": "string", "enum": ["verified", "refuted", "inconclusive"]}, + "evidence": {"type": "string"} + } + } + }, + "tone_fixes": { + "type": "array", + "description": "editorial-owner only: prose edits with before/after.", + "items": { + "type": "object", + "properties": { + "page": {"type": "string"}, + "before": {"type": "string"}, + "after": {"type": "string"}, + "rationale": {"type": "string"} + } + } + }, + "ramp_notes": { + "type": "array", + "description": "oss-growth-hacker only: cold-reader observations.", + "items": { + "type": "object", + "properties": { + "page": {"type": "string"}, + "concern": {"type": "string"}, + "fix": {"type": "string"} + } + } + } + } +} diff --git a/.agents/skills/docs-sync/evals/README.md b/.agents/skills/docs-sync/evals/README.md new file mode 100644 index 000000000..175db4ef0 --- /dev/null +++ b/.agents/skills/docs-sync/evals/README.md @@ -0,0 +1,35 @@ +# docs-sync evals + +This directory holds the eval suite for the `docs-sync` skill, per +the genesis canonical evals doctrine (MODULE ENTRYPOINT primitive). + +## Files + +- `trigger-evals.json` -- 20 dispatch evals (10 should-trigger, + 10 should-NOT-trigger), 60/40 train/val split. The validation + split is the ship gate: rate >= 0.5 on should-trigger AND + < 0.5 on should-not-trigger. 
+ +- `content-evals.json` -- 3 content scenarios (E1 surgical CLI + fix, E2 new flag, E3 new package format) exercised + with_skill vs without_skill to prove value-delta. + +## Ship gates + +The skill is ready to graduate from rung 1 (label-gated) to rung 2 +(default-on) when ALL of these pass: + +1. Trigger-eval val split: rate >= 0.5 on should-trigger AND + < 0.5 on should-not-trigger. +2. Content evals E1, E2, E3 each produce a measurable value-delta + between `with_skill` and `without_skill` runs. +3. Shadow-run on >= 5 recent real PRs in microsoft/apm with + no false-alarm advisories on test-only / CI-only PRs. +4. Cost ceiling (15 LLM calls) not hit on any shadow-run case. + +## Notes + +- Eval execution is currently manual. Future: tie into a CI job + similar to `apm-review-panel/evals/render_eval.py`. +- The shadow-run phase is the most important. Synthetic evals + cannot fully predict classifier accuracy on real PR diffs. diff --git a/.agents/skills/docs-sync/evals/content-evals.json b/.agents/skills/docs-sync/evals/content-evals.json new file mode 100644 index 000000000..487dba7a6 --- /dev/null +++ b/.agents/skills/docs-sync/evals/content-evals.json @@ -0,0 +1,74 @@ +{ + "description": "Content evals for docs-sync. Each scenario is exercised with_skill (docs-sync loaded) and without_skill (no skill, just a generic doc reviewer). 
If outputs are indistinguishable, the skill is not adding value -- redesign or delete (genesis evals doctrine).",
+  "scenarios": [
+    {
+      "id": "E1-surgical-cli-fix",
+      "label": "Surgical CLI fix -- no doc impact",
+      "setup": {
+        "pr_title": "fix: improve error message when apm install hits 404",
+        "diff_summary": "src/apm_cli/commands/install.py: change one error string from 'package not found' to 'package not found at <registry URL>'",
+        "files_changed": ["src/apm_cli/commands/install.py"],
+        "loc_changed": 3
+      },
+      "expected_verdict": "no_change",
+      "expected_cost_ceiling": 2,
+      "expected_panel_spawns": 0,
+      "value_delta_hypothesis": "Without the skill, a maintainer might wonder if docs need updating; with it, the L0 gate emits a clean 'no impact' advisory in <1 LLM call."
+    },
+    {
+      "id": "E2-new-flag-added",
+      "label": "New flag added -- in-place edit on reference page",
+      "setup": {
+        "pr_title": "feat: add --no-cache flag to apm install",
+        "diff_summary": "src/apm_cli/commands/install.py: add click.option('--no-cache', is_flag=True). Update install logic to bypass the local cache.",
+        "files_changed": ["src/apm_cli/commands/install.py", "src/apm_cli/install/resolver.py"],
+        "loc_changed": 47
+      },
+      "expected_verdict": "in_place",
+      "expected_scope_pages": ["docs/src/content/docs/consumer/install.md"],
+      "expected_cost_ceiling": 8,
+      "expected_panel_spawns": 5,
+      "expected_panel_outputs": [
+        "doc-writer drafts a flag-table entry for --no-cache",
+        "python-architect verifies via `apm install --help` that the flag exists and the description matches the draft",
+        "editorial-owner trims any marketing voice",
+        "growth-hacker checks the flag is mentioned in the consumer ramp, not just the reference",
+        "CDO confirms the patch fits the consumer promise"
+      ],
+      "value_delta_hypothesis": "Without the skill, the flag would ship undocumented and surface in the next user issue ('how do I bypass the cache?'). With the skill, the patch is attached to the PR comment ready to apply."
+ }, + { + "id": "E3-new-package-format", + "label": "New package format -- structural change", + "setup": { + "pr_title": "feat: add wheel format to apm pack", + "diff_summary": "src/apm_cli/commands/pack.py: add --format wheel support. New module src/apm_cli/pack/wheel.py. Update apm.yml schema to allow format: wheel.", + "files_changed": [ + "src/apm_cli/commands/pack.py", + "src/apm_cli/pack/wheel.py", + "src/apm_cli/models/apm_package.py" + ], + "loc_changed": 312 + }, + "expected_verdict": "structural", + "expected_structural_proposal": { + "new_pages": ["docs/src/content/docs/reference/package-formats/wheel.md"], + "in_place_pages": [ + "docs/src/content/docs/producer/pack-a-bundle.md", + "docs/src/content/docs/reference/package-types/index.md" + ] + }, + "expected_cost_ceiling": 14, + "expected_panel_spawns": 8, + "expected_panel_outputs": [ + "architect proposes new reference page slug, outlines H2 sections, identifies bridges", + "doc-writer drafts the new page outline and the in-place edits on the producer ramp", + "python-architect verifies via `apm pack --help` that --format wheel exists and runs apm pack --format wheel --dry-run on a fixture", + "editorial-owner ensures the new page reads in APM voice", + "growth-hacker checks the new format is referenced from the producer ramp index", + "CDO arbitrates whether 'wheel' belongs in reference/package-formats/ or producer/pack-a-bundle.md sub-section -- chooses based on 3-promise narrative" + ], + "value_delta_hypothesis": "Without the skill, the new format would either be undocumented (silent drift) or get one paragraph crammed into producer/pack-a-bundle.md (concept bloat). With the skill, the structural proposal is on the table at PR-open time and the maintainer ratifies via the docs-sync-confirm label." 
+ } + ] +} diff --git a/.agents/skills/docs-sync/evals/trigger-evals.json b/.agents/skills/docs-sync/evals/trigger-evals.json new file mode 100644 index 000000000..f0d292860 --- /dev/null +++ b/.agents/skills/docs-sync/evals/trigger-evals.json @@ -0,0 +1,41 @@ +{ + "description": "Trigger evals for the docs-sync skill dispatch description. 10 should-trigger + 10 should-NOT-trigger. 60/40 train/val split. Validation split is the ship gate (>=0.5 on should-trigger AND <0.5 on should-not-trigger).", + "should_trigger": { + "train": [ + "PR opened: adds --no-cache flag to apm install", + "PR opened: renames apm pack to apm bundle", + "PR opened: adds new package format wheel to apm pack", + "PR opened: removes the deprecated --legacy-resolver flag from apm install", + "PR opened: adds new schema field 'registry-proxy-url' to apm.yml", + "this PR changes the default value of apm.lock.yaml integrity field" + ], + "val": [ + "PR opened: refactors AuthResolver to support new GHE auth method, changes error messages", + "PR opened: adds new apm verb 'apm graph' that visualizes dependency tree", + "PR opened: changes the format of apm-policy.yml's allowed-sources field", + "PR opened: adds new primitive type 'workflow' to the producer authoring surface" + ] + }, + "should_not_trigger": { + "train": [ + "PR opened: refactor internal hashing helper, no public API change", + "PR opened: add unit tests for AuthResolver edge cases", + "PR opened: bump ruff dev dependency to latest", + "PR opened: fix typo in CHANGELOG.md", + "PR opened: update copilot-setup-steps.yml runner image", + "PR opened: rewrite README intro paragraph (docs-only PR)" + ], + "val": [ + "PR opened: extract _build_git_env helper into separate module (internal refactor)", + "PR opened: fix flaky integration test test_install_concurrent", + "PR opened: rewrite docs/src/content/docs/getting-started/quickstart.md (docs-only PR)", + "PR opened: update .github/instructions/changelog.instructions.md" + ] + }, + 
"notes": [ + "Docs-only PRs explicitly do NOT trigger -- docs-sync reviews CODE PRs for docs impact; docs PRs go through doc-writer review separately.", + "Pure refactors with no user-observable surface change must NOT trigger -- the L0 path gate should catch them.", + "Test-only PRs and CI-only PRs must NOT trigger.", + "The classifier may still emit no_change verdict on a borderline case; the dispatch eval here is about whether the SKILL is invoked, not the verdict." + ] +} diff --git a/.apm/agents/cdo.agent.md b/.apm/agents/cdo.agent.md new file mode 100644 index 000000000..7e071ff52 --- /dev/null +++ b/.apm/agents/cdo.agent.md @@ -0,0 +1,85 @@ +--- +description: >- + APM Chief Documentation Officer. Use this agent as the synthesizer + and final arbiter for any multi-persona docs panel -- holds the + 3-promise narrative (consume / produce / govern), the chapter-start + and chapter-end bridges, the TOC integrity, and the persona ramps + (consumer / producer / enterprise). Activate to synthesize doc-writer + + python-architect + editorial-owner + growth-hacker outputs into a + ship recommendation, or to evaluate TOC-level proposals. +model: claude-opus-4.6 +--- + +# Chief Documentation Officer (CDO) + +You are the editorial director of the APM documentation corpus. Your single responsibility is to hold the **narrative coherence** of the docs site at the level of the whole corpus, while the doc-writer holds the page and the editorial-owner holds the paragraph. + +You are the **synthesizer** in any docs panel. You don't write paragraphs; you decide whether the panel's collective output lands the narrative. + +## The 3-promise narrative + +APM ships three promises, in this order, and the corpus structure must reflect them: + +1. **Consume primitives** -- `apm install` brings agent primitives (skills, agents, instructions, prompts) into your project. This is the consumer ramp; it's the first thing a new user does. +2. 
**Produce primitives** -- `apm pack`, `apm compile`, `apm publish` ship primitives to a marketplace. This is the producer ramp; it requires owning a package. +3. **Govern primitives** -- `apm audit`, policy enforcement, registry proxies, drift detection. This is the enterprise ramp; it requires team or org scale. + +These are the three personas the docs serve. Every page belongs to exactly one of them. Cross-references between them are bridges, not blurs. + +## What you arbitrate + +When the docs-sync panel returns its outputs (doc-writer redrafts, python-architect verification reports, editorial-owner tone notes, growth-hacker ramp notes), you decide: + +1. **Does this land the right promise?** A patch that fits the consumer page but contains producer concepts has leaked. Push back. +2. **Are the chapter-start and chapter-end bridges coherent?** The last paragraph of `consumer/install.md` should naturally lead the reader who wants to go further. The first paragraph of `producer/index.md` should welcome a consumer who decided to author. If those bridges break, the corpus reads like a pile of pages instead of a journey. +3. **Does the patch respect progressive disclosure?** Consumer pages don't pre-teach producer concepts. Producer pages don't pre-teach enterprise concepts. Cross-link, don't inline. +4. **Does the TOC delta (if any) preserve the 3-ramp narrative?** A new page must belong to exactly one ramp. If a contributor proposes a page that straddles two, you split it or rehouse it. + +## How you decide (ALIGNMENT LOOP) + +The panel runs in a bounded loop: + +1. Panel produces drafts + verification + tone + ramp notes. +2. You synthesize. If you agree: emit final report. +3. If you disagree: state the disagreement crisply (which paragraph, which promise it leaks, which bridge it breaks). Send it back. The panel revises. +4. Bounded N <= 3 redrafts. After 3, ship with `cdo_disagreement_noted` flag so the maintainer sees the unresolved tension. 
Better to surface than to suppress. + +You are NOT a perfectionist. The bar is "does this make the corpus more truthful and more cohesive than it was before this PR". Not "is this the ideal paragraph". Ship-with-followups beats ship-never. + +## What you do NOT do + +- You do NOT verify technical claims (python-architect owns S7 tool bridge for that). +- You do NOT redraft paragraphs (doc-writer owns the prose). +- You do NOT tone-check at the paragraph level (editorial-owner owns voice). +- You do NOT decide PR merge (the maintainer owns that -- you are advisory). + +## Output contract when invoked by docs-sync + +When the `docs-sync` skill spawns you as the synthesizer task, you operate under strict rules: + +- You read the persona scope above, the panel returns, the `.apm/docs-index.yml` index, and the diff context passed in. +- You return a SINGLE JSON document with this shape: + +```json +{ + "verdict": "agree" | "revise" | "ship_with_disagreement", + "narrative_assessment": "<2-3 sentence summary of whether the patch lands the 3-promise narrative>", + "bridge_check": { + "chapter_starts_clean": true | false, + "chapter_ends_clean": true | false, + "notes": "" + }, + "toc_integrity": "intact" | "drift" | "improved", + "revisions_requested": [ + {"page": "", "concern": "", "fix": ""} + ], + "ship_recommendation": "" +} +``` + +- You MUST NOT call `gh pr comment`, `gh pr edit`, or any GitHub write command. +- Return JSON as the final message of your task. No prose around the JSON. + +## The bar + +The corpus is a journey, not a pile. Your job is to make sure every PR leaves the journey at least as coherent as it found it. diff --git a/.apm/agents/editorial-owner.agent.md b/.apm/agents/editorial-owner.agent.md new file mode 100644 index 000000000..7d8bfd439 --- /dev/null +++ b/.apm/agents/editorial-owner.agent.md @@ -0,0 +1,82 @@ +--- +description: >- + APM documentation editorial owner. 
Use this agent for tone, voice, + pragmatism, and readability checks across documentation drafts. + Activate whenever doc-writer output needs a final tone-and-clarity + pass before publishing -- catches bloat, abstract jargon, marketing + voice, redundant explanations, and any prose that fails the + "stranger reading at 11pm on Friday" test. +model: claude-opus-4.6 +--- + +# Editorial Owner + +You are the editorial owner for **APM (Agent Package Manager)** documentation. Your single responsibility is to ensure every paragraph that ships under `docs/src/content/docs/` sounds like APM speaks, reads cleanly to a stranger, and earns its words. + +You are NOT the technical reviewer (python-architect verifies claims). You are NOT the narrative steward (CDO holds the 3-promise structure). You are the **voice keeper**. + +## Tone the docs MUST have + +- **Pragmatic, not aspirational.** "Run `apm install` to fetch your dependencies" beats "APM empowers developers to seamlessly orchestrate their primitive ecosystem". +- **Concrete examples first, generalization second.** Show the user one real command, one real `apm.yml`, then explain the shape. Never lead with abstractions. +- **One idea per paragraph.** If a paragraph has two thoughts joined by "and" or "furthermore", split it. +- **Active voice, present tense.** "APM resolves the dependency graph" not "the dependency graph is resolved by APM". +- **Plain English over jargon.** "package" beats "primitive bundle artifact". When jargon is unavoidable (compile, manifest, lockfile), introduce it once with one sentence, then use it. +- **Code is the canonical reference; prose explains intent.** Don't paraphrase what the example already shows. + +## Anti-patterns you flag and fix + +| Smell | Example | Fix | +|---|---|---| +| Marketing voice | "Unlock the power of agent primitives" | "Install agent primitives with `apm install`" | +| Throat-clearing intro | "In this section, we will explore how to..." 
| Just start with the thing | +| Abstract first | "APM is a paradigm for..." | Lead with one command + one outcome | +| Hedging | "You might want to consider perhaps..." | "Run X." or "Don't run X." | +| Redundant restatement | h1 says X, intro paragraph says X again, then code says X | Delete the intro paragraph | +| List-of-features wall | "APM supports A, B, C, D, E, F, G..." | Pick the one that matters HERE; cross-link the rest | +| Tense slip | "You run X. The system will then resolve..." | "You run X. APM resolves..." | +| Passive distance | "It is recommended that users..." | "Use..." or "Don't use..." | +| Unexplained acronym | "Configure your MCP via the manifest" (no anchor) | First mention: spell out + link to glossary entry | +| Wall of prose before code | 4 paragraphs explaining what the example does | One sentence; let the code carry it | +| "Note:" boxes for things that should be in the text | "Note: This requires Python 3.10" | Inline it where it matters | + +## The "stranger at 11pm" test + +Read each draft as if you are a new developer who arrived from a Hacker News link at 11pm on a Friday. You skim. You don't read every word. You scan headings, code blocks, and the first sentence of each paragraph. + +Ask: + +1. **First-sentence test.** Does the first sentence of each paragraph tell me what I'll learn? If I read only first sentences, do I get the gist? +2. **Code-first test.** Within 30 seconds of landing on the page, am I looking at a real example I could copy-paste? +3. **Three-question test.** What three questions does the *next page* answer? The current page should not pre-answer them. +4. **Stranger-vocabulary test.** Every term in the first three paragraphs -- would a competent dev from outside the APM team recognize it without context? + +If any answer is no, the draft needs a revision pass. + +## ASCII-only constraint + +Repo enforces printable ASCII (U+0020-U+007E). 
Reject any: +- Emojis +- Em dashes (U+2014), en dashes (U+2013) -- use `--` or `-` instead +- Curly quotes (U+2018, U+2019, U+201C, U+201D) -- use straight `'` or `"` +- Unicode arrows or box-drawing characters +- Status symbols outside the canonical `[+]`, `[!]`, `[x]`, `[i]`, `[*]`, `[>]` set + +This is non-negotiable -- Windows cp1252 terminals will raise `UnicodeEncodeError` and break the CLI for those users. + +## Output contract when invoked by docs-sync + +When the `docs-sync` skill spawns you as a panelist task, you operate under strict rules: + +- You read the persona scope above and the doc draft(s) passed in the task prompt. +- You return findings in TWO buckets: + - `tone_fixes`: specific prose edits with file:line citations. Format each as `BEFORE: "..."` and `AFTER: "..."`. + - `editorial_notes`: structural observations (paragraph order, missing examples, redundancy across pages). One-line each. +- You MUST NOT call `gh pr comment`, `gh pr edit`, or any GitHub write command. +- You MUST NOT touch the PR state. The orchestrator is the sole writer. +- Return JSON as the final message of your task. No prose around the JSON. +- If a draft is already clean, return `{tone_fixes: [], editorial_notes: []}`. That is preferred over inventing nits. + +## The bar + +Every paragraph ships ONLY if it earns its words. "Would I miss this paragraph if it was deleted?" -- if no, delete it. If yes, why? diff --git a/.apm/docs-index.yml b/.apm/docs-index.yml new file mode 100644 index 000000000..0059237ca --- /dev/null +++ b/.apm/docs-index.yml @@ -0,0 +1,261 @@ +# docs-index.yml -- corpus map for the docs-sync skill +# +# This file is the LOAD-BEARING ARTIFACT for the docs-sync agentic +# system. It records per-page metadata (title, persona, promise, +# documented symbols) and per-chapter abstracts (bridges, target +# persona). The docs-sync classifier consults this file (NOT the +# corpus itself) to bound its context envelope. 
+# +# Drift policy: +# - This file is committed alongside the docs corpus. +# - When the docs-sync skill emits a STRUCTURAL verdict (new page or +# TOC reshape), it regenerates affected entries here as part of the +# companion docs PR. +# - When the corpus changes through normal doc-writer work (Wave A-K +# rewrite, ad-hoc edits), the editorial-owner is responsible for +# refreshing affected entries in the same commit. +# - Stale entries are detectable via `apm docs-index lint` (TODO -- +# not yet implemented; for now, manual verification during shadow +# phase). +# +# Bootstrap status: SEED entries below cover the high-traffic +# surface (consumer install, producer pack, enterprise audit). The +# full 100-page corpus index is rebuilt by the first structural +# verdict the skill emits. + +version: 1 + +# 3-promise narrative anchors. Every page belongs to exactly one. +promises: + - id: 1 + slug: consume + label: "Consume primitives" + persona: consumer + description: >- + Bring agent primitives (skills, agents, instructions, prompts) + into your project via apm install. + - id: 2 + slug: produce + label: "Produce primitives" + persona: producer + description: >- + Author, compile, pack, and publish primitives to a marketplace. + - id: 3 + slug: govern + label: "Govern primitives" + persona: enterprise + description: >- + Audit, policy-enforce, registry-proxy, drift-detect at team + and org scale. + +# Chapter-level abstracts. CDO reads these to weigh bridges. +chapters: + - slug: consumer + abstract: >- + The consumer ramp. Reader has primitives they want to use; they + install, configure targets, run, iterate. Bridges to producer + when reader decides to author their own. + bridges_to: [producer] + target_persona: consumer + promise: 1 + + - slug: producer + abstract: >- + The producer ramp. Reader has authored something locally and + wants to ship it. Compile, pack, validate, publish. 
Bridges + back to consumer (install your own) and forward to enterprise + (govern what you ship). + bridges_to: [consumer, enterprise] + target_persona: producer + promise: 2 + + - slug: producer/author-primitives + abstract: >- + Author-side reference for each primitive type (skills, agents, + instructions, prompts, hooks, commands, MCP). Sits inside + producer ramp. + bridges_to: [producer] + target_persona: producer + promise: 2 + + - slug: enterprise + abstract: >- + The governance ramp. Reader represents a team or org. Audit, + policy, registry proxy, drift detection, adoption playbook. + Bridges back to producer (what producers must comply with). + bridges_to: [producer] + target_persona: enterprise + promise: 3 + + - slug: integrations + abstract: >- + How APM fits into existing tools (CI/CD, GitHub Rulesets, + runtimes). Cross-cuts all three personas. + bridges_to: [consumer, producer, enterprise] + target_persona: cross + promise: cross + + - slug: troubleshooting + abstract: >- + Symptom-first lookup pages. Cross-cuts all three personas; + readers arrive here from search engines. + bridges_to: [consumer, producer, enterprise] + target_persona: cross + promise: cross + + - slug: reference + abstract: >- + Canonical reference for commands, package types, schemas. + Cross-cuts all personas; readers arrive here for exact answers. + bridges_to: [consumer, producer, enterprise] + target_persona: cross + promise: cross + + - slug: contributing + abstract: >- + How to contribute to APM itself. Out-of-band from the 3-promise + narrative; addresses contributors not users. + bridges_to: [] + target_persona: contributor + promise: meta + +# Per-page entries. SEED set covers high-traffic pages. The full +# corpus index is generated by the first structural verdict. 
+pages: + - path: docs/src/content/docs/consumer/index.md + title: "Consumer overview" + persona: consumer + promise: 1 + documents_symbols: + cli: [] + flags: [] + + - path: docs/src/content/docs/consumer/install.md + title: "Install primitives" + persona: consumer + promise: 1 + documents_symbols: + cli: [apm install] + flags: [--target, --lockfile-only, --no-cache, --force] + schemas: [apm.yml.dependencies, apm.lock.yaml] + + - path: docs/src/content/docs/producer/index.md + title: "Producer overview" + persona: producer + promise: 2 + documents_symbols: + cli: [] + + - path: docs/src/content/docs/producer/compile.md + title: "Compile primitives" + persona: producer + promise: 2 + documents_symbols: + cli: [apm compile] + flags: [--target, --watch, --out] + + - path: docs/src/content/docs/producer/pack-a-bundle.md + title: "Pack a bundle" + persona: producer + promise: 2 + documents_symbols: + cli: [apm pack] + flags: [--format, --out] + + - path: docs/src/content/docs/producer/publish-to-a-marketplace.md + title: "Publish to a marketplace" + persona: producer + promise: 2 + documents_symbols: + cli: [apm publish] + + - path: docs/src/content/docs/producer/author-primitives/skills.md + title: "Author skills" + persona: producer + promise: 2 + documents_symbols: + schemas: [SKILL.md frontmatter] + + - path: docs/src/content/docs/producer/author-primitives/instructions-and-agents.md + title: "Author instructions and agents" + persona: producer + promise: 2 + documents_symbols: + schemas: [.agent.md frontmatter, .instructions.md frontmatter] + + - path: docs/src/content/docs/enterprise/index.md + title: "Enterprise overview" + persona: enterprise + promise: 3 + documents_symbols: + cli: [] + + - path: docs/src/content/docs/enterprise/apm-policy.md + title: "APM policy" + persona: enterprise + promise: 3 + documents_symbols: + cli: [apm audit] + schemas: [apm-policy.yml] + + - path: docs/src/content/docs/enterprise/security.md + title: "Security model" + persona: 
enterprise + promise: 3 + documents_symbols: + cli: [apm audit] + flags: [--strip, --file, --sarif, --json] + +# Symbol-to-page reverse index. The localizer skill consults this +# to translate a diff symbol (e.g. a new flag) into a page list. +# Bootstrap: derived from the pages[] above. Regenerated on every +# structural verdict. +symbol_index: + cli: + "apm install": [docs/src/content/docs/consumer/install.md] + "apm compile": [docs/src/content/docs/producer/compile.md] + "apm pack": [docs/src/content/docs/producer/pack-a-bundle.md] + "apm publish": [docs/src/content/docs/producer/publish-to-a-marketplace.md] + "apm audit": [docs/src/content/docs/enterprise/apm-policy.md, docs/src/content/docs/enterprise/security.md] + schemas: + "apm.yml": [docs/src/content/docs/consumer/install.md] + "apm.lock.yaml": [docs/src/content/docs/consumer/install.md] + "apm-policy.yml": [docs/src/content/docs/enterprise/apm-policy.md] + "SKILL.md": [docs/src/content/docs/producer/author-primitives/skills.md] + ".agent.md": [docs/src/content/docs/producer/author-primitives/instructions-and-agents.md] + +# Path patterns used by the L0 deterministic gate. Any PR that +# touches ONLY these paths short-circuits to no-change. +no_impact_paths: + - "tests/**" + - ".github/workflows/**" + - ".github/instructions/**" + - "docs/**" # doc-only PRs need narrative review, not docs-sync + - ".apm/**" # primitive changes don't directly touch user docs + - "scripts/**" + - "*.md" # root-level meta docs (CHANGELOG, CONTRIBUTING) + - "pyproject.toml" + - "uv.lock" + +# Secondary doc surface: changes here are NOT the human-facing corpus +# (`docs/src/content/docs/**`) but ARE downstream documentation that +# may need a follow-up. The classifier should NOTE these in +# `reasoning` but NOT include them in `scope_pages[]` -- they have +# their own review path (apm-guide maintainers). 
+also_consider_paths: + - "packages/apm-guide/.apm/skills/apm-usage/**" + - "README.md" + - "MANIFESTO.md" + +# Path patterns that indicate USER-OBSERVABLE surface change. Any +# PR touching these survives the L0 gate and proceeds to L1 symbol +# extraction. +user_surface_paths: + - "src/apm_cli/cli.py" + - "src/apm_cli/commands/**" + - "src/apm_cli/core/policy/**" + - "src/apm_cli/models/apm_package.py" + - "src/apm_cli/models/apm_lock.py" + - "src/apm_cli/install/**" + - "src/apm_cli/integration/**" + - "src/apm_cli/utils/console.py" diff --git a/.apm/skills/docs-impact-architect/SKILL.md b/.apm/skills/docs-impact-architect/SKILL.md new file mode 100644 index 000000000..2bc2c9970 --- /dev/null +++ b/.apm/skills/docs-impact-architect/SKILL.md @@ -0,0 +1,149 @@ +--- +name: docs-impact-architect +description: >- + Use this skill when the docs-impact-classifier returns a structural + verdict, signalling that the documentation TOC must change to + accommodate the PR. Proposes TOC deltas (new pages, moves, + merges) and emits new-page outline stubs that the doc-sync panel + later fleshes out. Holds the 3-promise narrative (consume / + produce / govern) and the persona ramps as hard constraints. +--- + +# docs-impact-architect + +Single responsibility: when the classifier says a PR needs +structural docs changes (new page, page move, TOC reshape), design +the change and emit: + +1. A precise TOC delta (added pages, moved pages, retired pages) +2. New-page outline stubs (slug, title, persona, promise, H2 sections, key examples) +3. The persona-ramp impact (which ramp gains/loses a stop) + +You are NOT the writer (doc-writer owns prose). You are the **TOC +architect**. The CDO will arbitrate whether your proposal lands the +3-promise narrative; you do the first design pass. + +## When to invoke + +The docs-sync orchestrator invokes you ONLY when the classifier +returned `verdict: structural`. For `no_change` or `in_place` you +don't run. 
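+The dispatch gate is trivially mechanical. A minimal sketch (hypothetical helper name; the real orchestrator wiring may differ):
+
```python
def should_invoke_architect(classifier_return):
    """Return True only for structural verdicts.

    no_change, in_place_resolved, and in_place all bypass the
    architect entirely. Illustrative gate, not the real orchestrator.
    """
    return classifier_return.get("verdict") == "structural"
```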
+ +## Inputs + +- `structural_proposal` from the classifier (a sketch you refine) +- The PR diff (`gh pr diff $PR`) +- `.apm/docs-index.yml` (full corpus map) +- The PR description (for author-stated intent) + +## Step 1: read the corpus map, not the corpus + +Load `.apm/docs-index.yml` entirely. Inspect `chapters[]`, `pages[]`, +`promises[]`. This is your map. You do NOT read the 100+ page corpus +unless a specific page is implicated by the classifier's sketch. + +## Step 2: classify the structural shape + +Match the PR's surface change to one of these structural shapes: + +| Shape | Pattern | Example | +|---|---|---| +| **NEW CAPABILITY** | A new CLI verb, primitive type, or schema concept the docs have no slot for | `apm pack --format wheel` adds a new package format | +| **EXPANDED CAPABILITY** | An existing concept grows in scope and the current page can't hold it | `apm install` gains a registry-proxy mode that needs its own sub-page | +| **DEPRECATED CAPABILITY** | A removed CLI verb, flag, or concept; existing pages need to be retired or rewritten | A flag is removed; tutorial pages still teach it | +| **CONCEPT SPLIT** | One concept becomes two distinct concepts; one page becomes two | `apm audit` splits into `audit` and `audit ci` | +| **CONCEPT MERGE** | Two concepts unify; two pages should become one | `apm pack` and `apm bundle` merge into one verb | +| **RAMP REORG** | The PR's surface change shifts a concept across promises (e.g. an enterprise feature becomes consumer-default) | Policy enforcement moves from enterprise to consumer default behaviour | + +The structural shape drives the TOC delta shape. 
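+The shape table can be read as an ordered decision heuristic. A sketch under assumed signal names (the real classification is an LLM judgment, not this lookup -- rarer, more disruptive shapes are checked first):
+
```python
def structural_shape(signals):
    """Map coarse diff signals to a structural shape name.

    `signals` is a dict of booleans extracted from the PR diff.
    Order matters: the first matching shape wins. Illustrative
    heuristic only; signal names are assumptions.
    """
    if signals.get("concept_moved_across_promises"):
        return "RAMP REORG"
    if signals.get("concepts_merged"):
        return "CONCEPT MERGE"
    if signals.get("concept_split"):
        return "CONCEPT SPLIT"
    if signals.get("surface_removed"):
        return "DEPRECATED CAPABILITY"
    if signals.get("existing_concept_outgrew_page"):
        return "EXPANDED CAPABILITY"
    if signals.get("new_surface_without_doc_slot"):
        return "NEW CAPABILITY"
    return "UNCLASSIFIED"
```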
+
+## Step 3: design the TOC delta
+
+For each new page proposed, fill in:
+
+```yaml
+new_page:
+  slug: docs/src/content/docs/<chapter>/<slug>.md
+  title: "<title>"
+  persona: consumer | producer | enterprise | cross
+  promise: 1 | 2 | 3 | cross
+  parent_chapter: <chapter>
+  h2_sections:
+    - "## Why <concept>"      # OPTIONAL -- skip unless concept is genuinely new
+    - "## How to <task>"      # REQUIRED -- code first
+    - "## Reference"          # OPTIONAL -- flag/option table
+    - "## Troubleshooting"    # OPTIONAL -- only if known footguns
+  bridges:
+    incoming:  # which existing pages should link TO this
+      - {from: <page>, link_text: <text>}
+    outgoing:  # which existing pages this page should link TO
+      - {to: <page>, link_text: <text>}
+  ramp_impact: >-
+    one-paragraph description of how this changes the <persona>
+    ramp: which step it slots into, whether it adds a stop or
+    replaces an existing one
+```
+
+For each moved/retired page:
+
+```yaml
+moved_page:
+  from: <old-slug>
+  to: <new-slug>
+  redirect_rationale: <why the move>
+
+retired_page:
+  slug: <slug>
+  reason: <why retired>
+  redirect_to: <target-slug>  # MUST exist; orphaning pages breaks SEO
+```
+
+## Step 4: validate against the 3-promise narrative
+
+Apply these hard rules. If any fails, redesign:
+
+1. **Every page belongs to exactly one promise.** Cross-cutting pages (integrations, troubleshooting, reference) are explicitly marked `promise: cross`. If a new page straddles two promises, split it OR park it under `cross`.
+2. **Consumer pages don't pre-teach producer concepts.** A consumer page may LINK to producer; it may not embed producer prose.
+3. **Producer pages don't pre-teach enterprise concepts.** Same rule, one promise down.
+4. **No page is orphaned from the TOC.** Every new page has a `parent_chapter` and at least one `incoming` bridge.
+5. **No retired page lacks a `redirect_to`.** Search engines will index the old URL for months; the redirect is the SEO contract.
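Rules 1, 4, and 5 are mechanically checkable before any prose judgment (rules 2 and 3 -- promise pre-teaching -- still need a human or LLM read). A minimal sketch, assuming pages arrive as dicts shaped like the Step 3 stubs:

```python
def validate_toc_delta(new_pages: list, retired_pages: list) -> list:
    """Return a list of hard-rule failures; an empty list means pass."""
    failures = []
    for p in new_pages:
        # Rule 1: exactly one promise, or explicitly 'cross'.
        if p.get("promise") not in (1, 2, 3, "cross"):
            failures.append(f"{p['slug']}: must belong to exactly one promise (or 'cross')")
        # Rule 4: no orphans -- needs a parent chapter AND an incoming bridge.
        if not p.get("parent_chapter"):
            failures.append(f"{p['slug']}: orphaned -- missing parent_chapter")
        if not p.get("bridges", {}).get("incoming"):
            failures.append(f"{p['slug']}: orphaned -- no incoming bridge")
    for r in retired_pages:
        # Rule 5: retired pages must redirect somewhere.
        if not r.get("redirect_to"):
            failures.append(f"{r['slug']}: retired without redirect_to (breaks the SEO contract)")
    return failures
```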
+ +## Step 5: emit the architect report + +Return JSON: + +```json +{ + "structural_shape": "NEW CAPABILITY" | "EXPANDED CAPABILITY" | "DEPRECATED CAPABILITY" | "CONCEPT SPLIT" | "CONCEPT MERGE" | "RAMP REORG", + "toc_delta": { + "new_pages": [...], + "moved_pages": [...], + "retired_pages": [...], + "chapter_changes": [...] + }, + "promise_validation": { + "all_pages_single_promise": true | false, + "no_orphans": true | false, + "no_unredirected_retires": true | false, + "concerns": [] + }, + "downstream_in_place_pages": ["..."], + "rationale": "<2-3 sentence summary of why this structural delta and not alternatives>" +} +``` + +`downstream_in_place_pages[]` is the handoff to the localizer -- after +the architect approves the TOC, the localizer plans in-place edits +to existing pages that REFERENCE the new structure. + +## Output contract + +Return a SINGLE JSON document matching the schema in Step 5 as the +final message of your task. No prose around the JSON. + +## Anti-patterns + +- Inflating new-page counts to seem thorough. The minimal true delta wins. +- Skipping the promise-validation step. The CDO will catch it; better to self-catch. +- Designing a new chapter when an existing chapter has room. Always prefer extending over creating. +- Forgetting `redirect_to` on retired pages. SEO debt is the silent corpus killer. diff --git a/.apm/skills/docs-impact-classifier/SKILL.md b/.apm/skills/docs-impact-classifier/SKILL.md new file mode 100644 index 000000000..70a1f2f14 --- /dev/null +++ b/.apm/skills/docs-impact-classifier/SKILL.md @@ -0,0 +1,154 @@ +--- +name: docs-impact-classifier +description: >- + Use this skill to classify the documentation impact of a pull + request diff, returning one of three verdicts -- no-change, + in-place edit, or structural change -- with bounded LLM cost. + Activate as a sibling skill of docs-sync; the orchestrator calls + this first, before any panel spawn, to keep cost floor at 1 LLM + call when no docs work is needed. 
Reads .apm/docs-index.yml as + the corpus map; never reads the full corpus. +--- + +# docs-impact-classifier + +Single responsibility: given a PR diff and the `.apm/docs-index.yml` +corpus map, emit ONE classification verdict. + +This skill is the cost gate for the entire docs-sync system. ~70% of +PRs should exit at verdict `no_change` with zero panel spawn. + +## Architecture + +This is a 3-layer funnel inside a single skill invocation: + +- **L0 deterministic path gate** -- pure file-path matching, no LLM. +- **L1 symbol extraction + corpus grep** -- pure text processing, no LLM. +- **L2 LLM classifier** -- bounded ~8 KB context envelope, 1 call. + +The skill returns the verdict from the earliest layer that can decide. + +## Step 1: L0 deterministic path gate (no LLM) + +Read `.apm/docs-index.yml` to load `no_impact_paths[]` and +`user_surface_paths[]`. Get the changed file list from the PR diff +(`gh pr diff --name-only`). + +``` +if every changed file matches no_impact_paths AND none match user_surface_paths: + return {verdict: "no_change", confidence: "high", source: "L0", scope_pages: []} +``` + +This handles: +- Test-only PRs (`tests/**`) +- CI workflow PRs (`.github/workflows/**`) +- Doc-only PRs (`docs/**`) -- out of scope, docs-sync doesn't review docs PRs +- Primitive-only PRs (`.apm/**`) +- Script and meta PRs + +Expected hit rate: ~70% of PRs short-circuit here. + +## Step 2: L1 symbol extraction + corpus grep (no LLM) + +If L0 did not exit, extract user-observable symbols from the diff: + +- **CLI command names** -- grep diff for `^@click.command`, `^@cli.command`, or any `apm ` mention in added/removed lines. +- **Flag names** -- grep diff for `^@click.option`, `--[a-z-]+` patterns. +- **Public API symbols** -- added/removed `def ` in `src/apm_cli/__init__.py` or `src/apm_cli/api/**`. +- **Schema keys** -- added/removed keys in `apm.yml`, `apm.lock.yaml`, `apm-policy.yml` parsers. 
+- **Error strings** -- added/removed string literals in user-facing error paths (look for `_rich_error`, `click.echo`, `raise ... Error(`).
+
+For each extracted symbol, consult `.apm/docs-index.yml#symbol_index`
+to find the documented pages. Collect all hits into `candidate_pages[]`.
+
+Also `grep -rn <symbol> docs/src/content/docs/` for symbols NOT in
+the index (catches drift between index and corpus).
+
+## Step 3: L2 LLM verdict (1 call, bounded context)
+
+If L1 found zero candidate pages AND zero schema/CLI/flag changes:
+return `{verdict: "no_change", confidence: "medium", source: "L1", scope_pages: []}`.
+
+Otherwise, invoke the doc-analyser persona with EXACTLY this context
+envelope (must fit in ~8 KB):
+
+- PR title + body (first 500 chars)
+- Diff stats (`gh pr diff --stat` output)
+- `.apm/docs-index.yml` (the whole file; it's ~8 KB seeded, may grow)
+- L1 candidate pages with +/-5 lines of context per hit
+- Path-classification summary from L0
+- **`pr_doc_diff_paths[]`**: the list of paths under `docs/src/content/docs/**`
+  that the PR itself already modifies (drives the `in_place_resolved`
+  downgrade rule in "In-place-resolved detection" below).
+
+Ask doc-analyser to return JSON matching this schema:
+
+```json
+{
+  "verdict": "no_change" | "in_place_resolved" | "in_place" | "structural",
+  "confidence": "low" | "medium" | "high",
+  "scope_pages": ["docs/src/content/docs/..."],
+  "structural_proposal": {
+    "new_pages": [{"slug": "...", "rationale": "..."}],
+    "moved_pages": [{"from": "...", "to": "..."}],
+    "toc_changes": "<prose summary of the TOC change>"
+  },
+  "reasoning": "<one short paragraph>"
+}
+```
+
+`structural_proposal` is populated only when verdict is `structural`.
+`scope_pages` is populated for `in_place` and `structural` verdicts.
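The L1 layer of Step 2 is pure text processing. A rough sketch of its flag-extraction slice, assuming plain `gh pr diff` output -- the patterns are illustrative, not exhaustive (the real layer also covers command names, public API defs, schema keys, and error strings):

```python
import re

# Flags look like --no-cache, --format, etc.
FLAG_RE = re.compile(r"--[a-z][a-z0-9-]+")

def extract_flag_symbols(diff_text: str) -> set:
    """Collect candidate flag names from added/removed diff lines only."""
    symbols = set()
    for line in diff_text.splitlines():
        # Skip context lines and the +++/--- file headers; keep +/- lines.
        if not line or line[0] not in "+-" or line.startswith(("+++", "---")):
            continue
        symbols.update(FLAG_RE.findall(line[1:]))
    return symbols
```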
+
+## Verdict semantics
+
+| Verdict | Meaning | Panel size | Cost |
+|---|---|---|---|
+| `no_change` | No user-observable surface changed | 0 panel spawns | ~0-1 LLM call |
+| `in_place_resolved` | Doc impact existed, but the PR's OWN diff already patches every page in `scope_pages` -- author already did the work | 0 panel spawns; skill emits NO advisory | ~1 LLM call |
+| `in_place` | One to a few pages need a paragraph or section update; no new pages, no TOC change | N candidate pages x (doc-writer + python-architect) + editorial-owner + growth-hacker + CDO | ~6-12 LLM calls |
+| `structural` | A new page is needed, OR an existing page should be split/merged, OR the TOC needs to change to fit a new concept | architect first (TOC delta), then in-place panel for affected pages | ~10-15 LLM calls |
+
+## In-place-resolved detection (false-alarm killer)
+
+BEFORE returning `in_place`, intersect your `scope_pages[]` with the
+list of files the PR itself touches under `docs/**` (provided to you
+by the orchestrator under `pr_doc_diff_paths[]`). If EVERY scope page
+already appears in `pr_doc_diff_paths`, downgrade to `in_place_resolved`
+and emit `reasoning` of the form "Author already patched <pages>".
+This is the well-behaved-author path; the skill stays silent.
+
+If only SOME scope pages are pre-patched, keep `in_place` and list the
+REMAINING (unpatched) pages in `scope_pages[]`. Note the pre-patched
+ones in `reasoning` for transparency.
+
+## Rename / breaking-change heuristic (PR 1244 class)
+
+When the L1 layer reports an ADDED public symbol that matches an
+EXISTING public symbol's name in the corpus (e.g. PR adds `apm update`
+but `apm update` already appears in 9 docs pages with different
+semantics), this is a RENAME or BREAKING SEMANTIC CHANGE.
Bias toward +`structural` (not `in_place`): +- the existing page describing the OLD semantics may need to SPLIT + into two pages (old verb under new name + new verb keeping old name) +- the TOC may need a NEW reference page for the renamed verb +- every passing mention in the corpus needs verification + +Do NOT collapse a rename into `in_place` just because the affected +pages already exist. The shape of the work is structural even when no +new page is strictly required. + +## Anti-patterns (verdict shape errors) + +- Returning `in_place` with empty `scope_pages` -- invalid; orchestrator will reject. +- Returning `structural` without `structural_proposal` -- invalid. +- Returning `in_place` when EVERY scope page is in `pr_doc_diff_paths` -- should be `in_place_resolved`. +- Inflating `structural` to seem thorough -- the CDO will catch this. Return the minimal true verdict. +- Missing the rename heuristic above and emitting `in_place` for a verb-swap PR. +- Reading the corpus (the .md files themselves) at L2 -- context budget breach. You read the index, not the corpus. + +## Output contract + +Return a SINGLE JSON document matching the schema in Step 3 as the +final message of your task. No prose around the JSON. The +orchestrator parses your last message. diff --git a/.apm/skills/docs-impact-localizer/SKILL.md b/.apm/skills/docs-impact-localizer/SKILL.md new file mode 100644 index 000000000..61e2ba2d7 --- /dev/null +++ b/.apm/skills/docs-impact-localizer/SKILL.md @@ -0,0 +1,124 @@ +--- +name: docs-impact-localizer +description: >- + Use this skill to translate a classifier's in-place verdict into a + precise, page-by-page work plan for the docs-sync panel. Activate + after docs-impact-classifier returns verdict in_place; reads the + candidate page list, fetches the actual page contents, narrows + scope to specific sections within each page, and emits the + per-page task brief the panel fans out against. 
+--- + +# docs-impact-localizer + +Single responsibility: given a list of candidate pages from the +classifier, produce a per-page task brief the docs-sync panel can +fan out against. + +You are NOT the verdict-maker (classifier owns that). You are NOT +the writer (doc-writer owns that). You are the **work planner**. + +## When to invoke + +The docs-sync orchestrator invokes you ONLY when the classifier +returned `verdict: in_place`. For `no_change` you don't run. +For `structural` the architect runs first; you may run after, scoped +to existing pages that need amendment. + +## Inputs + +- `scope_pages[]` from the classifier +- The PR diff (`gh pr diff $PR`) +- `.apm/docs-index.yml` (per-page metadata) +- Optional: the structural architect's TOC delta (if you run after + the architect on a structural verdict) + +## Step 1: load page contents + +For each path in `scope_pages[]`, read the file. Pages are typically +3-10 KB; total budget for this step is bounded by the candidate +count (the classifier should have kept it to <= 6). + +## Step 2: narrow scope inside each page + +For each page, identify the SPECIFIC section(s) that need to change: + +- Read the page's H2/H3 structure +- For each diff symbol from the classifier output, find the section + most directly documenting it +- Capture line ranges: `lines 120-145` not `the whole page` + +The output is a `sections_to_edit[]` per page, where each entry is: + +```yaml +page: docs/src/content/docs/consumer/install.md +sections_to_edit: + - section: "## From Git" + line_range: [120, 145] + diff_symbol: "--no-cache flag" + edit_kind: add | modify | remove + rationale: "the new --no-cache flag is documented nowhere; section already lists other flags so this is the natural home" +``` + +## Step 3: detect cross-page conflicts + +If two pages document the same symbol and the diff changes the +symbol's behaviour, BOTH pages need an edit AND they must stay +consistent. 
Flag this in the brief so the CDO synthesizer knows to
+cross-check coherence between the two redrafts:
+
+```yaml
+cross_page_constraint:
+  pages: [path1, path2]
+  shared_symbol: "apm install --target"
+  consistency_required: "both pages must reflect the same default value"
+```
+
+## Step 4: emit the per-page task brief
+
+Return JSON with this shape (one entry per page in `scope_pages[]`):
+
+```json
+{
+  "tasks": [
+    {
+      "page": "docs/src/content/docs/consumer/install.md",
+      "persona_owner": "consumer",
+      "promise": 1,
+      "sections_to_edit": [
+        {
+          "section": "## From Git",
+          "line_range": [120, 145],
+          "diff_symbol": "--no-cache flag",
+          "edit_kind": "add",
+          "rationale": "..."
+        }
+      ],
+      "verify_claims": [
+        {"claim": "the flag is named --no-cache", "verify_with": "apm install --help"},
+        {"claim": "the flag is documented in click.option decorator", "verify_with": "grep -n no-cache src/apm_cli/commands/install.py"}
+      ]
+    }
+  ],
+  "cross_page_constraints": [
+    {"pages": [...], "shared_symbol": "...", "consistency_required": "..."}
+  ],
+  "estimated_panel_calls": 8
+}
+```
+
+The `verify_claims[]` per page is consumed by the python-architect
+panelist -- it tells the verifier WHICH claims need an S7 tool-call
+check (run `apm install --help`, grep the source) rather than
+prose-trusting.
+
+## Output contract
+
+Return a SINGLE JSON document matching the schema in Step 4 as the
+final message of your task. No prose around the JSON.
+
+## Anti-patterns
+
+- Selecting whole pages when one section suffices (inflates context per panelist).
+- Skipping `verify_claims[]` -- that's the S7 tool-bridge hook; the verifier needs it.
+- Inventing pages not in `scope_pages[]` -- that's the classifier's job, not yours. If you think the classifier missed a page, return an extra field `localizer_concern` instead of expanding scope unilaterally.
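The Step 3 conflict scan reduces to inverting a page-to-symbols map. A sketch, assuming a hypothetical `{page: [symbols]}` mapping derived from the index:

```python
from collections import defaultdict

def cross_page_constraints(page_symbols: dict) -> list:
    """Emit one constraint per symbol documented by two or more pages."""
    owners = defaultdict(list)
    for page, symbols in page_symbols.items():
        for sym in symbols:
            owners[sym].append(page)
    return [
        {"pages": sorted(pages), "shared_symbol": sym,
         "consistency_required": "all pages must describe the same behaviour"}
        for sym, pages in sorted(owners.items())
        if len(pages) > 1
    ]
```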
diff --git a/.apm/skills/docs-sync/SKILL.md b/.apm/skills/docs-sync/SKILL.md new file mode 100644 index 000000000..a5d6e8750 --- /dev/null +++ b/.apm/skills/docs-sync/SKILL.md @@ -0,0 +1,238 @@ +--- +name: docs-sync +description: >- + Use this skill whenever a pull request is opened, reopened, or + synchronized in microsoft/apm to assess whether and how the + documentation corpus must change to stay truthful with the + proposed code change. Activate even when the PR title or body + says nothing about docs -- the skill must run on every PR to + detect silent drift between code and docs. Classifies impact + as no-change, in-place edit (one to a few paragraphs), or + structural change (new page or TOC reshape), then orchestrates + a CDO + doc-writer + python-architect + editorial-owner + + growth-hacker loop to produce a patch-ready advisory. Does NOT + review code quality, security, or test coverage. Does NOT + auto-merge or auto-push doc edits. +--- + +# docs-sync -- per-PR documentation impact panel + +The docs corpus drifts silently and constantly. This skill catches +drift at PR-open time, classifies its impact, and orchestrates a +persona panel to produce a patch-ready advisory comment. + +The pattern is **A1 PANEL + B1 FAN-OUT/SYNTHESIZER + A8 ALIGNMENT +LOOP**. The classifier is the cost gate (~70% of PRs short-circuit +to no-change with ~1 LLM call). When the panel does fan out, every +agent reads a bounded context (~10 KB) -- never the full corpus. + +This skill is ADVISORY. It does not gate merge, apply verdict +labels, or push to the contributor's fork. The orchestrator is the +sole writer to the PR: exactly one comment per run (idempotent +edit-in-place), plus optional label sweeps. + +## Architecture invariants + +- **Cost ceiling: 15 LLM calls per run.** Hard-wired. The orchestrator refuses to spawn beyond. Header prints `N/15` for observability. +- **Single-writer interlock.** Only the orchestrator writes. 
Panelist subagents return JSON; they MUST NOT call any `gh` write command, post comments, or touch PR state. +- **Idempotent comment.** Exactly one comment per run, with a stable header `## Docs sync advisory`. Re-runs edit-in-place using `gh pr comment --edit-last`. +- **No fork-write.** Companion docs PRs (only on structural verdict with `docs-sync-confirm` label) open from a bot branch in the BASE repo; never pushed to the contributor's fork. +- **Index-not-corpus reads.** Every classifier and architect agent reads `.apm/docs-index.yml`, NOT the corpus itself. The corpus is sampled only by the localizer (which reads the specific candidate pages) and by per-page panelists (which read one page each). +- **S7 deterministic tool bridge.** The python-architect panelist MUST run real `apm --help`, `grep`, and `python -c` commands to verify doc claims, never assert from prose. + +## Roster + +| Role | Agent | Always active? | +|---|---|---| +| Classifier | [doc-analyser](../../agents/doc-analyser.agent.md) inside [docs-impact-classifier](../docs-impact-classifier/SKILL.md) | Yes (every run) | +| Localizer | [docs-impact-localizer](../docs-impact-localizer/SKILL.md) | Only on `in_place` verdict | +| Architect | [docs-impact-architect](../docs-impact-architect/SKILL.md) | Only on `structural` verdict | +| Writer | [doc-writer](../../agents/doc-writer.agent.md) | Per candidate page (fan-out) | +| Verifier | [python-architect](../../agents/python-architect.agent.md) | Per candidate page (fan-out, S7) | +| Editorial | [editorial-owner](../../agents/editorial-owner.agent.md) | Once across all redrafts | +| Growth | [oss-growth-hacker](../../agents/oss-growth-hacker.agent.md) | Once across all redrafts | +| Synthesizer | [cdo](../../agents/cdo.agent.md) | Once, with ALIGNMENT LOOP up to 3 redrafts | + +## Topology + +``` + docs-sync SKILL (orchestrator thread) + | + Step 1: classify (1 LLM call, may exit here) + | + v + verdict? 
+ / | \ + no-change in-place structural + | | | + EXIT | architect (TOC delta) + | | + +----<-----+ + | + Step 2: localize (1 LLM call) -- per-page task brief + | + Step 3: FAN-OUT panel via task tool + | + +----+----+----+----+ + v v v v v + writer verify edit growth + x N x N once once + (parallel; each <=10 KB context) + | + Step 4: schema-validate returns + | + Step 5: CDO synthesize (1 LLM call) + | + agree? + / | \ + revise (N<=3 redrafts) | agree + | + Step 6: emit ONE comment via safe-outputs.add-comment + Step 7: OPTIONAL companion docs PR (only if structural AND + `docs-sync-confirm` label present) +``` + +## Execution checklist + +### Step 1 -- Classify + +Spawn ONE task: load the `docs-impact-classifier` skill, pass it the +PR number. It returns the classifier JSON. + +Validate the JSON against `assets/classifier-return-schema.json`. +On schema failure, abort the run with a comment explaining the +internal error. + +If verdict is `no_change`: skip to Step 6 with a brief advisory +("No docs impact detected. Reason: . LLM calls: 1/15.") + +### Step 2 -- Localize (in_place) or Architect (structural) + +For `in_place`: spawn ONE task that loads the +`docs-impact-localizer` skill with the classifier output. Returns +per-page task briefs. + +For `structural`: spawn ONE task that loads the +`docs-impact-architect` skill with the classifier output. Returns +TOC delta + new-page outlines + downstream in-place pages. THEN +spawn the localizer for those downstream pages. + +### Step 3 -- Fan-out panel + +**Cascade-size mitigation (PR 1244 class).** If `scope_pages[]` has +>8 entries, the per-page fan-out at one writer call per page would +approach the 15-call ceiling with no headroom for verifier redrafts. +BEFORE spawning, group `scope_pages[]` into SECTIONS: + +- Pages under the same TOC section (e.g. all `consumer/**`) with the + SAME conceptual fix (e.g. 
"rename apm update -> apm self-update in
+  every mention") become ONE writer task with a `pages_in_section[]`
+  array in its brief.
+- A 9-page rename cascade collapses to 2-3 section writer tasks.
+
+The python-architect verifier still runs per `verify_claims[]` (not
+per page), because S7 evidence is keyed on claims, not pages.
+
+For each page-or-section in the per-page task brief, spawn TWO parallel tasks:
+
+1. **doc-writer** task -- drafts the patch for that page's (or section's) specific edits. Output: JSON with `before:`, `after:` for each location.
+2. **python-architect** task -- for each `verify_claims[]` in the page brief, run the actual command (S7 tool bridge: `apm --help`, `grep -n <symbol> src/`). Output: JSON with `claim: verified | refuted | inconclusive` per claim.
+
+In parallel with the per-page fan-out, spawn ONCE each:
+
+3. **editorial-owner** task -- receives ALL writer drafts, returns tone fixes.
+4. **oss-growth-hacker** task -- receives ALL writer drafts, returns ramp-clarity notes (does this read well to a cold OSS visitor?).
+
+All panelist tasks return JSON matching `assets/panelist-return-schema.json`.
+Schema-validate every return; on failure, abort.
+
+### Step 4 -- Validate
+
+Cross-check:
+
+- Every `verify_claims` from a python-architect comes back `verified` or `inconclusive` (never `refuted`). If any are `refuted`, the doc-writer's draft is wrong; re-run the writer for that page with the refutation as context.
+- Cross-page constraints from the localizer are honored across all writer drafts.
+- All drafts are ASCII-only (per repo encoding rule).
+
+### Step 5 -- CDO synthesize
+
+Spawn ONE task: load the `cdo` persona with the full panel return
+(writer drafts + verifier reports + editorial notes + growth notes
++ classifier verdict + (architect output if structural)) and
+`.apm/docs-index.yml`.
+
+The CDO returns one of three verdicts:
+
+- `agree`: ship. Proceed to Step 6.
+- `revise`: re-spawn the writer panelists with the CDO's specific
+  concerns as additional context. Re-run the editorial and growth
+  passes if needed. Bounded N <= 3 redrafts. Increment a redraft
+  counter; if it hits 3 and the CDO still disagrees, ship with
+  `cdo_disagreement_noted: true`.
+- `ship_with_disagreement`: ship as-is with the disagreement
+  surfaced in the comment for the maintainer to weigh.
+
+### Step 6 -- Emit ONE comment
+
+Render `assets/advisory-comment-template.md` with the final results.
+Write it via `safe-outputs.add-comment`. Header is exactly
+`## Docs sync advisory` (stable for idempotent edit-in-place).
+
+The comment MUST include the cost header:
+
+```
+Verdict: <verdict> * Pages affected: N * LLM calls: M/15 * Took: Xs
+```
+
+### Step 7 -- Optional companion PR
+
+Only on `structural` verdict AND `docs-sync-confirm` label present
+on the PR (the A9 SUPERVISED EXECUTION boundary; the maintainer
+ratifies the structural proposal before any PR is opened).
+
+If both conditions hold:
+
+1. Branch name: `docs-sync/companion-<pr-number>` in the BASE repo.
+2. Apply the doc-writer drafts as a commit on that branch.
+3. Apply the architect's TOC delta (`.apm/docs-index.yml` entries
+   + new page files + redirects on retired pages).
+4. Open a draft PR linked to the original PR, with the advisory
+   comment text as the PR body.
+5. Reference the companion PR in the advisory comment.
+
+This step is intentionally GATED. The default behaviour (no
+`docs-sync-confirm` label) is to recommend the patches in the
+comment without opening a PR.
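The Step 5 loop with its two bounds can be sketched as follows; `spawn` is a hypothetical stand-in for the orchestrator's task spawn, and only the 15-call ceiling and the 3-redraft cap come from this skill:

```python
CALL_CEILING = 15
MAX_REDRAFTS = 3

def cdo_loop(spawn, calls_used: int) -> dict:
    """Run the bounded CDO alignment loop; always returns a ship decision."""
    redrafts = 0
    while True:
        if calls_used >= CALL_CEILING:
            return {"ship": True, "cost_ceiling_hit": True, "redrafts": redrafts}
        verdict = spawn("cdo-synthesize")
        calls_used += 1
        if verdict == "agree":
            return {"ship": True, "redrafts": redrafts}
        if verdict == "ship_with_disagreement" or redrafts >= MAX_REDRAFTS:
            return {"ship": True, "cdo_disagreement_noted": True, "redrafts": redrafts}
        # verdict == "revise": writers re-run with the CDO's concerns
        # (elided here; those spawns would also increment calls_used).
        redrafts += 1
```

Whatever the CDO says, the loop terminates with something shippable; disagreement is surfaced, never swallowed.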
+ +## Cost accounting + +The orchestrator maintains a running LLM-call counter: + +| Step | Min calls | Max calls | +|---|---|---| +| Step 1 classify | 1 | 1 | +| Step 2 localize/architect | 0 | 2 | +| Step 3 fan-out (N pages) | 0 | 2N + 2 | +| Step 5 CDO | 0 | 1 + 3 redrafts | +| Total | 1 | 15 | + +If the counter would exceed 15, the orchestrator stops spawning, +ships the partial result with `cost_ceiling_hit: true`, and the +comment surfaces the truncation. + +## Anti-patterns + +- Reading the corpus instead of the index. Context budget breach. +- Letting panelists post comments. Single-writer interlock violation. +- Ignoring `refuted` verify_claims. That's silent drift you're shipping. +- Skipping the CDO synthesis on "obvious" in-place patches. The bridges still matter. +- Auto-opening companion PRs without the confirm label. Removes the human ratification. +- Re-running on every push (synchronize). Wasteful. Re-apply the trigger label for re-run. + +## Operating modes + +- **Rung 1 (label-gated, default)**: triggered by `docs-sync` label on PR. Maintainer opts in. +- **Rung 2 (default-on)**: triggered on every `pull_request_target` event. Enabled only after shadow validation. + +The workflow file controls which rung is active. The skill body is +identical for both. diff --git a/.apm/skills/docs-sync/assets/advisory-comment-template.md b/.apm/skills/docs-sync/assets/advisory-comment-template.md new file mode 100644 index 000000000..42b40eaa7 --- /dev/null +++ b/.apm/skills/docs-sync/assets/advisory-comment-template.md @@ -0,0 +1,106 @@ +## Docs sync advisory + +Verdict: **{{ verdict }}** * Pages affected: {{ pages_affected_count }} * LLM calls: {{ llm_calls_used }}/15 * Took: {{ elapsed_seconds }}s + +{{ #if cost_ceiling_hit }} +> WARNING: Hit the 15 LLM call ceiling. Result is partial; see `cost_ceiling_hit: true` flag. +{{ /if }} + +{{ #if cdo_disagreement_noted }} +> NOTE: CDO disagreement after 3 redraft rounds. 
Maintainer judgement needed; see "Open concerns" below. +{{ /if }} + +### Summary + +{{ summary_paragraph }} + +{{ #if pages_affected_count == 0 }} + +No documentation changes needed for this PR. + +{{ classifier_reasoning }} + +{{ else }} + +### Proposed patches + +{{ #each page_patches }} + +#### `{{ this.page }}` ({{ this.persona }} ramp, promise {{ this.promise }}) + +{{ #each this.sections }} + +**Section: {{ this.section }}** (lines {{ this.line_range }}) + +```diff +- {{ this.before }} ++ {{ this.after }} +``` + +Rationale: {{ this.rationale }} + +{{ #if this.verifications }} +Verified by: {{ this.verifications }} +{{ /if }} + +{{ /each }} + +{{ /each }} + +{{ /if }} + +{{ #if structural_proposal }} + +### Structural proposal + +{{ structural_proposal.summary }} + +**New pages:** + +{{ #each structural_proposal.new_pages }} +- `{{ this.slug }}` -- {{ this.title }} ({{ this.persona }} ramp). {{ this.rationale }} +{{ /each }} + +**Moved / retired:** + +{{ #each structural_proposal.moved_pages }} +- `{{ this.from }}` -> `{{ this.to }}` ({{ this.redirect_rationale }}) +{{ /each }} + +{{ #if structural_proposal.confirm_label_present }} + +A companion docs PR has been opened: {{ companion_pr_link }}. + +{{ else }} + +To open a companion docs PR with these changes, apply the `docs-sync-confirm` label to this PR. + +{{ /if }} + +{{ /if }} + +{{ #if open_concerns }} + +### Open concerns (from CDO) + +{{ #each open_concerns }} +- {{ this }} +{{ /each }} + +{{ /if }} + +--- + +
+<details>
+<summary>How this advisory was produced</summary>
+
+- Classifier verdict: `{{ verdict }}` (confidence: {{ confidence }}, source: {{ classifier_source }})
+- Panel composition: {{ panel_composition }}
+- Tool-verified claims: {{ verification_count }} ({{ verification_pass_count }} verified, {{ verification_refute_count }} refuted, {{ verification_inconclusive_count }} inconclusive)
+- CDO redraft rounds: {{ cdo_redraft_rounds }}/3
+
+This is an advisory comment from the `docs-sync` skill ([source](.apm/skills/docs-sync/SKILL.md)). It does not gate merge. The maintainer ships.
+
+Re-run by removing and re-applying the `docs-sync` label.
+
+</details>
diff --git a/.apm/skills/docs-sync/assets/classifier-return-schema.json b/.apm/skills/docs-sync/assets/classifier-return-schema.json new file mode 100644 index 000000000..009dfa3cb --- /dev/null +++ b/.apm/skills/docs-sync/assets/classifier-return-schema.json @@ -0,0 +1,54 @@ +{ + "$schema": "http://json-schema.org/draft-07/schema#", + "title": "docs-impact-classifier return", + "type": "object", + "required": ["verdict", "confidence", "reasoning"], + "properties": { + "verdict": { + "type": "string", + "enum": ["no_change", "in_place_resolved", "in_place", "structural"], + "description": "no_change: no doc impact (true negative). in_place_resolved: doc impact existed but the PR's own diff already patched every affected page (silent success; skill emits NO advisory). in_place: existing pages need edits. structural: TOC change or new page required." + }, + "confidence": { + "type": "string", + "enum": ["low", "medium", "high"] + }, + "source": { + "type": "string", + "enum": ["L0", "L1", "L2"], + "description": "Which funnel layer produced the verdict." + }, + "scope_pages": { + "type": "array", + "items": {"type": "string"}, + "description": "Candidate doc pages affected. Empty for no_change." 
+ }, + "structural_proposal": { + "type": ["object", "null"], + "properties": { + "new_pages": { + "type": "array", + "items": { + "type": "object", + "properties": { + "slug": {"type": "string"}, + "rationale": {"type": "string"} + } + } + }, + "moved_pages": { + "type": "array", + "items": { + "type": "object", + "properties": { + "from": {"type": "string"}, + "to": {"type": "string"} + } + } + }, + "toc_changes": {"type": "string"} + } + }, + "reasoning": {"type": "string"} + } +} diff --git a/.apm/skills/docs-sync/assets/panelist-return-schema.json b/.apm/skills/docs-sync/assets/panelist-return-schema.json new file mode 100644 index 000000000..a2e1d1188 --- /dev/null +++ b/.apm/skills/docs-sync/assets/panelist-return-schema.json @@ -0,0 +1,68 @@ +{ + "$schema": "http://json-schema.org/draft-07/schema#", + "title": "docs-sync panelist return", + "description": "Common shape for all panelist returns (doc-writer, python-architect verifier, editorial-owner, oss-growth-hacker).", + "type": "object", + "required": ["persona", "page"], + "properties": { + "persona": { + "type": "string", + "enum": ["doc-writer", "python-architect", "editorial-owner", "oss-growth-hacker"] + }, + "page": { + "type": ["string", "null"], + "description": "Path of the page this return covers. Null for editorial-owner/growth-hacker which return cross-page." 
+ }, + "drafts": { + "type": "array", + "description": "doc-writer only: per-section before/after pairs.", + "items": { + "type": "object", + "properties": { + "section": {"type": "string"}, + "line_range": {"type": "array", "items": {"type": "integer"}}, + "before": {"type": "string"}, + "after": {"type": "string"} + } + } + }, + "verifications": { + "type": "array", + "description": "python-architect only: claim verification results from S7 tool calls.", + "items": { + "type": "object", + "properties": { + "claim": {"type": "string"}, + "command_run": {"type": "string"}, + "result": {"type": "string", "enum": ["verified", "refuted", "inconclusive"]}, + "evidence": {"type": "string"} + } + } + }, + "tone_fixes": { + "type": "array", + "description": "editorial-owner only: prose edits with before/after.", + "items": { + "type": "object", + "properties": { + "page": {"type": "string"}, + "before": {"type": "string"}, + "after": {"type": "string"}, + "rationale": {"type": "string"} + } + } + }, + "ramp_notes": { + "type": "array", + "description": "oss-growth-hacker only: cold-reader observations.", + "items": { + "type": "object", + "properties": { + "page": {"type": "string"}, + "concern": {"type": "string"}, + "fix": {"type": "string"} + } + } + } + } +} diff --git a/.apm/skills/docs-sync/evals/README.md b/.apm/skills/docs-sync/evals/README.md new file mode 100644 index 000000000..175db4ef0 --- /dev/null +++ b/.apm/skills/docs-sync/evals/README.md @@ -0,0 +1,35 @@ +# docs-sync evals + +This directory holds the eval suite for the `docs-sync` skill, per +the genesis canonical evals doctrine (MODULE ENTRYPOINT primitive). + +## Files + +- `trigger-evals.json` -- 20 dispatch evals (10 should-trigger, + 10 should-NOT-trigger), 60/40 train/val split. The validation + split is the ship gate: rate >= 0.5 on should-trigger AND + < 0.5 on should-not-trigger. 
+ +- `content-evals.json` -- 3 content scenarios (E1 surgical CLI + fix, E2 new flag, E3 new package format) exercised + with_skill vs without_skill to prove value-delta. + +## Ship gates + +The skill is ready to graduate from rung 1 (label-gated) to rung 2 +(default-on) when ALL of these pass: + +1. Trigger-eval val split: rate >= 0.5 on should-trigger AND + < 0.5 on should-not-trigger. +2. Content evals E1, E2, E3 each produce a measurable value-delta + between `with_skill` and `without_skill` runs. +3. Shadow-run on >= 5 recent real PRs in microsoft/apm with + no false-alarm advisories on test-only / CI-only PRs. +4. Cost ceiling (15 LLM calls) not hit on any shadow-run case. + +## Notes + +- Eval execution is currently manual. Future: tie into a CI job + similar to `apm-review-panel/evals/render_eval.py`. +- The shadow-run phase is the most important. Synthetic evals + cannot fully predict classifier accuracy on real PR diffs. diff --git a/.apm/skills/docs-sync/evals/content-evals.json b/.apm/skills/docs-sync/evals/content-evals.json new file mode 100644 index 000000000..487dba7a6 --- /dev/null +++ b/.apm/skills/docs-sync/evals/content-evals.json @@ -0,0 +1,74 @@ +{ + "description": "Content evals for docs-sync. Each scenario is exercised with_skill (docs-sync loaded) and without_skill (no skill, just a generic doc reviewer). 
If outputs are indistinguishable, the skill is not adding value -- redesign or delete (genesis evals doctrine).", + "scenarios": [ + { + "id": "E1-surgical-cli-fix", + "label": "Surgical CLI fix -- no doc impact", + "setup": { + "pr_title": "fix: improve error message when apm install hits 404", + "diff_summary": "src/apm_cli/commands/install.py: change one error string from 'package not found' to 'package not found at '", + "files_changed": ["src/apm_cli/commands/install.py"], + "loc_changed": 3 + }, + "expected_verdict": "no_change", + "expected_cost_ceiling": 2, + "expected_panel_spawns": 0, + "value_delta_hypothesis": "Without the skill, a maintainer might wonder if docs need updating; with it, the L0 gate emits a clean 'no impact' advisory in <1 LLM call." + }, + { + "id": "E2-new-flag-added", + "label": "New flag added -- in-place edit on reference page", + "setup": { + "pr_title": "feat: add --no-cache flag to apm install", + "diff_summary": "src/apm_cli/commands/install.py: add click.option('--no-cache', is_flag=True). Update install logic to bypass the local cache.", + "files_changed": ["src/apm_cli/commands/install.py", "src/apm_cli/install/resolver.py"], + "loc_changed": 47 + }, + "expected_verdict": "in_place", + "expected_scope_pages": ["docs/src/content/docs/consumer/install.md"], + "expected_cost_ceiling": 8, + "expected_panel_spawns": 5, + "expected_panel_outputs": [ + "doc-writer drafts a flag-table entry for --no-cache", + "python-architect verifies via `apm install --help` that the flag exists and the description matches the draft", + "editorial-owner trims any marketing voice", + "growth-hacker checks the flag is mentioned in the consumer ramp, not just the reference", + "CDO confirms the patch fits the consumer promise" + ], + "value_delta_hypothesis": "Without the skill, the flag would ship undocumented and surface in the next user issue ('how do I bypass the cache?'). With the skill, the patch is attached to the PR comment ready to apply." 
+ }, + { + "id": "E3-new-package-format", + "label": "New package format -- structural change", + "setup": { + "pr_title": "feat: add wheel format to apm pack", + "diff_summary": "src/apm_cli/commands/pack.py: add --format wheel support. New module src/apm_cli/pack/wheel.py. Update apm.yml schema to allow format: wheel.", + "files_changed": [ + "src/apm_cli/commands/pack.py", + "src/apm_cli/pack/wheel.py", + "src/apm_cli/models/apm_package.py" + ], + "loc_changed": 312 + }, + "expected_verdict": "structural", + "expected_structural_proposal": { + "new_pages": ["docs/src/content/docs/reference/package-formats/wheel.md"], + "in_place_pages": [ + "docs/src/content/docs/producer/pack-a-bundle.md", + "docs/src/content/docs/reference/package-types/index.md" + ] + }, + "expected_cost_ceiling": 14, + "expected_panel_spawns": 8, + "expected_panel_outputs": [ + "architect proposes new reference page slug, outlines H2 sections, identifies bridges", + "doc-writer drafts the new page outline and the in-place edits on the producer ramp", + "python-architect verifies via `apm pack --help` that --format wheel exists and runs apm pack --format wheel --dry-run on a fixture", + "editorial-owner ensures the new page reads in APM voice", + "growth-hacker checks the new format is referenced from the producer ramp index", + "CDO arbitrates whether 'wheel' belongs in reference/package-formats/ or producer/pack-a-bundle.md sub-section -- chooses based on 3-promise narrative" + ], + "value_delta_hypothesis": "Without the skill, the new format would either be undocumented (silent drift) or get one paragraph crammed into producer/pack-a-bundle.md (concept bloat). With the skill, the structural proposal is on the table at PR-open time and the maintainer ratifies via the docs-sync-confirm label." 
+ } + ] +} diff --git a/.apm/skills/docs-sync/evals/trigger-evals.json b/.apm/skills/docs-sync/evals/trigger-evals.json new file mode 100644 index 000000000..f0d292860 --- /dev/null +++ b/.apm/skills/docs-sync/evals/trigger-evals.json @@ -0,0 +1,41 @@ +{ + "description": "Trigger evals for the docs-sync skill dispatch description. 10 should-trigger + 10 should-NOT-trigger. 60/40 train/val split. Validation split is the ship gate (>=0.5 on should-trigger AND <0.5 on should-not-trigger).", + "should_trigger": { + "train": [ + "PR opened: adds --no-cache flag to apm install", + "PR opened: renames apm pack to apm bundle", + "PR opened: adds new package format wheel to apm pack", + "PR opened: removes the deprecated --legacy-resolver flag from apm install", + "PR opened: adds new schema field 'registry-proxy-url' to apm.yml", + "this PR changes the default value of apm.lock.yaml integrity field" + ], + "val": [ + "PR opened: refactors AuthResolver to support new GHE auth method, changes error messages", + "PR opened: adds new apm verb 'apm graph' that visualizes dependency tree", + "PR opened: changes the format of apm-policy.yml's allowed-sources field", + "PR opened: adds new primitive type 'workflow' to the producer authoring surface" + ] + }, + "should_not_trigger": { + "train": [ + "PR opened: refactor internal hashing helper, no public API change", + "PR opened: add unit tests for AuthResolver edge cases", + "PR opened: bump ruff dev dependency to latest", + "PR opened: fix typo in CHANGELOG.md", + "PR opened: update copilot-setup-steps.yml runner image", + "PR opened: rewrite README intro paragraph (docs-only PR)" + ], + "val": [ + "PR opened: extract _build_git_env helper into separate module (internal refactor)", + "PR opened: fix flaky integration test test_install_concurrent", + "PR opened: rewrite docs/src/content/docs/getting-started/quickstart.md (docs-only PR)", + "PR opened: update .github/instructions/changelog.instructions.md" + ] + }, + "notes": [ + 
"Docs-only PRs explicitly do NOT trigger -- docs-sync reviews CODE PRs for docs impact; docs PRs go through doc-writer review separately.", + "Pure refactors with no user-observable surface change must NOT trigger -- the L0 path gate should catch them.", + "Test-only PRs and CI-only PRs must NOT trigger.", + "The classifier may still emit no_change verdict on a borderline case; the dispatch eval here is about whether the SKILL is invoked, not the verdict." + ] +} diff --git a/.github/agents/cdo.agent.md b/.github/agents/cdo.agent.md new file mode 100644 index 000000000..7e071ff52 --- /dev/null +++ b/.github/agents/cdo.agent.md @@ -0,0 +1,85 @@ +--- +description: >- + APM Chief Documentation Officer. Use this agent as the synthesizer + and final arbiter for any multi-persona docs panel -- holds the + 3-promise narrative (consume / produce / govern), the chapter-start + and chapter-end bridges, the TOC integrity, and the persona ramps + (consumer / producer / enterprise). Activate to synthesize doc-writer + + python-architect + editorial-owner + growth-hacker outputs into a + ship recommendation, or to evaluate TOC-level proposals. +model: claude-opus-4.6 +--- + +# Chief Documentation Officer (CDO) + +You are the editorial director of the APM documentation corpus. Your single responsibility is to hold the **narrative coherence** of the docs site at the level of the whole corpus, while the doc-writer holds the page and the editorial-owner holds the paragraph. + +You are the **synthesizer** in any docs panel. You don't write paragraphs; you decide whether the panel's collective output lands the narrative. + +## The 3-promise narrative + +APM ships three promises, in this order, and the corpus structure must reflect them: + +1. **Consume primitives** -- `apm install` brings agent primitives (skills, agents, instructions, prompts) into your project. This is the consumer ramp; it's the first thing a new user does. +2. 
**Produce primitives** -- `apm pack`, `apm compile`, `apm publish` ship primitives to a marketplace. This is the producer ramp; it requires owning a package. +3. **Govern primitives** -- `apm audit`, policy enforcement, registry proxies, drift detection. This is the enterprise ramp; it requires team or org scale. + +These are the three personas the docs serve. Every page belongs to exactly one of them. Cross-references between them are bridges, not blurs. + +## What you arbitrate + +When the docs-sync panel returns its outputs (doc-writer redrafts, python-architect verification reports, editorial-owner tone notes, growth-hacker ramp notes), you decide: + +1. **Does this land the right promise?** A patch that fits the consumer page but contains producer concepts has leaked. Push back. +2. **Are the chapter-start and chapter-end bridges coherent?** The last paragraph of `consumer/install.md` should naturally lead the reader who wants to go further. The first paragraph of `producer/index.md` should welcome a consumer who decided to author. If those bridges break, the corpus reads like a pile of pages instead of a journey. +3. **Does the patch respect progressive disclosure?** Consumer pages don't pre-teach producer concepts. Producer pages don't pre-teach enterprise concepts. Cross-link, don't inline. +4. **Does the TOC delta (if any) preserve the 3-ramp narrative?** A new page must belong to exactly one ramp. If a contributor proposes a page that straddles two, you split it or rehouse it. + +## How you decide (ALIGNMENT LOOP) + +The panel runs in a bounded loop: + +1. Panel produces drafts + verification + tone + ramp notes. +2. You synthesize. If you agree: emit final report. +3. If you disagree: state the disagreement crisply (which paragraph, which promise it leaks, which bridge it breaks). Send it back. The panel revises. +4. Bounded N <= 3 redrafts. After 3, ship with `cdo_disagreement_noted` flag so the maintainer sees the unresolved tension. 
Better to surface than to suppress. + +You are NOT a perfectionist. The bar is "does this make the corpus more truthful and more cohesive than it was before this PR". Not "is this the ideal paragraph". Ship-with-followups beats ship-never. + +## What you do NOT do + +- You do NOT verify technical claims (python-architect owns S7 tool bridge for that). +- You do NOT redraft paragraphs (doc-writer owns the prose). +- You do NOT tone-check at the paragraph level (editorial-owner owns voice). +- You do NOT decide PR merge (the maintainer owns that -- you are advisory). + +## Output contract when invoked by docs-sync + +When the `docs-sync` skill spawns you as the synthesizer task, you operate under strict rules: + +- You read the persona scope above, the panel returns, the `.apm/docs-index.yml` index, and the diff context passed in. +- You return a SINGLE JSON document with this shape: + +```json +{ + "verdict": "agree" | "revise" | "ship_with_disagreement", + "narrative_assessment": "<2-3 sentence summary of whether the patch lands the 3-promise narrative>", + "bridge_check": { + "chapter_starts_clean": true | false, + "chapter_ends_clean": true | false, + "notes": "" + }, + "toc_integrity": "intact" | "drift" | "improved", + "revisions_requested": [ + {"page": "", "concern": "", "fix": ""} + ], + "ship_recommendation": "" +} +``` + +- You MUST NOT call `gh pr comment`, `gh pr edit`, or any GitHub write command. +- Return JSON as the final message of your task. No prose around the JSON. + +## The bar + +The corpus is a journey, not a pile. Your job is to make sure every PR leaves the journey at least as coherent as it found it. diff --git a/.github/agents/editorial-owner.agent.md b/.github/agents/editorial-owner.agent.md new file mode 100644 index 000000000..7d8bfd439 --- /dev/null +++ b/.github/agents/editorial-owner.agent.md @@ -0,0 +1,82 @@ +--- +description: >- + APM documentation editorial owner. 
Use this agent for tone, voice, + pragmatism, and readability checks across documentation drafts. + Activate whenever doc-writer output needs a final tone-and-clarity + pass before publishing -- catches bloat, abstract jargon, marketing + voice, redundant explanations, and any prose that fails the + "stranger reading at 11pm on Friday" test. +model: claude-opus-4.6 +--- + +# Editorial Owner + +You are the editorial owner for **APM (Agent Package Manager)** documentation. Your single responsibility is to ensure every paragraph that ships under `docs/src/content/docs/` sounds like APM speaks, reads cleanly to a stranger, and earns its words. + +You are NOT the technical reviewer (python-architect verifies claims). You are NOT the narrative steward (CDO holds the 3-promise structure). You are the **voice keeper**. + +## Tone the docs MUST have + +- **Pragmatic, not aspirational.** "Run `apm install` to fetch your dependencies" beats "APM empowers developers to seamlessly orchestrate their primitive ecosystem". +- **Concrete examples first, generalization second.** Show the user one real command, one real `apm.yml`, then explain the shape. Never lead with abstractions. +- **One idea per paragraph.** If a paragraph has two thoughts joined by "and" or "furthermore", split it. +- **Active voice, present tense.** "APM resolves the dependency graph" not "the dependency graph is resolved by APM". +- **Plain English over jargon.** "package" beats "primitive bundle artifact". When jargon is unavoidable (compile, manifest, lockfile), introduce it once with one sentence, then use it. +- **Code is the canonical reference; prose explains intent.** Don't paraphrase what the example already shows. + +## Anti-patterns you flag and fix + +| Smell | Example | Fix | +|---|---|---| +| Marketing voice | "Unlock the power of agent primitives" | "Install agent primitives with `apm install`" | +| Throat-clearing intro | "In this section, we will explore how to..." 
| Just start with the thing | +| Abstract first | "APM is a paradigm for..." | Lead with one command + one outcome | +| Hedging | "You might want to consider perhaps..." | "Run X." or "Don't run X." | +| Redundant restatement | h1 says X, intro paragraph says X again, then code says X | Delete the intro paragraph | +| List-of-features wall | "APM supports A, B, C, D, E, F, G..." | Pick the one that matters HERE; cross-link the rest | +| Tense slip | "You run X. The system will then resolve..." | "You run X. APM resolves..." | +| Passive distance | "It is recommended that users..." | "Use..." or "Don't use..." | +| Unexplained acronym | "Configure your MCP via the manifest" (no anchor) | First mention: spell out + link to glossary entry | +| Wall of prose before code | 4 paragraphs explaining what the example does | One sentence; let the code carry it | +| "Note:" boxes for things that should be in the text | "Note: This requires Python 3.10" | Inline it where it matters | + +## The "stranger at 11pm" test + +Read each draft as if you are a new developer who arrived from a Hacker News link at 11pm on a Friday. You skim. You don't read every word. You scan headings, code blocks, and the first sentence of each paragraph. + +Ask: + +1. **First-sentence test.** Does the first sentence of each paragraph tell me what I'll learn? If I read only first sentences, do I get the gist? +2. **Code-first test.** Within 30 seconds of landing on the page, am I looking at a real example I could copy-paste? +3. **Three-question test.** What three questions does the *next page* answer? The current page should not pre-answer them. +4. **Stranger-vocabulary test.** Every term in the first three paragraphs -- would a competent dev from outside the APM team recognize it without context? + +If any answer is no, the draft needs a revision pass. + +## ASCII-only constraint + +Repo enforces printable ASCII (U+0020-U+007E). 
Reject any: +- Emojis +- Em dashes (U+2014), en dashes (U+2013) -- use `--` or `-` instead +- Curly quotes (U+2018, U+2019, U+201C, U+201D) -- use straight `'` or `"` +- Unicode arrows or box-drawing characters +- Status symbols outside the canonical `[+]`, `[!]`, `[x]`, `[i]`, `[*]`, `[>]` set + +This is non-negotiable -- Windows cp1252 terminals will raise `UnicodeEncodeError` and break the CLI for those users. + +## Output contract when invoked by docs-sync + +When the `docs-sync` skill spawns you as a panelist task, you operate under strict rules: + +- You read the persona scope above and the doc draft(s) passed in the task prompt. +- You return findings in TWO buckets: + - `tone_fixes`: specific prose edits with file:line citations. Format each as `BEFORE: "..."` and `AFTER: "..."`. + - `editorial_notes`: structural observations (paragraph order, missing examples, redundancy across pages). One-line each. +- You MUST NOT call `gh pr comment`, `gh pr edit`, or any GitHub write command. +- You MUST NOT touch the PR state. The orchestrator is the sole writer. +- Return JSON as the final message of your task. No prose around the JSON. +- If a draft is already clean, return `{tone_fixes: [], editorial_notes: []}`. That is preferred over inventing nits. + +## The bar + +Every paragraph ships ONLY if it earns its words. "Would I miss this paragraph if it was deleted?" -- if no, delete it. If yes, why? 
diff --git a/.github/workflows/docs-sync.lock.yml b/.github/workflows/docs-sync.lock.yml new file mode 100644 index 000000000..8a6feaab5 --- /dev/null +++ b/.github/workflows/docs-sync.lock.yml @@ -0,0 +1,1558 @@ +# gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"289fa02cf3ae69ffb9040564cff8dcdb4395ea1b60c43caaee3d5e33ba250057","compiler_version":"v0.71.5","strict":true,"agent_id":"copilot"} +# gh-aw-manifest: {"version":1,"secrets":["COPILOT_GITHUB_TOKEN","GH_AW_GITHUB_MCP_SERVER_TOKEN","GH_AW_GITHUB_TOKEN","GH_AW_PLUGINS_TOKEN","GITHUB_TOKEN"],"actions":[{"repo":"actions/checkout","sha":"de0fac2e4500dabe0009e67214ff5f5447ce83dd","version":"v6.0.2"},{"repo":"actions/create-github-app-token","sha":"1b10c78c7865c340bc4f6099eb2f838309f1e8c3","version":"v3.1.1"},{"repo":"actions/download-artifact","sha":"3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c","version":"v8.0.1"},{"repo":"actions/github-script","sha":"373c709c69115d41ff229c7e5df9f8788daa9553","version":"v9"},{"repo":"actions/github-script","sha":"3a2844b7e9c422d3c10d287c895573f7108da1b3","version":"v9.0.0"},{"repo":"actions/setup-node","sha":"48b55a011bda9f5d6aeb4c2d9c7362e8dae4041e","version":"v6.4.0"},{"repo":"actions/upload-artifact","sha":"043fb46d1a93c77aae656e7c1c64a875d1fc6a0a","version":"v7"},{"repo":"github/gh-aw-actions/setup","sha":"b8068426813005612b960b5ab0b8bd2c27142323","version":"v0.71.5"},{"repo":"microsoft/apm-action","sha":"b48dd081eb0050f6d7f32d0e7caa0a59a2d419fd","version":"v1.7.2"},{"repo":"ruby/setup-ruby","sha":"c4e5b1316158f92e3d49443a9d58b31d25ac0f8f","version":"v1.306.0"}],"containers":[{"image":"ghcr.io/github/gh-aw-firewall/agent:0.25.40","digest":"sha256:14ff567e8d9d4c2fbc5e55c973488381c71d7e0fdbe72d30ee7b8a738fd86504","pinned_image":"ghcr.io/github/gh-aw-firewall/agent:0.25.40@sha256:14ff567e8d9d4c2fbc5e55c973488381c71d7e0fdbe72d30ee7b8a738fd86504"},{"image":"ghcr.io/github/gh-aw-firewall/api-proxy:0.25.40","digest":"sha256:2883ca3e5ae9f330cafdd9345bfd4ae17fc8da36c96d4c9a1f76
e922b4c45280","pinned_image":"ghcr.io/github/gh-aw-firewall/api-proxy:0.25.40@sha256:2883ca3e5ae9f330cafdd9345bfd4ae17fc8da36c96d4c9a1f76e922b4c45280"},{"image":"ghcr.io/github/gh-aw-firewall/squid:0.25.40","digest":"sha256:b084f4a2c771f584ee68084ced52fa6b3245197a1889645d817462d307d3ac51","pinned_image":"ghcr.io/github/gh-aw-firewall/squid:0.25.40@sha256:b084f4a2c771f584ee68084ced52fa6b3245197a1889645d817462d307d3ac51"},{"image":"ghcr.io/github/gh-aw-mcpg:v0.3.6","digest":"sha256:2bb8eef86006a4c5963c55616a9c51c32f27bfdecb023b8aa6f91f6718d9171c","pinned_image":"ghcr.io/github/gh-aw-mcpg:v0.3.6@sha256:2bb8eef86006a4c5963c55616a9c51c32f27bfdecb023b8aa6f91f6718d9171c"},{"image":"ghcr.io/github/github-mcp-server:v1.0.3","digest":"sha256:2ac27ef03461ef2b877031b838a7d1fd7f12b12d4ace7796d8cad91446d55959","pinned_image":"ghcr.io/github/github-mcp-server:v1.0.3@sha256:2ac27ef03461ef2b877031b838a7d1fd7f12b12d4ace7796d8cad91446d55959"},{"image":"node:lts-alpine","digest":"sha256:d1b3b4da11eefd5941e7f0b9cf17783fc99d9c6fc34884a665f40a06dbdfc94f","pinned_image":"node:lts-alpine@sha256:d1b3b4da11eefd5941e7f0b9cf17783fc99d9c6fc34884a665f40a06dbdfc94f"}]} +# ___ _ _ +# / _ \ | | (_) +# | |_| | __ _ ___ _ __ | |_ _ ___ +# | _ |/ _` |/ _ \ '_ \| __| |/ __| +# | | | | (_| | __/ | | | |_| | (__ +# \_| |_/\__, |\___|_| |_|\__|_|\___| +# __/ | +# _ _ |___/ +# | | | | / _| | +# | | | | ___ _ __ _ __| |_| | _____ ____ +# | |/\| |/ _ \ '__| |/ /| _| |/ _ \ \ /\ / / ___| +# \ /\ / (_) | | | | ( | | | | (_) \ V V /\__ \ +# \/ \/ \___/|_| |_|\_\|_| |_|\___/ \_/\_/ |___/ +# +# This file was automatically generated by gh-aw (v0.71.5). DO NOT EDIT. +# +# To update this file, edit the corresponding .md file and run: +# gh aw compile +# Not all edits will cause changes to this file. +# +# For more information: https://github.github.com/gh-aw/introduction/overview/ +# +# Per-PR documentation impact panel; posts a single advisory recommendation comment. 
+# +# Resolved workflow manifest: +# Imports: +# - shared/apm.md +# +# Secrets used: +# - COPILOT_GITHUB_TOKEN +# - GH_AW_GITHUB_MCP_SERVER_TOKEN +# - GH_AW_GITHUB_TOKEN +# - GH_AW_PLUGINS_TOKEN +# - GITHUB_TOKEN +# +# Custom actions used: +# - actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 +# - actions/create-github-app-token@1b10c78c7865c340bc4f6099eb2f838309f1e8c3 # v3.1.1 +# - actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1 +# - actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 +# - actions/github-script@3a2844b7e9c422d3c10d287c895573f7108da1b3 # v9.0.0 +# - actions/setup-node@48b55a011bda9f5d6aeb4c2d9c7362e8dae4041e # v6.4.0 +# - actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7 +# - actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1 +# - github/gh-aw-actions/setup@b8068426813005612b960b5ab0b8bd2c27142323 # v0.71.5 +# - microsoft/apm-action@b48dd081eb0050f6d7f32d0e7caa0a59a2d419fd # v1.7.2 +# - ruby/setup-ruby@c4e5b1316158f92e3d49443a9d58b31d25ac0f8f # v1.306.0 +# +# Container images used: +# - ghcr.io/github/gh-aw-firewall/agent:0.25.40@sha256:14ff567e8d9d4c2fbc5e55c973488381c71d7e0fdbe72d30ee7b8a738fd86504 +# - ghcr.io/github/gh-aw-firewall/api-proxy:0.25.40@sha256:2883ca3e5ae9f330cafdd9345bfd4ae17fc8da36c96d4c9a1f76e922b4c45280 +# - ghcr.io/github/gh-aw-firewall/squid:0.25.40@sha256:b084f4a2c771f584ee68084ced52fa6b3245197a1889645d817462d307d3ac51 +# - ghcr.io/github/gh-aw-mcpg:v0.3.6@sha256:2bb8eef86006a4c5963c55616a9c51c32f27bfdecb023b8aa6f91f6718d9171c +# - ghcr.io/github/github-mcp-server:v1.0.3@sha256:2ac27ef03461ef2b877031b838a7d1fd7f12b12d4ace7796d8cad91446d55959 +# - node:lts-alpine@sha256:d1b3b4da11eefd5941e7f0b9cf17783fc99d9c6fc34884a665f40a06dbdfc94f + +name: "Docs Sync Advisory" +"on": + pull_request_target: + types: + - labeled + # roles: # Roles processed as role check in pre-activation job + # - admin # Roles processed as role check in 
pre-activation job + # - maintainer # Roles processed as role check in pre-activation job + # - write # Roles processed as role check in pre-activation job + workflow_dispatch: + inputs: + aw_context: + default: "" + description: Agent caller context (used internally by Agentic Workflows). + required: false + type: string + pr_number: + description: Pull request number to assess (works for fork PRs) + required: true + type: string + +permissions: {} + +concurrency: + group: "gh-aw-${{ github.workflow }}-${{ github.event.pull_request.number || github.ref || github.run_id }}" + cancel-in-progress: true + +run-name: "Docs Sync Advisory" + +jobs: + activation: + needs: pre_activation + if: > + needs.pre_activation.outputs.activated == 'true' && (github.event_name == 'workflow_dispatch' || github.event.label.name == 'docs-sync') + runs-on: ubuntu-slim + permissions: + actions: read + contents: read + outputs: + body: ${{ steps.sanitized.outputs.body }} + comment_id: "" + comment_repo: "" + engine_id: ${{ steps.generate_aw_info.outputs.engine_id }} + lockdown_check_failed: ${{ steps.generate_aw_info.outputs.lockdown_check_failed == 'true' }} + model: ${{ steps.generate_aw_info.outputs.model }} + secret_verification_result: ${{ steps.validate-secret.outputs.verification_result }} + setup-trace-id: ${{ steps.setup.outputs.trace-id }} + stale_lock_file_failed: ${{ steps.check-lock-file.outputs.stale_lock_file_failed == 'true' }} + text: ${{ steps.sanitized.outputs.text }} + title: ${{ steps.sanitized.outputs.title }} + steps: + - name: Setup Scripts + id: setup + uses: github/gh-aw-actions/setup@b8068426813005612b960b5ab0b8bd2c27142323 # v0.71.5 + with: + destination: ${{ runner.temp }}/gh-aw/actions + job-name: ${{ github.job }} + trace-id: ${{ needs.pre_activation.outputs.setup-trace-id }} + env: + GH_AW_SETUP_WORKFLOW_NAME: "Docs Sync Advisory" + GH_AW_CURRENT_WORKFLOW_REF: ${{ github.repository }}/.github/workflows/docs-sync.lock.yml@${{ github.ref }} + 
GH_AW_INFO_VERSION: "1.0.40" + - name: Generate agentic run info + id: generate_aw_info + env: + GH_AW_INFO_ENGINE_ID: "copilot" + GH_AW_INFO_ENGINE_NAME: "GitHub Copilot CLI" + GH_AW_INFO_MODEL: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || 'claude-sonnet-4.6' }} + GH_AW_INFO_VERSION: "1.0.40" + GH_AW_INFO_AGENT_VERSION: "1.0.40" + GH_AW_INFO_CLI_VERSION: "v0.71.5" + GH_AW_INFO_WORKFLOW_NAME: "Docs Sync Advisory" + GH_AW_INFO_EXPERIMENTAL: "false" + GH_AW_INFO_SUPPORTS_TOOLS_ALLOWLIST: "true" + GH_AW_INFO_STAGED: "false" + GH_AW_INFO_ALLOWED_DOMAINS: '["defaults","github"]' + GH_AW_INFO_FIREWALL_ENABLED: "true" + GH_AW_INFO_AWF_VERSION: "v0.25.40" + GH_AW_INFO_AWMG_VERSION: "" + GH_AW_INFO_FIREWALL_TYPE: "squid" + GH_AW_COMPILED_STRICT: "true" + uses: actions/github-script@3a2844b7e9c422d3c10d287c895573f7108da1b3 # v9.0.0 + with: + script: | + const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io, getOctokit); + const { main } = require('${{ runner.temp }}/gh-aw/actions/generate_aw_info.cjs'); + await main(core, context); + - name: Validate COPILOT_GITHUB_TOKEN secret + id: validate-secret + run: bash "${RUNNER_TEMP}/gh-aw/actions/validate_multi_secret.sh" COPILOT_GITHUB_TOKEN 'GitHub Copilot CLI' https://github.github.com/gh-aw/reference/engines/#github-copilot-default + env: + COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }} + - name: Checkout .github and .agents folders + uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 + with: + persist-credentials: false + sparse-checkout: | + .github + .agents + .claude + .codex + .crush + .gemini + .opencode + .pi + sparse-checkout-cone-mode: true + fetch-depth: 1 + - name: Save agent config folders for base branch restoration + env: + GH_AW_AGENT_FOLDERS: ".agents .claude .codex .crush .gemini .github .opencode .pi" + GH_AW_AGENT_FILES: ".crush.json AGENTS.md CLAUDE.md GEMINI.md PI.md opencode.jsonc" + # poutine:ignore 
untrusted_checkout_exec + run: bash "${RUNNER_TEMP}/gh-aw/actions/save_base_github_folders.sh" + - name: Check workflow lock file + id: check-lock-file + uses: actions/github-script@3a2844b7e9c422d3c10d287c895573f7108da1b3 # v9.0.0 + env: + GH_AW_WORKFLOW_FILE: "docs-sync.lock.yml" + GH_AW_CONTEXT_WORKFLOW_REF: "${{ github.workflow_ref }}" + with: + script: | + const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io, getOctokit); + const { main } = require('${{ runner.temp }}/gh-aw/actions/check_workflow_timestamp_api.cjs'); + await main(); + - name: Check compile-agentic version + uses: actions/github-script@3a2844b7e9c422d3c10d287c895573f7108da1b3 # v9.0.0 + env: + GH_AW_COMPILED_VERSION: "v0.71.5" + with: + script: | + const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io, getOctokit); + const { main } = require('${{ runner.temp }}/gh-aw/actions/check_version_updates.cjs'); + await main(); + - name: Compute current body text + id: sanitized + uses: actions/github-script@3a2844b7e9c422d3c10d287c895573f7108da1b3 # v9.0.0 + env: + GH_AW_ALLOWED_DOMAINS: 
"*.githubusercontent.com,api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,codeload.github.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,docs.github.com,github-cloud.githubusercontent.com,github-cloud.s3.amazonaws.com,github.blog,github.com,github.githubassets.com,host.docker.internal,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,lfs.github.com,objects.githubusercontent.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,registry.npmjs.org,s.symcb.com,s.symcd.com,security.ubuntu.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com,www.googleapis.com" + with: + script: | + const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io, getOctokit); + const { main } = require('${{ runner.temp }}/gh-aw/actions/compute_text.cjs'); + await main(); + - name: Create prompt with built-in context + env: + GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt + GH_AW_SAFE_OUTPUTS: ${{ runner.temp }}/gh-aw/safeoutputs/outputs.jsonl + GH_AW_EXPR_A0E5D436: ${{ github.event.pull_request.number || inputs.pr_number }} + GH_AW_GITHUB_ACTOR: ${{ github.actor }} + GH_AW_GITHUB_EVENT_COMMENT_ID: ${{ github.event.comment.id }} + GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: ${{ github.event.discussion.number }} + GH_AW_GITHUB_EVENT_ISSUE_NUMBER: ${{ github.event.issue.number }} + GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: ${{ github.event.pull_request.number }} + GH_AW_GITHUB_REPOSITORY: ${{ 
github.repository }} + GH_AW_GITHUB_RUN_ID: ${{ github.run_id }} + GH_AW_GITHUB_WORKSPACE: ${{ github.workspace }} + # poutine:ignore untrusted_checkout_exec + run: | + bash "${RUNNER_TEMP}/gh-aw/actions/create_prompt_first.sh" + { + cat << 'GH_AW_PROMPT_940262bde32bec5c_EOF' + + GH_AW_PROMPT_940262bde32bec5c_EOF + cat "${RUNNER_TEMP}/gh-aw/prompts/xpia.md" + cat "${RUNNER_TEMP}/gh-aw/prompts/temp_folder_prompt.md" + cat "${RUNNER_TEMP}/gh-aw/prompts/markdown.md" + cat "${RUNNER_TEMP}/gh-aw/prompts/safe_outputs_prompt.md" + cat << 'GH_AW_PROMPT_940262bde32bec5c_EOF' + + Tools: add_comment(max:2), remove_labels, missing_tool, missing_data, noop + + GH_AW_PROMPT_940262bde32bec5c_EOF + cat "${RUNNER_TEMP}/gh-aw/prompts/mcp_cli_tools_prompt.md" + cat << 'GH_AW_PROMPT_940262bde32bec5c_EOF' + + The following GitHub context information is available for this workflow: + {{#if __GH_AW_GITHUB_ACTOR__ }} + - **actor**: __GH_AW_GITHUB_ACTOR__ + {{/if}} + {{#if __GH_AW_GITHUB_REPOSITORY__ }} + - **repository**: __GH_AW_GITHUB_REPOSITORY__ + {{/if}} + {{#if __GH_AW_GITHUB_WORKSPACE__ }} + - **workspace**: __GH_AW_GITHUB_WORKSPACE__ + {{/if}} + {{#if __GH_AW_GITHUB_EVENT_ISSUE_NUMBER__ }} + - **issue-number**: #__GH_AW_GITHUB_EVENT_ISSUE_NUMBER__ + {{/if}} + {{#if __GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER__ }} + - **discussion-number**: #__GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER__ + {{/if}} + {{#if __GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER__ }} + - **pull-request-number**: #__GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER__ + {{/if}} + {{#if __GH_AW_GITHUB_EVENT_COMMENT_ID__ }} + - **comment-id**: __GH_AW_GITHUB_EVENT_COMMENT_ID__ + {{/if}} + {{#if __GH_AW_GITHUB_RUN_ID__ }} + - **workflow-run-id**: __GH_AW_GITHUB_RUN_ID__ + {{/if}} + + + GH_AW_PROMPT_940262bde32bec5c_EOF + cat "${RUNNER_TEMP}/gh-aw/prompts/github_mcp_tools_with_safeoutputs_prompt.md" + cat << 'GH_AW_PROMPT_940262bde32bec5c_EOF' + + + + + {{#runtime-import .github/workflows/docs-sync.md}} + GH_AW_PROMPT_940262bde32bec5c_EOF + } 
> "$GH_AW_PROMPT" + - name: Interpolate variables and render templates + uses: actions/github-script@3a2844b7e9c422d3c10d287c895573f7108da1b3 # v9.0.0 + env: + GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt + GH_AW_ENGINE_ID: "copilot" + GH_AW_EXPR_A0E5D436: ${{ github.event.pull_request.number || inputs.pr_number }} + GH_AW_GITHUB_REPOSITORY: ${{ github.repository }} + with: + script: | + const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io, getOctokit); + const { main } = require('${{ runner.temp }}/gh-aw/actions/interpolate_prompt.cjs'); + await main(); + - name: Substitute placeholders + uses: actions/github-script@3a2844b7e9c422d3c10d287c895573f7108da1b3 # v9.0.0 + env: + GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt + GH_AW_EXPR_A0E5D436: ${{ github.event.pull_request.number || inputs.pr_number }} + GH_AW_GITHUB_ACTOR: ${{ github.actor }} + GH_AW_GITHUB_EVENT_COMMENT_ID: ${{ github.event.comment.id }} + GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: ${{ github.event.discussion.number }} + GH_AW_GITHUB_EVENT_ISSUE_NUMBER: ${{ github.event.issue.number }} + GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: ${{ github.event.pull_request.number }} + GH_AW_GITHUB_REPOSITORY: ${{ github.repository }} + GH_AW_GITHUB_RUN_ID: ${{ github.run_id }} + GH_AW_GITHUB_WORKSPACE: ${{ github.workspace }} + GH_AW_MCP_CLI_SERVERS_LIST: '- `safeoutputs` — run `safeoutputs --help` to see available tools' + GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_ACTIVATED: ${{ needs.pre_activation.outputs.activated }} + with: + script: | + const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io, getOctokit); + + const substitutePlaceholders = require('${{ runner.temp }}/gh-aw/actions/substitute_placeholders.cjs'); + + // Call the substitution function + return await substitutePlaceholders({ + file: process.env.GH_AW_PROMPT, + substitutions: { + 
GH_AW_EXPR_A0E5D436: process.env.GH_AW_EXPR_A0E5D436, + GH_AW_GITHUB_ACTOR: process.env.GH_AW_GITHUB_ACTOR, + GH_AW_GITHUB_EVENT_COMMENT_ID: process.env.GH_AW_GITHUB_EVENT_COMMENT_ID, + GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: process.env.GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER, + GH_AW_GITHUB_EVENT_ISSUE_NUMBER: process.env.GH_AW_GITHUB_EVENT_ISSUE_NUMBER, + GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: process.env.GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER, + GH_AW_GITHUB_REPOSITORY: process.env.GH_AW_GITHUB_REPOSITORY, + GH_AW_GITHUB_RUN_ID: process.env.GH_AW_GITHUB_RUN_ID, + GH_AW_GITHUB_WORKSPACE: process.env.GH_AW_GITHUB_WORKSPACE, + GH_AW_MCP_CLI_SERVERS_LIST: process.env.GH_AW_MCP_CLI_SERVERS_LIST, + GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_ACTIVATED: process.env.GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_ACTIVATED + } + }); + - name: Validate prompt placeholders + env: + GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt + # poutine:ignore untrusted_checkout_exec + run: bash "${RUNNER_TEMP}/gh-aw/actions/validate_prompt_placeholders.sh" + - name: Print prompt + env: + GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt + # poutine:ignore untrusted_checkout_exec + run: bash "${RUNNER_TEMP}/gh-aw/actions/print_prompt_summary.sh" + - name: Upload activation artifact + if: success() + uses: actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1 + with: + name: activation + include-hidden-files: true + path: | + /tmp/gh-aw/aw_info.json + /tmp/gh-aw/aw-prompts/prompt.txt + /tmp/gh-aw/github_rate_limits.jsonl + /tmp/gh-aw/base + if-no-files-found: ignore + retention-days: 1 + + agent: + needs: + - activation + - apm + - apm-prep + runs-on: ubuntu-latest + permissions: + contents: read + issues: read + pull-requests: read + env: + DEFAULT_BRANCH: ${{ github.event.repository.default_branch }} + GH_AW_ASSETS_ALLOWED_EXTS: "" + GH_AW_ASSETS_BRANCH: "" + GH_AW_ASSETS_MAX_SIZE_KB: 0 + GH_AW_MCP_LOG_DIR: /tmp/gh-aw/mcp-logs/safeoutputs + GH_AW_WORKFLOW_ID_SANITIZED: docssync + outputs: + 
agentic_engine_timeout: ${{ steps.detect-copilot-errors.outputs.agentic_engine_timeout || 'false' }} + checkout_pr_success: ${{ steps.checkout-pr.outputs.checkout_pr_success || 'true' }} + effective_tokens: ${{ steps.parse-mcp-gateway.outputs.effective_tokens }} + has_patch: ${{ steps.collect_output.outputs.has_patch }} + inference_access_error: ${{ steps.detect-copilot-errors.outputs.inference_access_error || 'false' }} + mcp_policy_error: ${{ steps.detect-copilot-errors.outputs.mcp_policy_error || 'false' }} + model: ${{ needs.activation.outputs.model }} + model_not_supported_error: ${{ steps.detect-copilot-errors.outputs.model_not_supported_error || 'false' }} + output: ${{ steps.collect_output.outputs.output }} + output_types: ${{ steps.collect_output.outputs.output_types }} + setup-trace-id: ${{ steps.setup.outputs.trace-id }} + steps: + - name: Setup Scripts + id: setup + uses: github/gh-aw-actions/setup@b8068426813005612b960b5ab0b8bd2c27142323 # v0.71.5 + with: + destination: ${{ runner.temp }}/gh-aw/actions + job-name: ${{ github.job }} + trace-id: ${{ needs.activation.outputs.setup-trace-id }} + env: + GH_AW_SETUP_WORKFLOW_NAME: "Docs Sync Advisory" + GH_AW_CURRENT_WORKFLOW_REF: ${{ github.repository }}/.github/workflows/docs-sync.lock.yml@${{ github.ref }} + GH_AW_INFO_VERSION: "1.0.40" + - name: Set runtime paths + id: set-runtime-paths + run: | + { + echo "GH_AW_SAFE_OUTPUTS=${RUNNER_TEMP}/gh-aw/safeoutputs/outputs.jsonl" + echo "GH_AW_SAFE_OUTPUTS_CONFIG_PATH=${RUNNER_TEMP}/gh-aw/safeoutputs/config.json" + echo "GH_AW_SAFE_OUTPUTS_TOOLS_PATH=${RUNNER_TEMP}/gh-aw/safeoutputs/tools.json" + } >> "$GITHUB_OUTPUT" + - name: Setup Ruby + uses: ruby/setup-ruby@c4e5b1316158f92e3d49443a9d58b31d25ac0f8f # v1.306.0 + with: + ruby-version: '3.3' + - name: Create gh-aw temp directory + run: bash "${RUNNER_TEMP}/gh-aw/actions/create_gh_aw_tmp_dir.sh" + - name: Configure gh CLI for GitHub Enterprise + run: bash "${RUNNER_TEMP}/gh-aw/actions/configure_gh_for_ghe.sh" + 
env: + GH_TOKEN: ${{ github.token }} + - name: Download APM bundle artifacts (all groups) + uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1 + with: + merge-multiple: false + path: /tmp/gh-aw/apm-bundles + pattern: ${{ needs.activation.outputs.artifact_prefix }}apm-* + - env: + ARTIFACT_PREFIX: ${{ needs.activation.outputs.artifact_prefix }} + EXPECTED_MATRIX: ${{ needs.apm-prep.outputs.matrix }} + name: Normalise bundle layout (single-artifact flatten workaround) + run: "set -euo pipefail\n# actions/download-artifact (>=v5) flattens contents directly into `path/`\n# whenever exactly one artifact matches the pattern, ignoring\n# `merge-multiple: false`. Re-shape into the per-group subdir layout so\n# downstream validation sees a stable structure regardless of matrix size.\n# Upstream reference:\n# https://github.com/actions/download-artifact/blob/v8.0.1/src/download-artifact.ts\n# (see the `isSingleArtifactDownload || mergeMultiple || artifacts.length === 1`\n# branch). Remove this step once download-artifact stops flattening or\n# exposes an opt-out.\nexpected_count=$(echo \"$EXPECTED_MATRIX\" | jq '.group // [] | length')\nif [ \"$expected_count\" -eq 1 ]; then\n group_id=$(echo \"$EXPECTED_MATRIX\" | jq -r '.group[0].id')\n # Defence-in-depth: group_id is interpolated into a shell path. apm-prep\n # produces a sanitised id today, but enforce a strict allowlist here so\n # any future schema drift cannot smuggle traversal sequences.\n if ! printf '%s' \"$group_id\" | grep -Eq '^[A-Za-z0-9_-]+$'; then\n echo \"::error::unsafe group_id '$group_id' (must match ^[A-Za-z0-9_-]+$)\"\n exit 1\n fi\n group_dir=\"/tmp/gh-aw/apm-bundles/${ARTIFACT_PREFIX}apm-${group_id}\"\n if [ ! -d \"$group_dir\" ]; then\n mkdir -p \"$group_dir\"\n find /tmp/gh-aw/apm-bundles -mindepth 1 -maxdepth 1 ! 
-path \"$group_dir\" -exec mv {} \"$group_dir/\" \\;\n fi\nfi\n" + - env: + ARTIFACT_PREFIX: ${{ needs.activation.outputs.artifact_prefix }} + EXPECTED_MATRIX: ${{ needs.apm-prep.outputs.matrix }} + name: Validate downloaded bundles match matrix manifest + run: "set -euo pipefail\nexpected=$(echo \"$EXPECTED_MATRIX\" | jq -r --arg prefix \"$ARTIFACT_PREFIX\" '.group | map($prefix + \"apm-\" + .id) | sort | .[]')\nactual=$(ls /tmp/gh-aw/apm-bundles | sort)\nmissing=$(comm -23 <(echo \"$expected\") <(echo \"$actual\") || true)\nunexpected=$(comm -13 <(echo \"$expected\") <(echo \"$actual\") || true)\nif [ -n \"$missing\" ]; then\n echo \"::error::missing APM bundles (group did not pack successfully): $missing\"\n exit 1\nfi\nif [ -n \"$unexpected\" ]; then\n echo \"::error::unexpected artifact in apm bundle download (collision attack?): $unexpected\"\n exit 1\nfi\n" + - id: bundles + name: Build bundle list + run: "set -euo pipefail\nmapfile -t list < <(find /tmp/gh-aw/apm-bundles -name '*.tar.gz' | sort)\n[ ${#list[@]} -gt 0 ] || { echo '::error::no apm bundles found'; exit 1; }\nprintf '%s\\n' \"${list[@]}\" > /tmp/gh-aw/apm-bundle-list.txt\n" + - name: Restore APM packages (all bundles) + uses: microsoft/apm-action@b48dd081eb0050f6d7f32d0e7caa0a59a2d419fd # v1.7.2 + with: + bundles-file: /tmp/gh-aw/apm-bundle-list.txt + + - name: Checkout PR branch + id: checkout-pr + if: | + github.event.pull_request || github.event.issue.pull_request + uses: actions/github-script@3a2844b7e9c422d3c10d287c895573f7108da1b3 # v9.0.0 + env: + GH_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN || secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} + with: + github-token: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN || secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} + script: | + const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io, getOctokit); + const { main } = require('${{ runner.temp 
}}/gh-aw/actions/checkout_pr_branch.cjs'); + await main(); + - name: Install GitHub Copilot CLI + run: bash "${RUNNER_TEMP}/gh-aw/actions/install_copilot_cli.sh" 1.0.40 + env: + GH_HOST: github.com + - name: Install AWF binary + run: bash "${RUNNER_TEMP}/gh-aw/actions/install_awf_binary.sh" v0.25.40 + - name: Determine automatic lockdown mode for GitHub MCP Server + id: determine-automatic-lockdown + uses: actions/github-script@373c709c69115d41ff229c7e5df9f8788daa9553 # v9 + env: + GH_AW_GITHUB_TOKEN: ${{ secrets.GH_AW_GITHUB_TOKEN }} + GH_AW_GITHUB_MCP_SERVER_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN }} + with: + script: | + const determineAutomaticLockdown = require('${{ runner.temp }}/gh-aw/actions/determine_automatic_lockdown.cjs'); + await determineAutomaticLockdown(github, context, core); + - name: Download activation artifact + uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1 + with: + name: activation + path: /tmp/gh-aw + - name: Restore agent config folders from base branch + if: steps.checkout-pr.outcome == 'success' + env: + GH_AW_AGENT_FOLDERS: ".agents .claude .codex .crush .gemini .github .opencode .pi" + GH_AW_AGENT_FILES: ".crush.json AGENTS.md CLAUDE.md GEMINI.md PI.md opencode.jsonc" + run: bash "${RUNNER_TEMP}/gh-aw/actions/restore_base_github_folders.sh" + - name: Download container images + run: bash "${RUNNER_TEMP}/gh-aw/actions/download_docker_images.sh" ghcr.io/github/gh-aw-firewall/agent:0.25.40@sha256:14ff567e8d9d4c2fbc5e55c973488381c71d7e0fdbe72d30ee7b8a738fd86504 ghcr.io/github/gh-aw-firewall/api-proxy:0.25.40@sha256:2883ca3e5ae9f330cafdd9345bfd4ae17fc8da36c96d4c9a1f76e922b4c45280 ghcr.io/github/gh-aw-firewall/squid:0.25.40@sha256:b084f4a2c771f584ee68084ced52fa6b3245197a1889645d817462d307d3ac51 ghcr.io/github/gh-aw-mcpg:v0.3.6@sha256:2bb8eef86006a4c5963c55616a9c51c32f27bfdecb023b8aa6f91f6718d9171c 
ghcr.io/github/github-mcp-server:v1.0.3@sha256:2ac27ef03461ef2b877031b838a7d1fd7f12b12d4ace7796d8cad91446d55959 node:lts-alpine@sha256:d1b3b4da11eefd5941e7f0b9cf17783fc99d9c6fc34884a665f40a06dbdfc94f + - name: Generate Safe Outputs Config + run: | + mkdir -p "${RUNNER_TEMP}/gh-aw/safeoutputs" + mkdir -p /tmp/gh-aw/safeoutputs + mkdir -p /tmp/gh-aw/mcp-logs/safeoutputs + cat > "${RUNNER_TEMP}/gh-aw/safeoutputs/config.json" << 'GH_AW_SAFE_OUTPUTS_CONFIG_958c8bb971e6a6e0_EOF' + {"add_comment":{"max":2},"create_report_incomplete_issue":{},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"true"},"remove_labels":{"allowed":["docs-sync"],"max":1},"report_incomplete":{}} + GH_AW_SAFE_OUTPUTS_CONFIG_958c8bb971e6a6e0_EOF + - name: Generate Safe Outputs Tools + env: + GH_AW_TOOLS_META_JSON: | + { + "description_suffixes": { + "add_comment": " CONSTRAINTS: Maximum 2 comment(s) can be added. Supports reply_to_id for discussion threading.", + "remove_labels": " CONSTRAINTS: Maximum 1 label(s) can be removed. Only these labels can be removed: [docs-sync]." 
+ }, + "repo_params": {}, + "dynamic_tools": [] + } + GH_AW_VALIDATION_JSON: | + { + "add_comment": { + "defaultMax": 1, + "fields": { + "body": { + "required": true, + "type": "string", + "sanitize": true, + "maxLength": 65000 + }, + "item_number": { + "issueOrPRNumber": true + }, + "reply_to_id": { + "type": "string", + "maxLength": 256 + }, + "repo": { + "type": "string", + "maxLength": 256 + } + } + }, + "missing_data": { + "defaultMax": 20, + "fields": { + "alternatives": { + "type": "string", + "sanitize": true, + "maxLength": 256 + }, + "context": { + "type": "string", + "sanitize": true, + "maxLength": 256 + }, + "data_type": { + "type": "string", + "sanitize": true, + "maxLength": 128 + }, + "reason": { + "type": "string", + "sanitize": true, + "maxLength": 256 + } + } + }, + "missing_tool": { + "defaultMax": 20, + "fields": { + "alternatives": { + "type": "string", + "sanitize": true, + "maxLength": 512 + }, + "reason": { + "required": true, + "type": "string", + "sanitize": true, + "maxLength": 256 + }, + "tool": { + "type": "string", + "sanitize": true, + "maxLength": 128 + } + } + }, + "noop": { + "defaultMax": 1, + "fields": { + "message": { + "required": true, + "type": "string", + "sanitize": true, + "maxLength": 65000 + } + } + }, + "remove_labels": { + "defaultMax": 5, + "fields": { + "item_number": { + "issueNumberOrTemporaryId": true + }, + "labels": { + "required": true, + "type": "array", + "itemType": "string", + "itemSanitize": true, + "itemMaxLength": 128 + }, + "repo": { + "type": "string", + "maxLength": 256 + } + } + }, + "report_incomplete": { + "defaultMax": 5, + "fields": { + "details": { + "type": "string", + "sanitize": true, + "maxLength": 65000 + }, + "reason": { + "required": true, + "type": "string", + "sanitize": true, + "maxLength": 1024 + } + } + } + } + uses: actions/github-script@3a2844b7e9c422d3c10d287c895573f7108da1b3 # v9.0.0 + with: + script: | + const { setupGlobals } = require('${{ runner.temp 
}}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io, getOctokit); + const { main } = require('${{ runner.temp }}/gh-aw/actions/generate_safe_outputs_tools.cjs'); + await main(); + - name: Generate Safe Outputs MCP Server Config + id: safe-outputs-config + run: | + # Generate a secure random API key (360 bits of entropy, 40+ chars) + # Mask immediately to prevent timing vulnerabilities + API_KEY=$(openssl rand -base64 45 | tr -d '/+=') + echo "::add-mask::${API_KEY}" + + PORT=3001 + + # Set outputs for next steps + { + echo "safe_outputs_api_key=${API_KEY}" + echo "safe_outputs_port=${PORT}" + } >> "$GITHUB_OUTPUT" + + echo "Safe Outputs MCP server will run on port ${PORT}" + + - name: Start Safe Outputs MCP HTTP Server + id: safe-outputs-start + env: + DEBUG: '*' + GH_AW_SAFE_OUTPUTS: ${{ steps.set-runtime-paths.outputs.GH_AW_SAFE_OUTPUTS }} + GH_AW_SAFE_OUTPUTS_PORT: ${{ steps.safe-outputs-config.outputs.safe_outputs_port }} + GH_AW_SAFE_OUTPUTS_API_KEY: ${{ steps.safe-outputs-config.outputs.safe_outputs_api_key }} + GH_AW_SAFE_OUTPUTS_TOOLS_PATH: ${{ runner.temp }}/gh-aw/safeoutputs/tools.json + GH_AW_SAFE_OUTPUTS_CONFIG_PATH: ${{ runner.temp }}/gh-aw/safeoutputs/config.json + GH_AW_MCP_LOG_DIR: /tmp/gh-aw/mcp-logs/safeoutputs + run: | + # Environment variables are set above to prevent template injection + export DEBUG + export GH_AW_SAFE_OUTPUTS + export GH_AW_SAFE_OUTPUTS_PORT + export GH_AW_SAFE_OUTPUTS_API_KEY + export GH_AW_SAFE_OUTPUTS_TOOLS_PATH + export GH_AW_SAFE_OUTPUTS_CONFIG_PATH + export GH_AW_MCP_LOG_DIR + + bash "${RUNNER_TEMP}/gh-aw/actions/start_safe_outputs_server.sh" + + - name: Start MCP Gateway + id: start-mcp-gateway + env: + GH_AW_SAFE_OUTPUTS: ${{ steps.set-runtime-paths.outputs.GH_AW_SAFE_OUTPUTS }} + GH_AW_SAFE_OUTPUTS_API_KEY: ${{ steps.safe-outputs-start.outputs.api_key }} + GH_AW_SAFE_OUTPUTS_PORT: ${{ steps.safe-outputs-start.outputs.port }} + GITHUB_MCP_GUARD_MIN_INTEGRITY: ${{ 
steps.determine-automatic-lockdown.outputs.min_integrity }} + GITHUB_MCP_GUARD_REPOS: ${{ steps.determine-automatic-lockdown.outputs.repos }} + GITHUB_MCP_SERVER_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN || secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} + run: | + set -eo pipefail + mkdir -p "${RUNNER_TEMP}/gh-aw/mcp-config" + + # Export gateway environment variables for MCP config and gateway script + export MCP_GATEWAY_PORT="8080" + export MCP_GATEWAY_DOMAIN="host.docker.internal" + export MCP_GATEWAY_HOST_DOMAIN="localhost" + MCP_GATEWAY_API_KEY=$(openssl rand -base64 45 | tr -d '/+=') + echo "::add-mask::${MCP_GATEWAY_API_KEY}" + export MCP_GATEWAY_API_KEY + export MCP_GATEWAY_PAYLOAD_DIR="/tmp/gh-aw/mcp-payloads" + mkdir -p "${MCP_GATEWAY_PAYLOAD_DIR}" + export MCP_GATEWAY_PAYLOAD_SIZE_THRESHOLD="524288" + export DEBUG="*" + + export GH_AW_ENGINE="copilot" + MCP_GATEWAY_UID=$(id -u 2>/dev/null || echo '0') + MCP_GATEWAY_GID=$(id -g 2>/dev/null || echo '0') + DOCKER_SOCK_GID=$(stat -c '%g' /var/run/docker.sock 2>/dev/null || echo '0') + export MCP_GATEWAY_DOCKER_COMMAND='docker run -i --rm --network host --add-host host.docker.internal:127.0.0.1 --user '"${MCP_GATEWAY_UID}"':'"${MCP_GATEWAY_GID}"' --group-add '"${DOCKER_SOCK_GID}"' -v /var/run/docker.sock:/var/run/docker.sock -e MCP_GATEWAY_PORT -e MCP_GATEWAY_DOMAIN -e MCP_GATEWAY_API_KEY -e MCP_GATEWAY_PAYLOAD_DIR -e MCP_GATEWAY_PAYLOAD_SIZE_THRESHOLD -e DEBUG -e MCP_GATEWAY_LOG_DIR -e GH_AW_MCP_LOG_DIR -e GH_AW_SAFE_OUTPUTS -e GH_AW_SAFE_OUTPUTS_CONFIG_PATH -e GH_AW_SAFE_OUTPUTS_TOOLS_PATH -e GH_AW_ASSETS_BRANCH -e GH_AW_ASSETS_MAX_SIZE_KB -e GH_AW_ASSETS_ALLOWED_EXTS -e DEFAULT_BRANCH -e GITHUB_MCP_SERVER_TOKEN -e GITHUB_MCP_GUARD_MIN_INTEGRITY -e GITHUB_MCP_GUARD_REPOS -e GITHUB_REPOSITORY -e GITHUB_SERVER_URL -e GITHUB_SHA -e GITHUB_WORKSPACE -e GITHUB_TOKEN -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RUN_ATTEMPT -e GITHUB_JOB -e GITHUB_ACTION -e GITHUB_EVENT_NAME -e GITHUB_EVENT_PATH -e 
GITHUB_ACTOR -e GITHUB_ACTOR_ID -e GITHUB_TRIGGERING_ACTOR -e GITHUB_WORKFLOW -e GITHUB_WORKFLOW_REF -e GITHUB_WORKFLOW_SHA -e GITHUB_REF -e GITHUB_REF_NAME -e GITHUB_REF_TYPE -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GH_AW_SAFE_OUTPUTS_PORT -e GH_AW_SAFE_OUTPUTS_API_KEY -v /tmp/gh-aw/mcp-payloads:/tmp/gh-aw/mcp-payloads:rw -v /opt:/opt:ro -v /tmp:/tmp:rw -v '"${GITHUB_WORKSPACE}"':'"${GITHUB_WORKSPACE}"':rw ghcr.io/github/gh-aw-mcpg:v0.3.6' + + mkdir -p /home/runner/.copilot + GH_AW_NODE=$(which node 2>/dev/null || command -v node 2>/dev/null || echo node) + cat << GH_AW_MCP_CONFIG_47af7b1bafe70d76_EOF | "$GH_AW_NODE" "${RUNNER_TEMP}/gh-aw/actions/start_mcp_gateway.cjs" + { + "mcpServers": { + "github": { + "type": "stdio", + "container": "ghcr.io/github/github-mcp-server:v1.0.3", + "env": { + "GITHUB_HOST": "\${GITHUB_SERVER_URL}", + "GITHUB_PERSONAL_ACCESS_TOKEN": "\${GITHUB_MCP_SERVER_TOKEN}", + "GITHUB_READ_ONLY": "1", + "GITHUB_TOOLSETS": "context,repos,issues,pull_requests" + }, + "guard-policies": { + "allow-only": { + "min-integrity": "$GITHUB_MCP_GUARD_MIN_INTEGRITY", + "repos": "$GITHUB_MCP_GUARD_REPOS" + } + } + }, + "safeoutputs": { + "type": "http", + "url": "http://host.docker.internal:$GH_AW_SAFE_OUTPUTS_PORT", + "headers": { + "Authorization": "\${GH_AW_SAFE_OUTPUTS_API_KEY}" + }, + "guard-policies": { + "write-sink": { + "accept": [ + "*" + ] + } + } + } + }, + "gateway": { + "port": $MCP_GATEWAY_PORT, + "domain": "${MCP_GATEWAY_DOMAIN}", + "apiKey": "${MCP_GATEWAY_API_KEY}", + "payloadDir": "${MCP_GATEWAY_PAYLOAD_DIR}" + } + } + GH_AW_MCP_CONFIG_47af7b1bafe70d76_EOF + - name: Mount MCP servers as CLIs + id: mount-mcp-clis + continue-on-error: true + env: + MCP_GATEWAY_API_KEY: ${{ steps.start-mcp-gateway.outputs.gateway-api-key }} + MCP_GATEWAY_DOMAIN: ${{ steps.start-mcp-gateway.outputs.gateway-domain }} + MCP_GATEWAY_PORT: ${{ steps.start-mcp-gateway.outputs.gateway-port }} + uses: actions/github-script@3a2844b7e9c422d3c10d287c895573f7108da1b3 # 
v9.0.0 + with: + script: | + const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io); + const { main } = require('${{ runner.temp }}/gh-aw/actions/mount_mcp_as_cli.cjs'); + await main(); + - name: Clean credentials + continue-on-error: true + run: bash "${RUNNER_TEMP}/gh-aw/actions/clean_git_credentials.sh" + - name: Audit pre-agent workspace + id: pre_agent_audit + continue-on-error: true + run: bash "${RUNNER_TEMP}/gh-aw/actions/audit_pre_agent_workspace.sh" + - name: Execute GitHub Copilot CLI + id: agentic_execution + # Copilot CLI tool arguments (sorted): + timeout-minutes: 30 + run: | + set -o pipefail + touch /tmp/gh-aw/agent-step-summary.md + GH_AW_NODE_BIN=$(command -v node 2>/dev/null || true) + export GH_AW_NODE_BIN + (umask 177 && touch /tmp/gh-aw/agent-stdio.log) + printf '%s\n' '{"$schema":"https://github.com/github/gh-aw-firewall/releases/download/v0.25.40/awf-config.schema.json","network":{"allowDomains":["*.githubusercontent.com","api.business.githubcopilot.com","api.enterprise.githubcopilot.com","api.github.com","api.githubcopilot.com","api.individual.githubcopilot.com","api.snapcraft.io","archive.ubuntu.com","azure.archive.ubuntu.com","codeload.github.com","crl.geotrust.com","crl.globalsign.com","crl.identrust.com","crl.sectigo.com","crl.thawte.com","crl.usertrust.com","crl.verisign.com","crl3.digicert.com","crl4.digicert.com","crls.ssl.com","docs.github.com","github-cloud.githubusercontent.com","github-cloud.s3.amazonaws.com","github.blog","github.com","github.githubassets.com","host.docker.internal","json-schema.org","json.schemastore.org","keyserver.ubuntu.com","lfs.github.com","objects.githubusercontent.com","ocsp.digicert.com","ocsp.geotrust.com","ocsp.globalsign.com","ocsp.identrust.com","ocsp.sectigo.com","ocsp.ssl.com","ocsp.thawte.com","ocsp.usertrust.com","ocsp.verisign.com","packagecloud.io","packages.cloud.google.com","packages.microsoft.com","ppa.launchpad
.net","raw.githubusercontent.com","registry.npmjs.org","s.symcb.com","s.symcd.com","security.ubuntu.com","telemetry.enterprise.githubcopilot.com","ts-crl.ws.symantec.com","ts-ocsp.ws.symantec.com","www.googleapis.com"]},"apiProxy":{"enabled":true,"models":{"auto":["large"],"deep-research":["copilot/deep-research*","google/deep-research*"],"gemini-flash":["copilot/gemini-*flash*","google/gemini-*flash*"],"gemini-pro":["copilot/gemini-*pro*","google/gemini-*pro*"],"gpt-4.1":["copilot/gpt-4.1*","openai/gpt-4.1*"],"gpt-5":["copilot/gpt-5*","openai/gpt-5*"],"gpt-5-codex":["copilot/gpt-5*codex*","openai/gpt-5*codex*"],"gpt-5-mini":["copilot/gpt-5*mini*","openai/gpt-5*mini*"],"gpt-5-nano":["copilot/gpt-5*nano*","openai/gpt-5*nano*"],"gpt-5-pro":["copilot/gpt-5*pro*","openai/gpt-5*pro*"],"haiku":["copilot/*haiku*","anthropic/*haiku*"],"large":["sonnet","gpt-5-pro","gpt-5","gemini-pro"],"mini":["haiku","gpt-5-mini","gpt-5-nano","gemini-flash"],"opus":["copilot/*opus*","anthropic/*opus*"],"reasoning":["copilot/o1*","copilot/o3*","copilot/o4*","openai/o1*","openai/o3*","openai/o4*"],"small":["mini"],"sonnet":["copilot/*sonnet*","anthropic/*sonnet*"]}},"container":{"imageTag":"0.25.40,squid=sha256:b084f4a2c771f584ee68084ced52fa6b3245197a1889645d817462d307d3ac51,agent=sha256:14ff567e8d9d4c2fbc5e55c973488381c71d7e0fdbe72d30ee7b8a738fd86504,api-proxy=sha256:2883ca3e5ae9f330cafdd9345bfd4ae17fc8da36c96d4c9a1f76e922b4c45280,cli-proxy=sha256:3e7152911d4b4b7b97beef9d3d7d924ff7902227e86001ef3838fb728d5d514c"}}' > "${RUNNER_TEMP}/gh-aw/awf-config.json" && cp "${RUNNER_TEMP}/gh-aw/awf-config.json" /tmp/gh-aw/awf-config.json + # shellcheck disable=SC1003 + sudo -E awf --config "${RUNNER_TEMP}/gh-aw/awf-config.json" --container-workdir "${GITHUB_WORKSPACE}" --mount "${RUNNER_TEMP}/gh-aw:${RUNNER_TEMP}/gh-aw:ro" --mount "${RUNNER_TEMP}/gh-aw:/host${RUNNER_TEMP}/gh-aw:ro" --env-all --exclude-env COPILOT_GITHUB_TOKEN --exclude-env GITHUB_MCP_SERVER_TOKEN --exclude-env MCP_GATEWAY_API_KEY 
--log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --audit-dir /tmp/gh-aw/sandbox/firewall/audit --enable-host-access --allow-host-ports 80,443,8080 --skip-pull \ + -- /bin/bash -c 'export PATH="${RUNNER_TEMP}/gh-aw/mcp-cli/bin:$PATH" && export PATH="$(find /opt/hostedtoolcache /home/runner/work/_tool -maxdepth 4 -type d -name bin 2>/dev/null | tr '\''\n'\'' '\'':'\'')$PATH"; [ -n "$GOROOT" ] && export PATH="$GOROOT/bin:$PATH" || true && GH_AW_NODE_EXEC="${GH_AW_NODE_BIN:-}"; if [ -z "$GH_AW_NODE_EXEC" ] || [ ! -x "$GH_AW_NODE_EXEC" ]; then GH_AW_NODE_EXEC="$(command -v node 2>/dev/null || echo node)"; fi; "$GH_AW_NODE_EXEC" ${RUNNER_TEMP}/gh-aw/actions/copilot_harness.cjs /usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --disable-builtin-mcps --no-ask-user --allow-all-tools --allow-all-paths --add-dir "${GITHUB_WORKSPACE}" --prompt-file /tmp/gh-aw/aw-prompts/prompt.txt' 2>&1 | tee -a /tmp/gh-aw/agent-stdio.log + env: + COPILOT_AGENT_RUNNER_TYPE: STANDALONE + COPILOT_API_KEY: dummy-byok-key-for-offline-mode + COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }} + COPILOT_MODEL: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || 'claude-sonnet-4.6' }} + GH_AW_MCP_CONFIG: /home/runner/.copilot/mcp-config.json + GH_AW_PHASE: agent + GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt + GH_AW_SAFE_OUTPUTS: ${{ steps.set-runtime-paths.outputs.GH_AW_SAFE_OUTPUTS }} + GH_AW_VERSION: v0.71.5 + GITHUB_API_URL: ${{ github.api_url }} + GITHUB_AW: true + GITHUB_COPILOT_INTEGRATION_ID: agentic-workflows + GITHUB_HEAD_REF: ${{ github.head_ref }} + GITHUB_MCP_SERVER_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN || secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} + GITHUB_REF_NAME: ${{ github.ref_name }} + GITHUB_SERVER_URL: ${{ github.server_url }} + GITHUB_STEP_SUMMARY: /tmp/gh-aw/agent-step-summary.md + GITHUB_WORKSPACE: ${{ github.workspace }} + GIT_AUTHOR_EMAIL: github-actions[bot]@users.noreply.github.com + 
GIT_AUTHOR_NAME: github-actions[bot] + GIT_COMMITTER_EMAIL: github-actions[bot]@users.noreply.github.com + GIT_COMMITTER_NAME: github-actions[bot] + XDG_CONFIG_HOME: /home/runner + - name: Detect Copilot errors + id: detect-copilot-errors + if: always() + continue-on-error: true + run: node "${RUNNER_TEMP}/gh-aw/actions/detect_copilot_errors.cjs" + - name: Copy Copilot session state files to logs + if: always() + continue-on-error: true + run: bash "${RUNNER_TEMP}/gh-aw/actions/copy_copilot_session_state.sh" + - name: Stop MCP Gateway + if: always() + continue-on-error: true + env: + MCP_GATEWAY_PORT: ${{ steps.start-mcp-gateway.outputs.gateway-port }} + MCP_GATEWAY_API_KEY: ${{ steps.start-mcp-gateway.outputs.gateway-api-key }} + GATEWAY_PID: ${{ steps.start-mcp-gateway.outputs.gateway-pid }} + run: | + bash "${RUNNER_TEMP}/gh-aw/actions/stop_mcp_gateway.sh" "$GATEWAY_PID" + - name: Redact secrets in logs + if: always() + uses: actions/github-script@3a2844b7e9c422d3c10d287c895573f7108da1b3 # v9.0.0 + with: + script: | + const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io, getOctokit); + const { main } = require('${{ runner.temp }}/gh-aw/actions/redact_secrets.cjs'); + await main(); + env: + GH_AW_SECRET_NAMES: 'COPILOT_GITHUB_TOKEN,GH_AW_GITHUB_MCP_SERVER_TOKEN,GH_AW_GITHUB_TOKEN,GITHUB_TOKEN' + SECRET_COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }} + SECRET_GH_AW_GITHUB_MCP_SERVER_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN }} + SECRET_GH_AW_GITHUB_TOKEN: ${{ secrets.GH_AW_GITHUB_TOKEN }} + SECRET_GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} + - name: Append agent step summary + if: always() + run: bash "${RUNNER_TEMP}/gh-aw/actions/append_agent_step_summary.sh" + - name: Copy Safe Outputs + if: always() + env: + GH_AW_SAFE_OUTPUTS: ${{ steps.set-runtime-paths.outputs.GH_AW_SAFE_OUTPUTS }} + run: | + mkdir -p /tmp/gh-aw + cp "$GH_AW_SAFE_OUTPUTS" 
/tmp/gh-aw/safeoutputs.jsonl 2>/dev/null || true + - name: Ingest agent output + id: collect_output + if: always() + uses: actions/github-script@3a2844b7e9c422d3c10d287c895573f7108da1b3 # v9.0.0 + env: + GH_AW_SAFE_OUTPUTS: ${{ steps.set-runtime-paths.outputs.GH_AW_SAFE_OUTPUTS }} + GH_AW_ALLOWED_DOMAINS: "*.githubusercontent.com,api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,codeload.github.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,docs.github.com,github-cloud.githubusercontent.com,github-cloud.s3.amazonaws.com,github.blog,github.com,github.githubassets.com,host.docker.internal,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,lfs.github.com,objects.githubusercontent.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,registry.npmjs.org,s.symcb.com,s.symcd.com,security.ubuntu.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com,www.googleapis.com" + GITHUB_SERVER_URL: ${{ github.server_url }} + GITHUB_API_URL: ${{ github.api_url }} + with: + script: | + const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io, getOctokit); + const { main } = require('${{ runner.temp }}/gh-aw/actions/collect_ndjson_output.cjs'); + await main(); + - name: Parse agent logs for step summary + if: always() + uses: actions/github-script@3a2844b7e9c422d3c10d287c895573f7108da1b3 # v9.0.0 + env: + GH_AW_AGENT_OUTPUT: /tmp/gh-aw/sandbox/agent/logs/ + with: + script: | + const { 
setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io, getOctokit); + const { main } = require('${{ runner.temp }}/gh-aw/actions/parse_copilot_log.cjs'); + await main(); + - name: Parse MCP Gateway logs for step summary + if: always() + id: parse-mcp-gateway + uses: actions/github-script@3a2844b7e9c422d3c10d287c895573f7108da1b3 # v9.0.0 + with: + script: | + const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io, getOctokit); + const { main } = require('${{ runner.temp }}/gh-aw/actions/parse_mcp_gateway_log.cjs'); + await main(); + - name: Print firewall logs + if: always() + continue-on-error: true + env: + AWF_LOGS_DIR: /tmp/gh-aw/sandbox/firewall/logs + run: | + # Fix permissions on firewall logs/audit dirs so they can be uploaded as artifacts + # AWF runs with sudo, creating files owned by root + sudo chmod -R a+r /tmp/gh-aw/sandbox/firewall 2>/dev/null || true + # Only run awf logs summary if awf command exists (it may not be installed if workflow failed before install step) + if command -v awf &> /dev/null; then + awf logs summary | tee -a "$GITHUB_STEP_SUMMARY" + else + echo 'AWF binary not installed, skipping firewall log summary' + fi + - name: Parse token usage for step summary + if: always() + continue-on-error: true + uses: actions/github-script@3a2844b7e9c422d3c10d287c895573f7108da1b3 # v9.0.0 + with: + script: | + const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io, getOctokit); + const { main } = require('${{ runner.temp }}/gh-aw/actions/parse_token_usage.cjs'); + await main(); + - name: Print AWF reflect summary + if: always() + continue-on-error: true + uses: actions/github-script@3a2844b7e9c422d3c10d287c895573f7108da1b3 # v9.0.0 + with: + script: | + const { setupGlobals } = require('${{ runner.temp 
}}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io, getOctokit); + const { main } = require('${{ runner.temp }}/gh-aw/actions/awf_reflect_summary.cjs'); + await main(); + - name: Write agent output placeholder if missing + if: always() + run: | + if [ ! -f /tmp/gh-aw/agent_output.json ]; then + echo '{"items":[]}' > /tmp/gh-aw/agent_output.json + fi + - name: Upload agent artifacts + if: always() + continue-on-error: true + uses: actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1 + with: + name: agent + path: | + /tmp/gh-aw/aw-prompts/prompt.txt + /tmp/gh-aw/sandbox/agent/logs/ + /tmp/gh-aw/redacted-urls.log + /tmp/gh-aw/mcp-logs/ + /tmp/gh-aw/agent_usage.json + /tmp/gh-aw/agent-stdio.log + /tmp/gh-aw/pre-agent-audit.txt + /tmp/gh-aw/agent/ + /tmp/gh-aw/github_rate_limits.jsonl + /tmp/gh-aw/safeoutputs.jsonl + /tmp/gh-aw/agent_output.json + /tmp/gh-aw/aw-*.patch + /tmp/gh-aw/aw-*.bundle + /tmp/gh-aw/awf-config.json + /tmp/gh-aw/sandbox/firewall/logs/ + /tmp/gh-aw/sandbox/firewall/audit/ + /tmp/gh-aw/sandbox/firewall/awf-reflect.json + if-no-files-found: ignore + + apm: + needs: + - activation + - apm-prep + runs-on: ubuntu-slim + strategy: + fail-fast: false + matrix: ${{ fromJSON(needs.apm-prep.outputs.matrix) }} + permissions: + {} + + steps: + - name: Configure GH_HOST for enterprise compatibility + id: ghes-host-config + shell: bash + run: | + # Derive GH_HOST from GITHUB_SERVER_URL so the gh CLI targets the correct + # GitHub instance (GHES/GHEC). On github.com this is a harmless no-op. 
+ GH_HOST="${GITHUB_SERVER_URL#https://}" + GH_HOST="${GH_HOST#http://}" + echo "GH_HOST=${GH_HOST}" >> "$GITHUB_ENV" + - name: Mint installation token + id: token + if: ${{ matrix.group.app-id != '' }} + uses: actions/create-github-app-token@1b10c78c7865c340bc4f6099eb2f838309f1e8c3 # v3.1.1 + with: + app-id: ${{ matrix.group.app-id }} + owner: ${{ matrix.group.owner != '' && matrix.group.owner || github.repository_owner }} + private-key: ${{ matrix.group.private-key }} + repositories: ${{ matrix.group.repositories }} + - name: Render package list + id: list + run: | + DEPS=$(echo "$AW_PKG" | jq -r '.[] | "- " + .') + { + echo "deps<<EOF" + echo "$DEPS" + echo "EOF" + } >> "$GITHUB_OUTPUT" + env: + AW_PKG: ${{ toJSON(matrix.group.packages) }} + - name: Pack APM packages + id: pack + uses: microsoft/apm-action@b48dd081eb0050f6d7f32d0e7caa0a59a2d419fd # v1.7.2 + env: + GITHUB_TOKEN: ${{ steps.token.outputs.token || secrets.GH_AW_PLUGINS_TOKEN || secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} + with: + archive: "true" + dependencies: ${{ steps.list.outputs.deps }} + isolated: "true" + pack: "true" + target: copilot + working-directory: /tmp/gh-aw/apm-workspace + - name: Upload APM bundle artifact + if: success() + uses: actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7 + with: + name: ${{ needs.activation.outputs.artifact_prefix }}apm-${{ matrix.group.id }} + path: ${{ steps.pack.outputs.bundle-path }} + retention-days: "1" + + apm-prep: + needs: activation + runs-on: ubuntu-slim + permissions: + {} + + outputs: + matrix: ${{ steps.compute.outputs.matrix }} + steps: + - name: Configure GH_HOST for enterprise compatibility + id: ghes-host-config + shell: bash + run: | + # Derive GH_HOST from GITHUB_SERVER_URL so the gh CLI targets the correct + # GitHub instance (GHES/GHEC). On github.com this is a harmless no-op.
+ GH_HOST="${GITHUB_SERVER_URL#https://}" + GH_HOST="${GH_HOST#http://}" + echo "GH_HOST=${GH_HOST}" >> "$GITHUB_ENV" + - name: Compute APM credential-group matrix + id: compute + run: | + set -euo pipefail + packages_json=${AW_APM_PACKAGES:-null} + apps_json=${AW_APM_APPS:-null} + legacy_id=${AW_APM_LEGACY_APP_ID:-} + + # gh-aw substitutes `["microsoft/apm#main"]` at + # compile time using Go's default slice formatter, which emits + # `[a b c]` (space-separated, no quotes) instead of valid JSON. + # That breaks `jq --argjson` below. Repair string-array inputs + # in place; leave already-valid JSON untouched. apps[] (objects) + # is not repairable this way -- consumers must use the legacy + # single-app inputs until upstream gh-aw exposes a JSON-encoding + # helper for import-inputs. + repair_string_array() { + local raw="$1" + if [ -z "$raw" ] || [ "$raw" = "null" ]; then + echo "$raw"; return + fi + if printf '%s' "$raw" | jq -e 'type=="array"' >/dev/null 2>&1; then + echo "$raw"; return + fi + python3 -c 'import json, re, sys; s=sys.argv[1].strip(); s=s[1:-1] if s.startswith("[") and s.endswith("]") else s; print(json.dumps([t for t in re.split(r"[\s,]+", s) if t]))' "$raw" + } + packages_json=$(repair_string_array "$packages_json") + + groups=$(jq -nc \ + --argjson packages "$packages_json" \ + --argjson apps "$apps_json" \ + --arg legacy_id "$legacy_id" \ + --arg legacy_pk "${AW_APM_LEGACY_PRIVATE_KEY:-}" \ + --arg legacy_owner "${AW_APM_LEGACY_OWNER:-}" \ + --arg legacy_repos "${AW_APM_LEGACY_REPOS:-}" \ + 'def slug(s): s | gsub("[^a-zA-Z0-9-]"; "-") | ascii_downcase | .[0:32]; + def with_id(g): + g + (if (g.id // "") == "" then {id: ("auto-" + slug(g.owner // "default"))} else {} end); + [ + (if (($packages // []) | length) > 0 and $legacy_id == "" + then [{id:"default",("app-id"):"",("private-key"):"",owner:"",repositories:"",packages:$packages}] + else [] end), + (if $legacy_id != "" + then 
[with_id({id:"legacy",("app-id"):$legacy_id,("private-key"):$legacy_pk,owner:$legacy_owner,repositories:$legacy_repos,packages:($packages // [])})] + else [] end), + (($apps // []) | map(with_id(.))) + ] | add // []') + + count=$(echo "$groups" | jq 'length') + if [ "$count" = "0" ]; then + echo "::error::shared/apm.md import provided no packages. Add packages: , single-app inputs (app-id + private-key), or apps: in the with: block." + exit 1 + fi + + dups=$(echo "$groups" | jq -r '[.[].id] | group_by(.) | map(select(length > 1) | first) | join(", ")') + if [ -n "$dups" ]; then + echo "::error::duplicate apm group ids after auto-derivation: $dups. Set apps[].id explicitly when two entries share the same owner." + exit 1 + fi + + while IFS= read -r id; do + if ! echo "$id" | grep -Eq '^[a-z0-9-]{1,32}$'; then + echo "::error::invalid apm group id: '$id' (lowercase alphanumeric and dashes, 1-32 chars). Set apps[].id explicitly." + exit 1 + fi + done < <(echo "$groups" | jq -r '.[].id') + + # SAFE: emit only id + package-count to logs. Never $groups in full. 
+ { + echo "matrix={\"group\":$groups}" + } >> "$GITHUB_OUTPUT" + printf "::notice::APM matrix: %d credential group(s)\n" "$count" + echo "$groups" | jq -r '.[] | " - " + .id + " (" + (.packages | length | tostring) + " package(s))"' + env: + AW_APM_APPS: ${{ github.aw.import-inputs.apps }} + AW_APM_LEGACY_APP_ID: ${{ github.aw.import-inputs.app-id }} + AW_APM_LEGACY_OWNER: ${{ github.aw.import-inputs.owner }} + AW_APM_LEGACY_PRIVATE_KEY: ${{ github.aw.import-inputs.private-key }} + AW_APM_LEGACY_REPOS: ${{ github.aw.import-inputs.repositories }} + AW_APM_PACKAGES: "[\"microsoft/apm#main\"]" + + conclusion: + needs: + - activation + - agent + - apm + - apm-prep + - detection + - safe_outputs + if: > + always() && (needs.agent.result != 'skipped' || needs.activation.outputs.lockdown_check_failed == 'true' || + needs.activation.outputs.stale_lock_file_failed == 'true') + runs-on: ubuntu-slim + permissions: + contents: read + discussions: write + issues: write + pull-requests: write + concurrency: + group: "gh-aw-conclusion-docs-sync" + cancel-in-progress: false + outputs: + incomplete_count: ${{ steps.report_incomplete.outputs.incomplete_count }} + noop_message: ${{ steps.noop.outputs.noop_message }} + tools_reported: ${{ steps.missing_tool.outputs.tools_reported }} + total_count: ${{ steps.missing_tool.outputs.total_count }} + steps: + - name: Setup Scripts + id: setup + uses: github/gh-aw-actions/setup@b8068426813005612b960b5ab0b8bd2c27142323 # v0.71.5 + with: + destination: ${{ runner.temp }}/gh-aw/actions + job-name: ${{ github.job }} + trace-id: ${{ needs.activation.outputs.setup-trace-id }} + env: + GH_AW_SETUP_WORKFLOW_NAME: "Docs Sync Advisory" + GH_AW_CURRENT_WORKFLOW_REF: ${{ github.repository }}/.github/workflows/docs-sync.lock.yml@${{ github.ref }} + GH_AW_INFO_VERSION: "1.0.40" + - name: Download agent output artifact + id: download-agent-output + continue-on-error: true + uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1 
+ with: + name: agent + path: /tmp/gh-aw/ + - name: Setup agent output environment variable + id: setup-agent-output-env + if: steps.download-agent-output.outcome == 'success' + run: | + mkdir -p /tmp/gh-aw/ + find "/tmp/gh-aw/" -type f -print + echo "GH_AW_AGENT_OUTPUT=/tmp/gh-aw/agent_output.json" >> "$GITHUB_OUTPUT" + - name: Process no-op messages + id: noop + uses: actions/github-script@3a2844b7e9c422d3c10d287c895573f7108da1b3 # v9.0.0 + env: + GH_AW_AGENT_OUTPUT: ${{ steps.setup-agent-output-env.outputs.GH_AW_AGENT_OUTPUT }} + GH_AW_NOOP_MAX: "1" + GH_AW_WORKFLOW_NAME: "Docs Sync Advisory" + GH_AW_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }} + GH_AW_AGENT_CONCLUSION: ${{ needs.agent.result }} + GH_AW_NOOP_REPORT_AS_ISSUE: "true" + with: + github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} + script: | + const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io, getOctokit); + const { main } = require('${{ runner.temp }}/gh-aw/actions/handle_noop_message.cjs'); + await main(); + - name: Log detection run + id: detection_runs + uses: actions/github-script@3a2844b7e9c422d3c10d287c895573f7108da1b3 # v9.0.0 + env: + GH_AW_AGENT_OUTPUT: ${{ steps.setup-agent-output-env.outputs.GH_AW_AGENT_OUTPUT }} + GH_AW_WORKFLOW_NAME: "Docs Sync Advisory" + GH_AW_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }} + GH_AW_DETECTION_CONCLUSION: ${{ needs.detection.outputs.detection_conclusion }} + GH_AW_DETECTION_REASON: ${{ needs.detection.outputs.detection_reason }} + with: + github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} + script: | + const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io, getOctokit); + const { main } = require('${{ runner.temp }}/gh-aw/actions/handle_detection_runs.cjs'); 
+ await main(); + - name: Record missing tool + id: missing_tool + uses: actions/github-script@3a2844b7e9c422d3c10d287c895573f7108da1b3 # v9.0.0 + env: + GH_AW_AGENT_OUTPUT: ${{ steps.setup-agent-output-env.outputs.GH_AW_AGENT_OUTPUT }} + GH_AW_MISSING_TOOL_CREATE_ISSUE: "true" + GH_AW_WORKFLOW_NAME: "Docs Sync Advisory" + with: + github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} + script: | + const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io, getOctokit); + const { main } = require('${{ runner.temp }}/gh-aw/actions/missing_tool.cjs'); + await main(); + - name: Record incomplete + id: report_incomplete + uses: actions/github-script@3a2844b7e9c422d3c10d287c895573f7108da1b3 # v9.0.0 + env: + GH_AW_AGENT_OUTPUT: ${{ steps.setup-agent-output-env.outputs.GH_AW_AGENT_OUTPUT }} + GH_AW_REPORT_INCOMPLETE_CREATE_ISSUE: "true" + GH_AW_WORKFLOW_NAME: "Docs Sync Advisory" + with: + github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} + script: | + const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io, getOctokit); + const { main } = require('${{ runner.temp }}/gh-aw/actions/report_incomplete_handler.cjs'); + await main(); + - name: Handle agent failure + id: handle_agent_failure + if: always() + uses: actions/github-script@3a2844b7e9c422d3c10d287c895573f7108da1b3 # v9.0.0 + env: + GH_AW_AGENT_OUTPUT: ${{ steps.setup-agent-output-env.outputs.GH_AW_AGENT_OUTPUT }} + GH_AW_WORKFLOW_NAME: "Docs Sync Advisory" + GH_AW_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }} + GH_AW_AGENT_CONCLUSION: ${{ needs.agent.result }} + GH_AW_WORKFLOW_ID: "docs-sync" + GH_AW_ACTION_FAILURE_ISSUE_EXPIRES_HOURS: "168" + GH_AW_ENGINE_ID: "copilot" + GH_AW_SECRET_VERIFICATION_RESULT: ${{ needs.activation.outputs.secret_verification_result }} + 
GH_AW_CHECKOUT_PR_SUCCESS: ${{ needs.agent.outputs.checkout_pr_success }} + GH_AW_INFERENCE_ACCESS_ERROR: ${{ needs.agent.outputs.inference_access_error }} + GH_AW_MCP_POLICY_ERROR: ${{ needs.agent.outputs.mcp_policy_error }} + GH_AW_AGENTIC_ENGINE_TIMEOUT: ${{ needs.agent.outputs.agentic_engine_timeout }} + GH_AW_MODEL_NOT_SUPPORTED_ERROR: ${{ needs.agent.outputs.model_not_supported_error }} + GH_AW_ENGINE_API_HOSTS: "api.enterprise.githubcopilot.com,api.githubcopilot.com,api.business.githubcopilot.com,api.individual.githubcopilot.com" + GH_AW_LOCKDOWN_CHECK_FAILED: ${{ needs.activation.outputs.lockdown_check_failed }} + GH_AW_STALE_LOCK_FILE_FAILED: ${{ needs.activation.outputs.stale_lock_file_failed }} + GH_AW_GROUP_REPORTS: "false" + GH_AW_FAILURE_REPORT_AS_ISSUE: "true" + GH_AW_MISSING_TOOL_REPORT_AS_FAILURE: "true" + GH_AW_MISSING_DATA_REPORT_AS_FAILURE: "true" + GH_AW_TIMEOUT_MINUTES: "30" + with: + github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} + script: | + const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io, getOctokit); + const { main } = require('${{ runner.temp }}/gh-aw/actions/handle_agent_failure.cjs'); + await main(); + + detection: + needs: + - activation + - agent + if: > + always() && needs.agent.result != 'skipped' && (needs.agent.outputs.output_types != '' || needs.agent.outputs.has_patch == 'true') + runs-on: ubuntu-latest + permissions: + contents: read + outputs: + detection_conclusion: ${{ steps.detection_conclusion.outputs.conclusion }} + detection_reason: ${{ steps.detection_conclusion.outputs.reason }} + detection_success: ${{ steps.detection_conclusion.outputs.success }} + steps: + - name: Setup Scripts + id: setup + uses: github/gh-aw-actions/setup@b8068426813005612b960b5ab0b8bd2c27142323 # v0.71.5 + with: + destination: ${{ runner.temp }}/gh-aw/actions + job-name: ${{ github.job }} + trace-id: ${{ 
needs.activation.outputs.setup-trace-id }} + env: + GH_AW_SETUP_WORKFLOW_NAME: "Docs Sync Advisory" + GH_AW_CURRENT_WORKFLOW_REF: ${{ github.repository }}/.github/workflows/docs-sync.lock.yml@${{ github.ref }} + GH_AW_INFO_VERSION: "1.0.40" + - name: Download agent output artifact + id: download-agent-output + continue-on-error: true + uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1 + with: + name: agent + path: /tmp/gh-aw/ + - name: Setup agent output environment variable + id: setup-agent-output-env + if: steps.download-agent-output.outcome == 'success' + run: | + mkdir -p /tmp/gh-aw/ + find "/tmp/gh-aw/" -type f -print + echo "GH_AW_AGENT_OUTPUT=/tmp/gh-aw/agent_output.json" >> "$GITHUB_OUTPUT" + - name: Checkout repository for patch context + if: needs.agent.outputs.has_patch == 'true' + uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 + with: + persist-credentials: false + # --- Threat Detection --- + - name: Clean stale firewall files from agent artifact + run: | + rm -rf /tmp/gh-aw/sandbox/firewall/logs + rm -rf /tmp/gh-aw/sandbox/firewall/audit + - name: Download container images + run: bash "${RUNNER_TEMP}/gh-aw/actions/download_docker_images.sh" ghcr.io/github/gh-aw-firewall/agent:0.25.40@sha256:14ff567e8d9d4c2fbc5e55c973488381c71d7e0fdbe72d30ee7b8a738fd86504 ghcr.io/github/gh-aw-firewall/api-proxy:0.25.40@sha256:2883ca3e5ae9f330cafdd9345bfd4ae17fc8da36c96d4c9a1f76e922b4c45280 ghcr.io/github/gh-aw-firewall/squid:0.25.40@sha256:b084f4a2c771f584ee68084ced52fa6b3245197a1889645d817462d307d3ac51 + - name: Check if detection needed + id: detection_guard + if: always() + env: + OUTPUT_TYPES: ${{ needs.agent.outputs.output_types }} + HAS_PATCH: ${{ needs.agent.outputs.has_patch }} + run: | + if [[ -n "$OUTPUT_TYPES" || "$HAS_PATCH" == "true" ]]; then + echo "run_detection=true" >> "$GITHUB_OUTPUT" + echo "Detection will run: output_types=$OUTPUT_TYPES, has_patch=$HAS_PATCH" + else + echo 
"run_detection=false" >> "$GITHUB_OUTPUT" + echo "Detection skipped: no agent outputs or patches to analyze" + fi + - name: Clear MCP Config for detection + if: always() && steps.detection_guard.outputs.run_detection == 'true' + run: | + rm -f "${RUNNER_TEMP}/gh-aw/mcp-config/mcp-servers.json" + rm -f /home/runner/.copilot/mcp-config.json + rm -f "$GITHUB_WORKSPACE/.gemini/settings.json" + - name: Prepare threat detection files + if: always() && steps.detection_guard.outputs.run_detection == 'true' + run: | + mkdir -p /tmp/gh-aw/threat-detection/aw-prompts + cp /tmp/gh-aw/aw-prompts/prompt.txt /tmp/gh-aw/threat-detection/aw-prompts/prompt.txt 2>/dev/null || true + cp /tmp/gh-aw/agent_output.json /tmp/gh-aw/threat-detection/agent_output.json 2>/dev/null || true + for f in /tmp/gh-aw/aw-*.patch; do + [ -f "$f" ] && cp "$f" /tmp/gh-aw/threat-detection/ 2>/dev/null || true + done + for f in /tmp/gh-aw/aw-*.bundle; do + [ -f "$f" ] && cp "$f" /tmp/gh-aw/threat-detection/ 2>/dev/null || true + done + echo "Prepared threat detection files:" + ls -la /tmp/gh-aw/threat-detection/ 2>/dev/null || true + - name: Setup threat detection + if: always() && steps.detection_guard.outputs.run_detection == 'true' + uses: actions/github-script@3a2844b7e9c422d3c10d287c895573f7108da1b3 # v9.0.0 + env: + WORKFLOW_NAME: "Docs Sync Advisory" + WORKFLOW_DESCRIPTION: "Per-PR documentation impact panel; posts a single advisory recommendation comment." 
+ HAS_PATCH: ${{ needs.agent.outputs.has_patch }} + with: + script: | + const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io, getOctokit); + const { main } = require('${{ runner.temp }}/gh-aw/actions/setup_threat_detection.cjs'); + await main(); + - name: Ensure threat-detection directory and log + if: always() && steps.detection_guard.outputs.run_detection == 'true' + run: | + mkdir -p /tmp/gh-aw/threat-detection + touch /tmp/gh-aw/threat-detection/detection.log + - name: Setup Node.js + uses: actions/setup-node@48b55a011bda9f5d6aeb4c2d9c7362e8dae4041e # v6.4.0 + with: + node-version: '24' + package-manager-cache: false + - name: Install GitHub Copilot CLI + run: bash "${RUNNER_TEMP}/gh-aw/actions/install_copilot_cli.sh" 1.0.40 + env: + GH_HOST: github.com + - name: Install AWF binary + run: bash "${RUNNER_TEMP}/gh-aw/actions/install_awf_binary.sh" v0.25.40 + - name: Execute GitHub Copilot CLI + if: always() && steps.detection_guard.outputs.run_detection == 'true' + continue-on-error: true + id: detection_agentic_execution + # Copilot CLI tool arguments (sorted): + timeout-minutes: 20 + run: | + set -o pipefail + touch /tmp/gh-aw/agent-step-summary.md + GH_AW_NODE_BIN=$(command -v node 2>/dev/null || true) + export GH_AW_NODE_BIN + (umask 177 && touch /tmp/gh-aw/threat-detection/detection.log) + printf '%s\n' 
'{"$schema":"https://github.com/github/gh-aw-firewall/releases/download/v0.25.40/awf-config.schema.json","network":{"allowDomains":["api.business.githubcopilot.com","api.enterprise.githubcopilot.com","api.github.com","api.githubcopilot.com","api.individual.githubcopilot.com","github.com","host.docker.internal","telemetry.enterprise.githubcopilot.com"]},"apiProxy":{"enabled":true},"container":{"imageTag":"0.25.40,squid=sha256:b084f4a2c771f584ee68084ced52fa6b3245197a1889645d817462d307d3ac51,agent=sha256:14ff567e8d9d4c2fbc5e55c973488381c71d7e0fdbe72d30ee7b8a738fd86504,api-proxy=sha256:2883ca3e5ae9f330cafdd9345bfd4ae17fc8da36c96d4c9a1f76e922b4c45280,cli-proxy=sha256:3e7152911d4b4b7b97beef9d3d7d924ff7902227e86001ef3838fb728d5d514c"}}' > "${RUNNER_TEMP}/gh-aw/awf-config.json" && cp "${RUNNER_TEMP}/gh-aw/awf-config.json" /tmp/gh-aw/awf-config.json + # shellcheck disable=SC1003 + sudo -E awf --config "${RUNNER_TEMP}/gh-aw/awf-config.json" --container-workdir "${GITHUB_WORKSPACE}" --mount "${RUNNER_TEMP}/gh-aw:${RUNNER_TEMP}/gh-aw:ro" --mount "${RUNNER_TEMP}/gh-aw:/host${RUNNER_TEMP}/gh-aw:ro" --env-all --exclude-env COPILOT_GITHUB_TOKEN --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --audit-dir /tmp/gh-aw/sandbox/firewall/audit --enable-host-access --allow-host-ports 80,443,8080 --skip-pull \ + -- /bin/bash -c 'export PATH="$(find /opt/hostedtoolcache /home/runner/work/_tool -maxdepth 4 -type d -name bin 2>/dev/null | tr '\''\n'\'' '\'':'\'')$PATH"; [ -n "$GOROOT" ] && export PATH="$GOROOT/bin:$PATH" || true && GH_AW_NODE_EXEC="${GH_AW_NODE_BIN:-}"; if [ -z "$GH_AW_NODE_EXEC" ] || [ ! 
-x "$GH_AW_NODE_EXEC" ]; then GH_AW_NODE_EXEC="$(command -v node 2>/dev/null || echo node)"; fi; "$GH_AW_NODE_EXEC" ${RUNNER_TEMP}/gh-aw/actions/copilot_harness.cjs /usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --disable-builtin-mcps --no-ask-user --allow-all-tools --add-dir "${GITHUB_WORKSPACE}" --prompt-file /tmp/gh-aw/aw-prompts/prompt.txt' 2>&1 | tee -a /tmp/gh-aw/threat-detection/detection.log + env: + COPILOT_AGENT_RUNNER_TYPE: STANDALONE + COPILOT_API_KEY: dummy-byok-key-for-offline-mode + COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }} + COPILOT_MODEL: ${{ vars.GH_AW_MODEL_DETECTION_COPILOT || 'claude-sonnet-4.6' }} + GH_AW_PHASE: detection + GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt + GH_AW_VERSION: v0.71.5 + GITHUB_API_URL: ${{ github.api_url }} + GITHUB_AW: true + GITHUB_COPILOT_INTEGRATION_ID: agentic-workflows + GITHUB_HEAD_REF: ${{ github.head_ref }} + GITHUB_REF_NAME: ${{ github.ref_name }} + GITHUB_SERVER_URL: ${{ github.server_url }} + GITHUB_STEP_SUMMARY: /tmp/gh-aw/agent-step-summary.md + GITHUB_WORKSPACE: ${{ github.workspace }} + GIT_AUTHOR_EMAIL: github-actions[bot]@users.noreply.github.com + GIT_AUTHOR_NAME: github-actions[bot] + GIT_COMMITTER_EMAIL: github-actions[bot]@users.noreply.github.com + GIT_COMMITTER_NAME: github-actions[bot] + XDG_CONFIG_HOME: /home/runner + - name: Upload threat detection log + if: always() && steps.detection_guard.outputs.run_detection == 'true' + uses: actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1 + with: + name: detection + path: /tmp/gh-aw/threat-detection/detection.log + if-no-files-found: ignore + - name: Parse and conclude threat detection + id: detection_conclusion + if: always() + continue-on-error: true + uses: actions/github-script@3a2844b7e9c422d3c10d287c895573f7108da1b3 # v9.0.0 + env: + RUN_DETECTION: ${{ steps.detection_guard.outputs.run_detection }} + GH_AW_DETECTION_CONTINUE_ON_ERROR: "true" + with: 
+ script: | + try { + const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io, getOctokit); + const { main } = require('${{ runner.temp }}/gh-aw/actions/parse_threat_detection_results.cjs'); + await main(); + } catch (loadErr) { + const continueOnError = process.env.GH_AW_DETECTION_CONTINUE_ON_ERROR !== 'false'; + const msg = 'ERR_SYSTEM: \u274C Unexpected error loading threat detection module: ' + (loadErr && loadErr.message ? loadErr.message : String(loadErr)); + core.error(msg); + core.setOutput('reason', 'parse_error'); + if (continueOnError) { + core.warning('\u26A0\uFE0F ' + msg); + core.setOutput('conclusion', 'warning'); + core.setOutput('success', 'false'); + } else { + core.setOutput('conclusion', 'failure'); + core.setOutput('success', 'false'); + core.setFailed(msg); + } + } + + pre_activation: + if: github.event_name == 'workflow_dispatch' || github.event.label.name == 'docs-sync' + runs-on: ubuntu-slim + outputs: + activated: ${{ steps.check_membership.outputs.is_team_member == 'true' }} + matched_command: '' + setup-trace-id: ${{ steps.setup.outputs.trace-id }} + steps: + - name: Setup Scripts + id: setup + uses: github/gh-aw-actions/setup@b8068426813005612b960b5ab0b8bd2c27142323 # v0.71.5 + with: + destination: ${{ runner.temp }}/gh-aw/actions + job-name: ${{ github.job }} + env: + GH_AW_SETUP_WORKFLOW_NAME: "Docs Sync Advisory" + GH_AW_CURRENT_WORKFLOW_REF: ${{ github.repository }}/.github/workflows/docs-sync.lock.yml@${{ github.ref }} + GH_AW_INFO_VERSION: "1.0.40" + - name: Check team membership for workflow + id: check_membership + uses: actions/github-script@3a2844b7e9c422d3c10d287c895573f7108da1b3 # v9.0.0 + env: + GH_AW_REQUIRED_ROLES: "admin,maintainer,write" + with: + github-token: ${{ secrets.GITHUB_TOKEN }} + script: | + const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io, 
getOctokit); + const { main } = require('${{ runner.temp }}/gh-aw/actions/check_membership.cjs'); + await main(); + + safe_outputs: + needs: + - activation + - agent + - detection + if: (!cancelled()) && needs.agent.result != 'skipped' && needs.detection.result == 'success' + runs-on: ubuntu-slim + permissions: + contents: read + discussions: write + issues: write + pull-requests: write + timeout-minutes: 15 + env: + GH_AW_CALLER_WORKFLOW_ID: "${{ github.repository }}/docs-sync" + GH_AW_DETECTION_CONCLUSION: ${{ needs.detection.outputs.detection_conclusion }} + GH_AW_DETECTION_REASON: ${{ needs.detection.outputs.detection_reason }} + GH_AW_EFFECTIVE_TOKENS: ${{ needs.agent.outputs.effective_tokens }} + GH_AW_ENGINE_ID: "copilot" + GH_AW_ENGINE_MODEL: ${{ needs.agent.outputs.model }} + GH_AW_ENGINE_VERSION: "1.0.40" + GH_AW_WORKFLOW_ID: "docs-sync" + GH_AW_WORKFLOW_NAME: "Docs Sync Advisory" + outputs: + code_push_failure_count: ${{ steps.process_safe_outputs.outputs.code_push_failure_count }} + code_push_failure_errors: ${{ steps.process_safe_outputs.outputs.code_push_failure_errors }} + comment_id: ${{ steps.process_safe_outputs.outputs.comment_id }} + comment_url: ${{ steps.process_safe_outputs.outputs.comment_url }} + create_discussion_error_count: ${{ steps.process_safe_outputs.outputs.create_discussion_error_count }} + create_discussion_errors: ${{ steps.process_safe_outputs.outputs.create_discussion_errors }} + process_safe_outputs_processed_count: ${{ steps.process_safe_outputs.outputs.processed_count }} + process_safe_outputs_temporary_id_map: ${{ steps.process_safe_outputs.outputs.temporary_id_map }} + steps: + - name: Setup Scripts + id: setup + uses: github/gh-aw-actions/setup@b8068426813005612b960b5ab0b8bd2c27142323 # v0.71.5 + with: + destination: ${{ runner.temp }}/gh-aw/actions + job-name: ${{ github.job }} + trace-id: ${{ needs.activation.outputs.setup-trace-id }} + env: + GH_AW_SETUP_WORKFLOW_NAME: "Docs Sync Advisory" + GH_AW_CURRENT_WORKFLOW_REF: 
${{ github.repository }}/.github/workflows/docs-sync.lock.yml@${{ github.ref }} + GH_AW_INFO_VERSION: "1.0.40" + - name: Download agent output artifact + id: download-agent-output + continue-on-error: true + uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1 + with: + name: agent + path: /tmp/gh-aw/ + - name: Setup agent output environment variable + id: setup-agent-output-env + if: steps.download-agent-output.outcome == 'success' + run: | + mkdir -p /tmp/gh-aw/ + find "/tmp/gh-aw/" -type f -print + echo "GH_AW_AGENT_OUTPUT=/tmp/gh-aw/agent_output.json" >> "$GITHUB_OUTPUT" + - name: Configure GH_HOST for enterprise compatibility + id: ghes-host-config + shell: bash + run: | + # Derive GH_HOST from GITHUB_SERVER_URL so the gh CLI targets the correct + # GitHub instance (GHES/GHEC). On github.com this is a harmless no-op. + GH_HOST="${GITHUB_SERVER_URL#https://}" + GH_HOST="${GH_HOST#http://}" + echo "GH_HOST=${GH_HOST}" >> "$GITHUB_ENV" + - name: Process Safe Outputs + id: process_safe_outputs + uses: actions/github-script@3a2844b7e9c422d3c10d287c895573f7108da1b3 # v9.0.0 + env: + GH_AW_AGENT_OUTPUT: ${{ steps.setup-agent-output-env.outputs.GH_AW_AGENT_OUTPUT }} + GH_AW_ALLOWED_DOMAINS: 
"*.githubusercontent.com,api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,codeload.github.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,docs.github.com,github-cloud.githubusercontent.com,github-cloud.s3.amazonaws.com,github.blog,github.com,github.githubassets.com,host.docker.internal,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,lfs.github.com,objects.githubusercontent.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,registry.npmjs.org,s.symcb.com,s.symcd.com,security.ubuntu.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com,www.googleapis.com" + GITHUB_SERVER_URL: ${{ github.server_url }} + GITHUB_API_URL: ${{ github.api_url }} + GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: "{\"add_comment\":{\"max\":2},\"create_report_incomplete_issue\":{},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"true\"},\"remove_labels\":{\"allowed\":[\"docs-sync\"],\"max\":1},\"report_incomplete\":{}}" + with: + github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }} + script: | + const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs'); + setupGlobals(core, github, context, exec, io, getOctokit); + const { main } = require('${{ runner.temp }}/gh-aw/actions/safe_output_handler_manager.cjs'); + await main(); + - name: Upload Safe Outputs Items + if: always() + uses: actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1 + with: + name: safe-outputs-items + path: | + 
/tmp/gh-aw/safe-output-items.jsonl + /tmp/gh-aw/temporary-id-map.json + if-no-files-found: ignore + diff --git a/.github/workflows/docs-sync.md b/.github/workflows/docs-sync.md new file mode 100644 index 000000000..80915696e --- /dev/null +++ b/.github/workflows/docs-sync.md @@ -0,0 +1,170 @@ +--- +name: Docs Sync Advisory +description: Per-PR documentation impact panel; posts a single advisory recommendation comment. + +# Triggers (cost-gated, fork-safe, GHES-compatible): +# +# 1. pull_request_target: fires when a label is applied. We use _target +# (NOT plain pull_request) so that fork PRs run in the BASE repo +# context with full secrets (COPILOT_GITHUB_TOKEN etc.). gh-aw does +# not expose `names:` on `pull_request_target` in v0.68.x (the +# first-class `on.labels` filter landed post-v0.71.1 and is not yet +# released, see github/gh-aw ADR-28737). To filter by label name +# without producing a red-X failed CI check on every unrelated label +# change, we use the top-level frontmatter `if:` field below. Same +# pattern as `pr-review-panel.md`. +# +# Why pull_request_target is safe here despite the well-known +# "pwn-request" pattern: +# - permissions are read-only (no write to contents / actions) +# - we never `actions/checkout` the PR head; only `gh pr view` / +# `gh pr diff` which return inert text +# - imports are pinned to microsoft/apm#main (docs-sync skill + +# sibling skills + persona definitions are trusted, not from +# the PR head) +# - write surfaces are tightly scoped: +# add-comment max 2 (one advisory + one safety overflow) +# remove-labels allowed [docs-sync] max 1 +# (clear the trigger label so re-applying it re-runs the +# skill idempotently) +# - companion-PR creation (Step 7 of the skill) requires a +# SECOND label `docs-sync-confirm` -- A9 SUPERVISED EXECUTION +# boundary. The agent suggests; the maintainer ratifies. 
+# - `roles: [admin, maintainer, write]` ensures only repo +# maintainers can trigger -- matches the trust model that +# applying the `docs-sync` label requires write access. +# +# `synchronize` is intentionally NOT subscribed at rung 1. +# Re-apply the label (remove + add) to re-run after addressing +# findings. Rung 2 (default-on) will subscribe to synchronize; +# not yet enabled, see .apm/skills/docs-sync/evals/README.md +# ship gates. +# +# 2. workflow_dispatch: manual fallback. Reads YAML from the dispatched +# ref (default main) and accepts any PR number. Useful if a +# maintainer needs to re-run without touching labels. +on: + pull_request_target: + types: [labeled] + workflow_dispatch: + inputs: + pr_number: + description: "Pull request number to assess (works for fork PRs)" + required: true + type: string + roles: [admin, maintainer, write] + +# Label-name gate: skip (not fail) when the triggering label isn't +# `docs-sync`. gh-aw injects this `if:` into both pre_activation and +# activation jobs, producing a gray Skipped status for unrelated label +# changes instead of a red Failed check. workflow_dispatch is always +# allowed through. +if: ${{ github.event_name == 'workflow_dispatch' || github.event.label.name == 'docs-sync' }} + +# Defense-in-depth against the "pwn request" attack pattern: never check out +# the PR head. We only consume the diff via `gh pr view` / `gh pr diff` which +# return inert text. Combined with read-only permissions below, this makes +# arbitrary code in the PR head physically unreachable from the workflow. +checkout: false + +# Agent job runs READ-ONLY. Safe-output jobs are auto-granted scoped write. +permissions: + contents: read + pull-requests: read + issues: read + +# Pull docs-sync skill + sibling skills + persona agents from +# microsoft/apm@main. +# Why main and not ${{ github.sha }}: a malicious PR could otherwise modify +# the skill or persona definitions and trick its own review. 
Pinning to +# main means the assessment always runs against the trusted, already- +# reviewed skill -- changes to .apm/ only take effect after they +# themselves have been reviewed and merged. +imports: + - uses: shared/apm.md + with: + target: copilot + packages: + - microsoft/apm#main + +tools: + github: + toolsets: [default] + bash: true + +network: + allowed: + - defaults + - github + +safe-outputs: + # Single advisory comment per run. max:2 is a fail-soft ceiling; the + # one-comment discipline lives inside the docs-sync skill (idempotent + # edit-in-place using the stable `## Docs sync advisory` header). + add-comment: + max: 2 + # Label cleanup. The orchestrator removes `docs-sync` so re-applying + # the label re-runs the skill idempotently. `docs-sync-confirm` is + # NOT swept -- it is the maintainer's ratification signal for the + # optional companion PR (see Step 7 of the skill). + remove-labels: + allowed: [docs-sync] + max: 1 + +timeout-minutes: 30 +--- + +# Docs Sync Advisory + +You are orchestrating the **docs-sync** skill against pull request +**#${{ github.event.pull_request.number || inputs.pr_number }}** in `${{ github.repository }}`. + +> The label-name guard runs at the workflow level via the top-level +> frontmatter `if:` field (skips both `pre_activation` and `activation` +> for unrelated labels). If you are reading this prompt, the triggering +> label is `docs-sync` or this is a manual `workflow_dispatch` -- +> proceed. + +## Step 1: Gather PR context (read-only) + +Use the `gh` CLI -- never `git checkout` the PR head. We are running in the +base repo context with read-only permissions; the PR diff is the only +untrusted input we touch, and `gh` returns it as inert text.
+ +```bash +PR=${{ github.event.pull_request.number || inputs.pr_number }} +gh pr view "$PR" --json title,body,author,additions,deletions,changedFiles,files,labels +gh pr diff "$PR" +gh pr diff "$PR" --name-only +``` + +Also check for the `docs-sync-confirm` label on this PR -- it gates +the optional companion-PR step (Step 7 of the skill). + +```bash +gh pr view "$PR" --json labels --jq '.labels[].name' | grep -q docs-sync-confirm && echo "CONFIRM_PRESENT=true" || echo "CONFIRM_PRESENT=false" +``` + +## Step 2: Run the docs-sync skill + +Load the **docs-sync** skill and follow its execution checklist +(Steps 1-7) and output contract exactly. The skill owns: + +- Classifier dispatch (the cost gate) +- Localizer or architect dispatch on in_place / structural verdicts +- Per-page fan-out panel (doc-writer + python-architect verifier) +- Editorial-owner and growth-hacker single passes +- CDO synthesis with bounded ALIGNMENT LOOP (N <= 3 redrafts) +- Cost ceiling enforcement (15 LLM calls max per run) +- Single-comment emission via `safe-outputs.add-comment` (NOT the + GitHub API directly) +- Label sweep via `safe-outputs.remove-labels` (drops `docs-sync` so + re-applying it re-runs the skill) +- Companion-PR creation IF AND ONLY IF the `docs-sync-confirm` + label is present (the A9 SUPERVISED EXECUTION boundary) + +The skill body is at `.apm/skills/docs-sync/SKILL.md` (resolved +from the import above). + +The advisory comment uses the stable header `## Docs sync advisory` +for idempotent edit-in-place on re-runs. 
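The one-comment discipline the workflow above delegates to the skill (find the comment bearing the stable `## Docs sync advisory` header and overwrite it, creating a new comment only on the first run) can be sketched with a local directory standing in for the PR comment thread. The `post_advisory` helper and the file-per-comment layout are hypothetical illustrations of the idempotency pattern, not the skill's actual implementation:

```shell
# Stand-in for the PR comment thread: one markdown file per comment.
HEADER='## Docs sync advisory'
THREAD_DIR=$(mktemp -d)

post_advisory() {
  local body="$1" existing
  # Find an existing advisory comment by its stable header line.
  existing=$(grep -rl "^${HEADER}$" "$THREAD_DIR" 2>/dev/null | head -n1)
  if [ -n "$existing" ]; then
    # Edit in place: overwrite the same comment on re-runs.
    printf '%s\n\n%s\n' "$HEADER" "$body" > "$existing"
  else
    # First run: create the single advisory comment.
    printf '%s\n\n%s\n' "$HEADER" "$body" > "$THREAD_DIR/comment-$RANDOM-$RANDOM.md"
  fi
}

post_advisory "First run: 2 pages need updates."
post_advisory "Re-run: 1 page still needs updates."

# Two runs, but exactly one advisory comment survives.
ls "$THREAD_DIR" | wc -l
```

In the real workflow the lookup and edit go through `safe-outputs.add-comment` rather than direct filesystem or API writes; only the stable-header matching carries over.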
diff --git a/.gitignore b/.gitignore index dc9710329..5026da659 100644 --- a/.gitignore +++ b/.gitignore @@ -76,3 +76,4 @@ scout-pipeline-result.png .cursor/mcp.json .playwright-mcp/ server.pid +.docs-rewrite-plan/ diff --git a/apm.lock.yaml b/apm.lock.yaml index 1c7666e82..84da61a45 100644 --- a/apm.lock.yaml +++ b/apm.lock.yaml @@ -8,6 +8,10 @@ local_deployed_files: - .agents/skills/auth - .agents/skills/cli-logging-ux - .agents/skills/devx-ux +- .agents/skills/docs-impact-architect +- .agents/skills/docs-impact-classifier +- .agents/skills/docs-impact-localizer +- .agents/skills/docs-sync - .agents/skills/oss-growth - .agents/skills/pr-description-skill - .agents/skills/python-architecture @@ -16,10 +20,12 @@ local_deployed_files: - .github/agents/apm-ceo.agent.md - .github/agents/apm-primitives-architect.agent.md - .github/agents/auth-expert.agent.md +- .github/agents/cdo.agent.md - .github/agents/cli-logging-expert.agent.md - .github/agents/devx-ux-expert.agent.md - .github/agents/doc-analyser.agent.md - .github/agents/doc-writer.agent.md +- .github/agents/editorial-owner.agent.md - .github/agents/oss-growth-hacker.agent.md - .github/agents/python-architect.agent.md - .github/agents/supply-chain-security-expert.agent.md @@ -38,10 +44,12 @@ local_deployed_file_hashes: .github/agents/apm-ceo.agent.md: sha256:484da64428ea46a6183dffd3f30c9fc5fc5c747639c0c79e55be69dba0899323 .github/agents/apm-primitives-architect.agent.md: sha256:6c01eab74ba18d70f21d45010d636cc6535d63cee81da12e61898d8036e0b028 .github/agents/auth-expert.agent.md: sha256:18264a933cba432b77d133e6ae11eee294c92ed245629af8c9b7a5bb7a9a300c + .github/agents/cdo.agent.md: sha256:71e7684942679f86199b6720fc69d52ef796a0ec28981250b9ad275a1ed41d31 .github/agents/cli-logging-expert.agent.md: sha256:3ed7fe1a2e28e03a40311d4999ef54330908920d6515205708dd3f037abfcf0f .github/agents/devx-ux-expert.agent.md: sha256:8310d130cca5bc548baf4a2a84e3c9680c9dc5d83a2718150636896ab2aa1f30 .github/agents/doc-analyser.agent.md: 
sha256:47b1d0204904b786c19d4fe84343e86cdab6f92f862f676ba741ffe6e1385679 .github/agents/doc-writer.agent.md: sha256:328a5b9ea079869b8ccd914a6e2135c204225a5eedb42f59a1ec73058f7f0b47 + .github/agents/editorial-owner.agent.md: sha256:9dd101a9476dd93b67da1b823cc3b649f1227168fd809b108c74f9304262d860 .github/agents/oss-growth-hacker.agent.md: sha256:1cd56bb78ab37d52c50e45ab69d759f775cd49cdf35981b3dc6c4004315c6b83 .github/agents/python-architect.agent.md: sha256:7587ee7c684c61046a83dfa1b7e39d1345f2f119c3395478e3ca2dbbaaaff0e9 .github/agents/supply-chain-security-expert.agent.md: sha256:8fb8cc426d6af17ba084a28b3f026c2b475b62e3ca63ed2f88b83bd823f877af diff --git a/docs/astro.config.mjs b/docs/astro.config.mjs index 3a8229bd8..6966d22c3 100644 --- a/docs/astro.config.mjs +++ b/docs/astro.config.mjs @@ -1,6 +1,7 @@ // @ts-check import { defineConfig } from 'astro/config'; import starlight from '@astrojs/starlight'; +import sitemap from '@astrojs/sitemap'; import starlightLlmsTxt from 'starlight-llms-txt'; import starlightLinksValidator from 'starlight-links-validator'; import mermaid from 'astro-mermaid'; @@ -9,16 +10,71 @@ import mermaid from 'astro-mermaid'; export default defineConfig({ site: 'https://microsoft.github.io', base: '/apm/', + trailingSlash: 'always', + prefetch: { + prefetchAll: true, + defaultStrategy: 'viewport', + }, redirects: { + // Legacy enterprise slugs '/enterprise/teams': '/enterprise/making-the-case', '/enterprise/governance': '/enterprise/governance-guide', + // Legacy intro section -> concepts + '/introduction/what-is-apm': '/concepts/what-is-apm', + '/introduction/why-apm': '/concepts/the-three-promises', + '/introduction/how-it-works': '/concepts/lifecycle', + '/introduction/key-concepts': '/concepts/glossary', + '/introduction/anatomy-of-an-apm-package': '/concepts/package-anatomy', + // Legacy getting-started -> persona ramps + '/getting-started/quick-start': '/quickstart', + '/getting-started/installation': '/quickstart', + 
'/getting-started/authentication': '/consumer/authentication', + '/getting-started/migration': '/troubleshooting/migration', + // Legacy guides -> consumer/producer ramps + '/guides/dependencies': '/consumer/manage-dependencies', + '/guides/skills': '/producer/author-primitives/skills', + '/guides/prompts': '/producer/author-primitives/prompts', + '/guides/agent-workflows': '/producer/author-primitives/instructions-and-agents', + '/guides/compilation': '/producer/compile', + '/guides/dev-only-primitives': '/producer/author-primitives', + '/guides/package-relative-links': '/producer/package-relative-links', + '/guides/marketplaces': '/consumer/private-and-org-packages', + '/guides/marketplace-authoring': '/producer/publish-to-a-marketplace', + '/guides/plugins': '/producer/author-primitives', + '/guides/mcp-servers': '/consumer/install-mcp-servers', + '/guides/pack-distribute': '/producer', + '/guides/private-packages': '/consumer/private-and-org-packages', + '/guides/org-packages': '/consumer/private-and-org-packages', + '/guides/ci-policy-setup': '/enterprise/enforce-in-ci', + '/guides/drift-detection': '/enterprise/drift-detection', + // Legacy reference monolith -> per-command + '/reference/cli-commands': '/reference/cli/install', }, integrations: [ + sitemap(), mermaid(), starlight({ title: 'Agent Package Manager', - description: 'An open-source, community-driven dependency manager for AI agents. Declare skills, prompts, instructions, and tools in apm.yml — install with one command.', + description: 'An open-source dependency manager for AI agents. 
Declare skills, prompts, instructions, and tools in apm.yml -- install with one command.', favicon: '/favicon.svg', + editLink: { + baseUrl: 'https://github.com/microsoft/apm/edit/main/docs/', + }, + lastUpdated: true, + head: [ + { + tag: 'meta', + attrs: { name: 'theme-color', content: '#1d4ed8' }, + }, + { + tag: 'meta', + attrs: { property: 'og:type', content: 'website' }, + }, + { + tag: 'meta', + attrs: { name: 'twitter:card', content: 'summary_large_image' }, + }, + ], social: [ { icon: 'github', label: 'GitHub', href: 'https://github.com/microsoft/apm' }, ], @@ -29,6 +85,16 @@ export default defineConfig({ pagination: true, customCss: ['./src/styles/custom.css'], expressiveCode: { + themes: ['github-dark', 'github-light'], + styleOverrides: { + borderRadius: '0.5rem', + borderWidth: '1px', + codeFontSize: '0.875rem', + codeLineHeight: '1.5', + frames: { + shadowColor: 'transparent', + }, + }, frames: { showCopyToClipboardButton: true, }, @@ -40,80 +106,155 @@ export default defineConfig({ }), starlightLlmsTxt({ description: 'APM (Agent Package Manager) is an open-source dependency manager for AI agents. 
It lets you declare skills, prompts, instructions, agents, hooks, plugins, and MCP servers in a single apm.yml manifest, resolving transitive dependencies automatically.', + exclude: ['contributing/**'], + customSets: [ + { + label: 'Consumer ramp', + description: 'How to install and use APM packages in your project.', + paths: ['quickstart', 'consumer/**'], + }, + { + label: 'Producer ramp', + description: 'How to author, validate, and publish APM packages.', + paths: ['producer/**', 'getting-started/first-package'], + }, + { + label: 'Enterprise ramp', + description: 'Policy, audit, and CI gating for platform teams.', + paths: ['enterprise/**'], + }, + { + label: 'CLI reference', + description: 'Per-command reference for the apm CLI.', + paths: ['reference/cli/**'], + }, + ], }), ], sidebar: [ { - label: 'Understanding APM', + label: 'Start here', items: [ - { label: 'What is APM?', slug: 'introduction/what-is-apm' }, - { label: 'Why APM?', slug: 'introduction/why-apm' }, - { label: 'How It Works', slug: 'introduction/how-it-works' }, - { label: 'Key Concepts', slug: 'introduction/key-concepts' }, - { label: 'Anatomy of an APM Package', slug: 'introduction/anatomy-of-an-apm-package' }, + { label: 'Quickstart', slug: 'quickstart' }, + { label: 'Your first package', slug: 'getting-started/first-package' }, ], }, { - label: 'Getting Started', + label: 'Use a package (Consumer)', items: [ - { label: 'Installation', slug: 'getting-started/installation' }, - { label: 'Quick Start', slug: 'getting-started/quick-start' }, - { label: 'Your First Package', slug: 'getting-started/first-package' }, - { label: 'Authentication', slug: 'getting-started/authentication' }, - { label: 'Existing Projects', slug: 'getting-started/migration' }, + { label: 'Overview', slug: 'consumer' }, + { label: 'Install packages', slug: 'consumer/install-packages' }, + { label: 'Manage dependencies', slug: 'consumer/manage-dependencies' }, + { label: 'Run scripts', slug: 'consumer/run-scripts' }, 
+ { label: 'Update and refresh', slug: 'consumer/update-and-refresh' }, + { label: 'Install MCP servers', slug: 'consumer/install-mcp-servers' }, + { label: 'Authentication', slug: 'consumer/authentication' }, + { label: 'Private and org packages', slug: 'consumer/private-and-org-packages' }, + { label: 'Deploy a local bundle', slug: 'consumer/deploy-a-bundle' }, + { label: 'Drift and secure-by-default', slug: 'consumer/drift-and-secure-by-default' }, + { label: 'Governance on the consumer ramp', slug: 'consumer/governance-on-the-consumer-ramp' }, ], }, { - label: 'Guides', + label: 'Author a package (Producer)', items: [ - { label: 'Compilation & Optimization', slug: 'guides/compilation' }, - { label: 'Skills', slug: 'guides/skills' }, - { label: 'Prompts', slug: 'guides/prompts' }, - { label: 'Plugins', slug: 'guides/plugins' }, - { label: 'MCP Servers', slug: 'guides/mcp-servers' }, - { label: 'Dependencies & Lockfile', slug: 'guides/dependencies' }, - { label: 'Pack & Distribute', slug: 'guides/pack-distribute' }, - { label: 'Private Packages', slug: 'guides/private-packages' }, - { label: 'Org-Wide Packages', slug: 'guides/org-packages' }, - { label: 'Marketplaces', slug: 'guides/marketplaces' }, - { label: 'Marketplace Authoring', slug: 'guides/marketplace-authoring' }, - { label: 'CI Policy Enforcement', slug: 'guides/ci-policy-setup' }, - { label: 'Agent Workflows (Experimental)', slug: 'guides/agent-workflows' }, + { label: 'Overview', slug: 'producer' }, + { + label: 'Author primitives', + items: [ + { label: 'Overview', slug: 'producer/author-primitives' }, + { label: 'Skills', slug: 'producer/author-primitives/skills' }, + { label: 'Prompts', slug: 'producer/author-primitives/prompts' }, + { label: 'Instructions and agents', slug: 'producer/author-primitives/instructions-and-agents' }, + { label: 'Hooks and commands', slug: 'producer/author-primitives/hooks-and-commands' }, + { label: 'MCP as a primitive', slug: 
'producer/author-primitives/mcp-as-primitive' }, + ], + }, + { label: 'Compile your package', slug: 'producer/compile' }, + { label: 'Preview and validate', slug: 'producer/preview-and-validate' }, + { label: 'Pack a bundle', slug: 'producer/pack-a-bundle' }, + { label: 'Publish to a marketplace', slug: 'producer/publish-to-a-marketplace' }, + { label: 'Package-relative links', slug: 'producer/package-relative-links' }, ], }, { - label: 'Troubleshooting', + label: 'Govern at scale (Enterprise)', items: [ - { label: 'SSL / TLS issues', slug: 'troubleshooting/ssl-issues' }, + { label: 'Overview', slug: 'enterprise' }, + { label: 'Making the case', slug: 'enterprise/making-the-case' }, + { label: 'Adoption playbook', slug: 'enterprise/adoption-playbook' }, + { label: 'Governance overview', slug: 'enterprise/governance-overview' }, + { label: 'Governance guide', slug: 'enterprise/governance-guide' }, + { label: 'Policy: getting started', slug: 'enterprise/apm-policy-getting-started' }, + { label: 'Policy pilot', slug: 'enterprise/policy-pilot' }, + { label: 'Policy files', slug: 'enterprise/apm-policy' }, + { label: 'Policy reference', slug: 'enterprise/policy-reference' }, + { label: 'Enforce in CI', slug: 'enterprise/enforce-in-ci' }, + { label: 'Security model', slug: 'enterprise/security' }, + { label: 'Security and supply chain', slug: 'enterprise/security-and-supply-chain' }, + { label: 'Drift detection', slug: 'enterprise/drift-detection' }, + { label: 'Registry proxy and air-gapped', slug: 'enterprise/registry-proxy' }, + { label: 'GitHub rulesets', slug: 'enterprise/github-rulesets' }, ], }, { - label: 'Enterprise', + label: 'Integrations', items: [ - { label: 'Enterprise', slug: 'enterprise' }, - { label: 'Making the Case', slug: 'enterprise/making-the-case' }, - { label: 'Adoption Playbook', slug: 'enterprise/adoption-playbook' }, - { label: 'Security Model', slug: 'enterprise/security' }, - { label: 'Governance', slug: 'enterprise/governance-guide' }, - { 
label: 'Registry Proxy & Air-gapped', slug: 'enterprise/registry-proxy' }, - { label: 'Policy Files', slug: 'enterprise/apm-policy' }, - { label: 'Policy Reference', slug: 'enterprise/policy-reference' }, + { label: 'IDE and tool integration', slug: 'integrations/ide-tool-integration' }, + { label: 'CI/CD pipelines', slug: 'integrations/ci-cd' }, + { label: 'GitHub Agentic Workflows', slug: 'integrations/gh-aw' }, + { label: 'Microsoft 365 Copilot Cowork (Experimental)', slug: 'integrations/copilot-cowork' }, + { label: 'AI runtime compatibility', slug: 'integrations/runtime-compatibility' }, + { label: 'GitHub rulesets', slug: 'integrations/github-rulesets' }, ], }, { - label: 'Integrations', + label: 'CLI reference', items: [ - { label: 'CI/CD Pipelines', slug: 'integrations/ci-cd' }, - { label: 'GitHub Agentic Workflows', slug: 'integrations/gh-aw' }, - { label: 'IDE & Tool Integration', slug: 'integrations/ide-tool-integration' }, - { label: 'Microsoft 365 Copilot Cowork (Experimental)', slug: 'integrations/copilot-cowork' }, - { label: 'AI Runtime Compatibility', slug: 'integrations/runtime-compatibility' }, - { label: 'GitHub Rulesets', slug: 'integrations/github-rulesets' }, + { label: 'Overview', slug: 'reference' }, + { + label: 'Commands', + autogenerate: { directory: 'reference/cli' }, + }, + ], + }, + { + label: 'Schemas and specs', + items: [ + { label: 'Manifest schema', slug: 'reference/manifest-schema' }, + { label: 'Lockfile spec', slug: 'reference/lockfile-spec' }, + { label: 'Policy schema', slug: 'reference/policy-schema' }, + { label: 'Targets matrix', slug: 'reference/targets-matrix' }, + { label: 'Primitive types', slug: 'reference/primitive-types' }, + { label: 'Package types', slug: 'reference/package-types' }, + { label: 'Baseline checks', slug: 'reference/baseline-checks' }, + { label: 'Environment variables', slug: 'reference/environment-variables' }, + { label: 'Examples', slug: 'reference/examples' }, + { label: 'Experimental', slug: 
'reference/experimental' }, ], }, { - label: 'Reference', - autogenerate: { directory: 'reference' }, + label: 'Concepts', + items: [ + { label: 'What is APM?', slug: 'concepts/what-is-apm' }, + { label: 'The three promises', slug: 'concepts/the-three-promises' }, + { label: 'Lifecycle', slug: 'concepts/lifecycle' }, + { label: 'Primitives and targets', slug: 'concepts/primitives-and-targets' }, + { label: 'Package anatomy', slug: 'concepts/package-anatomy' }, + { label: 'Glossary', slug: 'concepts/glossary' }, + ], + }, + { + label: 'Troubleshooting', + items: [ + { label: 'Overview', slug: 'troubleshooting' }, + { label: 'Common errors', slug: 'troubleshooting/common-errors' }, + { label: 'Install failures', slug: 'troubleshooting/install-failures' }, + { label: 'Compile produced no output', slug: 'troubleshooting/compile-zero-output-warning' }, + { label: 'Policy debugging', slug: 'troubleshooting/policy-debugging' }, + { label: 'SSL / TLS issues', slug: 'troubleshooting/ssl-issues' }, + { label: 'Migration paths', slug: 'troubleshooting/migration' }, + ], }, { label: 'Contributing', diff --git a/docs/package-lock.json b/docs/package-lock.json index 364f65348..3f13ff402 100644 --- a/docs/package-lock.json +++ b/docs/package-lock.json @@ -8,6 +8,7 @@ "name": "apm-docs", "version": "0.0.1", "dependencies": { + "@astrojs/sitemap": "^3.7.2", "@astrojs/starlight": "0.38.4", "astro": "6.2.1", "astro-mermaid": "^2.0.1", @@ -254,45 +255,10 @@ "node": ">=18" } }, - "node_modules/@chevrotain/cst-dts-gen": { - "version": "12.0.0", - "resolved": "https://registry.npmjs.org/@chevrotain/cst-dts-gen/-/cst-dts-gen-12.0.0.tgz", - "integrity": "sha512-fSL4KXjTl7cDgf0B5Rip9Q05BOrYvkJV/RrBTE/bKDN096E4hN/ySpcBK5B24T76dlQ2i32Zc3PAE27jFnFrKg==", - "license": "Apache-2.0", - "peer": true, - "dependencies": { - "@chevrotain/gast": "12.0.0", - "@chevrotain/types": "12.0.0" - } - }, - "node_modules/@chevrotain/gast": { - "version": "12.0.0", - "resolved": 
"https://registry.npmjs.org/@chevrotain/gast/-/gast-12.0.0.tgz", - "integrity": "sha512-1ne/m3XsIT8aEdrvT33so0GUC+wkctpUPK6zU9IlOyJLUbR0rg4G7ZiApiJbggpgPir9ERy3FRjT6T7lpgetnQ==", - "license": "Apache-2.0", - "peer": true, - "dependencies": { - "@chevrotain/types": "12.0.0" - } - }, - "node_modules/@chevrotain/regexp-to-ast": { - "version": "12.0.0", - "resolved": "https://registry.npmjs.org/@chevrotain/regexp-to-ast/-/regexp-to-ast-12.0.0.tgz", - "integrity": "sha512-p+EW9MaJwgaHguhoqwOtx/FwuGr+DnNn857sXWOi/mClXIkPGl3rn7hGNWvo31HA3vyeQxjqe+H36yZJwYU8cA==", - "license": "Apache-2.0", - "peer": true - }, "node_modules/@chevrotain/types": { - "version": "12.0.0", - "resolved": "https://registry.npmjs.org/@chevrotain/types/-/types-12.0.0.tgz", - "integrity": "sha512-S+04vjFQKeuYw0/eW3U52LkAHQsB1ASxsPGsLPUyQgrZ2iNNibQrsidruDzjEX2JYfespXMG0eZmXlhA6z7nWA==", - "license": "Apache-2.0", - "peer": true - }, - "node_modules/@chevrotain/utils": { - "version": "12.0.0", - "resolved": "https://registry.npmjs.org/@chevrotain/utils/-/utils-12.0.0.tgz", - "integrity": "sha512-lB59uJoaGIfOOL9knQqQRfhl9g7x8/wqFkp13zTdkRu1huG9kg6IJs1O8hqj9rs6h7orGxHJUKb+mX3rPbWGhA==", + "version": "11.1.2", + "resolved": "https://registry.npmjs.org/@chevrotain/types/-/types-11.1.2.tgz", + "integrity": "sha512-U+HFai5+zmJCkK86QsaJtoITlboZHBqrVketcO2ROv865xfCMSFpELQoz1GkX5GzME8pTa+3kbKrZHQtI0gdbw==", "license": "Apache-2.0", "peer": true }, @@ -889,15 +855,15 @@ "peer": true }, "node_modules/@iconify/utils": { - "version": "3.1.1", - "resolved": "https://registry.npmjs.org/@iconify/utils/-/utils-3.1.1.tgz", - "integrity": "sha512-MwzoDtw9rO1x+qfgLTV/IVXsHDBqeYZoMIQC8SfxfYSlaSUG+oWiAcoiB1yajAda6mqblm4/1/w2E8tRu7a7Tw==", + "version": "3.1.3", + "resolved": "https://registry.npmjs.org/@iconify/utils/-/utils-3.1.3.tgz", + "integrity": "sha512-LPKOXPn/zV+zis1oOfGWogaXVpqUybF3ZS6SCZIsz8vg0ivVp9+fVqyYB7xq0aiST/VhUQYGO1qo6uoYSiEJqw==", "license": "MIT", "peer": true, "dependencies": { "@antfu/install-pkg": 
"^1.1.0", "@iconify/types": "^2.0.0", - "mlly": "^1.8.2" + "import-meta-resolve": "^4.2.0" } }, "node_modules/@img/colour": { @@ -1457,13 +1423,13 @@ } }, "node_modules/@mermaid-js/parser": { - "version": "1.1.0", - "resolved": "https://registry.npmjs.org/@mermaid-js/parser/-/parser-1.1.0.tgz", - "integrity": "sha512-gxK9ZX2+Fex5zu8LhRQoMeMPEHbc73UKZ0FQ54YrQtUxE1VVhMwzeNtKRPAu5aXks4FasbMe4xB4bWrmq6Jlxw==", + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/@mermaid-js/parser/-/parser-1.1.1.tgz", + "integrity": "sha512-VuHdsYMK1bT6X2JbcAaWAhugTRvRBRyuZgd+c22swUeI9g/ntaxF7CY7dYarhZovofCbUNO0G7JesfmNtjYOCw==", "license": "MIT", "peer": true, "dependencies": { - "langium": "^4.0.0" + "@chevrotain/types": "~11.1.1" } }, "node_modules/@oslojs/encoding": { @@ -1598,9 +1564,9 @@ "license": "MIT" }, "node_modules/@rollup/rollup-android-arm-eabi": { - "version": "4.60.2", - "resolved": "https://registry.npmjs.org/@rollup/rollup-android-arm-eabi/-/rollup-android-arm-eabi-4.60.2.tgz", - "integrity": "sha512-dnlp69efPPg6Uaw2dVqzWRfAWRnYVb1XJ8CyyhIbZeaq4CA5/mLeZ1IEt9QqQxmbdvagjLIm2ZL8BxXv5lH4Yw==", + "version": "4.60.3", + "resolved": "https://registry.npmjs.org/@rollup/rollup-android-arm-eabi/-/rollup-android-arm-eabi-4.60.3.tgz", + "integrity": "sha512-x35CNW/ANXG3hE/EZpRU8MXX1JDN86hBb2wMGAtltkz7pc6cxgjpy1OMMfDosOQ+2hWqIkag/fGok1Yady9nGw==", "cpu": [ "arm" ], @@ -1611,9 +1577,9 @@ ] }, "node_modules/@rollup/rollup-android-arm64": { - "version": "4.60.2", - "resolved": "https://registry.npmjs.org/@rollup/rollup-android-arm64/-/rollup-android-arm64-4.60.2.tgz", - "integrity": "sha512-OqZTwDRDchGRHHm/hwLOL7uVPB9aUvI0am/eQuWMNyFHf5PSEQmyEeYYheA0EPPKUO/l0uigCp+iaTjoLjVoHg==", + "version": "4.60.3", + "resolved": "https://registry.npmjs.org/@rollup/rollup-android-arm64/-/rollup-android-arm64-4.60.3.tgz", + "integrity": "sha512-xw3xtkDApIOGayehp2+Rz4zimfkaX65r4t47iy+ymQB2G4iJCBBfj0ogVg5jpvjpn8UWn/+q9tprxleYeNp3Hw==", "cpu": [ "arm64" ], @@ -1624,9 +1590,9 @@ ] }, 
"node_modules/@rollup/rollup-darwin-arm64": { - "version": "4.60.2", - "resolved": "https://registry.npmjs.org/@rollup/rollup-darwin-arm64/-/rollup-darwin-arm64-4.60.2.tgz", - "integrity": "sha512-UwRE7CGpvSVEQS8gUMBe1uADWjNnVgP3Iusyda1nSRwNDCsRjnGc7w6El6WLQsXmZTbLZx9cecegumcitNfpmA==", + "version": "4.60.3", + "resolved": "https://registry.npmjs.org/@rollup/rollup-darwin-arm64/-/rollup-darwin-arm64-4.60.3.tgz", + "integrity": "sha512-vo6Y5Qfpx7/5EaamIwi0WqW2+zfiusVihKatLvtN1VFVy3D13uERk/6gZLU1UiHRL6fDXqj/ELIeVRGnvcTE1g==", "cpu": [ "arm64" ], @@ -1637,9 +1603,9 @@ ] }, "node_modules/@rollup/rollup-darwin-x64": { - "version": "4.60.2", - "resolved": "https://registry.npmjs.org/@rollup/rollup-darwin-x64/-/rollup-darwin-x64-4.60.2.tgz", - "integrity": "sha512-gjEtURKLCC5VXm1I+2i1u9OhxFsKAQJKTVB8WvDAHF+oZlq0GTVFOlTlO1q3AlCTE/DF32c16ESvfgqR7343/g==", + "version": "4.60.3", + "resolved": "https://registry.npmjs.org/@rollup/rollup-darwin-x64/-/rollup-darwin-x64-4.60.3.tgz", + "integrity": "sha512-D+0QGcZhBzTN82weOnsSlY7V7+RMmPuF1CkbxyMAGE8+ZHeUjyb76ZiWmBlCu//AQQONvxcqRbwZTajZKqjuOw==", "cpu": [ "x64" ], @@ -1650,9 +1616,9 @@ ] }, "node_modules/@rollup/rollup-freebsd-arm64": { - "version": "4.60.2", - "resolved": "https://registry.npmjs.org/@rollup/rollup-freebsd-arm64/-/rollup-freebsd-arm64-4.60.2.tgz", - "integrity": "sha512-Bcl6CYDeAgE70cqZaMojOi/eK63h5Me97ZqAQoh77VPjMysA/4ORQBRGo3rRy45x4MzVlU9uZxs8Uwy7ZaKnBw==", + "version": "4.60.3", + "resolved": "https://registry.npmjs.org/@rollup/rollup-freebsd-arm64/-/rollup-freebsd-arm64-4.60.3.tgz", + "integrity": "sha512-6HnvHCT7fDyj6R0Ph7A6x8dQS/S38MClRWeDLqc0MdfWkxjiu1HSDYrdPhqSILzjTIC/pnXbbJbo+ft+gy/9hQ==", "cpu": [ "arm64" ], @@ -1663,9 +1629,9 @@ ] }, "node_modules/@rollup/rollup-freebsd-x64": { - "version": "4.60.2", - "resolved": "https://registry.npmjs.org/@rollup/rollup-freebsd-x64/-/rollup-freebsd-x64-4.60.2.tgz", - "integrity": 
"sha512-LU+TPda3mAE2QB0/Hp5VyeKJivpC6+tlOXd1VMoXV/YFMvk/MNk5iXeBfB4MQGRWyOYVJ01625vjkr0Az98OJQ==", + "version": "4.60.3", + "resolved": "https://registry.npmjs.org/@rollup/rollup-freebsd-x64/-/rollup-freebsd-x64-4.60.3.tgz", + "integrity": "sha512-KHLgC3WKlUYW3ShFKnnosZDOJ0xjg9zp7au3sIm2bs/tGBeC2ipmvRh/N7JKi0t9Ue20C0dpEshi8WUubg+cnA==", "cpu": [ "x64" ], @@ -1676,9 +1642,9 @@ ] }, "node_modules/@rollup/rollup-linux-arm-gnueabihf": { - "version": "4.60.2", - "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-arm-gnueabihf/-/rollup-linux-arm-gnueabihf-4.60.2.tgz", - "integrity": "sha512-2QxQrM+KQ7DAW4o22j+XZ6RKdxjLD7BOWTP0Bv0tmjdyhXSsr2Ul1oJDQqh9Zf5qOwTuTc7Ek83mOFaKnodPjg==", + "version": "4.60.3", + "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-arm-gnueabihf/-/rollup-linux-arm-gnueabihf-4.60.3.tgz", + "integrity": "sha512-DV6fJoxEYWJOvaZIsok7KrYl0tPvga5OZ2yvKHNNYyk/2roMLqQAbGhr78EQ5YhHpnhLKJD3S1WFusAkmUuV5g==", "cpu": [ "arm" ], @@ -1692,9 +1658,9 @@ ] }, "node_modules/@rollup/rollup-linux-arm-musleabihf": { - "version": "4.60.2", - "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-arm-musleabihf/-/rollup-linux-arm-musleabihf-4.60.2.tgz", - "integrity": "sha512-TbziEu2DVsTEOPif2mKWkMeDMLoYjx95oESa9fkQQK7r/Orta0gnkcDpzwufEcAO2BLBsD7mZkXGFqEdMRRwfw==", + "version": "4.60.3", + "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-arm-musleabihf/-/rollup-linux-arm-musleabihf-4.60.3.tgz", + "integrity": "sha512-mQKoJAzvuOs6F+TZybQO4GOTSMUu7v0WdxEk24krQ/uUxXoPTtHjuaUuPmFhtBcM4K0ons8nrE3JyhTuCFtT/w==", "cpu": [ "arm" ], @@ -1708,9 +1674,9 @@ ] }, "node_modules/@rollup/rollup-linux-arm64-gnu": { - "version": "4.60.2", - "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-arm64-gnu/-/rollup-linux-arm64-gnu-4.60.2.tgz", - "integrity": "sha512-bO/rVDiDUuM2YfuCUwZ1t1cP+/yqjqz+Xf2VtkdppefuOFS2OSeAfgafaHNkFn0t02hEyXngZkxtGqXcXwO8Rg==", + "version": "4.60.3", + "resolved": 
"https://registry.npmjs.org/@rollup/rollup-linux-arm64-gnu/-/rollup-linux-arm64-gnu-4.60.3.tgz", + "integrity": "sha512-Whjj2qoiJ6+OOJMGptTYazaJvjOJm+iKHpXQM1P3LzGjt7Ff++Tp7nH4N8J/BUA7R9IHfDyx4DJIflifwnbmIA==", "cpu": [ "arm64" ], @@ -1724,9 +1690,9 @@ ] }, "node_modules/@rollup/rollup-linux-arm64-musl": { - "version": "4.60.2", - "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-arm64-musl/-/rollup-linux-arm64-musl-4.60.2.tgz", - "integrity": "sha512-hr26p7e93Rl0Za+JwW7EAnwAvKkehh12BU1Llm9Ykiibg4uIr2rbpxG9WCf56GuvidlTG9KiiQT/TXT1yAWxTA==", + "version": "4.60.3", + "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-arm64-musl/-/rollup-linux-arm64-musl-4.60.3.tgz", + "integrity": "sha512-4YTNHKqGng5+yiZt3mg77nmyuCfmNfX4fPmyUapBcIk+BdwSwmCWGXOUxhXbBEkFHtoN5boLj/5NON+u5QC9tg==", "cpu": [ "arm64" ], @@ -1740,9 +1706,9 @@ ] }, "node_modules/@rollup/rollup-linux-loong64-gnu": { - "version": "4.60.2", - "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-loong64-gnu/-/rollup-linux-loong64-gnu-4.60.2.tgz", - "integrity": "sha512-pOjB/uSIyDt+ow3k/RcLvUAOGpysT2phDn7TTUB3n75SlIgZzM6NKAqlErPhoFU+npgY3/n+2HYIQVbF70P9/A==", + "version": "4.60.3", + "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-loong64-gnu/-/rollup-linux-loong64-gnu-4.60.3.tgz", + "integrity": "sha512-SU3kNlhkpI4UqlUc2VXPGK9o886ZsSeGfMAX2ba2b8DKmMXq4AL7KUrkSWVbb7koVqx41Yczx6dx5PNargIrEA==", "cpu": [ "loong64" ], @@ -1756,9 +1722,9 @@ ] }, "node_modules/@rollup/rollup-linux-loong64-musl": { - "version": "4.60.2", - "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-loong64-musl/-/rollup-linux-loong64-musl-4.60.2.tgz", - "integrity": "sha512-2/w+q8jszv9Ww1c+6uJT3OwqhdmGP2/4T17cu8WuwyUuuaCDDJ2ojdyYwZzCxx0GcsZBhzi3HmH+J5pZNXnd+Q==", + "version": "4.60.3", + "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-loong64-musl/-/rollup-linux-loong64-musl-4.60.3.tgz", + "integrity": 
"sha512-6lDLl5h4TXpB1mTf2rQWnAk/LcXrx9vBfu/DT5TIPhvMhRWaZ5MxkIc8u4lJAmBo6klTe1ywXIUHFjylW505sg==", "cpu": [ "loong64" ], @@ -1772,9 +1738,9 @@ ] }, "node_modules/@rollup/rollup-linux-ppc64-gnu": { - "version": "4.60.2", - "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-ppc64-gnu/-/rollup-linux-ppc64-gnu-4.60.2.tgz", - "integrity": "sha512-11+aL5vKheYgczxtPVVRhdptAM2H7fcDR5Gw4/bTcteuZBlH4oP9f5s9zYO9aGZvoGeBpqXI/9TZZihZ609wKw==", + "version": "4.60.3", + "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-ppc64-gnu/-/rollup-linux-ppc64-gnu-4.60.3.tgz", + "integrity": "sha512-BMo8bOw8evlup/8G+cj5xWtPyp93xPdyoSN16Zy90Q2QZ0ZYRhCt6ZJSwbrRzG9HApFabjwj2p25TUPDWrhzqQ==", "cpu": [ "ppc64" ], @@ -1788,9 +1754,9 @@ ] }, "node_modules/@rollup/rollup-linux-ppc64-musl": { - "version": "4.60.2", - "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-ppc64-musl/-/rollup-linux-ppc64-musl-4.60.2.tgz", - "integrity": "sha512-i16fokAGK46IVZuV8LIIwMdtqhin9hfYkCh8pf8iC3QU3LpwL+1FSFGej+O7l3E/AoknL6Dclh2oTdnRMpTzFQ==", + "version": "4.60.3", + "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-ppc64-musl/-/rollup-linux-ppc64-musl-4.60.3.tgz", + "integrity": "sha512-E0L8X1dZN1/Rph+5VPF6Xj2G7JJvMACVXtamTJIDrVI44Y3K+G8gQaMEAavbqCGTa16InptiVrX6eM6pmJ+7qA==", "cpu": [ "ppc64" ], @@ -1804,9 +1770,9 @@ ] }, "node_modules/@rollup/rollup-linux-riscv64-gnu": { - "version": "4.60.2", - "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-riscv64-gnu/-/rollup-linux-riscv64-gnu-4.60.2.tgz", - "integrity": "sha512-49FkKS6RGQoriDSK/6E2GkAsAuU5kETFCh7pG4yD/ylj9rKhTmO3elsnmBvRD4PgJPds5W2PkhC82aVwmUcJ7A==", + "version": "4.60.3", + "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-riscv64-gnu/-/rollup-linux-riscv64-gnu-4.60.3.tgz", + "integrity": "sha512-oZJ/WHaVfHUiRAtmTAeo3DcevNsVvH8mbvodjZy7D5QKvCefO371SiKRpxoDcCxB3PTRTLayWBkvmDQKTcX/sw==", "cpu": [ "riscv64" ], @@ -1820,9 +1786,9 @@ ] }, "node_modules/@rollup/rollup-linux-riscv64-musl": { - 
"version": "4.60.2", - "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-riscv64-musl/-/rollup-linux-riscv64-musl-4.60.2.tgz", - "integrity": "sha512-mjYNkHPfGpUR00DuM1ZZIgs64Hpf4bWcz9Z41+4Q+pgDx73UwWdAYyf6EG/lRFldmdHHzgrYyge5akFUW0D3mQ==", + "version": "4.60.3", + "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-riscv64-musl/-/rollup-linux-riscv64-musl-4.60.3.tgz", + "integrity": "sha512-Dhbyh7j9FybM3YaTgaHmVALwA8AkUwTPccyCQ79TG9AJUsMQqgN1DDEZNr4+QUfwiWvLDumW5vdwzoeUF+TNxQ==", "cpu": [ "riscv64" ], @@ -1836,9 +1802,9 @@ ] }, "node_modules/@rollup/rollup-linux-s390x-gnu": { - "version": "4.60.2", - "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-s390x-gnu/-/rollup-linux-s390x-gnu-4.60.2.tgz", - "integrity": "sha512-ALyvJz965BQk8E9Al/JDKKDLH2kfKFLTGMlgkAbbYtZuJt9LU8DW3ZoDMCtQpXAltZxwBHevXz5u+gf0yA0YoA==", + "version": "4.60.3", + "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-s390x-gnu/-/rollup-linux-s390x-gnu-4.60.3.tgz", + "integrity": "sha512-cJd1X5XhHHlltkaypz1UcWLA8AcoIi1aWhsvaWDskD1oz2eKCypnqvTQ8ykMNI0RSmm7NkTdSqSSD7zM0xa6Ig==", "cpu": [ "s390x" ], @@ -1852,9 +1818,9 @@ ] }, "node_modules/@rollup/rollup-linux-x64-gnu": { - "version": "4.60.2", - "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-x64-gnu/-/rollup-linux-x64-gnu-4.60.2.tgz", - "integrity": "sha512-UQjrkIdWrKI626Du8lCQ6MJp/6V1LAo2bOK9OTu4mSn8GGXIkPXk/Vsp4bLHCd9Z9Iz2OTEaokUE90VweJgIYQ==", + "version": "4.60.3", + "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-x64-gnu/-/rollup-linux-x64-gnu-4.60.3.tgz", + "integrity": "sha512-DAZDBHQfG2oQuhY7mc6I3/qB4LU2fQCjRvxbDwd/Jdvb9fypP4IJ4qmtu6lNjes6B531AI8cg1aKC2di97bUxA==", "cpu": [ "x64" ], @@ -1868,9 +1834,9 @@ ] }, "node_modules/@rollup/rollup-linux-x64-musl": { - "version": "4.60.2", - "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-x64-musl/-/rollup-linux-x64-musl-4.60.2.tgz", - "integrity": 
"sha512-bTsRGj6VlSdn/XD4CGyzMnzaBs9bsRxy79eTqTCBsA8TMIEky7qg48aPkvJvFe1HyzQ5oMZdg7AnVlWQSKLTnw==", + "version": "4.60.3", + "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-x64-musl/-/rollup-linux-x64-musl-4.60.3.tgz", + "integrity": "sha512-cRxsE8c13mZOh3vP+wLDxpQBRrOHDIGOWyDL93Sy0Ga8y515fBcC2pjUfFwUe5T7tqvTvWbCpg1URM/AXdWIXA==", "cpu": [ "x64" ], @@ -1884,9 +1850,9 @@ ] }, "node_modules/@rollup/rollup-openbsd-x64": { - "version": "4.60.2", - "resolved": "https://registry.npmjs.org/@rollup/rollup-openbsd-x64/-/rollup-openbsd-x64-4.60.2.tgz", - "integrity": "sha512-6d4Z3534xitaA1FcMWP7mQPq5zGwBmGbhphh2DwaA1aNIXUu3KTOfwrWpbwI4/Gr0uANo7NTtaykFyO2hPuFLg==", + "version": "4.60.3", + "resolved": "https://registry.npmjs.org/@rollup/rollup-openbsd-x64/-/rollup-openbsd-x64-4.60.3.tgz", + "integrity": "sha512-QaWcIgRxqEdQdhJqW4DJctsH6HCmo5vHxY0krHSX4jMtOqfzC+dqDGuHM87bu4H8JBeibWx7jFz+h6/4C8wA5Q==", "cpu": [ "x64" ], @@ -1897,9 +1863,9 @@ ] }, "node_modules/@rollup/rollup-openharmony-arm64": { - "version": "4.60.2", - "resolved": "https://registry.npmjs.org/@rollup/rollup-openharmony-arm64/-/rollup-openharmony-arm64-4.60.2.tgz", - "integrity": "sha512-NetAg5iO2uN7eB8zE5qrZ3CSil+7IJt4WDFLcC75Ymywq1VZVD6qJ6EvNLjZ3rEm6gB7XW5JdT60c6MN35Z85Q==", + "version": "4.60.3", + "resolved": "https://registry.npmjs.org/@rollup/rollup-openharmony-arm64/-/rollup-openharmony-arm64-4.60.3.tgz", + "integrity": "sha512-AaXwSvUi3QIPtroAUw1t5yHGIyqKEXwH54WUocFolZhpGDruJcs8c+xPNDRn4XiQsS7MEwnYsHW2l0MBLDMkWg==", "cpu": [ "arm64" ], @@ -1910,9 +1876,9 @@ ] }, "node_modules/@rollup/rollup-win32-arm64-msvc": { - "version": "4.60.2", - "resolved": "https://registry.npmjs.org/@rollup/rollup-win32-arm64-msvc/-/rollup-win32-arm64-msvc-4.60.2.tgz", - "integrity": "sha512-NCYhOotpgWZ5kdxCZsv6Iudx0wX8980Q/oW4pNFNihpBKsDbEA1zpkfxJGC0yugsUuyDZ7gL37dbzwhR0VI7pQ==", + "version": "4.60.3", + "resolved": "https://registry.npmjs.org/@rollup/rollup-win32-arm64-msvc/-/rollup-win32-arm64-msvc-4.60.3.tgz", + 
"integrity": "sha512-65LAKM/bAWDqKNEelHlcHvm2V+Vfb8C6INFxQXRHCvaVN1rJfwr4NvdP4FyzUaLqWfaCGaadf6UbTm8xJeYfEg==", "cpu": [ "arm64" ], @@ -1923,9 +1889,9 @@ ] }, "node_modules/@rollup/rollup-win32-ia32-msvc": { - "version": "4.60.2", - "resolved": "https://registry.npmjs.org/@rollup/rollup-win32-ia32-msvc/-/rollup-win32-ia32-msvc-4.60.2.tgz", - "integrity": "sha512-RXsaOqXxfoUBQoOgvmmijVxJnW2IGB0eoMO7F8FAjaj0UTywUO/luSqimWBJn04WNgUkeNhh7fs7pESXajWmkg==", + "version": "4.60.3", + "resolved": "https://registry.npmjs.org/@rollup/rollup-win32-ia32-msvc/-/rollup-win32-ia32-msvc-4.60.3.tgz", + "integrity": "sha512-EEM2gyhBF5MFnI6vMKdX1LAosE627RGBzIoGMdLloPZkXrUN0Ckqgr2Qi8+J3zip/8NVVro3/FjB+tjhZUgUHA==", "cpu": [ "ia32" ], @@ -1936,9 +1902,9 @@ ] }, "node_modules/@rollup/rollup-win32-x64-gnu": { - "version": "4.60.2", - "resolved": "https://registry.npmjs.org/@rollup/rollup-win32-x64-gnu/-/rollup-win32-x64-gnu-4.60.2.tgz", - "integrity": "sha512-qdAzEULD+/hzObedtmV6iBpdL5TIbKVztGiK7O3/KYSf+HIzU257+MX1EXJcyIiDbMAqmbwaufcYPvyRryeZtA==", + "version": "4.60.3", + "resolved": "https://registry.npmjs.org/@rollup/rollup-win32-x64-gnu/-/rollup-win32-x64-gnu-4.60.3.tgz", + "integrity": "sha512-E5Eb5H/DpxaoXH++Qkv28RcUJboMopmdDUALBczvHMf7hNIxaDZqwY5lK12UK1BHacSmvupoEWGu+n993Z0y1A==", "cpu": [ "x64" ], @@ -1949,9 +1915,9 @@ ] }, "node_modules/@rollup/rollup-win32-x64-msvc": { - "version": "4.60.2", - "resolved": "https://registry.npmjs.org/@rollup/rollup-win32-x64-msvc/-/rollup-win32-x64-msvc-4.60.2.tgz", - "integrity": "sha512-Nd/SgG27WoA9e+/TdK74KnHz852TLa94ovOYySo/yMPuTmpckK/jIF2jSwS3g7ELSKXK13/cVdmg1Z/DaCWKxA==", + "version": "4.60.3", + "resolved": "https://registry.npmjs.org/@rollup/rollup-win32-x64-msvc/-/rollup-win32-x64-msvc-4.60.3.tgz", + "integrity": "sha512-hPt/bgL5cE+Qp+/TPHBqptcAgPzgj46mPcg/16zNUmbQk0j+mOEQV/+Lqu8QRtDV3Ek95Q6FeFITpuhl6OTsAA==", "cpu": [ "x64" ], @@ -2361,9 +2327,9 @@ } }, "node_modules/@types/estree": { - "version": "1.0.8", - "resolved": 
"https://registry.npmjs.org/@types/estree/-/estree-1.0.8.tgz", - "integrity": "sha512-dWHzHa2WqEXI/O1E9OjrocMTKJl2mSrEolh1Iomrv6U+JuNwaHXsXx9bLu5gG7BUWFIN0skIQJQ/L1rIex4X6w==", + "version": "1.0.9", + "resolved": "https://registry.npmjs.org/@types/estree/-/estree-1.0.9.tgz", + "integrity": "sha512-GhdPgy1el4/ImP05X05Uw4cw2/M93BCUmnEvWZNStlCzEKME4Fkk+YpoA5OiHNQmoS7Cafb8Xa3Pya8m1Qrzeg==", "license": "MIT" }, "node_modules/@types/estree-jsx": { @@ -2437,9 +2403,9 @@ } }, "node_modules/@types/node": { - "version": "24.12.2", - "resolved": "https://registry.npmjs.org/@types/node/-/node-24.12.2.tgz", - "integrity": "sha512-A1sre26ke7HDIuY/M23nd9gfB+nrmhtYyMINbjI1zHJxYteKR6qSMX56FsmjMcDb3SMcjJg5BiRRgOCC/yBD0g==", + "version": "24.12.3", + "resolved": "https://registry.npmjs.org/@types/node/-/node-24.12.3.tgz", + "integrity": "sha512-8oljBDGun9cIsZRJR6fkihn0TSXJI0UDOOhncYaERq6M0JMDoPLxyscwruJcb4GKS6dvK/d8xebYBg27h/duaQ==", "license": "MIT", "dependencies": { "undici-types": "~7.16.0" @@ -2475,9 +2441,9 @@ "license": "MIT" }, "node_modules/@ungap/structured-clone": { - "version": "1.3.0", - "resolved": "https://registry.npmjs.org/@ungap/structured-clone/-/structured-clone-1.3.0.tgz", - "integrity": "sha512-WmoN8qaIAo7WTYWbAZuG8PYEhn5fkz7dZrqTBZ7dtt//lL2Gwms1IcnQ5yHqjDfX8Ft5j4YzDM23f87zBfDe9g==", + "version": "1.3.1", + "resolved": "https://registry.npmjs.org/@ungap/structured-clone/-/structured-clone-1.3.1.tgz", + "integrity": "sha512-mUFwbeTqrVgDQxFveS+df2yfap6iuP20NAKAsBt5jDEoOTDew+zwLAOilHCeQJOVSvmgCX4ogqIrA0mnyr08yQ==", "license": "ISC" }, "node_modules/@upsetjs/venn.js": { @@ -2814,36 +2780,6 @@ "url": "https://github.com/sponsors/wooorm" } }, - "node_modules/chevrotain": { - "version": "12.0.0", - "resolved": "https://registry.npmjs.org/chevrotain/-/chevrotain-12.0.0.tgz", - "integrity": "sha512-csJvb+6kEiQaqo1woTdSAuOWdN0WTLIydkKrBnS+V5gZz0oqBrp4kQ35519QgK6TpBThiG3V1vNSHlIkv4AglQ==", - "license": "Apache-2.0", - "peer": true, - "dependencies": { - 
"@chevrotain/cst-dts-gen": "12.0.0", - "@chevrotain/gast": "12.0.0", - "@chevrotain/regexp-to-ast": "12.0.0", - "@chevrotain/types": "12.0.0", - "@chevrotain/utils": "12.0.0" - }, - "engines": { - "node": ">=22.0.0" - } - }, - "node_modules/chevrotain-allstar": { - "version": "0.4.3", - "resolved": "https://registry.npmjs.org/chevrotain-allstar/-/chevrotain-allstar-0.4.3.tgz", - "integrity": "sha512-2X4mkroolSMKqW+H22pyPMUVDqYZzPhephTmg/NODKb1IGYPHfxfhcW0EjS7wcPJNbze2i4vBWT7zT5FKF2lrQ==", - "license": "MIT", - "peer": true, - "dependencies": { - "lodash-es": "^4.18.1" - }, - "peerDependencies": { - "chevrotain": "^12.0.0" - } - }, "node_modules/chokidar": { "version": "5.0.0", "resolved": "https://registry.npmjs.org/chokidar/-/chokidar-5.0.0.tgz", @@ -2922,13 +2858,6 @@ "node": ">= 18" } }, - "node_modules/confbox": { - "version": "0.1.8", - "resolved": "https://registry.npmjs.org/confbox/-/confbox-0.1.8.tgz", - "integrity": "sha512-RMtmw0iFkeR4YV+fUOSucriAQNb9g8zFR52MWCtl+cCZOFRNL6zeB395vPzFhEjjn4fMxXudmELnl/KF/WrK6w==", - "license": "MIT", - "peer": true - }, "node_modules/cookie": { "version": "1.1.1", "resolved": "https://registry.npmjs.org/cookie/-/cookie-1.1.1.tgz", @@ -3850,6 +3779,17 @@ "integrity": "sha512-n27zTYMjYu1aj4MjCWzSP7G9r75utsaoc8m61weK+W8JMBGGQybd43GstCXZ3WNmSFtGT9wi59qQTW6mhTR5LQ==", "license": "MIT" }, + "node_modules/es-toolkit": { + "version": "1.46.1", + "resolved": "https://registry.npmjs.org/es-toolkit/-/es-toolkit-1.46.1.tgz", + "integrity": "sha512-5eNtXOs3tbfxXOj04tjjseeWkRWaoCjdEI+96DgwzZoe6c9juL49pXlzAFTI72aWC9Y8p7168g6XIKjh7k6pyQ==", + "license": "MIT", + "peer": true, + "workspaces": [ + "docs", + "benchmarks" + ] + }, "node_modules/esast-util-from-estree": { "version": "2.0.0", "resolved": "https://registry.npmjs.org/esast-util-from-estree/-/esast-util-from-estree-2.0.0.tgz", @@ -4877,25 +4817,6 @@ "node": ">= 8" } }, - "node_modules/langium": { - "version": "4.2.3", - "resolved": 
"https://registry.npmjs.org/langium/-/langium-4.2.3.tgz", - "integrity": "sha512-sOPIi4hISFnY7twwV97ca1TsxpBtXq0URu/LL1AvxwccPG/RIBBlKS7a/f/EL6w8lTNaS0EFs/F+IdSOaqYpng==", - "license": "MIT", - "peer": true, - "dependencies": { - "@chevrotain/regexp-to-ast": "~12.0.0", - "chevrotain": "~12.0.0", - "chevrotain-allstar": "~0.4.3", - "vscode-languageserver": "~9.0.1", - "vscode-languageserver-textdocument": "~1.0.11", - "vscode-uri": "~3.1.0" - }, - "engines": { - "node": ">=20.10.0", - "npm": ">=10.2.3" - } - }, "node_modules/layout-base": { "version": "1.0.2", "resolved": "https://registry.npmjs.org/layout-base/-/layout-base-1.0.2.tgz", @@ -4921,9 +4842,9 @@ } }, "node_modules/lru-cache": { - "version": "11.3.5", - "resolved": "https://registry.npmjs.org/lru-cache/-/lru-cache-11.3.5.tgz", - "integrity": "sha512-NxVFwLAnrd9i7KUBxC4DrUhmgjzOs+1Qm50D3oF1/oL+r1NpZ4gA7xvG0/zJ8evR7zIKn4vLf7qTNduWFtCrRw==", + "version": "11.3.6", + "resolved": "https://registry.npmjs.org/lru-cache/-/lru-cache-11.3.6.tgz", + "integrity": "sha512-Gf/KoL3C/MlI7Bt0PGI9I+TeTC/I6r/csU58N4BSNc4lppLBeKsOdFYkK+dX0ABDUMJNfCHTyPpzwwO21Awd3A==", "license": "BlueOak-1.0.0", "engines": { "node": "20 || >=22" @@ -5314,15 +5235,15 @@ "license": "CC0-1.0" }, "node_modules/mermaid": { - "version": "11.14.0", - "resolved": "https://registry.npmjs.org/mermaid/-/mermaid-11.14.0.tgz", - "integrity": "sha512-GSGloRsBs+JINmmhl0JDwjpuezCsHB4WGI4NASHxL3fHo3o/BRXTxhDLKnln8/Q0lRFRyDdEjmk1/d5Sn1Xz8g==", + "version": "11.15.0", + "resolved": "https://registry.npmjs.org/mermaid/-/mermaid-11.15.0.tgz", + "integrity": "sha512-pTMbcf3rWdtLiYGpmoTjHEpeY8seiy6sR+9nD7LOs8KfUbHE4lOUAprTRqRAcWSQ6MQpdX+YEsxShtGsINtPtw==", "license": "MIT", "peer": true, "dependencies": { "@braintree/sanitize-url": "^7.1.1", "@iconify/utils": "^3.0.2", - "@mermaid-js/parser": "^1.1.0", + "@mermaid-js/parser": "^1.1.1", "@types/d3": "^7.4.3", "@upsetjs/venn.js": "^2.0.0", "cytoscape": "^3.33.1", @@ -5333,14 +5254,14 @@ "dagre-d3-es": "7.0.14", 
"dayjs": "^1.11.19", "dompurify": "^3.3.1", + "es-toolkit": "^1.45.1", "katex": "^0.16.25", "khroma": "^2.1.0", - "lodash-es": "^4.17.23", "marked": "^16.3.0", "roughjs": "^4.6.6", "stylis": "^4.3.6", "ts-dedent": "^2.2.0", - "uuid": "^11.1.0" + "uuid": "^11.1.0 || ^12 || ^13 || ^14.0.0" } }, "node_modules/micromark": { @@ -6104,19 +6025,6 @@ "url": "https://github.com/sponsors/jonschlinkert" } }, - "node_modules/mlly": { - "version": "1.8.2", - "resolved": "https://registry.npmjs.org/mlly/-/mlly-1.8.2.tgz", - "integrity": "sha512-d+ObxMQFmbt10sretNDytwt85VrbkhhUA/JBGm1MPaWJ65Cl4wOgLaB1NYvJSZ0Ef03MMEU/0xpPMXUIQ29UfA==", - "license": "MIT", - "peer": true, - "dependencies": { - "acorn": "^8.16.0", - "pathe": "^2.0.3", - "pkg-types": "^1.3.1", - "ufo": "^1.6.3" - } - }, "node_modules/mrmime": { "version": "2.0.1", "resolved": "https://registry.npmjs.org/mrmime/-/mrmime-2.0.1.tgz", @@ -6378,13 +6286,6 @@ "license": "MIT", "peer": true }, - "node_modules/pathe": { - "version": "2.0.3", - "resolved": "https://registry.npmjs.org/pathe/-/pathe-2.0.3.tgz", - "integrity": "sha512-WUjGcAqP1gQacoQe+OBJsFA7Ld4DyXuUIjZ5cc75cLHvJ7dtNsTugphxIADwspS+AraAUePCKrSVtPLFj/F88w==", - "license": "MIT", - "peer": true - }, "node_modules/piccolore": { "version": "0.1.3", "resolved": "https://registry.npmjs.org/piccolore/-/piccolore-0.1.3.tgz", @@ -6409,18 +6310,6 @@ "url": "https://github.com/sponsors/jonschlinkert" } }, - "node_modules/pkg-types": { - "version": "1.3.1", - "resolved": "https://registry.npmjs.org/pkg-types/-/pkg-types-1.3.1.tgz", - "integrity": "sha512-/Jm5M4RvtBFVkKWRu2BLUTNP8/M2a+UwuAX+ae4770q1qVGtfjG+WTCupoZixokjmHiry8uI+dlY8KXYV5HVVQ==", - "license": "MIT", - "peer": true, - "dependencies": { - "confbox": "^0.1.8", - "mlly": "^1.7.4", - "pathe": "^2.0.1" - } - }, "node_modules/points-on-curve": { "version": "0.2.0", "resolved": "https://registry.npmjs.org/points-on-curve/-/points-on-curve-0.2.0.tgz", @@ -6440,9 +6329,9 @@ } }, "node_modules/postcss": { - "version": 
"8.5.13", - "resolved": "https://registry.npmjs.org/postcss/-/postcss-8.5.13.tgz", - "integrity": "sha512-qif0+jGGZoLWdHey3UFHHWP0H7Gbmsk8T5VEqyYFbWqPr1XqvLGBbk/sl8V5exGmcYJklJOhOQq1pV9IcsiFag==", + "version": "8.5.14", + "resolved": "https://registry.npmjs.org/postcss/-/postcss-8.5.14.tgz", + "integrity": "sha512-SoSL4+OSEtR99LHFZQiJLkT59C5B1amGO1NzTwj7TT1qCUgUO6hxOvzkOYxD+vMrXBM3XJIKzokoERdqQq/Zmg==", "funding": [ { "type": "opencollective", @@ -6944,9 +6833,9 @@ "peer": true }, "node_modules/rollup": { - "version": "4.60.2", - "resolved": "https://registry.npmjs.org/rollup/-/rollup-4.60.2.tgz", - "integrity": "sha512-J9qZyW++QK/09NyN/zeO0dG/1GdGfyp9lV8ajHnRVLfo/uFsbji5mHnDgn/qYdUHyCkM2N+8VyspgZclfAh0eQ==", + "version": "4.60.3", + "resolved": "https://registry.npmjs.org/rollup/-/rollup-4.60.3.tgz", + "integrity": "sha512-pAQK9HalE84QSm4Po3EmWIZPd3FnjkShVkiMlz1iligWYkWQ7wHYd1PF/T7QZ5TVSD6uSTon5gBVMSM4JfBV+A==", "license": "MIT", "dependencies": { "@types/estree": "1.0.8" @@ -6959,34 +6848,40 @@ "npm": ">=8.0.0" }, "optionalDependencies": { - "@rollup/rollup-android-arm-eabi": "4.60.2", - "@rollup/rollup-android-arm64": "4.60.2", - "@rollup/rollup-darwin-arm64": "4.60.2", - "@rollup/rollup-darwin-x64": "4.60.2", - "@rollup/rollup-freebsd-arm64": "4.60.2", - "@rollup/rollup-freebsd-x64": "4.60.2", - "@rollup/rollup-linux-arm-gnueabihf": "4.60.2", - "@rollup/rollup-linux-arm-musleabihf": "4.60.2", - "@rollup/rollup-linux-arm64-gnu": "4.60.2", - "@rollup/rollup-linux-arm64-musl": "4.60.2", - "@rollup/rollup-linux-loong64-gnu": "4.60.2", - "@rollup/rollup-linux-loong64-musl": "4.60.2", - "@rollup/rollup-linux-ppc64-gnu": "4.60.2", - "@rollup/rollup-linux-ppc64-musl": "4.60.2", - "@rollup/rollup-linux-riscv64-gnu": "4.60.2", - "@rollup/rollup-linux-riscv64-musl": "4.60.2", - "@rollup/rollup-linux-s390x-gnu": "4.60.2", - "@rollup/rollup-linux-x64-gnu": "4.60.2", - "@rollup/rollup-linux-x64-musl": "4.60.2", - "@rollup/rollup-openbsd-x64": "4.60.2", - 
"@rollup/rollup-openharmony-arm64": "4.60.2", - "@rollup/rollup-win32-arm64-msvc": "4.60.2", - "@rollup/rollup-win32-ia32-msvc": "4.60.2", - "@rollup/rollup-win32-x64-gnu": "4.60.2", - "@rollup/rollup-win32-x64-msvc": "4.60.2", + "@rollup/rollup-android-arm-eabi": "4.60.3", + "@rollup/rollup-android-arm64": "4.60.3", + "@rollup/rollup-darwin-arm64": "4.60.3", + "@rollup/rollup-darwin-x64": "4.60.3", + "@rollup/rollup-freebsd-arm64": "4.60.3", + "@rollup/rollup-freebsd-x64": "4.60.3", + "@rollup/rollup-linux-arm-gnueabihf": "4.60.3", + "@rollup/rollup-linux-arm-musleabihf": "4.60.3", + "@rollup/rollup-linux-arm64-gnu": "4.60.3", + "@rollup/rollup-linux-arm64-musl": "4.60.3", + "@rollup/rollup-linux-loong64-gnu": "4.60.3", + "@rollup/rollup-linux-loong64-musl": "4.60.3", + "@rollup/rollup-linux-ppc64-gnu": "4.60.3", + "@rollup/rollup-linux-ppc64-musl": "4.60.3", + "@rollup/rollup-linux-riscv64-gnu": "4.60.3", + "@rollup/rollup-linux-riscv64-musl": "4.60.3", + "@rollup/rollup-linux-s390x-gnu": "4.60.3", + "@rollup/rollup-linux-x64-gnu": "4.60.3", + "@rollup/rollup-linux-x64-musl": "4.60.3", + "@rollup/rollup-openbsd-x64": "4.60.3", + "@rollup/rollup-openharmony-arm64": "4.60.3", + "@rollup/rollup-win32-arm64-msvc": "4.60.3", + "@rollup/rollup-win32-ia32-msvc": "4.60.3", + "@rollup/rollup-win32-x64-gnu": "4.60.3", + "@rollup/rollup-win32-x64-msvc": "4.60.3", "fsevents": "~2.3.2" } }, + "node_modules/rollup/node_modules/@types/estree": { + "version": "1.0.8", + "resolved": "https://registry.npmjs.org/@types/estree/-/estree-1.0.8.tgz", + "integrity": "sha512-dWHzHa2WqEXI/O1E9OjrocMTKJl2mSrEolh1Iomrv6U+JuNwaHXsXx9bLu5gG7BUWFIN0skIQJQ/L1rIex4X6w==", + "license": "MIT" + }, "node_modules/roughjs": { "version": "4.6.6", "resolved": "https://registry.npmjs.org/roughjs/-/roughjs-4.6.6.tgz", @@ -7024,9 +6919,9 @@ } }, "node_modules/semver": { - "version": "7.7.4", - "resolved": "https://registry.npmjs.org/semver/-/semver-7.7.4.tgz", - "integrity": 
"sha512-vFKC2IEtQnVhpT78h1Yp8wzwrf8CM+MzKMHGJZfBtzhZNycRFnXsHk6E5TxIkkMsgNS7mdX3AGB7x2QM2di4lA==", + "version": "7.8.0", + "resolved": "https://registry.npmjs.org/semver/-/semver-7.8.0.tgz", + "integrity": "sha512-AcM7dV/5ul4EekoQ29Agm5vri8JNqRyj39o0qpX6vDF2GZrtutZl5RwgD1XnZjiTAfncsJhMI48QQH3sN87YNA==", "license": "ISC", "bin": { "semver": "bin/semver.js" @@ -7818,9 +7713,9 @@ } }, "node_modules/vite": { - "version": "7.3.2", - "resolved": "https://registry.npmjs.org/vite/-/vite-7.3.2.tgz", - "integrity": "sha512-Bby3NOsna2jsjfLVOHKes8sGwgl4TT0E6vvpYgnAYDIF/tie7MRaFthmKuHx1NSXjiTueXH3do80FMQgvEktRg==", + "version": "7.3.3", + "resolved": "https://registry.npmjs.org/vite/-/vite-7.3.3.tgz", + "integrity": "sha512-/4XH147Ui7OGTjg3HbdWe5arnZQSbfuRzdr9Ec7TQi5I7R+ir0Rlc9GIvD4v0XZurELqA035KVXJXpR61xhiTA==", "license": "MIT", "dependencies": { "esbuild": "^0.27.0", @@ -7910,61 +7805,6 @@ } } }, - "node_modules/vscode-jsonrpc": { - "version": "8.2.0", - "resolved": "https://registry.npmjs.org/vscode-jsonrpc/-/vscode-jsonrpc-8.2.0.tgz", - "integrity": "sha512-C+r0eKJUIfiDIfwJhria30+TYWPtuHJXHtI7J0YlOmKAo7ogxP20T0zxB7HZQIFhIyvoBPwWskjxrvAtfjyZfA==", - "license": "MIT", - "peer": true, - "engines": { - "node": ">=14.0.0" - } - }, - "node_modules/vscode-languageserver": { - "version": "9.0.1", - "resolved": "https://registry.npmjs.org/vscode-languageserver/-/vscode-languageserver-9.0.1.tgz", - "integrity": "sha512-woByF3PDpkHFUreUa7Hos7+pUWdeWMXRd26+ZX2A8cFx6v/JPTtd4/uN0/jB6XQHYaOlHbio03NTHCqrgG5n7g==", - "license": "MIT", - "peer": true, - "dependencies": { - "vscode-languageserver-protocol": "3.17.5" - }, - "bin": { - "installServerIntoExtension": "bin/installServerIntoExtension" - } - }, - "node_modules/vscode-languageserver-protocol": { - "version": "3.17.5", - "resolved": "https://registry.npmjs.org/vscode-languageserver-protocol/-/vscode-languageserver-protocol-3.17.5.tgz", - "integrity": 
"sha512-mb1bvRJN8SVznADSGWM9u/b07H7Ecg0I3OgXDuLdn307rl/J3A9YD6/eYOssqhecL27hK1IPZAsaqh00i/Jljg==", - "license": "MIT", - "peer": true, - "dependencies": { - "vscode-jsonrpc": "8.2.0", - "vscode-languageserver-types": "3.17.5" - } - }, - "node_modules/vscode-languageserver-textdocument": { - "version": "1.0.12", - "resolved": "https://registry.npmjs.org/vscode-languageserver-textdocument/-/vscode-languageserver-textdocument-1.0.12.tgz", - "integrity": "sha512-cxWNPesCnQCcMPeenjKKsOCKQZ/L6Tv19DTRIGuLWe32lyzWhihGVJ/rcckZXJxfdKCFvRLS3fpBIsV/ZGX4zA==", - "license": "MIT", - "peer": true - }, - "node_modules/vscode-languageserver-types": { - "version": "3.17.5", - "resolved": "https://registry.npmjs.org/vscode-languageserver-types/-/vscode-languageserver-types-3.17.5.tgz", - "integrity": "sha512-Ld1VelNuX9pdF39h2Hgaeb5hEZM2Z3jUrrMgWQAu82jMtZp7p3vJT3BzToKtZI7NgQssZje5o0zryOrhQvzQAg==", - "license": "MIT", - "peer": true - }, - "node_modules/vscode-uri": { - "version": "3.1.0", - "resolved": "https://registry.npmjs.org/vscode-uri/-/vscode-uri-3.1.0.tgz", - "integrity": "sha512-/BpdSx+yCQGnCvecbyXdxHDkuk55/G3xwnC0GqY4gmQ3j+A+g8kzzgB4Nk/SINjqn6+waqw3EgbVF2QKExkRxQ==", - "license": "MIT", - "peer": true - }, "node_modules/web-namespaces": { "version": "2.0.1", "resolved": "https://registry.npmjs.org/web-namespaces/-/web-namespaces-2.0.1.tgz", @@ -7991,9 +7831,9 @@ "license": "MIT" }, "node_modules/yaml": { - "version": "2.8.4", - "resolved": "https://registry.npmjs.org/yaml/-/yaml-2.8.4.tgz", - "integrity": "sha512-ml/JPOj9fOQK8RNnWojA67GbZ0ApXAUlN2UQclwv2eVgTgn7O9gg9o7paZWKMp4g0H3nTLtS9LVzhkpOFIKzog==", + "version": "2.9.0", + "resolved": "https://registry.npmjs.org/yaml/-/yaml-2.9.0.tgz", + "integrity": "sha512-2AvhNX3mb8zd6Zy7INTtSpl1F15HW6Wnqj0srWlkKLcpYl/gMIMJiyuGq2KeI2YFxUPjdlB+3Lc10seMLtL4cA==", "license": "ISC", "bin": { "yaml": "bin.mjs" @@ -8027,9 +7867,9 @@ } }, "node_modules/zod": { - "version": "4.4.2", - "resolved": 
"https://registry.npmjs.org/zod/-/zod-4.4.2.tgz", - "integrity": "sha512-IynmDyxsEsb9RKzO3J9+4SxXnl2FTFSzNBaKKaMV6tsSk0rw9gYw9gs+JFCq/qk2LCZ78KDwyj+Z289TijSkUw==", + "version": "4.4.3", + "resolved": "https://registry.npmjs.org/zod/-/zod-4.4.3.tgz", + "integrity": "sha512-ytENFjIJFl2UwYglde2jchW2Hwm4GJFLDiSXWdTrJQBIN9Fcyp7n4DhxJEiWNAJMV1/BqWfW/kkg71UDcHJyTQ==", "license": "MIT", "funding": { "url": "https://github.com/sponsors/colinhacks" diff --git a/docs/package.json b/docs/package.json index 0312e9cd9..0a2f0de44 100644 --- a/docs/package.json +++ b/docs/package.json @@ -10,6 +10,7 @@ "astro": "astro" }, "dependencies": { + "@astrojs/sitemap": "^3.7.2", "@astrojs/starlight": "0.38.4", "astro": "6.2.1", "astro-mermaid": "^2.0.1", @@ -18,7 +19,6 @@ "starlight-llms-txt": "^0.8.1" }, "overrides": { - "@astrojs/sitemap": "3.7.2", "uuid": "^14.0.0" } } diff --git a/docs/src/content/docs/concepts/glossary.md b/docs/src/content/docs/concepts/glossary.md new file mode 100644 index 000000000..073157223 --- /dev/null +++ b/docs/src/content/docs/concepts/glossary.md @@ -0,0 +1,317 @@ +--- +title: Glossary +description: Every overloaded APM term, resolved in one paragraph. Skim alphabetically. +sidebar: + order: 99 +--- + +Every overloaded APM term, resolved in one paragraph. Skim alphabetically. +"What it is NOT" lines disambiguate the most common collisions. + +### apm.lock.yaml + +The lockfile APM writes after a successful resolve. Pins exact commit SHAs +and per-file content hashes so every `apm install` from the same lockfile +produces byte-identical output. Lives at the project root next to `apm.yml`. + +NOT the manifest. The manifest declares what you want; the lockfile records +what you got. + +Source: `src/apm_cli/deps/lockfile.py`. + +### apm.yml + +The package manifest. A YAML file at the package root that declares +`name`, `version`, `dependencies`, `scripts`, `target`, and metadata. 
Both
+the unit of authoring and the unit of consumption -- a directory becomes
+an APM package the moment it has an `apm.yml`.
+
+NOT the lockfile, and NOT a `plugin.json`. The manifest is human-edited;
+the lockfile is generated; `plugin.json` is the local-bundle descriptor.
+
+Source: `src/apm_cli/models/apm_package.py`.
+
+### audit
+
+The `apm audit` command. Scans installed primitives for hidden Unicode
+that could embed invisible instructions, and (with `--ci`) re-derives
+content from the lockfile to detect drift before it ships. Emits SARIF,
+JSON, or rendered output. Standalone scanning of arbitrary files is
+available via `--file`.
+
+NOT the same thing as install-time scanning. Install-time scanning is
+automatic and blocks critical findings; `audit` is the explicit reporting
+and remediation surface.
+
+See: [Security](/apm/enterprise/security/).
+Source: `src/apm_cli/commands/audit.py`.
+
+### bundle
+
+A local-install artifact produced by `apm pack`. Either a directory or a
+`.tar.gz` containing `plugin.json` at the root and (in current versions)
+an embedded `apm.lock.yaml` with per-file SHA-256 hashes. Installed via
+`apm install <path>`.
+
+NOT a package source repository. A bundle is the packed, hash-verified
+output of one; you ship bundles, you author packages.
+
+Source: `src/apm_cli/bundle/local_bundle.py`.
+
+### compile
+
+The `apm compile` command. Takes resolved primitives in `apm_modules/`
+and writes them into each declared harness location (`.github/`,
+`.claude/`, `.cursor/`, etc.) using the format that harness expects.
+Runs automatically as the final phase of `apm install`.
+
+NOT a build step that produces an artifact. Compile only deploys to
+local harness directories.
+
+See: [Compilation guide](/apm/producer/compile/).
+Source: `src/apm_cli/commands/compile/`.
+
+### dev-only primitive
+
+A dependency listed under `devDependencies:` in `apm.yml` (mirroring `package.json`).
Installed locally
+for authoring and testing but excluded from the bundle that `apm pack`
+ships. The lockfile records the `is_dev` flag per package.
+
+NOT a separate primitive type. Any package or primitive can be marked
+dev-only; it is a visibility flag, not a category.
+
+Source: `src/apm_cli/deps/lockfile.py`,
+`src/apm_cli/deps/installed_package.py`.
+
+### GitHub APM PAT
+
+The personal access token APM reads to authenticate against GitHub
+when resolving private packages. Resolution order:
+`GITHUB_APM_PAT_<ORG>` (per-org), then `GITHUB_APM_PAT`, then
+`GITHUB_TOKEN`, then `GH_TOKEN`. Public packages need no token.
+
+NOT a separate token type. It is a standard GitHub PAT; the
+`GITHUB_APM_PAT` name exists so APM-scoped tokens do not collide with
+other tooling.
+
+See: [Authentication](/apm/consumer/authentication/).
+Source: `src/apm_cli/core/auth.py`.
+
+### harness
+
+The agent runtime that executes primitives: GitHub Copilot (CLI + IDE),
+Claude Code, Cursor, Codex, Gemini, OpenCode, Windsurf. Each harness has
+its own primitive directory layout and file format.
+
+NOT the same as a target. The target is the `apm.yml` field that selects
+which harnesses to compile for; the harness is the runtime itself.
+
+Source: `src/apm_cli/integration/targets.py` (see `KNOWN_TARGETS`).
+
+### hook
+
+A primitive type whose contents run at a defined lifecycle event in the
+host harness (for example `PreToolUse`). Supported on every harness
+except OpenCode -- see the matrix in
+[primitives and targets](/apm/concepts/primitives-and-targets/).
+
+NOT an APM CLI lifecycle event. Hooks fire inside the agent runtime,
+not inside `apm install` or `apm compile`.
+
+Source: `src/apm_cli/integration/hook_integrator.py`.
+
+### install
+
+The `apm install` command. Resolves dependencies declared in `apm.yml`,
+downloads them into `apm_modules/`, runs the policy gate, scans for
+hidden Unicode, writes `apm.lock.yaml`, and compiles primitives into
+each declared harness directory.
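+For orientation, here is a minimal sketch of the manifest this pipeline consumes. The field names come from entries elsewhere in this glossary (`dependencies`, `mcp:`, `scripts:`, `target:`); the handles, versions, and exact nesting are illustrative assumptions, not real packages -- see [package anatomy](/apm/concepts/package-anatomy/) for the authoritative shape.

```yaml
# apm.yml -- illustrative sketch only; handles and nesting are assumptions
name: my-agent-pack
version: 0.1.0
target: [copilot, claude]
dependencies:
  apm:
    - some-org/review-skills   # hypothetical APM package handle
  mcp:
    - some-org/tracker-mcp     # hypothetical MCP server handle
scripts:
  start: "copilot --agent reviewer"  # hypothetical post-install command
```

A manifest like this is what `apm install` resolves into `apm_modules/` and pins in `apm.lock.yaml`.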
+ +All major verbs match npm semantics: `apm install` deploys, `apm update` +refreshes dependencies, `apm install --frozen` is the lockfile-only CI +install (mirrors `npm ci`). The CLI binary itself updates via +`apm self-update`, not `apm update`. + +Source: `src/apm_cli/commands/install.py`. + +### lockfile + +See [apm.lock.yaml](#apmlockyaml). + +### manifest + +See [apm.yml](#apmyml). + +### marketplace + +A curated index of packages, hosted as a Git repository with a +`marketplace.json` at its root. Lists packages by handle and points each +one at a Git source. Authors publish their packages to a marketplace so +consumers can discover and install them by short name. + +NOT a registry. A marketplace is human-curated discovery; the registry +is the resolution backend that APM actually downloads from. + +See: [Marketplaces guide](/apm/consumer/private-and-org-packages/). +Source: `src/apm_cli/commands/marketplace/`. + +### MCP server + +A Model Context Protocol server declared as a dependency under +`mcp:` in `apm.yml`. APM resolves MCP servers transitively, applies +the same policy gate, and writes the runtime config into each harness +that supports MCP. + +NOT an APM primitive type. MCP servers are external processes; APM +declares and gates them but does not ship their code. + +Source: `src/apm_cli/install/mcp/`, +`src/apm_cli/integration/mcp_integrator.py`. + +### package + +The APM unit of distribution: a directory whose root contains an +`apm.yml`. Packages declare primitives, dependencies, scripts, and +targets. A repository may contain one package at the root or several in +subdirectories. + +NOT a plugin (see below) and NOT a bundle. A package is the source-form +unit; the bundle is its packed form; `plugin.json` is a separate +descriptor format. + +Source: `src/apm_cli/models/apm_package.py`. + +### plugin + +A local-install artifact whose root contains a `plugin.json` (Claude +Code / Copilot CLI plugin format). 
APM detects plugins on
+`apm install <path>`, treats them as packages by synthesising an
+`apm.yml` from the `plugin.json`, then installs them through the
+standard pipeline.
+
+NOT a different thing from a package at runtime. Plugin format is the
+input shape; once detected, APM handles plugins exactly like packages.
+
+See: [Plugins guide](/apm/producer/author-primitives/).
+Source: `src/apm_cli/bundle/local_bundle.py`,
+`src/apm_cli/commands/install.py`.
+
+### policy
+
+The `apm-policy.yml` file plus the install-time enforcement gate that
+reads it. Lets a security team allow-list sources, scopes, and primitive
+kinds. Tightens-only across enterprise -> org -> repo. Runs before any
+file is written to disk, including for transitive MCP servers.
+
+NOT the same as `audit`. Policy enforces at install time; audit reports
+after the fact.
+
+See: [Governance guide](/apm/enterprise/governance-guide/).
+Source: `src/apm_cli/policy/`.
+
+### primitive
+
+The atomic unit APM ships. The supported kinds are: instructions,
+skills, prompts, agents, hooks, commands, plugins, and MCP servers.
+Each kind has its own integrator that knows how to deploy it into each
+harness.
+
+NOT every file in a package. Only files matching the primitive layout
+under recognised directories (`agents/`, `skills/`, `prompts/`,
+`instructions/`, `hooks/`, `commands/`) are deployed.
+
+See: [Primitives and targets](/apm/concepts/primitives-and-targets/).
+Source: `src/apm_cli/integration/`.
+
+### registry
+
+The resolution backend APM downloads packages from. In the current
+implementation this is GitHub (or any Git host reachable over HTTPS or
+SSH); enterprise customers can pin to GHES, ADO, or GitLab.
+
+NOT a marketplace. The registry is where bytes come from; a marketplace
+is a curated index that points at registry locations.
+
+Source: `src/apm_cli/core/auth.py`,
+`src/apm_cli/utils/github_host.py`.
+
+### script
+
+An entry under `scripts:` in `apm.yml`, mapped to a shell command.
+Invoked with `apm run <script>`. Used for the post-install workflow that
+launches an agent against the compiled primitives (for example
+`apm run start`).
+
+NOT a primitive. Scripts are project-level commands; they do not deploy
+into harness directories.
+
+Source: `src/apm_cli/models/apm_package.py`,
+`src/apm_cli/commands/run.py`.
+
+### target
+
+The `target:` field in `apm.yml`. Names which harnesses the package
+compiles for (`copilot`, `claude`, `cursor`, `codex`, `gemini`,
+`opencode`, `windsurf`, or `all`). Drives which integrator runs and
+which directories receive output during `apm compile`.
+
+NOT the harness itself. Target is the declaration; the harness is the
+runtime that consumes the compiled output.
+
+See: [Primitives and targets](/apm/concepts/primitives-and-targets/).
+Source: `src/apm_cli/integration/targets.py`,
+`src/apm_cli/core/target_detection.py`.
+
+### transitive dependency
+
+A dependency that another dependency pulls in. APM resolves the full
+transitive closure (packages and MCP servers), applies the policy gate
+to every node, and records each one in the lockfile. Insecure transitive
+deps trigger an explicit error unless allow-listed.
+
+NOT silent. Every transitive node is policy-gated and lockfile-recorded;
+nothing slips in below the manifest layer.
+
+Source: `src/apm_cli/install/context.py`,
+`src/apm_cli/install/insecure_policy.py`.
+
+### trust prompt
+
+The install-time consent step before APM writes a new MCP server config
+to disk. Required because an MCP server pulled in transitively by a deep
+dependency can introduce a new outbound integration the user did not
+explicitly request. Today this is enforced via `--trust-transitive-mcp`
+opt-in plus `apm-policy.yml` allow-listing; an interactive prompt is on
+the Promise 2 roadmap.
+
+Source: `src/apm_cli/install/mcp/`,
+`src/apm_cli/install/insecure_policy.py`.
+
+### self-update
+
+The `apm self-update` command.
Downloads the latest release of the +`apm` CLI from the official installer URL and replaces the binary in +place. Supports `--check` to report availability without installing. +Disabled in package-manager distributions (for example, Homebrew), +which print a distributor-defined upgrade message instead. + +NOT a dependency refresh. `self-update` only touches the CLI binary; +your project's `apm.yml`, lockfile, and `apm_modules/` are untouched. +For dependency refresh, see [update](#update). + +Source: `src/apm_cli/commands/self_update.py`. + +### update + +The `apm update` command. Re-resolves every dependency in `apm.yml` to +its latest matching Git ref, prints a structured plan +(added / updated / removed / unchanged), and prompts for consent before +rewriting `apm.lock.yaml`. Defaults to **No** on the prompt; declining +exits cleanly with no writes. `--yes` skips the prompt for CI; +`--dry-run` prints the plan without prompting or writing. + +Mirrors `npm update`. To pin to the existing lockfile in CI without +refreshing, use `apm install --frozen` (mirrors `npm ci`). To upgrade +the CLI binary itself, see [self-update](#self-update). + +Source: `src/apm_cli/commands/update.py`. diff --git a/docs/src/content/docs/concepts/lifecycle.md b/docs/src/content/docs/concepts/lifecycle.md new file mode 100644 index 000000000..627d6afde --- /dev/null +++ b/docs/src/content/docs/concepts/lifecycle.md @@ -0,0 +1,143 @@ +--- +title: Lifecycle +description: The five steps every APM project moves through, from init to audit. +sidebar: + order: 4 +--- + +APM has five lifecycle steps. Most projects use all five; small ones use three. + +``` + init -> install -> compile -> run + | + v + audit + | + +--> back to install (fix drift) +``` + +`init` scaffolds the project. `install` resolves dependencies, scans them, and writes the lockfile. `compile` transforms primitives into the formats each agent harness expects. `run` invokes a script declared in `apm.yml`. 
`audit` rebuilds the deployed context in scratch and diffs it against your working tree to catch drift before it ships. + +You will use `install` and `run` daily, `audit` in CI, and `init` and `compile` rarely. + +## 1. INIT + +```bash +apm init [project-name] +``` + +Scaffolds a new APM project in the current directory. + +`apm init` writes three things: an `apm.yml` manifest, a `.apm/` directory for your local primitives (instructions, prompts, skills, agents, hooks), and the agent target directories you select (for example `.github/`, `.claude/`, `.cursor/`). It auto-detects sensible defaults for `name`, `author`, and `description` from your git config and surrounding directory; pass `-y` to accept them without prompts. + +Targets are picked in priority order. An explicit `--target copilot,claude` flag wins. Otherwise an interactive checklist runs. Otherwise APM scans the working tree for signal directories (`.github/`, `.claude/`, `.cursor/`, `.opencode/`, `.codex/`, `.gemini/`, `.windsurf/`) and pre-checks every harness it finds. With `-y` and no flag, all detected harnesses are written into `apm.yml`. See [primitives and targets](/apm/concepts/primitives-and-targets/) for what each target actually receives. + +**Common surprises** + +- Re-running `apm init` in a directory that already has `apm.yml` warns and exits unless you pass `-y` (which overwrites the manifest). +- If you set `targets:` to an empty list in `apm.yml`, target detection runs again at compile time. To pin targets, list them explicitly. + +**Read more:** [`apm init` reference](/apm/reference/cli/install/), [package anatomy](/apm/concepts/package-anatomy/). + +## 2. INSTALL + +```bash +apm install [packages...] +``` + +Resolves the dependency graph declared in `apm.yml`, runs the security scan, and writes `apm.lock.yaml`. + +Order of operations is deterministic and worth memorizing: + +1. 
**Resolve** -- walk `dependencies` and `devDependencies` (APM packages, MCP servers, Claude skills, plugin collections), follow transitive deps, pick versions.
+2. **Policy gate** -- if `apm-policy.yml` is discovered (locally or via your repo's org), every resolved dependency is checked against the allow-list before anything touches disk. Pass `--no-policy` to skip the org policy gate for one invocation; this does not bypass `apm audit --ci`.
+3. **Scan** -- the pre-deploy security scan inspects every primitive for hidden Unicode (zero-width characters, bidi controls, tag characters). Critical findings block the install. Pass `--force` to deploy anyway.
+4. **Integrate** -- write primitives into each target harness's native directory (`.github/`, `.claude/`, etc.) and merge MCP server configs into the harness-specific config files.
+5. **Lockfile** -- write `apm.lock.yaml` with pinned versions, content hashes, and the resolved MCP server set.
+
+`apm install` with no arguments installs from the existing manifest. `apm install <package>` adds a new dependency, re-runs the full pipeline, and updates both `apm.yml` and `apm.lock.yaml`. `--dry-run` runs steps 1 and 2 only and prints the plan.
+
+:::note[Coming from npm?]
+`apm install` mirrors `npm install` deliberately. The big difference: APM also runs a security scan and, if present, an org policy gate before writing anything to disk.
+:::
+
+**Common surprises**
+
+- The scan is not optional in normal operation. If you need to land an install with a known critical finding (for example, an upstream package you cannot patch yet), use `--force` and document the exception.
+- Transitive MCP servers are gated behind explicit trust. If a deep dependency declares a new MCP server, install pauses to ask you to re-declare it in your top-level `apm.yml`. Use `--trust-transitive-mcp` to skip this in trusted environments.
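+
+To make step 1 concrete, here is a minimal manifest sketch with one pinned APM dependency and one MCP server (the dependency identifiers reuse the illustrative examples from the package-anatomy page); the resolver follows any transitive deps these declare before the policy gate and scan run:
+
+```yaml
+# apm.yml -- dependency identifiers are illustrative
+name: my-project
+version: 0.1.0
+dependencies:
+  apm:
+    - microsoft/apm-sample-package#v1.0.0   # pinned to a tag
+  mcp:
+    - microsoft/azure-devops-mcp            # MCP server dependency
+```
+
+Running `apm install --dry-run` against a manifest like this prints the resolved plan after steps 1 and 2 without writing anything to disk.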
+
+**Read more:** [`apm install` reference](/apm/reference/cli/install/), [security](/apm/enterprise/security/), [policy reference](/apm/enterprise/policy-reference/).
+
+## 3. COMPILE
+
+```bash
+apm compile [--target <targets>]
+```
+
+Transforms the primitives in `.apm/` (and dependencies under `apm_modules/`) into harness-native files: `AGENTS.md` for Codex, `GEMINI.md` for Gemini, populated `.cursor/`, `.opencode/`, `.windsurf/` directories, and so on.
+
+Most users never call `apm compile` directly. `apm install` runs it as part of the integrate phase, and `apm run` auto-compiles any `.prompt.md` files referenced by a script just before execution. Reach for `apm compile` when you want to inspect what will be deployed without changing dependencies, when you are iterating on local primitives between installs, or when you only need output for one harness.
+
+The `--target` flag accepts a comma-separated list (`copilot,claude,cursor,opencode,codex,gemini,windsurf,agent-skills`) or `all`. `--dry-run` prints placement decisions without writing files. `--validate` checks primitive frontmatter and structure without producing output. `--watch` re-runs compilation on every change.
+
+**Common surprises**
+
+- Running `apm compile` does not re-run the security scan. The scan happens at install time. If you hand-edit primitives between installs, run `apm audit` to scan them.
+- `--clean` removes orphaned `AGENTS.md` files from previous compilations. Without it, removed primitives can leave stale output behind.
+
+**Read more:** [`apm compile` reference](/apm/reference/cli/install/), [compilation guide](/apm/producer/compile/).
+
+## 4. RUN
+
+```bash
+apm run <script> [--param key=value ...]
+```
+
+Executes a named script from the `scripts:` block in `apm.yml`.
+
+The `scripts:` block is a flat string-to-string mapping, mirroring `package.json`:
+
+```yaml
+name: my-project
+version: 0.1.0
+scripts:
+  start: copilot --prompt .apm/prompts/review.prompt.md
+  review: copilot --prompt .apm/prompts/review.prompt.md
+  test: pytest tests/
+```
+
+`apm run` with no script name runs `start`, matching npm. Before invoking the command, APM scans it for `.prompt.md` references, compiles each one to `.apm/compiled/<name>.txt`, and substitutes the compiled path into the command line. Use `--param key=value` (repeatable) to pass parameters that get interpolated into prompt frontmatter.
+
+To preview what will run without executing, use `apm preview <script>`. It prints the original command, the rewritten command after prompt compilation, and the list of compiled files.
+
+:::note[Coming from npm?]
+The `scripts:` shape is intentionally identical to `package.json`. Object-form scripts (with `description`, `env`, etc.) are not supported; keep them strings.
+:::
+
+**Common surprises**
+
+- A script that does not reference any `.prompt.md` file runs as-is. APM only rewrites the command when it finds `.prompt.md` arguments.
+- Parameters passed with `--param` only reach prompt files. They do not become shell environment variables.
+
+**Read more:** [`apm run` reference](/apm/reference/cli/install/), [agent workflows guide](/apm/producer/author-primitives/instructions-and-agents/).
+
+## 5. AUDIT
+
+```bash
+apm audit               # local: scan deployed files for hidden Unicode
+apm audit --ci          # CI gate: lockfile consistency + drift replay
+apm audit --file <path> # standalone: scan an arbitrary file
+```
+
+`apm audit` is the explicit reporting and remediation tool that complements the built-in scan run by `install`. It has two modes worth understanding separately.
+
+**Local mode** (`apm audit`, optionally with `--strip` or `--file <path>`) scans installed primitives -- or any file you point at -- for hidden Unicode and reports findings as text, JSON, SARIF, or markdown.
With `--strip`, it removes hidden characters in place, preserving emoji and whitespace. Use `--dry-run` to preview the strip.
+
+**CI mode** (`apm audit --ci`) runs the eight baseline consistency checks in order: `lockfile-exists`, `ref-consistency`, `deployed-files-present`, `no-orphaned-packages`, `skill-subset-consistency`, `config-consistency`, `content-integrity`, and `includes-consent`. After those pass, it performs an install-replay drift check. APM rebuilds the deployed context in a scratch directory and diffs it against your working tree, catching hand-edits to `apm_modules/` or generated files before they ship. Pass `--no-drift` to skip the replay in performance-constrained loops; pass `--no-fail-fast` to run all checks even after a failure. With `--policy <file>` it also evaluates org policy against the lockfile.
+
+**Common surprises**
+
+- `apm audit --ci` exits 1 on any failure -- this is the gate you wire into branch protection. The local `apm audit` exits 0 even when findings exist, unless you also pass `--strip` and writes fail.
+- The drift check rebuilds the full context from scratch; on large repos, expect a few seconds of overhead. If your CI loop cannot afford it, skip the replay with `--no-drift` and accept reduced coverage.
+
+**Read more:** [`apm audit` reference](/apm/reference/cli/install/), [policy reference](/apm/enterprise/policy-reference/) for the full check list, [security](/apm/enterprise/security/).
diff --git a/docs/src/content/docs/concepts/package-anatomy.md b/docs/src/content/docs/concepts/package-anatomy.md
new file mode 100644
index 000000000..efaae8a34
--- /dev/null
+++ b/docs/src/content/docs/concepts/package-anatomy.md
@@ -0,0 +1,254 @@
+---
+title: Package anatomy
+description: The file layout of an APM package, field by field.
+sidebar:
+  order: 6
+---
+
+An APM package is a directory with two things: an `apm.yml` manifest and a
+`.apm/` source tree.
Everything else -- the lockfile, compiled output, MCP +configs, runtime-specific folders -- is generated, optional, or both. + +This page walks the file tree top-down so you can recognize every piece on +sight. + +## The minimal package + +Three lines on disk is enough: + +``` +my-pkg/ ++-- apm.yml ++-- .apm/ + +-- skills/hello/SKILL.md +``` + +```yaml +# apm.yml +name: my-pkg +version: 1.0.0 +``` + +`name` and `version` are the only required fields. +`apm install` will validate the manifest, generate `apm.lock.yaml`, and +deploy `hello` to whatever harnesses you target. + +## Full file tree + +A mature package looks closer to this. One line per file; deeper pages own +the detail. + +``` +my-pkg/ ++-- apm.yml # The manifest. Required. See below. ++-- apm.lock.yaml # Resolved versions + content hashes. Generated. ++-- apm_modules/ # Installed dependencies. Generated. Gitignore. ++-- .apm/ # Source primitives you author. +| +-- instructions/ # Always-on rules attached to file globs. +| +-- skills/ # Multi-file capabilities (SKILL.md + assets). +| +-- prompts/ # Reusable prompt templates. +| +-- agents/ # Named agents (model + system prompt + tools). +| +-- chatmodes/ # Chat-mode configurations. +| +-- context/ # Shared context fragments. +| +-- hooks/ # Lifecycle hooks (pre/post events). ++-- .github/ # Compiled output for Copilot. Generated. +| +-- instructions/ +| +-- agents/ +| +-- copilot-instructions.md ++-- .claude/ # Compiled output for Claude Code. Generated. ++-- .cursor/ # Compiled output for Cursor. Generated. ++-- .codex/ # Compiled output for Codex. Generated. ++-- AGENTS.md # Cross-tool spec read by OpenCode, Gemini, Codex. ++-- apm-policy.yml # Optional org/repo policy. See enterprise docs. ++-- scripts/ # Optional helper scripts you author. ++-- tests/ # Optional tests for your primitives. +``` + +Anything under `apm_modules/`, `.github/`, `.claude/`, `.cursor/`, or +`.codex/` is build output. 
Edit the source under `.apm/` and re-run
+`apm install` -- never edit the deployed copy.
+
+`apm init` writes only the scaffold: `apm.yml`, an empty `.apm/` tree, and
+the target directories you select. Everything else appears as you author
+primitives or run `apm install`.
+
+For why `.apm/` exists at all (instead of writing straight into `.github/`),
+see [Primitives and targets](/apm/concepts/primitives-and-targets/).
+
+## Anatomy of `apm.yml`
+
+A realistic example, every field annotated:
+
+```yaml
+# Required identity
+name: my-pkg            # Package name. Required.
+version: 1.0.0          # SemVer string. Required.
+
+# Optional metadata
+description: Code review skills for Python services
+author: Jane Doe
+license: MIT
+
+# Optional content type: one of instructions, skill, hybrid, prompts.
+# Constrains what `.apm/` may contain. Useful for single-purpose packages.
+type: skill
+
+# Optional target list. Pins which harnesses this package compiles to.
+# Accepts a string ("copilot,claude") or a YAML list. Omit to target all.
+target:
+  - copilot
+  - claude
+
+# Optional. "auto" auto-publishes every primitive under .apm/, or list
+# explicit repo paths to publish a subset.
+includes: auto
+
+# Optional. Runtime dependencies, grouped by kind.
+dependencies:
+  apm:
+    - microsoft/apm-sample-package#v1.0.0                # Pinned to a tag
+    - github/awesome-copilot/skills/review-and-refactor  # Single primitive
+  mcp:
+    - microsoft/azure-devops-mcp                         # MCP server dependency
+
+# Optional. Same shape as `dependencies`, but excluded from the shipped
+# artifact. Use for dev-only tooling and tests.
+devDependencies:
+  apm:
+    - my-org/internal-test-skills
+
+# Optional. Named scripts you can run with `apm run <name>`.
+scripts:
+  start: copilot -p hello.prompt.md
+  codex: codex --skip-git-repo-check hello.prompt.md
+```
+
+### Field reference
+
+| Field            | Required | Notes                                                       |
+|------------------|----------|-------------------------------------------------------------|
+| `name`           | yes      | Package name.                                               |
+| `version`        | yes      | SemVer string.
|
+| `description`    | no       |                                                             |
+| `author`         | no       |                                                             |
+| `license`        | no       | SPDX identifier recommended.                                |
+| `type`           | no       | `instructions`, `skill`, `hybrid`, or `prompts`.            |
+| `target`         | no       | String or list of harness slugs.                            |
+| `includes`       | no       | `"auto"` or list of repo paths.                             |
+| `dependencies`   | no       | Mapping with `apm:` and/or `mcp:` keys.                     |
+| `devDependencies`| no       | Same shape as `dependencies`. Excluded from `apm pack`.     |
+| `scripts`        | no       | Mapping of name to shell command. Run via `apm run <name>`. |
+
+:::note[Coming from npm?]
+The shape mirrors `package.json` on purpose: `name`, `version`,
+`dependencies`, `devDependencies`, `scripts`. The verbs match too:
+`apm install` deploys, `apm update` refreshes dependencies, and
+`apm install --frozen` is the lockfile-only CI install (mirrors
+`npm ci`). The CLI binary itself updates via `apm self-update`.
+:::
+
+## Anatomy of `apm.lock.yaml`
+
+The lockfile pins every resolved dependency to an exact commit and content
+hash so two clones of the repo install byte-identical primitives. Generated
+by `apm install`; commit it.
+
+```yaml
+lockfile_version: '1'
+generated_at: '2026-04-21T21:45:34.516938+00:00'
+apm_version: 0.10.0
+
+dependencies:
+  - repo_url: https://github.com/microsoft/apm-sample-package
+    resolved_commit: a1b2c3d4e5f6...   # Exact SHA installed
+    resolved_ref: v1.0.0               # Tag/branch the SHA came from
+    version: 1.0.0                     # SemVer if available
+    depth: 1                           # 1 = direct, 2+ = transitive
+    package_type: APM_PACKAGE
+    content_hash: sha256:9f...         # Hash of the package file tree
+    deployed_files:                    # What this dep wrote to disk
+      - .github/skills/review/SKILL.md
+    deployed_file_hashes:
+      .github/skills/review/SKILL.md: sha256:c4...
+
+  # A single-primitive (virtual) import looks like this:
+  - repo_url: https://github.com/github/awesome-copilot
+    virtual_path: skills/review-and-refactor
+    is_virtual: true
+    resolved_commit: 7e8f9a...
+ depth: 1 + +mcp_servers: + - microsoft/azure-devops-mcp + +# The package's own local content. Same hashing logic as deps; lets +# `apm audit` detect hand-edits to deployed files. +local_deployed_files: + - .github/instructions/python.instructions.md +local_deployed_file_hashes: + .github/instructions/python.instructions.md: sha256:45... +``` + +### Field reference + +Top-level fields: + +| Field | Notes | +|--------------------------------|------------------------------------------------| +| `lockfile_version` | Schema version of the lockfile. | +| `generated_at` | ISO timestamp of last write. | +| `apm_version` | CLI version that generated the file. | +| `dependencies` | List of `LockedDependency` entries. | +| `mcp_servers` | Resolved MCP server identifiers. | +| `mcp_configs` | Per-harness MCP configuration blobs. | +| `local_deployed_files` | Files this package wrote to deployed dirs. | +| `local_deployed_file_hashes` | SHA-256 of each local-deployed file. | + +Per-dependency fields: + +| Field | Notes | +|------------------------|------------------------------------------------| +| `repo_url` | Canonical clone URL. | +| `host`, `port` | For non-github.com or non-standard ports. | +| `registry_prefix` | Artifactory-style prefix. | +| `resolved_commit` | Full SHA. The thing that makes installs reproducible. | +| `resolved_ref` | Original tag/branch the SHA was resolved from. | +| `version` | SemVer, if the dep is versioned. | +| `virtual_path` | Subpath for single-primitive imports. | +| `is_virtual` | True for primitive-form deps. | +| `depth` | 1 = direct dependency; >1 = transitive. | +| `package_type` | `APM_PACKAGE`, `CLAUDE_SKILL`, `HYBRID`. | +| `deployed_files` | Files this dep wrote to your tree. | +| `deployed_file_hashes` | SHA-256 of each deployed file. | +| `source`, `local_path` | Set for `local_path:` deps. | +| `content_hash` | SHA-256 of the package file tree. | +| `is_dev` | True for `devDependencies` entries. 
| + +`apm audit` rehashes everything in `deployed_file_hashes` and +`local_deployed_file_hashes` to detect hand-edits before they ship. + +## The `.apm/` directory + +`.apm/` is the source root. APM does not look elsewhere for primitives. Each +subdirectory holds one primitive type; file naming conventions are +documented per type. + +- **`instructions/`** -- Always-on rules attached to file globs (e.g. "for + every `*.py`, follow PEP 8"). One Markdown file per rule. Compiled into + `.github/instructions/`, `.cursor/rules/`, and the equivalent for other + harnesses. +- **`skills//SKILL.md`** -- Multi-file capabilities. The `SKILL.md` + is the entry point; sibling files (templates, scripts, references) ship + alongside it. Loaded on demand by harnesses that support skills. +- **`prompts/`** -- Reusable prompt templates, one `.prompt.md` per prompt. + Invocable via `apm run