diff --git a/.claude/skills/devnet-log-review/SKILL.md b/.claude/skills/devnet-log-review/SKILL.md new file mode 100644 index 0000000..b37761c --- /dev/null +++ b/.claude/skills/devnet-log-review/SKILL.md @@ -0,0 +1,298 @@ +--- +name: devnet-log-review +description: Review and analyze devnet run results. Use when users want to (1) Analyze devnet logs for errors and warnings, (2) Generate a summary of a devnet run, (3) Identify interoperability issues between clients, (4) Understand consensus progress and block production, (5) Debug forks and finalization issues. +--- + +# Devnet Log Review + +Analyze and summarize devnet run results from lean consensus testing involving +gean and peer clients. + +## Quick Start + +**Run the analysis script:** +```bash +# From project root (with logs in current directory) +.claude/skills/devnet-log-review/scripts/analyze-logs.sh + +# Or specify logs directory +.claude/skills/devnet-log-review/scripts/analyze-logs.sh /path/to/logs +``` + +This produces a structured summary with: +- Error/warning counts per node +- Block production statistics +- Consensus progress (last slot, last justified, last finalized) +- Proposer assignments + +## Log File Locations + +| File | Content | +|---|---| +| `devnet.log` | Combined output from `spin-node.sh` (genesis generation + all node output) | +| `{client}_{n}.log` | Individual node logs (e.g., `gean_0.log`, `zeam_0.log`, `ethlambda_0.log`) | + +## Analysis Scripts + +| Script | Description | +|---|---| +| `analyze-logs.sh [dir]` | Main entry point — runs all analyses, outputs markdown summary | +| `count-errors-warnings.sh [dir]` | Count errors/warnings per node (excludes benign patterns) | +| `count-blocks.sh [dir]` | Count blocks proposed/processed per node (client-aware) | +| `check-consensus-progress.sh [dir]` | Show last slot, last justified, last finalized per node | +| `show-errors.sh [-n node] [-l limit] [-w] [dir]` | Display error details for investigation | + +**Usage examples:** 
+```bash +# Just count errors/warnings +.claude/skills/devnet-log-review/scripts/count-errors-warnings.sh + +# Show errors for specific node +.claude/skills/devnet-log-review/scripts/show-errors.sh -n zeam_0 + +# Show errors and warnings with limit +.claude/skills/devnet-log-review/scripts/show-errors.sh -w -l 50 +``` + +## Common Investigation Patterns + +### Tracing Slot-by-Slot Flow + +When investigating issues, trace the complete flow for a specific slot using +structured logging fields (`slot=X`). + +**Note:** Logs may contain ANSI color codes. Strip them first: + +```bash +# Strip ANSI codes and grep for a specific slot +sed 's/\x1b\[[0-9;]*m//g' devnet.log | grep -E "slot=3[^0-9]|slot=3$" + +# For double-digit slots +sed 's/\x1b\[[0-9;]*m//g' devnet.log | grep -E "slot=12[^0-9]|slot=12$" +``` + +Structured logging fields used by gean follow `key=value` format: +- `slot=N` — Slot number +- `validator=N` — Validator index +- `proposer=N` — Block proposer index +- `justified_slot=N` — Justified slot at the time of the log +- `finalized_slot=N` — Finalized slot at the time of the log +- `proc_time=Xms` — Block processing time (gean-specific) +- `has_parent=true|false` — Whether the block's parent was already known (gean-specific) +- `attestations=N` — Number of attestations in the block + +### Comparing Clients at Specific Slots + +```bash +# Extract block hashes for specific slots across all clients +for slot in 1 2 3 4 5; do + echo "=== Slot $slot ===" + grep -h "slot=$slot[^0-9]\|@ $slot[^0-9]" *.log | grep -oE "0x[a-f0-9]{8}" | sort -u +done + +# Check which client has which head at a specific slot +grep -h "head_slot=18\|Head Slot: 18\|head slot=18" *.log + +# Compare finalization across clients +grep -h "finalized.*slot\|Finalized block.*@\|finalized_slot=" *.log | tail -20 +``` + +### Finding Validators + +Each validator proposes blocks when `slot % validator_count == validator_id`. 
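The round-robin rule can also be checked by hand. A minimal sketch (the helper name is illustrative, not part of the analysis scripts):

```bash
# Sketch: expected proposer under round-robin assignment,
# slot % validator_count == validator_id as described above.
expected_proposer() {
  local slot=$1 validator_count=$2
  echo $(( slot % validator_count ))
}

expected_proposer 6 5   # prints 1: validator 1 proposes slot 6 in a 5-validator devnet
```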
+ +```bash +# gean — explicit validator in logs +grep "produced attestation" gean_0.log | head -3 +# Output: produced attestation slot=6 validator=2 +grep "proposing block\|proposed block" gean_0.log | head -3 + +# ethlambda — explicit validator_id in logs +grep "We are the proposer" ethlambda_0.log | head -3 +# Output: We are the proposer for this slot slot=5 validator_id=5 + +# zeam — proposer field in attestation logs +grep "packing proposer attestation" zeam_0.log | head -3 +# Output: packing proposer attestation for slot=6 proposer=0 + +# Generic approach — validator_id = slot % validator_count +``` + +## Analysis Areas + +### Fork Analysis + +When clients disagree on which blocks are valid, the network splits into forks. + +**Quick check for forks:** +```bash +# Compare block hashes at same slot across clients +grep -h "slot=4[^0-9]" *.log | grep -oE "block_root=0x[a-f0-9]{16}" | sort -u + +# If you see different hashes → fork exists! +``` + +**Identifying rejected blocks:** +```bash +# gean — block processing failures +grep -i "block processing failed\|state transition failed" gean_0.log + +# ethlambda +grep "Failed to process block" ethlambda_0.log + +# lantern +grep "signature verification failed" lantern_0.log + +# Cross-client signature failures +grep -i "signature.*failed\|invalid signature" *.log | head -20 +``` + +**See [references/FORK_ANALYSIS.md](references/FORK_ANALYSIS.md) for:** +- Understanding fork types (canonical, orphan, invalid) +- Tracing parent-child relationships +- Building fork structure diagrams +- Determining which validators are on which fork + +### Finalization Debugging + +Finalization should advance every 6-12 slots. 
If it stalls, investigate: + +```bash +# gean — chain status block + per-block log +grep "finalized_slot=" gean_0.log | tail -20 +grep "Latest Finalized:" gean_0.log | tail -10 + +# If finalized_slot stays the same for 50+ slots → finalization stalled +``` + +**Finalization requires >2/3 supermajority:** +- 5 validators → need 4 votes minimum +- 6 validators → need 4 votes minimum (3*4 >= 2*6) +- 9 validators → need 6 votes minimum + +**See [references/FINALIZATION_DEBUG.md](references/FINALIZATION_DEBUG.md) for:** +- Common causes of finalization stalls +- Validator participation calculations +- 3SF-mini gap rule and justification chain analysis +- Step-by-step debugging guide + +### Error Classification + +**See [references/ERROR_CLASSIFICATION.md](references/ERROR_CLASSIFICATION.md) for:** +- Critical errors (genesis mismatch, panics, database corruption) +- Expected/benign messages (TODOs, HandshakeTimedOut to unconfigured nodes) +- Medium severity issues (encoding mismatches, missing blocks) +- State transition errors + +### Client Log Patterns + +Different clients have different log formats and key patterns. + +**See [references/CLIENT_LOG_PATTERNS.md](references/CLIENT_LOG_PATTERNS.md) for:** +- Log format for each client (gean, zeam, ream, ethlambda, lantern) +- Key log patterns per client +- Block counting methods +- ANSI color code handling + +## Block Proposal Flow (gean) + +A healthy gean block proposal/processing follows this sequence: + +1. `[validator] proposing block slot=N validator=V` — gean detects it's the proposer +2. `[chain] block slot=N block_root=0x... proposer=V attestations=A justified_slot=J finalized_slot=F proc_time=Xms` — block built and processed +3. `[forkchoice] head slot=N head_root=0x... ...` — fork choice acknowledges the new head +4. `[validator] proposed block slot=N block_root=0x... attestations=A` — block published to network + +For incoming blocks (gean as receiver): + +1. 
`[gossip] received block slot=N proposer=V block_root=0x... parent_root=0x...` — gossip receive +2. `[chain] processing block slot=N block_root=0x... has_parent=true|false` — start of processing +3. `[chain] block slot=N ... proc_time=Xms` — block applied to state + +## Summary Report Format + +Generate concise summaries (20 lines or less) in this structure: + +```markdown +## Devnet Log Summary + +**Run:** {N} {client} nodes (`{image}`) | {M} slots ({range}) + +| Node | Validator | Blocks Proposed | Errors | Warnings | Status | +|---|---|---|---|---|---| +| {node_name} | {id} | {count} (slots {list}) | {n} | {n} | {emoji} | + +**Issues:** +- {issue 1} +- {issue 2} + +**{emoji} {RESULT}** — {one-line explanation} +``` + +### Status Emoji Guide + +| Emoji | Meaning | When to Use | +|---|---|---| +| 🟢 | Healthy | No errors, blocks processed successfully | +| 🟡 | Warning | Minor issues but consensus working | +| 🔴 | Failed | Critical errors, consensus broken, or blocks failing validation | + +### Result Line Examples + +- `🟢 PASSED` — All nodes healthy, consensus achieved +- `🟡 PASSED WITH WARNINGS` — Consensus working but minor issues detected +- `🔴 FAILED` — Consensus broken: {reason} + +### Key Rules + +1. Keep summary under 20 lines +2. Use table for per-node status +3. Status should reflect whether that node's blocks pass validation (🔴 if not) +4. End with single-line result with emoji +5. 
Don't list "what's working" — focus on issues + +## Manual Investigation Commands + +Use these when scripts don't provide enough detail: + +```bash +# Find which validators proposed blocks +grep -h "proposed block\|proposing block\|We are the proposer" *.log | head -20 + +# Check peer connections +grep -h "peer connected\|Connection established\|Connected Peers:" *.log | head -20 + +# Check attestations +grep -i "attestation" *.log | head -50 + +# Search for specific error patterns +grep -i "genesis mismatch\|panic\|fatal" *.log + +# gean — find oversized block warnings (should never appear after the per-validator refactor) +grep -i "MessageTooLarge\|exceeds max\|snappy decoded len" *.log + +# Track attestations to unknown blocks (indicates forks) +grep "Unknown.*block:" ethlambda_0.log | grep -oE "0x[a-f0-9]{64}" | sort | uniq -c | sort -rn + +# Check failed root cleanups (gean-specific) +grep "fetch exhausted for root" gean_0.log +``` + +## Detailed References + +For in-depth analysis, see these specialized guides: + +- **[FORK_ANALYSIS.md](references/FORK_ANALYSIS.md)** — Comprehensive guide to identifying and analyzing blockchain forks, tracing parent-child relationships, building fork structure diagrams, and determining consensus disagreements +- **[FINALIZATION_DEBUG.md](references/FINALIZATION_DEBUG.md)** — Debugging finalization stalls, validator participation calculations, justification chain analysis, threshold math, and the 3SF-mini gap rule +- **[CLIENT_LOG_PATTERNS.md](references/CLIENT_LOG_PATTERNS.md)** — Log formats and key patterns for all clients (gean, zeam, ream, ethlambda, lantern), including block counting methods +- **[ERROR_CLASSIFICATION.md](references/ERROR_CLASSIFICATION.md)** — Error types, severity levels, expected vs. critical errors, and interoperability issues + +## Progressive Disclosure + +This skill uses progressive disclosure to keep context usage efficient: + +1. 
**Start here** (SKILL.md) — Quick start workflow and common patterns +2. **Detailed references** (references/*.md) — Deep dives into specific analysis areas +3. **Scripts** (scripts/) — Automated analysis tools + +Load detailed references only when needed for specific investigations. diff --git a/.claude/skills/devnet-log-review/references/CLIENT_LOG_PATTERNS.md b/.claude/skills/devnet-log-review/references/CLIENT_LOG_PATTERNS.md new file mode 100644 index 0000000..640652c --- /dev/null +++ b/.claude/skills/devnet-log-review/references/CLIENT_LOG_PATTERNS.md @@ -0,0 +1,247 @@ +# Client-Specific Log Patterns + +Reference guide for log formats and key patterns across the lean consensus +clients used in gean's multi-client devnet (gean, zeam, ream, ethlambda, lantern). + +## gean (Go) + +**Log format:** `YYYY-MM-DDTHH:MM:SS.sssZ LEVEL [module] message key=value ...` + +**Key characteristics:** +- ANSI color codes for level + module +- Modules: `[chain]`, `[gossip]`, `[validator]`, `[network]`, `[forkchoice]`, + `[signature]`, `[sync]`, `[store]` +- Structured fields use `key=value` +- Periodic chain status box with `Latest Justified` / `Latest Finalized` + +### Block proposal flow + +``` +[validator] proposing block slot=N validator=V +[chain] block slot=N block_root=0x... parent_root=0x... proposer=V + attestations=A justified_slot=J finalized_slot=F proc_time=Xms +[forkchoice] head slot=N head_root=0x... ... justified_slot=J finalized_slot=F +[validator] proposed block slot=N block_root=0x... attestations=A +[signature] aggregate: slot=N sigs=S validators=[...] proof=B bytes duration=Xms +``` + +### Block reception flow + +``` +[gossip] received block slot=N proposer=V block_root=0x... parent_root=0x... +[chain] processing block slot=N block_root=0x... has_parent=true|false +[chain] block slot=N ... proc_time=Xms +[forkchoice] head slot=N ... +``` + +### Attestations + +``` +[gossip] attestation verified: validator=V slot=N dataRoot=0x... 
+[validator] produced attestation slot=N validator=V +[network] published attestation to network slot=N validator=V +``` + +### Chain status (logged periodically) + +``` ++===============================================================+ + CHAIN STATUS: Current Slot: N | Head Slot: N | Behind: B ++---------------------------------------------------------------+ + Connected Peers: P ++---------------------------------------------------------------+ + Head Block Root: 0x... + Parent Block Root: 0x... + State Root: 0x... ++---------------------------------------------------------------+ + Latest Justified: Slot J | Root: 0x... + Latest Finalized: Slot F | Root: 0x... ++---------------------------------------------------------------+ + Gossip Sigs: G | Known Payloads: K | States: S | FC Nodes: N ++---------------------------------------------------------------+ + Topics: + /leanconsensus/devnet0/block/ssz_snappy mesh_peers=M + /leanconsensus/devnet0/aggregation/ssz_snappy mesh_peers=M + /leanconsensus/devnet0/attestation_0/ssz_snappy mesh_peers=M ++===============================================================+ +``` + +### Sync events + +``` +[sync] queueing missing block block_root=0x... for batched fetch +[sync] batched fetch starting count=N +[sync] fetch exhausted for root 0x..., discarded N pending child block(s) +[sync] checkpoint sync: +[sync] requesting missing block block_root=0x... 
from network +``` + +### Store events + +``` +[store] pruning: finalized_slot=F states=S blocks=B live_chain=L gossip_sigs=G payloads=P non_canonical=N +``` + +### Counting blocks + +```bash +# Proposed (one per gean block proposal) +sed 's/\x1b\[[0-9;]*m//g' gean_0.log | grep -c "\[validator\] proposed block" + +# Processed (one per block applied to state — own + others) +sed 's/\x1b\[[0-9;]*m//g' gean_0.log | grep -c "\[chain\] block slot=" +``` + +## zeam (Zig) + +**Log format:** `[timestamp] [level] (zeam): [module] message` + +**Key characteristics:** +- Color codes in output (ANSI escape sequences) +- Key modules: `[node]`, `[network]`, `[consensus]` + +**Common patterns:** +``` +[validator] packing proposer attestation for slot=X proposer=Y +[database] initializing RocksDB +[node] failed to load latest finalized state from database: error.NoFinalizedStateFound +processed block with root=0x... slot=X processing time=... +``` + +**Important crash signature (seen in past incidents):** +``` +thread 1 panic: integer overflow +.../ssz/src/lib.zig:N: ... in serializedSize__anon_NNNNN (zeam) +``` +or +``` +.../utils/src/ssz.zig:NN: ... in process_block (zeam) +``` + +These cause unbounded recursion / stack overflow logs (millions of identical +frames). If a zeam_0.log is unusually large (>1M lines), suspect a crash loop. + +## ream (Rust) + +**Log format:** `timestamp LEVEL module: message` + +**Key characteristics:** +- Uses `tracing` crate format +- Key modules: `ream_p2p::network::lean`, `ream_blockchain`, `ream_chain_lean` + +**Common patterns:** +``` +ream_p2p::network::lean: Connected to peer: PeerId("...") +ream_chain_lean::service: Processing block built by Validator N slot=X block_root=0x... +ream_chain_lean::service: Failed to handle process attestation message: ... 
+ream_chain_lean::service: Attestation too far in future expected slot: X <= Y +ream_chain_lean::service: No common highest checkpoint found among connected peers +``` + +**Known weakness:** ream's sync recovery is fragile. Watch for: +- `Attestation too far in future` +- `No state available for target 0x...` +- `Backfill job request timed out` + +These usually indicate ream is stuck and cannot rejoin a divergent branch. + +## ethlambda (Rust) + +**Log format:** `timestamp LEVEL module: message` + +**Key modules:** +- `ethlambda` +- `ethlambda_blockchain` +- `ethlambda_p2p` +- `ethlambda_p2p::gossipsub` + +**Key patterns:** + +### Block Proposal +``` +ethlambda_blockchain: We are the proposer for this slot slot=X validator_id=Y +ethlambda_blockchain: Published block slot=X validator_id=Y +ethlambda_p2p: Published block to gossipsub slot=X proposer=Y attestation_count=A +``` + +### Attestations +``` +ethlambda_blockchain: Published attestation slot=X validator_id=Y +ethlambda_p2p::gossipsub::handler: Published attestation to gossipsub slot=X validator=Y +ethlambda_blockchain: Skipping attestation for proposer slot=X +``` + +### Block Processing +``` +ethlambda_blockchain::store: Fork choice head updated head_slot=X head_root=0x... ... +ethlambda_blockchain: Processed new block slot=X block_root=0x... state_root=0x... +``` + +### Errors +``` +ethlambda_blockchain: Failed to process block slot=X err=... +ethlambda_blockchain: Block parent missing, storing as pending slot=X parent_root=0x... +ethlambda_p2p::swarm_adapter: Swarm adapter: publish failed err=MessageTooLarge +``` + +**Known regression (block bloat bug):** ethlambda's `build_block` greedily +accumulates attestations without a per-validator dedup or size cap. During +stalls, blocks can grow to ~12 MB and trigger MessageTooLarge errors. 
Watch for: + +``` +Swarm adapter: publish failed err=MessageTooLarge +gossipsub block decompression failed: uncompressed size NNNNNNNN exceeds maximum 10485760 +Block fetch failed after max retries +``` + +This is the same bug gean had before commit `62454aa` (per-validator latest-vote +selection). + +## lantern (C) + +**Log format:** `timestamp LEVEL [module] message` + +**Key characteristics:** +- Brackets around module names: `[state]`, `[gossip]`, `[network]`, `[QUIC]` +- Most reliable client in our network — rarely the source of bugs + +**Key patterns:** +``` +[state] imported block slot=X new_head_slot=Y head_root=0x... +[gossip] received block slot=X proposer=Y root=0x... source=gossip +[gossip] published attestation validator=X slot=Y +[gossip] processed vote validator=X slot=Y head=0x... target=0x...@N source=0x...@M +[signature] aggregation verify start count=N epoch=E proof_len=N +[reqresp] chunk payload too large=N peer=... +[QUIC] handshake timeout state=client_init_sent(...) +``` + +**Note:** lantern's QUIC handshake timeouts are usually a peer being unreachable +(dead listener), not a lantern bug. Lantern keeps redialing dead peers. + +## ANSI Color Code Handling + +Many clients output ANSI escape sequences for terminal colors. Strip them before +grepping: + +```bash +# Strip ANSI codes +sed 's/\x1b\[[0-9;]*m//g' logfile.log | grep pattern +``` + +Without stripping, patterns may not match correctly. 
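A quick way to see the effect: the escape bytes in front of the level defeat an anchored match until they are stripped. A minimal demo (the sample log line is made up):

```bash
# A colored log line: ESC[32m ... ESC[0m wraps the level prefix
line=$'\x1b[32mINFO\x1b[0m [chain] block slot=4 proc_time=3ms'

# Anchored grep fails on the raw line (escape bytes precede "INFO")
echo "$line" | grep -q '^INFO' || echo "no match on raw line"

# After stripping, the same pattern matches
echo "$line" | sed 's/\x1b\[[0-9;]*m//g' | grep -q '^INFO' && echo "matches after strip"
```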
+ +## Cross-Client Crash Detection + +Quick check for crashed clients (unusually large logs are often crash loops): + +```bash +# Sort logs by size — anything > 1M lines is suspect +wc -l *.log | sort -n + +# Find panic / fatal patterns +grep -l "panic\|fatal\|stack overflow\|segmentation" *.log + +# zeam-specific (most common crash family in our network) +grep -m 1 "thread.*panic\|process_block\|serializedSize" zeam_*.log +``` diff --git a/.claude/skills/devnet-log-review/references/ERROR_CLASSIFICATION.md b/.claude/skills/devnet-log-review/references/ERROR_CLASSIFICATION.md new file mode 100644 index 0000000..cf9b9d2 --- /dev/null +++ b/.claude/skills/devnet-log-review/references/ERROR_CLASSIFICATION.md @@ -0,0 +1,150 @@ +# Error Classification Guide + +Reference for categorizing and understanding errors in devnet logs. + +## Critical Errors + +Errors that indicate serious problems requiring immediate attention. + +| Pattern | Meaning | Action | +|---|---|---| +| `genesis mismatch` | Nodes have different genesis configurations | Check genesis consistency across nodes | +| `panic` / `fatal` / `thread panic` | Client crash | Check stack trace, file bug report | +| `stack overflow` / `runtime error: stack` | Crash from infinite recursion | Check SSZ decoder, file bug | +| `database corruption` | Data directory corrupted | Clear data directory and restart | +| `OutOfMemory` in block deserialization | Block format incompatibility between clients | Check SSZ schema versions | +| `MessageTooLarge` (publishing own block) | Producer-side block bloat bug | Check the producer's block builder | +| `snappy decoded len NNNNNNNN exceeds max 10485760` | Receiving an oversized block | Identify the producer (peer ID), file bug | +| `xmss_aggregate.rs panic` | Missing signature aggregation prover files | Ensure prover files are in correct location | + +## Expected/Benign Messages + +Messages that look like errors but are actually normal or harmless. 
+ +| Pattern | Meaning | Why It's OK | +|---|---|---| +| `Error response from daemon: manifest unknown` | Docker image tag not found in remote registry | Docker falls back to local image; only an issue if no local image exists | +| `failed to load latest finalized state from database: NoFinalizedStateFound` | Fresh start, no previous state | Normal for new devnet runs | +| `HandshakeTimedOut` to ports of unconfigured nodes | Connection attempt to node that doesn't exist | Expected when validator config has fewer nodes than the network expects | +| `[QUIC] handshake timeout` (lantern) to dead peers | Peer disappeared, lantern still trying | Cosmetic; back-off should kick in | +| `TODO precompute poseidons in parallel + SIMD` | Performance optimization not yet implemented | Code TODOs, not runtime errors | +| `TODO optimize open_columns when no shifted F columns` | AIR proof optimization not yet implemented | Code TODOs, not runtime errors | + +## Medium Severity + +Issues that may indicate problems but don't immediately break consensus. 
+ +| Pattern | Meaning | Action | +|---|---|---| +| `Failed to decode snappy-framed RPC request` | Protocol/encoding mismatch between clients | Check libp2p versions and snappy compression settings | +| `No callback found for request_id` | Response received for unknown request | May indicate internal state tracking issue | +| `UnexpectedEof` | Incomplete message received | Check network stability and message size limits | +| `Proposer signature verification failed` | Block has invalid proposer signature | Check if block is genuinely invalid or validation bug | +| `Invalid signatures for block` | Block has invalid attestation signatures | Check XMSS signature aggregation | +| `signature verification failed` | Generic signature validation failure | Check which signature type failed | +| `Unknown head block` | Attestation references block client doesn't have | May indicate fork or missing block | +| `Unknown target block` | Attestation target block not found | May indicate fork or missing block | +| `Block parent missing` | Received block but parent not available | Client will try to fetch parent | +| `block parent missing slot=N ... depth=D, storing as pending` (gean) | gean's pending block cache absorbing orphan | Normal during sync; problematic if depth grows | +| `Attestation too far in future` (ream) | ream is too far behind to validate the attestation | ream is stuck — sync recovery issue | +| `No state available for target 0x...` (ream) | Missing fork choice target state | ream cannot follow a divergent branch | +| `No common highest checkpoint found among connected peers` (ream) | Backfill cannot seed | ream sync deadlocked | + +## Connection Timeouts + +Connection timeouts to specific ports usually mean the node for that port was +never started, was paused, or crashed. 
+ +**Identifying the node:** +Check the `validator-config.yaml` file in the network directory: +- `lean-quickstart/local-devnet/genesis/validator-config.yaml` +- `lean-quickstart/ansible-devnet/genesis/validator-config.yaml` + +Each node entry has an `enrFields.quic` port. + +**If you see HandshakeTimedOut to certain ports but those nodes were never started, this is expected.** + +If a node was running and now isn't, that node likely crashed — check its log +for panics. + +## State Transition Errors + +### State Root Mismatch During Proposal + +If you see this pattern (in any client): +``` +We are the proposer for this slot slot=N validator_id=X +... +Failed to process block slot=N err=State transition failed: state root mismatch +Published block slot=N validator_id=X +``` + +This indicates a **block building bug**, not a consensus issue: +- The proposer builds a block with one state root in the header +- When verifying its own block, it computes a different state root +- The block is published anyway (bug: should not publish invalid blocks) +- Other nodes will also fail to process it with the same mismatch + +**Key diagnostic:** If all nodes compute the **same** state root (but different +from the block header), the state transition is deterministic — the bug is in +how the block header's state root is computed during block building. + +## Interoperability Issues + +When analyzing multi-client devnets, watch for: + +1. **Status exchange failures** — clients failing to exchange status messages +2. **Block/attestation propagation** — messages not reaching all clients +3. **Encoding mismatches** — snappy/SSZ encoding differences +4. **Timing issues** — slot timing drift between clients +5. **Block format incompatibility** — SSZ schema differences causing + deserialization failures (look for `OutOfMemory` errors) +6. **Stale containers** — containers from previous runs causing genesis mismatch + (look for `UnknownSourceBlock`) +7. 
**Signature validation disagreements** — clients disagree on signature + validity (indicates bug in proposer or validator) +8. **Oversized block cascades** — one producer's bloated blocks crash multiple + peers; look for `MessageTooLarge` (producer side) and `snappy decoded len + exceeds max` (receiver side) with the same byte size + +## gean-Specific Patterns + +### Healthy gean + +- `attestations=` count in [chain] block log stays low (typically 1-5 with the + per-validator refactor) +- `proc_time=` < 200ms for normal blocks +- `Behind: 0` or `Behind: 1` in chain status +- `has_parent=true` for almost all incoming blocks +- `[forkchoice] head` updates each slot + +### Unhealthy gean (regressions to watch for) + +| Symptom | Likely cause | +|---|---| +| `attestations=NN` (e.g., 50+) | per-validator refactor regressed | +| `MessageTooLarge` in gean's own log | block builder regressed | +| `proc_time=Xs` (seconds, not ms) | aggregation slow / CPU pressure | +| `has_parent=false` repeatedly | sync recovery issue | +| `Behind:` growing without bound | gean falling behind wall clock | +| `[sync] fetch exhausted for root` (many) | peer dropping out, batched fetch failing | + +## Searching for Errors + +```bash +# Generic error search +grep -i "error\|ERROR" *.log | grep -vE "manifest unknown|HandshakeTimedOut|NoFinalizedStateFound" | head -50 + +# Search for specific critical patterns +grep -i "genesis mismatch\|panic\|fatal\|stack overflow" *.log + +# Block bloat regression check (any client) +grep -i "MessageTooLarge\|snappy decoded len.*exceeds max" *.log + +# Client-specific error patterns +grep "block processing failed" gean_0.log +grep "Failed to process block" ethlambda_0.log +grep "Invalid signatures" qlean_0.log 2>/dev/null # if qlean was included +grep "signature verification failed" lantern_0.log +grep "Failed to handle process" ream_0.log +``` diff --git a/.claude/skills/devnet-log-review/references/FINALIZATION_DEBUG.md 
b/.claude/skills/devnet-log-review/references/FINALIZATION_DEBUG.md new file mode 100644 index 0000000..31772b2 --- /dev/null +++ b/.claude/skills/devnet-log-review/references/FINALIZATION_DEBUG.md @@ -0,0 +1,334 @@ +# Finalization Debugging Guide + +Guide for diagnosing and debugging finalization issues in devnet runs. + +## What is Finalization? + +Finalization is the process by which slots become irreversible in the +blockchain. In the lean consensus protocol (3SF-mini), finalization requires: +- >2/3 supermajority of validators attesting (technically `3 * votes >= 2 * total`) +- Justification chain with no "justifiable but unjustified" gaps between source + and target + +## Checking Finalization Progress + +```bash +# Track finalization over time per client +grep -h "finalized.*slot\|Finalized block.*@\|finalized_slot=" *.log | tail -50 + +# gean specific +sed 's/\x1b\[[0-9;]*m//g' gean_0.log | grep "finalized_slot=" | tail -20 +sed 's/\x1b\[[0-9;]*m//g' gean_0.log | grep "Latest Finalized:" | tail -10 + +# ethlambda specific +grep "finalized_slot=" ethlambda_0.log | tail -20 + +# ream specific +grep "Finalization\|finalized" ream_0.log | tail -20 + +# lantern specific +grep "finalized" lantern_0.log | tail -20 +``` + +**Expected pattern:** Finalization should advance every 6-12 slots (depending +on 3SF-mini gap rules). + +**Stall indicator:** Finalized slot stays the same for 50+ slots while head +slot continues advancing. 
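The stall indicator can be mechanized. A minimal sketch (the function name is hypothetical; it assumes gean's `finalized_slot=` field and GNU coreutils) that flags any finalized slot repeated across 50+ consecutive matching log lines:

```bash
# Sketch: flag a finalized_slot value that repeats for 50+ consecutive log lines.
# Heuristic only: this counts log lines, not slots, so tune the threshold.
detect_stall() {
  sed 's/\x1b\[[0-9;]*m//g' "$1" \
    | grep -oE 'finalized_slot=[0-9]+' \
    | uniq -c \
    | awk '$1 >= 50 { print "possible stall:", $2, "(" $1 " consecutive lines)" }'
}

# usage: detect_stall gean_0.log
```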
+ +## Example of Healthy Finalization + +``` +Slot 0: finalized_slot=0 +Slot 6: finalized_slot=0 (waiting for justification) +Slot 12: finalized_slot=6 (slot 6 finalized) +Slot 18: finalized_slot=12 (slot 12 finalized) +Slot 24: finalized_slot=18 (slot 18 finalized) +``` + +## Example of Finalization Stall + +``` +Slot 0: finalized_slot=0 +Slot 6: finalized_slot=0 +Slot 12: finalized_slot=6 +Slot 18: finalized_slot=12 +Slot 24: finalized_slot=18 ← finalized +Slot 30: finalized_slot=18 ← STUCK +Slot 50: finalized_slot=18 ← STILL STUCK +Slot 100: finalized_slot=18 ← NOT ADVANCING +``` + +## The 3SF-Mini Gap Rule + +This is the most subtle cause of finalization stalls and is **protocol-level**, +not a client bug. + +A slot N is "justifiable after finalized slot F" if `delta = N - F` is one of: +- ≤ 5 (any small distance) +- A perfect square: 9, 16, 25, 36, 49, 64, 81, 100, 121, 144, 169, 196, ... +- A pronic number `n*(n+1)`: 6, 12, 20, 30, 42, 56, 72, 90, 110, 132, 156, 182, ... + +**Finalization rule:** Finalization advances from `source` to `target` only if +**NO slot between them is "justifiable but not justified."** + +**Implication:** During normal steady-state operation, validators target the +*latest* justifiable slot at each step, so the chain of justified slots +matches the gap pattern and finalization keeps up. After a stall or peer +dropout, intermediate slots can be skipped — and once a justifiable slot is +missed, finalization can't cross it without retroactively justifying it +(which validators don't do). + +**Symptom:** justification keeps advancing in big jumps (e.g., 100 → 145 → +196), but finalization is stuck far behind because some intermediate +justifiable slot was never directly justified. + +This is **not a gean bug**. All clients exhibit this behavior because the +3SF-mini specification doesn't include a recovery mechanism. To recover, +restart from a clean genesis (or use checkpoint sync to bootstrap from a +known-good state). 
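The justifiability rule above can be encoded directly. A minimal sketch (the function name is hypothetical) for checking whether a delta from the last finalized slot is justifiable:

```bash
# Sketch of the 3SF-mini justifiability rule described above:
# delta is justifiable if it is <= 5, a perfect square, or a pronic number n*(n+1).
is_justifiable_delta() {
  local delta=$1 n=1
  (( delta <= 5 )) && return 0
  while (( n * n <= delta )); do
    (( n * n == delta )) && return 0          # perfect square: 9, 16, 25, ...
    (( n * (n + 1) == delta )) && return 0    # pronic: 6, 12, 20, 30, ...
    (( n++ ))
  done
  return 1
}

for d in 6 7 9 10 12; do
  is_justifiable_delta "$d" && echo "delta=$d justifiable" || echo "delta=$d not justifiable"
done
```

This makes the stall mechanism concrete: if the one justifiable delta in a window was skipped, no later delta can finalize across it.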
+ +## Common Causes of Finalization Stalls + +### 1. Insufficient Validator Participation + +**Requirement:** Need **>2/3 supermajority** to justify +- With 5 validators: need 4 votes (3*4=12 ≥ 2*5=10 ✓) +- With 6 validators: need 4 votes (3*4=12 ≥ 2*6=12 ✓) +- With 9 validators: need 6 votes (3*6=18 ≥ 2*9=18 ✓) + +If validators are on different forks, neither fork may reach >2/3. + +```bash +# Count how many validators are active (attesting) +grep "validator=" *.log | grep -oE "validator=[0-9]+" | sort -u + +# Check which validators are on which fork (by head block they attest to) +grep "head=0x" lantern_0.log | grep "validator=" | tail -30 +``` + +### 2. Validators on Invalid Fork + +If N validators follow an invalid fork, only (total - N) validators contribute +to canonical chain. + +**Example:** 6 validators, 1 on invalid fork +- Total: 6 validators +- Honest: 5 validators on canonical fork +- Threshold: need 4 votes (3*4 ≥ 12) +- Available: 5 honest votes +- **Should justify!** 5 ≥ 4 ✓ + +**Example:** 6 validators, 3 on invalid fork +- Total: 6 validators +- Honest: 3 validators on canonical fork +- Threshold: need 4 votes +- Available: 3 honest votes +- **Cannot justify!** 3 < 4 ✗ + +### 3. Missing Attestations + +Client fails to process attestations from certain validators. + +```bash +# gean +sed 's/\x1b\[[0-9;]*m//g' gean_0.log | grep "attestation channel full" +sed 's/\x1b\[[0-9;]*m//g' gean_0.log | grep "Failed to process attestation" + +# ethlambda +grep "Failed to process.*attestation" ethlambda_0.log | tail -30 + +# Common reasons: +# - "Unknown head block" → validator attesting to block this client doesn't have +# - "Unknown target block" → validator attesting to invalid/orphan fork blocks +``` + +**Impact:** +- Missing attestations reduce effective vote count +- May prevent reaching >2/3 threshold even if enough validators are on canonical fork + +### 4. 
Justification Chain Broken (3SF-Mini Gap Rule)
+
+3SF-mini requires justified slots at specific intervals (see top of this doc).
+Missing blocks or attestations can break the justification chain by skipping a
+justifiable slot.
+
+```bash
+# Check justification progress (gean)
+sed 's/\x1b\[[0-9;]*m//g' gean_0.log | grep "Latest Justified:\|justified_slot=" | tail -30
+
+# Look for gaps in justified slots
+```
+
+### 5. Aggregator Crashed or Disconnected
+
+In gean's network, the aggregator (one designated node, e.g., `gean_0` with
+`--is-aggregator`) bundles individual signatures into aggregated proofs. If the
+aggregator dies, raw attestations may not be aggregated, and other clients
+won't see the supermajority needed to finalize.
+
+```bash
+# Check who is aggregating
+sed 's/\x1b\[[0-9;]*m//g' gean_0.log | grep "\[signature\] aggregate:" | tail -20
+
+# If empty, gean is not aggregating — was it started without --is-aggregator?
+```
+
+### 6. One or More Clients Crashed
+
+A client crash mid-run removes that node's validators from participation,
+which can drop the vote count below the justification threshold.
+ +```bash +# Sort logs by size — anomalously large logs are crash loops +wc -l *.log | sort -n + +# Find panic patterns +grep -l "panic\|fatal\|stack overflow" *.log + +# Check container status if devnet still running +docker ps --format "{{.Names}}: {{.Status}}" --filter "name=_0" +``` + +## Finalization Math + +Given: +- `N` = total validators +- `N_honest` = validators on canonical fork +- `N_invalid` = validators on invalid/wrong fork +- Threshold = `3 * votes >= 2 * N` (i.e., `votes >= ceil(2N/3)`) + +### Examples + +**5 validators, all honest:** +- Total: 5 validators +- Threshold: `3v >= 10`, so `v >= 4` +- Available: 5 +- **Justifies!** 5 ≥ 4 ✓ + +**5 validators, 1 crashed:** +- Total: 5 (registry size) +- Honest: 4 +- Threshold: `v >= 4` +- Available: 4 +- **Justifies!** 4 ≥ 4 ✓ (just enough) + +**5 validators, 2 crashed:** +- Total: 5 +- Honest: 3 +- Threshold: `v >= 4` +- Available: 3 +- **Cannot justify!** 3 < 4 ✗ + +**6 validators, 1 on invalid fork:** +- Total: 6 +- Honest: 5 +- Threshold: `v >= 4` +- Available: 5 +- **Justifies!** 5 ≥ 4 ✓ + +**6 validators, 3 on invalid fork:** +- Total: 6 +- Honest: 3 +- Threshold: `v >= 4` +- Available: 3 +- **Cannot justify!** + +## Debugging Steps + +### Step 1: Verify Validator Count and Status + +```bash +# Count total validators +grep -h "validator=" *.log | grep -oE "validator=[0-9]+" | sort -u | wc -l + +# Check which nodes are proposing blocks (active validators) +grep -h "We are the proposer\|proposing block\|proposed block" *.log | head -30 + +# Check which nodes are still alive (containers) +docker ps --format "{{.Names}}: {{.Status}}" --filter "name=_0" +``` + +### Step 2: Check Fork Structure + +```bash +# See if clients have different heads +grep -h "Head Slot: 30\|head slot=30" *.log + +# Compare block hashes at recent slots +for slot in 28 29 30 31 32; do + echo "=== Slot $slot ===" + sed 's/\x1b\[[0-9;]*m//g' *.log | grep -h "slot=$slot[^0-9]" | grep -oE "0x[a-f0-9]{8}" | sort -u +done +``` + +### 
Step 3: Count Attestations
+
+```bash
+# Count attestations received per slot (gean)
+sed 's/\x1b\[[0-9;]*m//g' gean_0.log | grep "attestation verified.*slot=30" | wc -l
+
+# Expected: N-1 attestations per slot (all validators except proposer)
+# With 5 validators: expect 4 attestations per slot
+```
+
+### Step 4: Check for Processing Failures
+
+```bash
+# Look for attestation processing failures
+grep "Failed to process.*attestation\|attestation.*fail" *.log | tail -50
+
+# Group by error type
+grep "Failed to process.*attestation" ethlambda_0.log | grep -oE "err=.*" | sort | uniq -c
+```
+
+### Step 5: Verify Threshold Calculation
+
+```bash
+# Calculate if finalization should be possible
+echo "Total validators: $(grep -h validator= *.log | grep -oE 'validator=[0-9]+' | sort -u | wc -l)"
+echo "Threshold: 3*votes >= 2*total"
+echo "Validators on canonical fork: ?"  # Count from logs
+```
+
+### Step 6: Check the 3SF-Mini Gap Rule
+
+If `justified` is advancing but `finalized` isn't, you're likely hitting the
+3SF-mini gap rule. Check whether intermediate justifiable slots have been
+skipped:
+
+```bash
+# Print all justified slots gean has seen
+sed 's/\x1b\[[0-9;]*m//g' gean_0.log | grep "justified_slot=" | grep -oE "justified_slot=[0-9]+" | sort -u
+
+# Compute the expected justifiable slots from finalized:
+# delta values: 1,2,3,4,5,6,9,12,16,20,25,30,36,42,49,56,64,72,81,90,100,...
+# If finalized=64, expected justifiable = 65,66,67,68,69,70,73,76,80,84,89,94,100,...
+```
+
+## Known Bugs
+
+### gean: Block Bloat (FIXED in commit 62454aa)
+
+**Old symptom:** gean produced blocks with 100+ aggregated attestations,
+exceeding the 10 MiB spec limit. Blocks failed to gossip and the network
+stalled.
+
+**Fix:** Per-validator latest-vote selection in `node/store_build.go`. After
+the fix, gean's blocks contain at most `numValidators` distinct attestations.
+ +**Detection (regression check):** +```bash +# Should NEVER print after the fix +sed 's/\x1b\[[0-9;]*m//g' gean_0.log | grep -E "attestations=[0-9]{2,}" | head +sed 's/\x1b\[[0-9;]*m//g' gean_0.log | grep "MessageTooLarge\|exceeds max" +``` + +### ethlambda: Same Bug (UPSTREAM ISSUE) + +ethlambda still has the same block bloat bug at the time of writing +(`crates/blockchain/src/store.rs:1018`). When ethlambda is the proposer in a +mixed network, expect occasional MessageTooLarge / cascade behavior. + +## Additional Resources + +See [FORK_ANALYSIS.md](FORK_ANALYSIS.md) for fork detection and +[ERROR_CLASSIFICATION.md](ERROR_CLASSIFICATION.md) for common error patterns. diff --git a/.claude/skills/devnet-log-review/references/FORK_ANALYSIS.md b/.claude/skills/devnet-log-review/references/FORK_ANALYSIS.md new file mode 100644 index 0000000..74583d5 --- /dev/null +++ b/.claude/skills/devnet-log-review/references/FORK_ANALYSIS.md @@ -0,0 +1,265 @@ +# Fork Analysis Guide + +Comprehensive guide to identifying and analyzing blockchain forks in devnet runs. + +## Understanding Forks + +**Fork Types:** +1. **Canonical Fork** — The main chain that the honest majority follows +2. **Orphan Fork** — Valid blocks that lost a fork choice race (e.g., two + blocks proposed for same slot) +3. **Invalid Fork** — Chain built on blocks with validation failures + (signature errors, state errors, etc.) + +**Key Insight:** Blocks don't just have slot numbers — they have **parent +relationships**. A fork occurs when blocks at different slots reference +different parent blocks. + +## Tracing Parent-Child Relationships + +To understand forks, map out the blockchain DAG (Directed Acyclic Graph) by +tracking which block is the parent of each new block. + +### gean — Explicit Parent Logging + +```bash +# gean logs parent relationships in [chain] block lines +sed 's/\x1b\[[0-9;]*m//g' gean_0.log | grep "\[chain\] block slot=" | head -20 +# Output: [chain] block slot=12 block_root=0x... 
parent_root=0x... proposer=3 +# attestations=4 justified_slot=10 finalized_slot=6 proc_time=88ms + +# gean's gossip layer shows incoming blocks +sed 's/\x1b\[[0-9;]*m//g' gean_0.log | grep "\[gossip\] received block" | head -20 + +# When parent is missing, gean logs and stores as pending +sed 's/\x1b\[[0-9;]*m//g' gean_0.log | grep "block parent missing" +# Output: block parent missing slot=N block_root=0x... parent_root=0x... depth=D, storing as pending +``` + +### ethlambda — Pending Blocks + +```bash +# When ethlambda receives a block with unknown parent: +grep "Block parent missing" ethlambda_0.log +# Output: Block parent missing, storing as pending slot=8 parent_root=0x... block_root=0x... +# Meaning: slot 8 block depends on parent 0x... which ethlambda doesn't have + +# Check processed blocks +grep "Processed new block\|Fork choice head updated" ethlambda_0.log | head -20 +``` + +### lantern — Import Logs + +```bash +grep "imported block" lantern_0.log | head -20 +# Output: imported block slot=3 new_head_slot=3 head_root=0x... +``` + +### zeam — Block Processing + +```bash +sed 's/\x1b\[[0-9;]*m//g' zeam_0.log | grep "processing block\|imported block\|processed block" | head -20 +``` + +### ream — Block Processing Service + +```bash +grep "Processing block built\|Fork choice head updated" ream_0.log | head -20 +``` + +## Building the Fork Structure + +### Step 1: Map Canonical Chain + +Start from genesis and follow the longest/heaviest chain: + +```bash +# For gean — extract block roots in slot order +sed 's/\x1b\[[0-9;]*m//g' gean_0.log | grep "\[chain\] block slot=" | \ + grep -oE "slot=[0-9]+|block_root=0x[a-f0-9]{8}" | paste - - | head -30 + +# Compare block hashes at each slot across clients +# If clients have different hashes at same slot → fork! 
+``` + +### Step 2: Identify Rejected Blocks + +```bash +# Find blocks rejected by signature verification +grep -i "signature.*failed\|invalid signature" *.log + +# gean +sed 's/\x1b\[[0-9;]*m//g' gean_0.log | grep "block processing failed" + +# ethlambda +grep "Failed to process block" ethlambda_0.log +# Output: Failed to process block slot=4 err=Proposer signature verification failed + +# lantern +grep "signature verification failed\|rejected" lantern_0.log +``` + +### Step 3: Track Attestations to Unknown Blocks + +Attestations reference blocks by hash. If a client receives attestations for +an unknown block, it indicates a fork: + +```bash +# ethlambda logs "Unknown head block" or "Unknown target block" +grep "Unknown.*block:" ethlambda_0.log | head -20 + +# Count attestations per unknown block +grep "Unknown.*block:" ethlambda_0.log | grep -oE "0x[a-f0-9]{64}" | sort | uniq -c | sort -rn + +# ream version +grep "Unknown.*block\|No state available for target" ream_0.log | grep -oE "0x[a-f0-9]{64}" | sort -u +``` + +### Step 4: Determine Which Validators Are on Which Fork + +```bash +# Check who is attesting to rejected blocks (lantern is most expressive here) +grep "rejected vote\|rejected attestation" lantern_0.log | grep "validator=" | head -20 + +# Cross-check with each node's reported head +for f in *.log; do + node=$(basename "$f" .log) + head_line=$(sed 's/\x1b\[[0-9;]*m//g' "$f" | grep -E "Head Block Root|Head Slot" | tail -1) + echo "$node: $head_line" +done +``` + +## Fork Structure Diagram Format + +When you identify forks, document them in ASCII: + +``` + GENESIS (slot 0) + 0xc8849d39... + │ + ┌─────────────────┴─────────────────┐ + │ │ + SLOT 1 █ SLOT 4 ✗ + 0xcbe3c545... 0xa829bac5... + ┌─────────────────┐ (INVALID — rejected + │ CANONICAL (A) │ by 3/4 clients) + │ Clients: │ │ + │ ✓ gean │ SLOT 10 ⚠ + │ ✓ ethlambda │ 0xf8dae5ee... + │ ✓ zeam │ (invalid fork, only + │ ✓ lantern │ ream follows) + └─────────────────┘ + │ + SLOT 3 █ + 0x0c3dd6a5... 
+ │ + SLOT 5 █ + 0xd0fd6225... + │ + (continues...) + +Legend: + █ = Canonical block ✗ = Rejected block ⚠ = Block on invalid fork +``` + +## Key Questions to Answer + +1. **Which block(s) were rejected and why?** (signature errors, state errors, + etc.) +2. **Which validators accepted the rejected block?** (check their heads) +3. **How many validators are on each fork?** (count unique attestations per + fork) +4. **Can the canonical fork finalize without the validators on invalid fork?** + (need >2/3 supermajority) + +## Signature Verification Disagreements + +If clients disagree on signature validity, determine consensus: + +```bash +# Count how many clients rejected vs accepted a specific block +BLOCK_HASH="0xa829bac56f6b98fbe16ed02cde4166a0a0df2e68c68e64afa4fce43bbe1992b3" + +echo "=== Clients that rejected $BLOCK_HASH ===" +grep -l "signature.*failed.*$BLOCK_HASH\|Invalid signatures.*$BLOCK_HASH" *.log + +echo "=== Clients that accepted $BLOCK_HASH ===" +grep -l "Processed.*$BLOCK_HASH\|imported.*$BLOCK_HASH\|\[chain\] block.*$BLOCK_HASH" *.log + +# If 3/4 clients reject → the block is genuinely invalid, bug in proposer +# If 1/4 clients reject → possible bug in that client's validation +``` + +### Root Cause Determination + +- If **majority rejects** with signature errors → **proposer has bug** + (failed to sign properly) +- If **minority rejects** with signature errors → **validator has bug** + (incorrect validation) +- If **different blocks at same slot** → fork choice race (benign, resolved by + fork choice) + +## Comparing Block Hashes Across Slots + +```bash +# Extract block hashes for specific slots (comparing across clients) +for slot in 1 2 3 4 5; do + echo "=== Slot $slot ===" + sed 's/\x1b\[[0-9;]*m//g' *.log | grep -h "slot=$slot[^0-9]" | grep -oE "0x[a-f0-9]{8}" | sort -u +done + +# Check which client has which head at a specific slot +grep -h "Head Slot: 18\|head slot=18" *.log + +# Compare finalization across clients +grep -h 
"finalized.*slot\|Finalized block.*@\|finalized_slot=" *.log | tail -20 +``` + +## Validator ID Detection + +Each validator proposes blocks when `slot % validator_count == validator_id`. + +### Finding Validator IDs from Logs + +```bash +# gean — explicit validator in [validator] log lines +sed 's/\x1b\[[0-9;]*m//g' gean_0.log | grep "produced attestation\|proposing block" | head -3 +# Output: [validator] produced attestation slot=6 validator=2 +# Pattern: validator=2 proposes at slots 2, 7, 12, ... (every 5th if 5 validators) + +# ethlambda — explicit validator_id +grep "We are the proposer" ethlambda_0.log | head -3 +# Output: We are the proposer for this slot slot=5 validator_id=5 + +# zeam — proposer field +grep "packing proposer attestation" zeam_0.log | head -3 +# Output: packing proposer attestation for slot=6 proposer=0 + +# Generic — validator_id = slot % validator_count +``` + +### Verify Validator Count + +```bash +# Count unique validators from attestations +grep -h "validator=" *.log | grep -oE "validator=[0-9]+" | sort -u | wc -l + +# Or check genesis/validator-config.yaml for the configured count +``` + +## Reorg Detection + +Reorgs are normal — they happen when fork choice swaps the head between +competing branches. They become a problem only when frequent or deep. + +```bash +# gean logs REORG explicitly +sed 's/\x1b\[[0-9;]*m//g' gean_0.log | grep "REORG" +# Output: REORG slot=N head_root=0x... (was 0x...) justified_slot=J finalized_slot=F + +# Count reorgs per node +sed 's/\x1b\[[0-9;]*m//g' gean_0.log | grep -c "REORG" +``` + +A handful of single-slot reorgs (1 reorg per ~50 blocks) is normal. Many +deep reorgs suggest a network partition or aggressive fork choice churn. 
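Because proposer assignment is deterministic, the expected proposer slots can be computed and compared with the slots a node actually proposed. A minimal sketch — the helper name is hypothetical, not one of this skill's scripts:

```bash
#!/bin/bash
# Hypothetical helper (not a shipped script): expected proposer slots for one
# validator under round-robin assignment slot % validator_count == validator_id.
expected_proposer_slots() {
  local validator_id=$1 validator_count=$2 max_slot=$3 s
  for (( s = validator_id; s <= max_slot; s += validator_count )); do
    echo "$s"
  done
  return 0
}
```

With 5 validators, `expected_proposer_slots 2 5 20` prints 2, 7, 12, 17 — any expected slot missing from a node's proposed-block log lines points at a missed proposal (crashed node, late start, or wrong validator assignment).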
diff --git a/.claude/skills/devnet-log-review/scripts/analyze-logs.sh b/.claude/skills/devnet-log-review/scripts/analyze-logs.sh new file mode 100755 index 0000000..2db93a6 --- /dev/null +++ b/.claude/skills/devnet-log-review/scripts/analyze-logs.sh @@ -0,0 +1,73 @@ +#!/bin/bash +# analyze-logs.sh - Main entry point for devnet log analysis +# +# Usage: analyze-logs.sh [log_dir] +# log_dir: Directory containing *.log files (default: current directory) +# +# Output: Complete analysis summary in markdown format +# Exit codes: 0 = healthy, 1 = warnings, 2 = failed + +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +log_dir="${1:-.}" + +# Check if log files exist +shopt -s nullglob +log_files=("$log_dir"/*.log) +if [[ ${#log_files[@]} -eq 0 ]]; then + echo "No .log files found in $log_dir" >&2 + exit 1 +fi + +# Count node log files (excluding devnet.log) +node_count=0 +for f in "${log_files[@]}"; do + node=$(basename "$f" .log) + if [[ "$node" != "devnet" ]]; then + node_count=$((node_count + 1)) + fi +done + +echo "## Devnet Log Analysis" +echo "" +echo "**Log directory:** $log_dir" +echo "**Node logs found:** $node_count" +echo "" + +echo "### Errors and Warnings" +echo "" +"$SCRIPT_DIR/count-errors-warnings.sh" "$log_dir" +echo "" + +echo "### Block Production" +echo "" +"$SCRIPT_DIR/count-blocks.sh" "$log_dir" +echo "" + +echo "### Consensus Progress" +echo "" +"$SCRIPT_DIR/check-consensus-progress.sh" "$log_dir" +echo "" + +# Calculate overall health +total_errors=0 +for f in "${log_files[@]}"; do + node=$(basename "$f" .log) + if [[ "$node" != "devnet" ]]; then + errors=$(grep -i "error" "$f" 2>/dev/null | grep -cvE "manifest unknown|NoFinalizedStateFound|HandshakeTimedOut|HandshakeFailed" 2>/dev/null) || errors=0 + total_errors=$((total_errors + errors)) + fi +done + +echo "---" +if [[ $total_errors -eq 0 ]]; then + echo "**Status: HEALTHY** - No errors detected" + exit 0 +elif [[ $total_errors -lt 50 ]]; then + echo "**Status: 
WARNINGS** - $total_errors total errors detected" + exit 1 +else + echo "**Status: ISSUES** - $total_errors total errors detected (review recommended)" + exit 2 +fi diff --git a/.claude/skills/devnet-log-review/scripts/check-consensus-progress.sh b/.claude/skills/devnet-log-review/scripts/check-consensus-progress.sh new file mode 100755 index 0000000..eb8f0a9 --- /dev/null +++ b/.claude/skills/devnet-log-review/scripts/check-consensus-progress.sh @@ -0,0 +1,110 @@ +#!/bin/bash +# check-consensus-progress.sh - Show consensus progress per node +# +# Usage: check-consensus-progress.sh [log_dir] +# log_dir: Directory containing *.log files (default: current directory) +# +# Output: Last slot, last justified, last finalized per node, and proposer slots + +set -euo pipefail + +log_dir="${1:-.}" + +# Strip ANSI escape codes from input +strip_ansi() { + sed 's/\x1b\[[0-9;]*m//g' +} + +# Check if log files exist +shopt -s nullglob +log_files=("$log_dir"/*.log) +if [[ ${#log_files[@]} -eq 0 ]]; then + echo "No .log files found in $log_dir" >&2 + exit 1 +fi + +echo "=== Consensus Progress (Last Observed) ===" +printf "%-20s %12s %12s %12s\n" "Node" "Head Slot" "Justified" "Finalized" +printf "%-20s %12s %12s %12s\n" "----" "---------" "---------" "---------" + +for f in "${log_files[@]}"; do + node=$(basename "$f" .log) + + # Skip combined devnet.log + if [[ "$node" == "devnet" ]]; then + continue + fi + + # Extract last slot number from log (handles slot=N, slot: N, Slot N, @ N formats) + last_slot=$(strip_ansi < "$f" | grep -oE "slot[=: ][0-9]+|Slot [0-9]+|@ [0-9]+" | grep -oE "[0-9]+" | sort -n | tail -1 || echo "0") + + if [[ -z "$last_slot" ]]; then + last_slot="N/A" + fi + + # Try to extract last justified / finalized + # Patterns covered: + # - "justified_slot=N" (gean, ethlambda) + # - "Latest Justified: Slot N" (gean chain status box) + # - "Justified: slot N" (zeam chain status / lantern variants) + last_justified=$(strip_ansi < "$f" \ + | grep -oE 
"justified_slot=[0-9]+|Latest Justified:[[:space:]]+Slot[[:space:]]+[0-9]+|Justified:[[:space:]]+[Ss]lot[[:space:]:]*[0-9]+" \ + | grep -oE "[0-9]+$" | sort -n | tail -1 || echo "") + last_finalized=$(strip_ansi < "$f" \ + | grep -oE "finalized_slot=[0-9]+|Latest Finalized:[[:space:]]+Slot[[:space:]]+[0-9]+|Finalized:[[:space:]]+[Ss]lot[[:space:]:]*[0-9]+" \ + | grep -oE "[0-9]+$" | sort -n | tail -1 || echo "") + + [[ -z "$last_justified" ]] && last_justified="N/A" + [[ -z "$last_finalized" ]] && last_finalized="N/A" + + printf "%-20s %12s %12s %12s\n" "$node" "$last_slot" "$last_justified" "$last_finalized" +done + +echo "" +echo "=== Proposer Slots ===" +echo "(Slots where each node was the proposer)" +echo "" + +for f in "${log_files[@]}"; do + node=$(basename "$f" .log) + client="${node%_*}" + + # Skip combined devnet.log + if [[ "$node" == "devnet" ]]; then + continue + fi + + # Extract proposed slots based on client. + # Trailing `|| true` is required: under `set -euo pipefail` a grep with no + # matches returns 1, which would otherwise abort the whole script and skip + # every node alphabetically after a node with zero proposed blocks. 
+ case "$client" in + gean) + slots=$(strip_ansi < "$f" | grep "\[validator\] proposed block" | grep -oE "slot=[0-9]+" | cut -d= -f2 | tr '\n' ',' | sed 's/,$//' || true) + ;; + zeam) + slots=$(strip_ansi < "$f" | grep "produced block for slot" | grep -oE "slot=[0-9]+" | cut -d= -f2 | tr '\n' ',' | sed 's/,$//' || true) + ;; + ream) + slots=$(strip_ansi < "$f" | grep "Proposing block by Validator" | grep -oE "slot=[0-9]+" | cut -d= -f2 | tr '\n' ',' | sed 's/,$//' || true) + ;; + ethlambda) + slots=$(strip_ansi < "$f" | grep "Published block to gossipsub" | grep -oE "slot=[0-9]+" | cut -d= -f2 | tr '\n' ',' | sed 's/,$//' || true) + ;; + lantern) + slots=$(strip_ansi < "$f" | grep "[Pp]ublished block\|[Pp]roduced block" | grep -oE "slot=[0-9]+" | cut -d= -f2 | tr '\n' ',' | sed 's/,$//' || true) + ;; + qlean) + slots=$(strip_ansi < "$f" | grep "Produced block" | grep -oE "@ [0-9]+" | grep -oE "[0-9]+" | tr '\n' ',' | sed 's/,$//' || true) + ;; + *) + slots="" + ;; + esac + + if [[ -n "$slots" ]]; then + echo "$node: slots $slots" + else + echo "$node: (no blocks proposed)" + fi +done diff --git a/.claude/skills/devnet-log-review/scripts/count-blocks.sh b/.claude/skills/devnet-log-review/scripts/count-blocks.sh new file mode 100755 index 0000000..833376b --- /dev/null +++ b/.claude/skills/devnet-log-review/scripts/count-blocks.sh @@ -0,0 +1,93 @@ +#!/bin/bash +# count-blocks.sh - Count blocks proposed and processed per node +# +# Usage: count-blocks.sh [log_dir] +# log_dir: Directory containing *.log files (default: current directory) +# +# Output: Table with node name, blocks proposed, blocks processed +# Handles client-specific log patterns (gean, zeam, ream, lantern, ethlambda) + +set -uo pipefail + +log_dir="${1:-.}" + +# Strip ANSI escape codes from input +strip_ansi() { + sed 's/\x1b\[[0-9;]*m//g' +} + +# Safe count function that always returns a number +count_pattern() { + local file="$1" + local pattern="$2" + local result + result=$(strip_ansi < "$file" | 
grep -cE "$pattern" 2>/dev/null) || result=0 + echo "${result:-0}" +} + +# Check if log files exist +shopt -s nullglob +log_files=("$log_dir"/*.log) +if [[ ${#log_files[@]} -eq 0 ]]; then + echo "No .log files found in $log_dir" >&2 + exit 1 +fi + +# Print header +printf "%-20s %10s %10s\n" "Node" "Proposed" "Processed" +printf "%-20s %10s %10s\n" "----" "--------" "---------" + +for f in "${log_files[@]}"; do + node=$(basename "$f" .log) + + # Skip devnet.log - it's a combined log + if [[ "$node" == "devnet" ]]; then + continue + fi + + # Extract client name from node name (e.g., "gean_0" -> "gean") + client="${node%_*}" + + proposed=0 + processed=0 + + case "$client" in + gean) + # gean logs "[validator] proposed block" once per proposal, + # and "[chain] block slot=N ... proc_time=Xms" for each processed block. + proposed=$(count_pattern "$f" "\\[validator\\] proposed block") + processed=$(count_pattern "$f" "\\[chain\\] block slot=") + ;; + zeam) + proposed=$(count_pattern "$f" "produced block for slot") + processed=$(count_pattern "$f" "processed block") + ;; + ream) + # ream logs "Proposing block by Validator" when attempting + proposed=$(count_pattern "$f" "Proposing block by Validator|Processing block built by Validator") + processed=$(count_pattern "$f" "Processing block built") + ;; + lantern) + # Lantern logs lowercase "published block" for proposals + proposed=$(count_pattern "$f" "[Pp]roduced block|[Gg]ossiped block|[Pp]ublished block") + processed=$(count_pattern "$f" "[Ii]mported block") + ;; + ethlambda) + # ethlambda logs "Published block to gossipsub" once per block + proposed=$(count_pattern "$f" "Published block to gossipsub") + processed=$(count_pattern "$f" "Processed new block|Fork choice head updated") + ;; + qlean) + # qlean uses "Produced block" or "Gossiped block" + proposed=$(count_pattern "$f" "Produced block|Gossiped block") + processed=$(count_pattern "$f" "Imported block") + ;; + *) + # Unknown client - try generic patterns + 
proposed=$(count_pattern "$f" "[Pp]roduced block|[Pp]ublished block|[Gg]ossiped block|proposed block") + processed=$(count_pattern "$f" "[Pp]rocessed block|[Ii]mported block") + ;; + esac + + printf "%-20s %10d %10d\n" "$node" "$proposed" "$processed" +done diff --git a/.claude/skills/devnet-log-review/scripts/count-errors-warnings.sh b/.claude/skills/devnet-log-review/scripts/count-errors-warnings.sh new file mode 100755 index 0000000..4c928ed --- /dev/null +++ b/.claude/skills/devnet-log-review/scripts/count-errors-warnings.sh @@ -0,0 +1,50 @@ +#!/bin/bash +# count-errors-warnings.sh - Count errors and warnings per node log file +# +# Usage: count-errors-warnings.sh [log_dir] +# log_dir: Directory containing *.log files (default: current directory) +# +# Output: Table with node name, error count, warning count +# Excludes benign patterns like "manifest unknown", "NoFinalizedStateFound", "TODO" + +set -uo pipefail + +log_dir="${1:-.}" + +# Benign patterns to exclude from counts +BENIGN_ERRORS="manifest unknown|NoFinalizedStateFound|HandshakeTimedOut|HandshakeFailed|connection refused" +BENIGN_WARNINGS="TODO|deprecated" + +# Safe count function +count_filtered() { + local file="$1" + local pattern="$2" + local exclude="$3" + local result + result=$(grep -i "$pattern" "$file" 2>/dev/null | grep -cvE "$exclude" 2>/dev/null) || result=0 + echo "${result:-0}" +} + +# Check if log files exist +shopt -s nullglob +log_files=("$log_dir"/*.log) +if [[ ${#log_files[@]} -eq 0 ]]; then + echo "No .log files found in $log_dir" >&2 + exit 1 +fi + +# Print header +printf "%-20s %8s %8s\n" "Node" "Errors" "Warnings" +printf "%-20s %8s %8s\n" "----" "------" "--------" + +for f in "${log_files[@]}"; do + node=$(basename "$f" .log) + + # Count errors excluding benign patterns + errors=$(count_filtered "$f" "error" "$BENIGN_ERRORS") + + # Count warnings excluding benign patterns + warnings=$(count_filtered "$f" "warn" "$BENIGN_WARNINGS") + + printf "%-20s %8d %8d\n" "$node" "$errors" 
"$warnings"
+done
diff --git a/.claude/skills/devnet-log-review/scripts/show-errors.sh b/.claude/skills/devnet-log-review/scripts/show-errors.sh
new file mode 100755
index 0000000..c0c121d
--- /dev/null
+++ b/.claude/skills/devnet-log-review/scripts/show-errors.sh
@@ -0,0 +1,80 @@
+#!/bin/bash
+# show-errors.sh - Display error details for investigation
+#
+# Usage: show-errors.sh [options] [log_dir]
+#   -n NODE    Filter to specific node (e.g., "gean_0")
+#   -l LIMIT   Limit number of errors shown per file (default: 20)
+#   -w         Also show warnings
+#   log_dir    Directory containing *.log files (default: current directory)
+#
+# Output: Error messages from log files, stripped of ANSI codes
+
+set -euo pipefail
+
+# Defaults
+node_filter=""
+limit=20
+show_warnings=false
+log_dir="."
+
+# Parse options
+while getopts "n:l:w" opt; do
+  case $opt in
+    n) node_filter="$OPTARG" ;;
+    l) limit="$OPTARG" ;;
+    w) show_warnings=true ;;
+    *) echo "Usage: $0 [-n node] [-l limit] [-w] [log_dir]" >&2; exit 1 ;;
+  esac
+done
+shift $((OPTIND-1))
+
+# Remaining argument is log_dir
+if [[ $# -gt 0 ]]; then
+  log_dir="$1"
+fi
+
+# Strip ANSI escape codes from input
+strip_ansi() {
+  sed 's/\x1b\[[0-9;]*m//g'
+}
+
+# Build file pattern
+if [[ -n "$node_filter" ]]; then
+  pattern="$log_dir/${node_filter}.log"
+else
+  pattern="$log_dir/*.log"
+fi
+
+# Check if log files exist
+shopt -s nullglob
+log_files=($pattern)
+if [[ ${#log_files[@]} -eq 0 ]]; then
+  echo "No matching .log files found" >&2
+  exit 1
+fi
+
+for f in "${log_files[@]}"; do
+  node=$(basename "$f" .log)
+
+  # Skip combined devnet.log unless specifically requested
+  if [[ "$node" == "devnet" && -z "$node_filter" ]]; then
+    continue
+  fi
+
+  echo "=== $node ==="
+
+  # Show errors (`|| true` guards grep/head failures under `set -euo pipefail`)
+  error_count=$(strip_ansi < "$f" | grep -ci "error" || true)
+  echo "Errors ($error_count total, showing first $limit):"
+  strip_ansi < "$f" | grep -i "error" | head -"$limit" || true
+
+  # Optionally show warnings
+  if $show_warnings; then
+    echo ""
+    
warning_count=$(strip_ansi < "$f" | grep -ci "warn" || true)
+    echo "Warnings ($warning_count total, showing first $limit):"
+    strip_ansi < "$f" | grep -i "warn" | head -"$limit" || true
+  fi
+
+  echo ""
+done
diff --git a/.claude/skills/devnet-runner/SKILL.md b/.claude/skills/devnet-runner/SKILL.md
new file mode 100644
index 0000000..a716f2a
--- /dev/null
+++ b/.claude/skills/devnet-runner/SKILL.md
@@ -0,0 +1,249 @@
+---
+name: devnet-runner
+description: Manage local development networks (devnets) for lean consensus multi-client testing. This skill should be used when the user asks to run a devnet, start or stop devnet nodes, spin up a local testnet, configure validator nodes, regenerate genesis files, change Docker image tags, collect or dump node logs, troubleshoot devnet issues, restart a node with checkpoint sync, run a long-lived devnet with detached containers, or perform rolling restarts to upgrade images.
+---
+
+# Devnet Runner
+
+Manage local development networks for lean consensus testing involving gean
+and peer clients (zeam, ream, lantern, ethlambda).
+
+## Prerequisites
+
+The `lean-quickstart` directory must exist alongside the `gean` repo. If
+missing:
+```bash
+make lean-quickstart
+```
+
+## Default Behavior
+
+When starting a devnet, **always**:
+1. **Update validator config** — Edit
+   `lean-quickstart/local-devnet/genesis/validator-config.yaml` to include
+   ONLY the nodes that will run. Remove entries for nodes that won't be
+   started (unless the user explicitly asks to keep them). Validator indices
+   are assigned to ALL nodes in the config; if a node is in the config but
+   not running, its validators will miss their proposer slots. To control
+   which nodes run, always edit this config file rather than using
+   `--node <node>`, since `--node` does NOT reassign validators and
+   causes missed slots.
+2. 
**Update client image tags** — If the user specifies a tag (e.g., "use
+   `dev` tag for gean"), edit the relevant
+   `lean-quickstart/client-cmds/{client}-cmd.sh` file to update the
+   `node_docker` image tag.
+3. **Use run-devnet-with-timeout.sh** — This script runs all nodes in the
+   config with a timeout, dumps logs, then stops them.
+4. Run for **20 slots** unless the user specifies otherwise.
+5. The script automatically dumps all node logs to `<node>.log` files in
+   the gean repo root and stops the nodes when the timeout expires.
+
+## Timing Calculation
+
+Total timeout = startup buffer + genesis offset + (slots × 4 seconds)
+
+| Component | Local Mode | Ansible Mode |
+|---|---|---|
+| Startup buffer | 10s | 10s |
+| Genesis offset | 30s | 360s |
+| Per slot | 4s | 4s |
+
+**Examples (local mode):**
+- 20 slots: 10 + 30 + (20 × 4) = **120s**
+- 50 slots: 10 + 30 + (50 × 4) = **240s**
+- 100 slots: 10 + 30 + (100 × 4) = **440s**
+
+## Quick Start (Default Workflow)
+
+**Step 1: Configure nodes** — Edit
+`lean-quickstart/local-devnet/genesis/validator-config.yaml` to keep only the
+nodes you want to run. See `references/validator-config.md` for the full
+schema and field reference.
+
+**Step 2: Update image tags (if needed)** — Edit
+`lean-quickstart/client-cmds/{client}-cmd.sh` to change the Docker image tag
+in `node_docker`. See `references/clients.md` for current default tags.
+
+**Step 3: Run the devnet**
+```bash
+# Start devnet with fresh genesis, capture logs directly (20 slots = 120s)
+.claude/skills/devnet-runner/scripts/run-devnet-with-timeout.sh 120
+```
+
+## Manual Commands
+
+All `spin-node.sh` commands must be run from within `lean-quickstart/`:
+
+```bash
+# Stop all nodes
+cd lean-quickstart && NETWORK_DIR=local-devnet ./spin-node.sh --node all --stop
+
+# Run for custom duration (e.g., 50 slots = 240s with genesis offset)
+.claude/skills/devnet-runner/scripts/run-devnet-with-timeout.sh 240
+
+# Start without timeout (press Ctrl+C to stop)
+cd lean-quickstart && NETWORK_DIR=local-devnet ./spin-node.sh --node all --generateGenesis
+```
+
+## Command-Line Flags
+
+| Flag | Description |
+|---|---|
+| `--node <node>` | **Required.** Node(s) to start. Use `all` to start all nodes in config |
+| `--generateGenesis` | Regenerate genesis files. Implies `--cleanData` |
+| `--cleanData` | Clean data directories before starting |
+| `--stop` | Stop running nodes instead of starting them |
+| `--forceKeyGen` | Force regeneration of hash-sig validator keys |
+| `--validatorConfig <path>` | Custom config path (default: `$NETWORK_DIR/genesis/validator-config.yaml`) |
+| `--dockerWithSudo` | Run docker commands with `sudo` |
+
+## Changing Docker Image Tags
+
+To use a specific tag for certain clients, edit the
+`lean-quickstart/client-cmds/{client}-cmd.sh` files before running.
+
+**Example:** Change gean from `dev` to a branch tag:
+```bash
+# In lean-quickstart/client-cmds/gean-cmd.sh, find:
+node_docker="--security-opt seccomp=unconfined gean:dev node \
+
+# Change to:
+node_docker="--security-opt seccomp=unconfined gean:my-feature-branch node \
+```
+
+See `references/clients.md` for current default images, tags, and known
+compatibility issues.
+
+## gean-Specific Notes
+
+- gean uses `--api-port` (default 5058) and `--metrics-port` (default 8088)
+  per the gean Makefile defaults. 
The standalone `make run` uses different + defaults — check the `lean-quickstart/client-cmds/gean-cmd.sh` for the + devnet defaults. +- gean expects `--custom-network-config-dir`, `--node-key`, `--node-id`, + `--data-dir`, `--gossipsub-port`, `--api-port`, `--metrics-port`, and + optionally `--is-aggregator` and `--checkpoint-sync-url`. +- To configure gean as the aggregator in a devnet, ensure + `--is-aggregator` is set in `gean-cmd.sh` for exactly one gean instance. + +## Validator Configuration + +See `references/validator-config.md` for the full schema, field reference, +adding/removing nodes, port allocation guide, and local vs ansible deployment +differences. + +## Log Collection + +### View Live Logs +```bash +docker logs gean_0 # View current logs +docker logs -f gean_0 # Follow/stream logs +``` + +### Dump Logs to Files + +**Automatic:** When using `run-devnet-with-timeout.sh`, logs are automatically +dumped to `.log` files in the gean repo root before stopping. + +**Single node (manual):** +```bash +docker logs gean_0 &> gean_0.log +``` + +**All running nodes (manual):** +```bash +for node in $(docker ps --format '{{.Names}}' | grep -E '^(gean|zeam|ream|lantern|ethlambda)_'); do + docker logs "$node" &> "${node}.log" +done +``` + +### Data Directory Logs + +Client-specific data and file-based logs are stored at: +``` +lean-quickstart/local-devnet/data// +``` + +## Common Troubleshooting + +### Nodes Won't Start + +1. Check if containers are already running: + ```bash + docker ps | grep -E 'gean|zeam|ream|lantern|ethlambda' + ``` +2. Stop existing nodes first: + ```bash + cd lean-quickstart && NETWORK_DIR=local-devnet ./spin-node.sh --node all --stop + ``` + +### Nodes Not Finding Peers + +1. Verify all nodes are using the same genesis: + ```bash + cd lean-quickstart && NETWORK_DIR=local-devnet ./spin-node.sh --node all --generateGenesis + ``` +2. 
Check `nodes.yaml` was generated with correct ENR records
+
+### Genesis Mismatch Errors
+
+Regenerate genesis for all nodes:
+```bash
+cd lean-quickstart && NETWORK_DIR=local-devnet ./spin-node.sh --node all --generateGenesis --forceKeyGen
+```
+
+### Port Conflicts
+
+Check if ports are in use:
+```bash
+lsof -i :9008  # Check gean QUIC port
+lsof -i :8088  # Check gean metrics port
+lsof -i :5058  # Check gean API port
+```
+
+### Stale Containers Cause Genesis Mismatch
+
+If you see `UnknownSourceBlock` or `OutOfMemory` deserialization errors, a
+container from a previous run may still be running with old genesis.
+
+**Fix:** Always clean up before starting a new devnet:
+```bash
+docker rm -f gean_0 zeam_0 ream_0 lantern_0 ethlambda_0 2>/dev/null
+```
+
+Or use `run-devnet-with-timeout.sh`, which handles cleanup automatically.
+
+### Docker Permission Issues
+
+```bash
+cd lean-quickstart && NETWORK_DIR=local-devnet ./spin-node.sh --node all --dockerWithSudo
+```
+
+## Scripts
+
+| Script | Description |
+|---|---|
+| `scripts/run-devnet-with-timeout.sh <seconds>` | Run devnet for the specified duration, dump logs to the gean repo root, then stop |
+
+## Long-Lived Devnets and Rolling Restarts
+
+For persistent devnets on remote servers (e.g., `ssh admin@gean-1`), use
+detached containers instead of `spin-node.sh`. This allows rolling restarts to
+upgrade images without losing chain state.
+
+**Key points:**
+- Start containers with `docker run -d --restart unless-stopped` (not
+  `spin-node.sh`)
+- Rolling restart: stop one node, **wait 60 seconds** (gossipsub backoff),
+  start with new image + checkpoint sync
+- Restart non-aggregator nodes first, aggregator last
+- Checkpoint sync URL uses gean's API port:
+  `http://127.0.0.1:<api-port>/lean/v0/states/finalized`
+
+See `references/long-lived-devnet.md` for the full procedure.
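The Port Conflicts check above can be looped over all of a node's ports. A sketch using bash's `/dev/tcp` redirection (handy where `lsof` is unavailable; `port_in_use` is an illustrative helper, and the ports are gean_0's defaults):

```shell
# Succeeds (exit 0) if something is listening on 127.0.0.1:$1.
# Requires bash: /dev/tcp is a bash redirection feature, not a real device.
port_in_use() {
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

for port in 9008 8088 5058; do   # gean QUIC, metrics, API ports
  if port_in_use "$port"; then
    echo "port $port: in use"
  else
    echo "port $port: free"
  fi
done
```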
+ +## Reference + +- `references/clients.md`: Client-specific details (images, ports, known issues) +- `references/validator-config.md`: Full config schema, field reference, adding/removing nodes, port allocation +- `references/long-lived-devnet.md`: Persistent devnets with detached containers and rolling restarts diff --git a/.claude/skills/devnet-runner/references/clients.md b/.claude/skills/devnet-runner/references/clients.md new file mode 100644 index 0000000..b08f43c --- /dev/null +++ b/.claude/skills/devnet-runner/references/clients.md @@ -0,0 +1,145 @@ +# Client Reference + +Supported lean consensus clients used in gean's multi-client devnet +configuration. + +## Supported Clients (gean's default 5-client devnet) + +| Client | Language | Description | +|---|---|---| +| gean | Go | This client. Per-validator latest-vote selection, fast aggregator. | +| zeam | Zig | Lean consensus client with the best logging design. **Watch for SSZ panics**. | +| ream | Rust | Active development. **Sync recovery is fragile**. | +| lantern | C | Most reliable client in the network. Smart attestation selection. | +| ethlambda | Rust | Best fork choice tree visualization. **Has known block bloat bug**. | + +Other clients that exist but are NOT in gean's default 5-client setup: + +| Client | Language | Why excluded | +|---|---|---| +| qlean | C++ | Unreliable: `listen_addrs=0` config bug, frequent disconnects, no log shipping | +| lighthouse | Rust | Heavyweight Ethereum client (lean fork) — overkill for our tests | +| grandine | Rust | Not always available; can be added when needed | + +## Docker Images + +Images are defined in `client-cmds/{client}-cmd.sh`. Edit the `node_docker` +variable to change image/tag. 
+ +| Client | Default Image | +|---|---| +| gean | `gean:dev` (built locally via `make docker-build`) | +| zeam | `blockblaz/zeam:devnet1` | +| ream | `ghcr.io/reamlabs/ream:latest` | +| lantern | `piertwo/lantern:v0.0.1` | +| ethlambda | `ghcr.io/lambdaclass/ethlambda:devnet3` | + +## Default Ports + +Ports are configured per-node in `validator-config.yaml`. Typical port +assignments for the 5-client devnet: + +| Node | QUIC Port | Metrics Port | API Port | +|---|---|---|---| +| zeam_0 | 9001 | 8081 | n/a | +| ream_0 | 9002 | 8082 | n/a | +| lantern_0 | 9004 | 8084 | n/a | +| ethlambda_0 | 9007 | 8087 | 5052 | +| gean_0 | 9008 | 8088 | 5058 | + +**Note:** Adjust ports to avoid conflicts when running multiple instances of +the same client. + +**Dual-port clients (gean, ethlambda):** Both run separate API and metrics +HTTP servers. The `metricsPort` from `validator-config.yaml` maps to +`--metrics-port`. The API port must be configured separately in the +client-cmd script. + +## Client Command Files + +Each client's Docker configuration is in `client-cmds/{client}-cmd.sh` (e.g., +`gean-cmd.sh`, `zeam-cmd.sh`, `ethlambda-cmd.sh`). Edit the `node_docker` +variable to change image/tag. + +## Changing Docker Images + +To use a different image or tag: + +1. **Temporary (single run):** Use `--tag` flag: + ```bash + NETWORK_DIR=local-devnet ./spin-node.sh --node gean_0 --tag my-branch + ``` + +2. **Permanent:** Edit `client-cmds/{client}-cmd.sh` and modify `node_docker`: + ```bash + node_docker="your-registry/image:tag" + ``` + +## Known Issues & Compatibility + +### gean + +| Issue | Status | Description | +|---|---|---| +| Block bloat bug | **FIXED in commit 62454aa** | Per-validator latest-vote selection ensures bounded block size. Earlier versions produced ~12 MB blocks during stalls. | +| Slow catch-up | **FIXED in commit e7e752c** | Batched `blocks_by_root` (up to 10 roots per request) speeds up restart catch-up. 
| +| Checkpoint init slot | **FIXED in commit e7e752c** | Anchor convention now matches ethlambda. | + +### zeam + +| Issue | Image Tags Affected | Description | +|---|---|---| +| SSZ stack overflow | All known | Crashes with `thread panic: integer overflow` or stack overflow in `serializedSize__anon_*` / `process_block`. Adversarial input or even valid blocks can trigger it. **Detection: zeam_0.log unusually large (>1M lines).** | +| CLI flag change | devnet2+ | Uses `--api-port` instead of `--metrics_port` for metrics endpoint | +| XMSS prover crash | devnet2 | Missing prover setup files cause panic when producing blocks with signature aggregation | + +### ream + +| Issue | Status | Description | +|---|---|---| +| Sync recovery fragile | Known | After a peer dies, ream cannot reseed missing fork-choice target states. Stuck `Justified` value persists across hundreds of slots. | +| `Attestation too far in future` | Known | ream rejects attestations for slots far ahead of its head, even when those slots are valid. | +| `No common highest checkpoint` | Known | Backfill cannot select a sync target when peers diverge. | + +### ethlambda + +| Issue | Status | Description | +|---|---|---| +| Block bloat regression | **OPEN UPSTREAM** | `crates/blockchain/src/store.rs:1018` greedily accumulates attestations. Same bug gean had before commit 62454aa. Issue filed. | +| Manifest unknown warning | local | Docker shows "manifest unknown" but falls back to local image — can be ignored | +| NoPeersSubscribedToTopic | all | Expected warning when no peers are connected to gossipsub topics | + +### lantern + +No known issues — most reliable client in our network. If lantern reports +errors, take them seriously. 
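The zeam detection hint above (a log file ballooning past ~1M lines) is simple to script. A sketch with the threshold as a parameter (`flag_huge_logs` is an illustrative helper, not an existing skill script):

```shell
# Print any log whose line count exceeds the threshold, e.g. a zeam
# SSZ crash loop spamming panics into zeam_0.log.
flag_huge_logs() {
  local threshold=$1; shift
  local f lines
  for f in "$@"; do
    [ -f "$f" ] || continue
    lines=$(wc -l < "$f")
    if [ "$lines" -gt "$threshold" ]; then
      echo "$f: $lines lines (possible crash loop)"
    fi
  done
}

flag_huge_logs 1000000 ./*_0.log
```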
+ +## Environment Variables Available to Clients + +These are set by `spin-node.sh` and available in client command scripts: + +| Variable | Description | +|---|---| +| `$item` | Node name (e.g., `gean_0`) | +| `$configDir` | Genesis config directory path | +| `$dataDir` | Data directory path | +| `$quicPort` | QUIC port from config | +| `$metricsPort` | Metrics port from config | +| `$privkey` | P2P private key | + +## gean Image Build + +Unlike other clients, gean is typically built locally rather than pulled from a +registry. From the gean repo root: + +```bash +make docker-build +# Produces image: gean:dev +``` + +For testing a feature branch, build with a custom tag: + +```bash +docker build -t gean:my-feature-branch . +# Then update lean-quickstart/client-cmds/gean-cmd.sh to use the tag +``` diff --git a/.claude/skills/devnet-runner/references/long-lived-devnet.md b/.claude/skills/devnet-runner/references/long-lived-devnet.md new file mode 100644 index 0000000..71ca9ae --- /dev/null +++ b/.claude/skills/devnet-runner/references/long-lived-devnet.md @@ -0,0 +1,258 @@ +# Long-Lived Devnets + +Running a persistent devnet with detached containers that survive SSH +disconnects and support rolling restarts to upgrade images without losing +chain state. + +## When to Use + +- Running a devnet on a remote server that should persist across SSH sessions +- Upgrading gean images mid-devnet without resetting genesis +- Testing checkpoint sync and rolling restart procedures + +## Overview + +`spin-node.sh` runs containers with `docker run --rm` (foreground, auto-remove) +and kills all containers on exit. This is fine for short test runs but not for +long-lived devnets. + +The alternative: start containers directly with +`docker run -d --restart unless-stopped`. Containers are decoupled from any +parent process and survive SSH disconnects, script exits, and host reboots. 
+ +## Starting a Long-Lived Devnet + +### Step 1: Generate genesis + +Use `spin-node.sh` to generate genesis config, keys, and ENR records, then +immediately stop it: + +```bash +cd lean-quickstart && NETWORK_DIR=local-devnet ./spin-node.sh --node all --generateGenesis +# Press Ctrl-C after nodes start (genesis is already generated) +``` + +Or update `GENESIS_TIME` in `config.yaml` manually: + +```bash +GENESIS=/path/to/lean-quickstart/local-devnet/genesis +GENESIS_TIME=$(($(date +%s) + 30)) +sed -i "s/^GENESIS_TIME:.*/GENESIS_TIME: $GENESIS_TIME/" $GENESIS/config.yaml +``` + +### Step 2: Start all containers detached + +Start all nodes simultaneously so the gossipsub mesh forms correctly. Example +for a 5-client setup (zeam, ream, lantern, ethlambda, gean): + +```bash +GENESIS=/path/to/lean-quickstart/local-devnet/genesis +DATA=/path/to/lean-quickstart/local-devnet/data +GEAN_IMAGE=gean:dev +ZEAM_IMAGE=blockblaz/zeam:devnet1 +REAM_IMAGE=ghcr.io/reamlabs/ream:latest +LANTERN_IMAGE=piertwo/lantern:v0.0.1 +ETHLAMBDA_IMAGE=ghcr.io/lambdaclass/ethlambda:devnet3 + +# Clean data dirs +for d in zeam_0 ream_0 lantern_0 ethlambda_0 gean_0; do + rm -rf "$DATA/$d/"* +done + +# zeam +docker run -d --restart unless-stopped --name zeam_0 --network host \ + -v $GENESIS:/config -v $DATA/zeam_0:/data \ + $ZEAM_IMAGE node \ + --custom-network-config-dir /config \ + --gossipsub-port 9001 --node-id zeam_0 \ + --node-key /config/zeam_0.key \ + --metrics-port 8081 + +# ream +docker run -d --restart unless-stopped --name ream_0 --network host \ + -v $GENESIS:/config -v $DATA/ream_0:/data \ + $REAM_IMAGE \ + --custom-network-config-dir /config \ + --gossipsub-port 9002 --node-id ream_0 \ + --node-key /config/ream_0.key \ + --metrics-port 8082 + +# lantern +docker run -d --restart unless-stopped --name lantern_0 --network host \ + -v $GENESIS:/config -v $DATA/lantern_0:/data \ + $LANTERN_IMAGE \ + --custom-network-config-dir /config \ + --gossipsub-port 9004 --node-id lantern_0 \ + 
--node-key /config/lantern_0.key \ + --metrics-port 8084 + +# ethlambda +docker run -d --restart unless-stopped --name ethlambda_0 --network host \ + -v $GENESIS:/config -v $DATA/ethlambda_0:/data \ + $ETHLAMBDA_IMAGE \ + --custom-network-config-dir /config \ + --gossipsub-port 9007 --node-id ethlambda_0 \ + --node-key /config/ethlambda_0.key \ + --http-address 0.0.0.0 --api-port 5052 --metrics-port 8087 + +# gean (aggregator) +docker run -d --restart unless-stopped --name gean_0 --network host \ + -v $GENESIS:/config -v $DATA/gean_0:/data \ + $GEAN_IMAGE \ + --custom-network-config-dir /config \ + --gossipsub-port 9008 --node-id gean_0 \ + --node-key /config/gean_0.key \ + --is-aggregator \ + --http-address 0.0.0.0 --api-port 5058 --metrics-port 8088 +``` + +Do NOT include `--checkpoint-sync-url` in the initial start. Nodes start from +genesis. + +### Step 3: Verify + +Wait ~50 seconds (30s genesis offset + 20s for finalization to start), then +check: + +```bash +for n in zeam_0 ream_0 lantern_0 ethlambda_0 gean_0; do + printf "$n: " + docker logs --tail 30 "$n" 2>&1 | grep -i "finalized\|finalized_slot" | tail -1 +done +``` + +All nodes should show the same finalized slot advancing. + +## Rolling Restart Procedure (gean) + +To upgrade gean's image without losing chain state. Restart one node at a time; +the network continues finalizing with the remaining nodes. + +### Critical: 60-Second Wait + +After stopping a node, **wait at least 60 seconds** before starting the +replacement. This allows the gossipsub backoff timer on other nodes to expire. +Without this wait, the restarted node's GRAFT requests are rejected and it +never joins the gossip mesh, meaning it won't receive blocks or attestations +via gossip. + +### Restart Order + +1. Non-aggregator nodes first +2. 
Aggregator (gean_0) last (while it's offline, gean's aggregations stop and + finalization stalls) + +### Per-Node Procedure + +For each gean node: + +```bash +GENESIS=/path/to/lean-quickstart/local-devnet/genesis +DATA=/path/to/lean-quickstart/local-devnet/data +NEW_IMAGE=gean:my-new-tag + +# 1. Pull or build the new image first (minimizes downtime) +docker pull $NEW_IMAGE # if remote +# OR +make docker-build # if building locally + +# 2. Pick a healthy peer's API port as checkpoint source +# (any running gean or ethlambda node that is NOT the one being restarted) +# gean serves /lean/v0/states/finalized on --api-port +CHECKPOINT_SOURCE_PORT=5052 # ethlambda_0's API port (or another gean's port) + +# 3. Stop and remove the container +docker rm -f gean_0 +rm -rf "$DATA/gean_0/"* + +# 4. Wait 60 seconds for gossipsub backoff to expire +sleep 60 + +# 5. Start with new image + checkpoint sync +docker run -d --restart unless-stopped --name gean_0 --network host \ + -v $GENESIS:/config -v $DATA/gean_0:/data \ + $NEW_IMAGE \ + --custom-network-config-dir /config \ + --gossipsub-port 9008 --node-id gean_0 \ + --node-key /config/gean_0.key \ + --is-aggregator \ + --http-address 0.0.0.0 --api-port 5058 --metrics-port 8088 \ + --checkpoint-sync-url http://127.0.0.1:$CHECKPOINT_SOURCE_PORT/lean/v0/states/finalized +``` + +### Verification After Each Node + +Wait ~20 seconds, then verify: + +```bash +# Check the restarted node receives blocks via gossip (not just req-resp) +docker logs --tail 30 gean_0 2>&1 | grep -i "received block\|imported block" + +# Check finalization matches other nodes +for n in zeam_0 ream_0 lantern_0 ethlambda_0 gean_0; do + printf "$n: " + docker logs --tail 30 "$n" 2>&1 | grep -i "finalized" | tail -1 +done +``` + +**Only proceed to the next node after confirming:** +- The restarted node shows incoming gossip blocks +- No "NoPeersSubscribedToTopic" warnings in recent logs +- Finalized slot matches other nodes + +## Monitoring Stack + +If Prometheus 
and Grafana were previously started via `spin-node.sh --metrics`,
+restart them separately since they're managed by docker-compose:
+
+```bash
+cd lean-quickstart/metrics && docker compose -f docker-compose-metrics.yaml up -d
+```
+
+## Troubleshooting
+
+### Restarted node shows "NoPeersSubscribedToTopic" persistently
+
+The 60-second wait was not long enough, or was skipped. Stop the node, wait
+60s, and start again.
+
+### Finalization stalls after restarting the aggregator
+
+Expected behavior. Finalization resumes once the aggregator catches up to head
+and starts aggregating attestations again. This typically takes 10-20 seconds
+after the node starts.
+
+### Chain doesn't progress after restarting all nodes
+
+If all nodes were restarted from genesis (no checkpoint sync) with a stale
+`GENESIS_TIME`, the slot gap from genesis to current time may not satisfy
+3SF-mini justifiability rules. Regenerate genesis with a fresh timestamp.
+
+### "genesis time mismatch" or "validator count mismatch"
+
+The checkpoint source is running a different genesis than the restarting node.
+Ensure both use the same genesis config directory (`-v $GENESIS:/config`).
+
+### "HTTP request failed" or connection refused
+
+The checkpoint source node is down or unreachable. Verify with `curl`:
+```bash
+curl -s http://127.0.0.1:<api-port>/lean/v0/health
+# Should return: {"status":"healthy",...}
+```
+
+### Container name conflict on start
+
+The old container wasn't fully removed. Use `docker rm -f <name>` before
+`docker run`.
+
+### "Fallback pruning (finalization stalled)" after catch-up
+
+Normal during catch-up. The node accumulated blocks faster than finalization
+can advance. Resolves once fully caught up.
+
+### gean checkpoint sync gives wrong finalized slot
+
+This was a bug in `cmd/gean/main.go` that mixed the served block's root with
+the served state's internal `LatestFinalized.Slot`. **Fixed in commit
+e7e752c.** If you see this on an older gean image, upgrade to >= e7e752c.
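The connectivity checks above can be combined into a single preflight probe to run before each rolling restart. A sketch (assumes `curl` is installed; `check_checkpoint_source` is an illustrative helper):

```shell
# Confirm a candidate checkpoint source actually serves the finalized state
# before pointing --checkpoint-sync-url at it.
check_checkpoint_source() {
  local port=$1
  if curl -sf -o /dev/null "http://127.0.0.1:${port}/lean/v0/states/finalized"; then
    echo "port ${port}: ok"
  else
    echo "port ${port}: unreachable, pick another source"
  fi
}

check_checkpoint_source 5052   # e.g. ethlambda_0's API port
```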
diff --git a/.claude/skills/devnet-runner/references/validator-config.md b/.claude/skills/devnet-runner/references/validator-config.md
new file mode 100644
index 0000000..5d8274d
--- /dev/null
+++ b/.claude/skills/devnet-runner/references/validator-config.md
@@ -0,0 +1,174 @@
+# Validator Config Reference
+
+Full schema and configuration guide for
+`lean-quickstart/local-devnet/genesis/validator-config.yaml`.
+
+## Full Schema
+
+```yaml
+shuffle: roundrobin        # Proposer selection algorithm (roundrobin = deterministic turns)
+deployment_mode: local     # 'local' (localhost) or 'ansible' (remote servers)
+
+config:
+  activeEpoch: 18          # Log2 of active signing epochs for hash-sig keys (2^18)
+  keyType: "hash-sig"      # Post-quantum signature scheme
+
+validators:
+  - name: "gean_0"           # Node identifier: {client}_{n}
+    privkey: "bdf953adc..."  # 64-char hex P2P private key (libp2p identity)
+    enrFields:
+      ip: "127.0.0.1"        # Node IP (127.0.0.1 for local, real IP for ansible)
+      quic: 9008             # QUIC/UDP port for P2P communication
+    metricsPort: 8088        # HTTP port exposed by the node (see note below)
+    count: 1                 # Number of validator indices assigned to this node
+```
+
+## Field Reference
+
+| Field | Required | Description |
+|---|---|---|
+| `shuffle` | Yes | Proposer selection algorithm. Use `roundrobin` for deterministic turn-based proposing |
+| `deployment_mode` | Yes | `local` or `ansible` — determines genesis time offset and config directory |
+| `config.activeEpoch` | Yes | Exponent for hash-sig active epochs (e.g., 18 means 2^18 signatures per period) |
+| `config.keyType` | Yes | Always `hash-sig` for post-quantum support |
+| `name` | Yes | Format: `{client}_{n}`. Client name determines which `client-cmds/*.sh` script runs |
+| `privkey` | Yes | 32-byte hex string (64 chars). Used for P2P identity and ENR generation |
+| `enrFields.ip` | Yes | IP address. Use `127.0.0.1` for local, real IPs for ansible |
+| `enrFields.quic` | Yes | QUIC port.
Must be unique per node in local mode |
+| `metricsPort` | Yes | HTTP port exposed by the node. Must be unique per node in local mode. For gean and ethlambda, this maps to `--metrics-port`; the API server uses a separate `--api-port` |
+| `count` | Yes | Number of validator indices. Sum of all counts = total validators |
+
+## Default 5-Client Setup for gean Testing
+
+```yaml
+shuffle: roundrobin
+deployment_mode: local
+
+config:
+  activeEpoch: 18
+  keyType: "hash-sig"
+
+validators:
+  - name: "zeam_0"
+    privkey: "<64-char-hex>"
+    enrFields:
+      ip: "127.0.0.1"
+      quic: 9001
+    metricsPort: 8081
+    count: 1
+
+  - name: "ream_0"
+    privkey: "<64-char-hex>"
+    enrFields:
+      ip: "127.0.0.1"
+      quic: 9002
+    metricsPort: 8082
+    count: 1
+
+  - name: "lantern_0"
+    privkey: "<64-char-hex>"
+    enrFields:
+      ip: "127.0.0.1"
+      quic: 9004
+    metricsPort: 8084
+    count: 1
+
+  - name: "ethlambda_0"
+    privkey: "<64-char-hex>"
+    enrFields:
+      ip: "127.0.0.1"
+      quic: 9007
+    metricsPort: 8087
+    count: 1
+
+  - name: "gean_0"
+    privkey: "<64-char-hex>"
+    enrFields:
+      ip: "127.0.0.1"
+      quic: 9008
+    metricsPort: 8088
+    count: 1
+```
+
+This gives 5 validators across 5 nodes, one per client implementation
+(Zig, Rust, C, Rust, and Go respectively).
+
+## Adding a New Validator Node
+
+1. **Choose a unique node name** following the `{client}_{n}` convention:
+   ```
+   gean_0, gean_1, zeam_0, ream_0, lantern_0, ethlambda_0
+   ```
+
+2. **Generate a P2P private key** (64-char hex):
+   ```bash
+   openssl rand -hex 32
+   ```
+
+3. **Assign unique ports** (for local mode):
+   - QUIC: 9001, 9002, 9003... (increment for each node)
+   - Metrics/API: 8081, 8082, 8083... (increment for each node)
+   - For gean and ethlambda, also assign a unique API port (5052, 5053, ...).
+
+4. **Add the entry to `validator-config.yaml`:**
+   ```yaml
+   validators:
+     # ... existing nodes ...
+
+     - name: "gean_1"
+       privkey: "<64-char-hex>"
+       enrFields:
+         ip: "127.0.0.1"      # Use real IP for ansible
+         quic: 9009           # Next available port
+       metricsPort: 8089      # Next available port
+       count: 1
+   ```
+
+5. **Regenerate genesis with new keys:**
+   ```bash
+   cd lean-quickstart && NETWORK_DIR=local-devnet ./spin-node.sh --node all --generateGenesis --forceKeyGen
+   ```
+
+## Removing a Validator Node
+
+1. **Delete the node entry** from `validator-config.yaml`
+
+2. **Regenerate genesis** (required because the genesis state must reflect the
+   new validator set):
+   ```bash
+   cd lean-quickstart && NETWORK_DIR=local-devnet ./spin-node.sh --node all --generateGenesis
+   ```
+   Note: `--forceKeyGen` is NOT needed when removing. Existing keys for
+   remaining indices are reused.
+
+## Port Allocation Guide (Local Mode)
+
+When running multiple nodes locally, each needs unique ports:
+
+| Node | QUIC Port | Metrics Port | API Port (if applicable) |
+|---|---|---|---|
+| zeam_0 | 9001 | 8081 | n/a |
+| ream_0 | 9002 | 8082 | n/a |
+| qlean_0 | 9003 | 8083 | n/a |
+| lantern_0 | 9004 | 8084 | n/a |
+| lighthouse_0 | 9005 | 8085 | n/a |
+| grandine_0 | 9006 | 8086 | n/a |
+| ethlambda_0 | 9007 | 8087 | 5052 |
+| gean_0 | 9008 | 8088 | 5058 |
+
+When running **multiple gean or ethlambda nodes** locally, each needs a unique
+`--api-port` since `validator-config.yaml` has no `apiPort` field. Pass
+`--api-port` directly in `gean-cmd.sh` / `ethlambda-cmd.sh`.
+
+For **ansible mode**, all nodes can use the same ports (9001, 8081, 5052)
+since they run on different machines.
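The incrementing scheme above can be captured in a tiny helper when adding nodes (`next_ports` is illustrative; the offsets come from the local-mode table):

```shell
# Port pair for the N-th node slot, following the local-mode allocation
# table above (QUIC starts at 9001, metrics at 8081).
next_ports() {
  local index=$1   # zero-based slot: 0 is zeam_0's pair, 8 is gean_1's
  echo "quic=$((9001 + index)) metrics=$((8081 + index))"
}

next_ports 8   # quic=9009 metrics=8089
```

Remember that gean and ethlambda instances still need a manually chosen `--api-port` on top of this pair.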
+
+## Local vs Ansible Deployment
+
+| Aspect | Local | Ansible |
+|---|---|---|
+| Config file | `lean-quickstart/local-devnet/genesis/validator-config.yaml` | `lean-quickstart/ansible-devnet/genesis/validator-config.yaml` |
+| `deployment_mode` | `local` | `ansible` |
+| IP addresses | `127.0.0.1` for all | Real server IPs |
+| Ports | Must be unique per node | Same port, different machines |
+| Genesis offset | +30 seconds | +360 seconds |
diff --git a/.claude/skills/devnet-runner/scripts/run-devnet-with-timeout.sh b/.claude/skills/devnet-runner/scripts/run-devnet-with-timeout.sh
new file mode 100755
index 0000000..de6b848
--- /dev/null
+++ b/.claude/skills/devnet-runner/scripts/run-devnet-with-timeout.sh
@@ -0,0 +1,42 @@
+#!/bin/bash
+# Run devnet for a specified number of seconds, dump logs before stopping
+#
+# Usage: ./run-devnet-with-timeout.sh <seconds>
+# Can be run from anywhere; the script resolves the gean repo root automatically
+
+if [ -z "$1" ]; then
+  echo "Usage: $0 <seconds>"
+  exit 1
+fi
+
+REPO_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/../../../.." && pwd)"
+QUICKSTART_DIR="$REPO_ROOT/lean-quickstart"
+
+if [ ! -d "$QUICKSTART_DIR" ]; then
+  echo "Error: lean-quickstart not found at $QUICKSTART_DIR"
+  echo "Run 'make lean-quickstart' first to clone it."
+  exit 1
+fi
+
+cd "$QUICKSTART_DIR"
+# --cleanData wipes per-node data dirs so each run starts from a fresh genesis;
+# without it, clients (notably lantern) boot from stale on-disk state.
+NETWORK_DIR=local-devnet ./spin-node.sh --node all --cleanData --generateGenesis 2>&1 | tee "$REPO_ROOT/devnet.log" &
+PID=$!
+sleep "$1"
+
+# Dump logs from all running node containers before stopping
+echo "Dumping node logs..."
+for node in $(docker ps --format '{{.Names}}' | grep -E '^(gean|zeam|ream|lantern|ethlambda)_'); do + docker logs "$node" > "$REPO_ROOT/${node}.log" 2>&1 + echo " Dumped ${node}.log" +done + +kill $PID 2>/dev/null +wait $PID 2>/dev/null + +# Explicitly stop and remove containers (spin-node.sh may not clean up on kill) +echo "Stopping and removing containers..." +for node in $(docker ps --format '{{.Names}}' | grep -E '^(gean|zeam|ream|lantern|ethlambda)_'); do + docker rm -f "$node" 2>/dev/null +done diff --git a/.claude/skills/test-pr-devnet/SKILL.md b/.claude/skills/test-pr-devnet/SKILL.md new file mode 100644 index 0000000..1f9fd3a --- /dev/null +++ b/.claude/skills/test-pr-devnet/SKILL.md @@ -0,0 +1,282 @@ +--- +name: test-pr-devnet +description: Test gean PR changes in multi-client devnet. Use when users want to (1) Test a branch/PR with other Lean clients, (2) Validate BlocksByRoot or P2P protocol changes, (3) Test sync recovery with pause/unpause, (4) Verify cross-client interoperability, (5) Run integration tests before merging. +disable-model-invocation: true +--- + +# Test PR in Devnet + +Test gean branch changes in a multi-client local devnet with zeam (Zig), +ream (Rust), lantern (C), ethlambda (Rust), and gean itself. + +## Quick Start + +```bash +# Test current branch (basic interoperability, ~60-90s) +.claude/skills/test-pr-devnet/scripts/test-branch.sh + +# Test with sync recovery (BlocksByRoot validation, ~90-120s) +.claude/skills/test-pr-devnet/scripts/test-branch.sh --with-sync-test + +# Test specific branch +.claude/skills/test-pr-devnet/scripts/test-branch.sh my-feature-branch + +# Check status while running +.claude/skills/test-pr-devnet/scripts/check-status.sh + +# Cleanup when done +.claude/skills/test-pr-devnet/scripts/cleanup.sh +``` + +## What It Does + +1. **Builds branch-specific Docker image** tagged as `gean:` +2. **Updates lean-quickstart config** to use the new image (backs up original) +3. 
**Starts 5-node devnet** with fresh genesis (zeam, ream, lantern, ethlambda, gean) +4. **Optionally tests sync recovery** by pausing/unpausing nodes +5. **Analyzes results** and provides summary +6. **Leaves devnet running** for manual inspection + +## Why 5 Clients + +The 5-client setup is gean's standard test surface: + +| Client | Language | Why included | +|---|---|---| +| zeam | Zig | Different SSZ implementation, catches encoding bugs | +| ream | Rust | Different libp2p stack | +| lantern | C | Most reliable client (gold standard) | +| ethlambda | Rust | Best fork choice viz, broadest interop | +| gean | Go | The system under test | + +qlean is excluded because it's historically unreliable +(`listen_addrs=0` config bug, frequent disconnects, no log shipping). + +## Prerequisites + +| Requirement | Location | Check | +|---|---|---| +| lean-quickstart | `/lean-quickstart` | `ls $LEAN_QUICKSTART` | +| Docker running | — | `docker ps` | +| Git repository | gean repo root | `git branch` | + +## Test Scenarios + +### Basic Interoperability (~60-90s) + +**Goal:** Verify gean produces blocks and reaches consensus with other clients + +**Success criteria:** +- ✅ No errors in gean logs +- ✅ All 5 nodes at same head slot +- ✅ Finalization advancing (every 6-12 slots) +- ✅ Each validator produces blocks for their slots +- ✅ gean blocks have `attestations=` count ≤ ~6 (per-validator selection) + +### Sync Recovery (~90-120s) + +**Goal:** Test BlocksByRoot batched fetch when nodes fall behind + +**Usage:** Add `--with-sync-test` flag + +**What happens:** +1. Devnet runs for 10s (~2-3 slots) +2. Pauses `zeam_0` and `ream_0` +3. Network progresses 20s (~5 slots) +4. 
Unpauses nodes → nodes sync via batched `blocks_by_root`
+
+**Success criteria:**
+- ✅ gean's `[sync] batched fetch starting count=N` lines appear
+- ✅ Paused nodes sync to current head
+- ✅ No `MessageTooLarge` or oversized block errors
+
+## Configuration Changes
+
+The skill modifies `lean-quickstart/client-cmds/gean-cmd.sh` to use your
+branch's Docker image.
+
+**Automatic backup:** Creates `gean-cmd.sh.backup`
+
+**Restore methods:**
+```bash
+# 1. Cleanup script (recommended)
+.claude/skills/test-pr-devnet/scripts/cleanup.sh
+
+# 2. Manual restore
+mv $LEAN_QUICKSTART/client-cmds/gean-cmd.sh.backup \
+   $LEAN_QUICKSTART/client-cmds/gean-cmd.sh
+
+# 3. Git restore (if no uncommitted changes)
+cd $LEAN_QUICKSTART && git checkout client-cmds/gean-cmd.sh
+```
+
+## Manual Workflow (Alternative to Script)
+
+If you need fine-grained control:
+
+### 1. Build Image
+
+```bash
+cd <gean-repo-root>
+BRANCH=$(git rev-parse --abbrev-ref HEAD)
+docker build \
+  --build-arg GIT_COMMIT=$(git rev-parse HEAD) \
+  --build-arg GIT_BRANCH=$BRANCH \
+  -t gean:$BRANCH .
+```
+
+### 2. Update Configuration
+
+Edit `$LEAN_QUICKSTART/client-cmds/gean-cmd.sh`:
+```bash
+node_docker="gean:<branch> \
+```
+
+### 3. Start Devnet
+
+```bash
+cd $LEAN_QUICKSTART
+NETWORK_DIR=local-devnet ./spin-node.sh --node all --generateGenesis --metrics
+```
+
+### 4. Test Sync (Optional)
+
+```bash
+# Create sync gap
+docker pause zeam_0 ream_0
+sleep 20  # Network progresses
+
+# Test recovery
+docker unpause zeam_0 ream_0
+sleep 10  # Wait for sync
+```
+
+### 5. 
Check Results + +```bash +# Quick status +.claude/skills/test-pr-devnet/scripts/check-status.sh + +# Detailed analysis (use devnet-log-review skill) +.claude/skills/devnet-log-review/scripts/analyze-logs.sh +``` + +## Protocol Compatibility + +| Client | Status | Gossipsub | BlocksByRoot | +|---|---|---|---| +| zeam | ✅ Full | ✅ Full | ⚠️ SSZ bugs known | +| ream | ✅ Full | ✅ Full | ⚠️ Sync recovery fragile | +| lantern | ✅ Full | ✅ Full | ✅ Full | +| ethlambda | ✅ Full | ✅ Full | ✅ Full (but block bloat bug) | +| gean | ✅ Full | ✅ Full | ✅ Full (batched, up to 10 roots/req) | + +**Notes:** +- zeam may crash with SSZ panics — not gean's fault +- ethlambda may emit MessageTooLarge during stalls — known upstream bug +- lantern is the most reliable peer for `blocks_by_root` responses + +## Verification Checklist + +| Check | Command | Expected | +|---|---|---| +| All nodes running | `docker ps --filter "name=_0"` | 5 containers | +| gean peers connected | `docker logs gean_0 \| grep "Connected Peers:" \| tail -1` | > 0 | +| Blocks produced | `docker logs gean_0 \| grep "\\[validator\\] proposed block" \| wc -l` | > 0 | +| No errors | `docker logs gean_0 \| grep -i ERROR \| wc -l` | 0 | +| Bounded blocks | `docker logs gean_0 \| grep -oE 'attestations=[0-9]+' \| sort -u` | values ≤ ~10 | +| No oversized cascades | `docker logs gean_0 \| grep "MessageTooLarge\|exceeds max"` | empty | + +## Troubleshooting + +### Build Fails +```bash +docker ps # Check Docker running +docker system prune -a # Clean cache if needed +``` + +### Nodes Won't Start +```bash +# Clean and retry +docker stop zeam_0 ream_0 lantern_0 ethlambda_0 gean_0 2>/dev/null +docker rm zeam_0 ream_0 lantern_0 ethlambda_0 gean_0 2>/dev/null +cd $LEAN_QUICKSTART +NETWORK_DIR=local-devnet ./spin-node.sh --node all --generateGenesis +``` + +### Genesis Mismatch +```bash +cd $LEAN_QUICKSTART +NETWORK_DIR=local-devnet ./spin-node.sh --node all --cleanData --generateGenesis +``` + +### Image Tag Not Updated 
+```bash +# Verify the change +grep "node_docker=" $LEAN_QUICKSTART/client-cmds/gean-cmd.sh +# Should show your branch name, not :dev +``` + +### Port Already in Use +```bash +docker stop $(docker ps -q --filter "name=_0") 2>/dev/null || true +``` + +## Debugging + +### gean per-block analysis + +```bash +# Check gean's processing time per block +docker logs gean_0 2>&1 | grep -oE "proc_time=[0-9]+ms" | sort -u + +# Check attestation count distribution +docker logs gean_0 2>&1 | grep -oE "attestations=[0-9]+" | sort | uniq -c + +# Check for has_parent=false (orphan blocks) +docker logs gean_0 2>&1 | grep "has_parent=false" | wc -l + +# Check sync events +docker logs gean_0 2>&1 | grep "\\[sync\\]" +``` + +### Cross-client finalization comparison + +```bash +for node in zeam_0 ream_0 lantern_0 ethlambda_0 gean_0; do + echo "$node:" + docker logs "$node" 2>&1 | grep -i "finalized" | tail -1 +done +``` + +### Devnet Status Checks + +```bash +# Check all nodes are running +docker ps --format "{{.Names}}: {{.Status}}" --filter "name=_0" + +# Get gean chain status +docker logs gean_0 2>&1 | tail -200 | grep "CHAIN STATUS" | tail -1 + +# Get gean peer count +docker logs gean_0 2>&1 | grep "Connected Peers:" | tail -1 +``` + +### Common Investigation Patterns + +```bash +# Verify gean is proposing blocks +docker logs gean_0 2>&1 | grep "\\[validator\\] proposing block\|\\[validator\\] proposed block" + +# Verify gean is aggregating signatures +docker logs gean_0 2>&1 | grep "\\[signature\\] aggregate:" | head + +# Check peer discovery +docker logs gean_0 2>&1 | grep -i "peer\|connection" | head -20 +``` + +## References + +- **[gean Makefile](../../../Makefile)** — Build and run targets +- **[devnet-log-review skill](../../devnet-log-review/SKILL.md)** — Comprehensive log analysis +- **[devnet-runner skill](../../devnet-runner/SKILL.md)** — Devnet management diff --git a/.claude/skills/test-pr-devnet/scripts/check-status.sh 
b/.claude/skills/test-pr-devnet/scripts/check-status.sh new file mode 100755 index 0000000..fdf4786 --- /dev/null +++ b/.claude/skills/test-pr-devnet/scripts/check-status.sh @@ -0,0 +1,83 @@ +#!/bin/bash + +# Quick devnet status check for the gean 5-client test + +# Colors +GREEN='\033[0;32m' +RED='\033[0;31m' +BLUE='\033[0;34m' +NC='\033[0m' + +echo -e "${BLUE}=== Devnet Status ===${NC}" +echo "" + +# Check running nodes +echo "Running nodes:" +docker ps --format " {{.Names}}: {{.Status}}" --filter "name=_0" +echo "" + +# Check each node's latest status +for node in gean_0 zeam_0 ream_0 lantern_0 ethlambda_0; do + if docker ps --format "{{.Names}}" | grep -q "^$node$"; then + echo -e "${GREEN}$node${NC}:" + + case $node in + gean_0) + docker logs gean_0 2>&1 | tail -300 | grep "CHAIN STATUS" | tail -1 | sed 's/^/ /' + docker logs gean_0 2>&1 | tail -300 | grep "Latest Finalized:" | tail -1 | sed 's/^/ /' + ;; + zeam_0) + docker logs zeam_0 2>&1 | tail -300 | grep -i "finalized\|head_slot" | tail -1 | sed 's/^/ /' + ;; + ream_0) + docker logs ream_0 2>&1 | tail -300 | grep -i "finalized\|Processing block" | tail -1 | sed 's/^/ /' + ;; + lantern_0) + docker logs lantern_0 2>&1 | tail -300 | grep -i "imported\|finalized" | tail -1 | sed 's/^/ /' + ;; + ethlambda_0) + docker logs ethlambda_0 2>&1 | grep "Fork choice head updated" | tail -1 | sed 's/^/ /' + ;; + esac + echo "" + fi +done + +# gean specifics +if docker ps --format "{{.Names}}" | grep -q "^gean_0$"; then + echo "gean key metrics:" + + # Max attestations per block (block bloat regression check) + MAX_ATTS=$(docker logs gean_0 2>&1 | grep -oE "attestations=[0-9]+" | grep -oE "[0-9]+" | sort -n | tail -1) + MAX_ATTS=${MAX_ATTS:-0} + if [[ "$MAX_ATTS" -gt 30 ]]; then + echo -e " Max attestations/block: ${RED}$MAX_ATTS${NC} ⚠ regression risk" + else + echo -e " Max attestations/block: ${GREEN}$MAX_ATTS${NC}" + fi + + # Block bloat error count + SIZE_ERRORS=$(docker logs gean_0 2>&1 | grep -cE 
"MessageTooLarge|exceeds max" 2>/dev/null | head -1 | tr -d ' \n') + SIZE_ERRORS=${SIZE_ERRORS:-0} + if [[ "$SIZE_ERRORS" -eq 0 ]]; then + echo -e " Oversized block errors: ${GREEN}0${NC}" + else + echo -e " Oversized block errors: ${RED}$SIZE_ERRORS${NC} ⚠ regression" + fi + + echo "" +fi + +# Quick error check +echo "Error counts:" +for node in gean_0 zeam_0 ream_0 lantern_0 ethlambda_0; do + if docker ps --format "{{.Names}}" | grep -q "^$node$"; then + COUNT=$(docker logs "$node" 2>&1 | grep -c "ERROR" 2>/dev/null | head -1 | tr -d ' \n') + COUNT=${COUNT:-0} + if [[ "$COUNT" -eq 0 ]]; then + echo -e " $node: ${GREEN}$COUNT${NC}" + else + echo -e " $node: ${RED}$COUNT${NC}" + fi + fi +done diff --git a/.claude/skills/test-pr-devnet/scripts/cleanup.sh b/.claude/skills/test-pr-devnet/scripts/cleanup.sh new file mode 100755 index 0000000..766fb64 --- /dev/null +++ b/.claude/skills/test-pr-devnet/scripts/cleanup.sh @@ -0,0 +1,42 @@ +#!/bin/bash + +# Cleanup devnet and restore configurations + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +GEAN_ROOT="$(cd "$SCRIPT_DIR/../../../.." && pwd)" +LEAN_QUICKSTART="${LEAN_QUICKSTART:-$GEAN_ROOT/lean-quickstart}" +GEAN_CMD="$LEAN_QUICKSTART/client-cmds/gean-cmd.sh" + +# Colors +GREEN='\033[0;32m' +BLUE='\033[0;34m' +NC='\033[0m' + +echo -e "${BLUE}=== Devnet Cleanup ===${NC}" +echo "" + +# Stop devnet +if [[ -d "$LEAN_QUICKSTART" ]]; then + echo "Stopping devnet..." + cd "$LEAN_QUICKSTART" + NETWORK_DIR=local-devnet ./spin-node.sh --node all --stop 2>/dev/null || true +fi + +# Force remove containers +echo "Removing containers..." +docker rm -f zeam_0 ream_0 lantern_0 ethlambda_0 gean_0 2>/dev/null || true + +echo -e "${GREEN}✓ Devnet stopped${NC}" +echo "" + +# Restore config if backup exists +if [[ -f "$GEAN_CMD.backup" ]]; then + echo "Restoring gean-cmd.sh..." 
+ mv "$GEAN_CMD.backup" "$GEAN_CMD" + echo -e "${GREEN}✓ Config restored${NC}" +else + echo "No backup found, skipping config restore" +fi + +echo "" +echo "Cleanup complete!" diff --git a/.claude/skills/test-pr-devnet/scripts/test-branch.sh b/.claude/skills/test-pr-devnet/scripts/test-branch.sh new file mode 100755 index 0000000..8e9e387 --- /dev/null +++ b/.claude/skills/test-pr-devnet/scripts/test-branch.sh @@ -0,0 +1,318 @@ +#!/bin/bash +set -euo pipefail + +# Test gean branch in multi-client devnet +# Usage: ./test-branch.sh [branch-name] [--with-sync-test] + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +GEAN_ROOT="$(cd "$SCRIPT_DIR/../../../.." && pwd)" +LEAN_QUICKSTART="${LEAN_QUICKSTART:-$GEAN_ROOT/lean-quickstart}" + +# Colors +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +BLUE='\033[0;34m' +NC='\033[0m' # No Color + +# Count occurrences of a pattern in docker logs, always returns a single integer. +# Avoids the `grep -c ... || echo 0` pitfall that produces "0\n0" on no match. +count_in_logs() { + local container="$1" + local pattern="$2" + local result + set +e + result=$(docker logs "$container" 2>&1 | grep -cE "$pattern" 2>/dev/null) + set -e + echo "${result:-0}" +} + +# Parse arguments +BRANCH_NAME="" +WITH_SYNC_TEST=false + +# First positional arg is branch name (if not a flag) +for arg in "$@"; do + if [[ "$arg" == "--with-sync-test" ]]; then + WITH_SYNC_TEST=true + elif [[ -z "$BRANCH_NAME" ]]; then + BRANCH_NAME="$arg" + fi +done + +# Default to current branch if not specified +if [[ -z "$BRANCH_NAME" ]]; then + BRANCH_NAME=$(git -C "$GEAN_ROOT" rev-parse --abbrev-ref HEAD) +fi + +echo -e "${BLUE}=== gean Devnet Testing ===${NC}" +echo "" +echo "Branch: $BRANCH_NAME" +echo "Sync test: $WITH_SYNC_TEST" +echo "gean root: $GEAN_ROOT" +echo "lean-quickstart: $LEAN_QUICKSTART" +echo "" + +# Validate prerequisites +echo "Validating prerequisites..." + +if [[ ! 
-d "$LEAN_QUICKSTART" ]]; then + echo -e "${RED}✗ Error: lean-quickstart not found at $LEAN_QUICKSTART${NC}" + echo " Set LEAN_QUICKSTART environment variable or run:" + echo " cd $GEAN_ROOT && make lean-quickstart" + exit 1 +fi + +if [[ ! -f "$LEAN_QUICKSTART/spin-node.sh" ]]; then + echo -e "${RED}✗ Error: spin-node.sh not found in lean-quickstart${NC}" + exit 1 +fi + +if ! docker info &>/dev/null; then + echo -e "${RED}✗ Error: Docker is not running${NC}" + echo " Start Docker Desktop or docker daemon" + exit 1 +fi + +# Use `git rev-parse` instead of `-d .git` to support git worktrees, +# where .git is a file (not a directory) pointing to the main repo. +if ! git -C "$GEAN_ROOT" rev-parse --git-dir &>/dev/null; then + echo -e "${RED}✗ Error: Not in a git repository${NC}" + echo " Run this script from gean repository root" + exit 1 +fi + +echo -e "${GREEN}✓ Prerequisites validated${NC}" +echo "" + +# Step 1: Build Docker image +echo -e "${BLUE}[1/6] Building Docker image...${NC}" +cd "$GEAN_ROOT" +GIT_COMMIT=$(git rev-parse HEAD) + +docker build \ + --build-arg GIT_COMMIT="$GIT_COMMIT" \ + --build-arg GIT_BRANCH="$BRANCH_NAME" \ + -t "gean:$BRANCH_NAME" \ + . + +echo -e "${GREEN}✓ Image built: gean:$BRANCH_NAME${NC}" +echo "" + +# Step 2: Update gean-cmd.sh +echo -e "${BLUE}[2/6] Updating lean-quickstart config...${NC}" +GEAN_CMD="$LEAN_QUICKSTART/client-cmds/gean-cmd.sh" + +if [[ ! -f "$GEAN_CMD" ]]; then + echo -e "${RED}✗ Error: $GEAN_CMD not found${NC}" + echo " lean-quickstart may not have a gean entry yet." 
+ exit 1 +fi + +# Backup original +cp "$GEAN_CMD" "$GEAN_CMD.backup" + +# Update docker tag +sed -i.tmp "s|gean:[^ ]*|gean:$BRANCH_NAME|" "$GEAN_CMD" +rm "$GEAN_CMD.tmp" + +echo -e "${GREEN}✓ Updated $GEAN_CMD${NC}" +echo " (Backup saved as $GEAN_CMD.backup)" +echo "" + +# Step 3: Stop any existing devnet +echo -e "${BLUE}[3/6] Cleaning up existing devnet...${NC}" +cd "$LEAN_QUICKSTART" +NETWORK_DIR=local-devnet ./spin-node.sh --node all --stop 2>/dev/null || true +docker rm -f zeam_0 ream_0 lantern_0 ethlambda_0 gean_0 2>/dev/null || true + +echo -e "${GREEN}✓ Cleanup complete${NC}" +echo "" + +# Step 4: Start devnet +echo -e "${BLUE}[4/6] Starting devnet...${NC}" +echo "This will take ~40 seconds (genesis generation + startup)" +echo "" + +# Run devnet in background. +# --cleanData wipes per-node data dirs so each run starts from a fresh genesis; +# without it, clients (notably lantern) boot from stale on-disk state. +NETWORK_DIR=local-devnet ./spin-node.sh --node all --cleanData --generateGenesis --metrics > "/tmp/devnet-$BRANCH_NAME.log" 2>&1 & +DEVNET_PID=$! + +# Wait for nodes to start (check docker ps) +# Disable pipefail temporarily — grep returns 1 when no matches, which is normal here. +set +e +echo -n "Waiting for nodes to start" +for i in {1..40}; do + sleep 1 + echo -n "." + running=$(docker ps --filter "name=_0" --format "{{.Names}}" 2>/dev/null | grep -cE '^(gean|zeam|ream|lantern|ethlambda)_0$' 2>/dev/null) + running=${running:-0} + if [[ "$running" -ge 5 ]]; then + echo "" + echo -e "${GREEN}✓ All 5 nodes running${NC}" + break + fi +done +set -e +echo "" + +# Show node status +docker ps --format " {{.Names}}: {{.Status}}" --filter "name=_0" +echo "" + +# Step 5: Sync recovery test (optional) +if [[ "$WITH_SYNC_TEST" == "true" ]]; then + echo -e "${BLUE}[5/6] Testing sync recovery...${NC}" + + # Let devnet run for a bit + echo "Letting devnet run for 10 seconds..." + sleep 10 + + # Pause nodes + echo "Pausing zeam_0 and ream_0..." 
+ docker pause zeam_0 ream_0 + echo -e "${YELLOW}⏸ Nodes paused${NC}" + + # Wait for network to progress + echo "Network progressing for 20 seconds (~5 slots)..." + sleep 20 + + # Unpause + echo "Unpausing nodes..." + docker unpause zeam_0 ream_0 + echo -e "${GREEN}▶ Nodes resumed${NC}" + + # Wait for sync + echo "Waiting 10 seconds for sync recovery..." + sleep 10 + + echo -e "${GREEN}✓ Sync recovery test complete${NC}" + echo "" +else + echo -e "${BLUE}[5/6] Skipping sync recovery test${NC}" + echo "Use --with-sync-test to enable" + echo "" + + # Let it run long enough for the round-robin proposer cycle to reach gean. + # With 5 validators and 4s slots, 60s gives ~15 slots — every validator + # gets at least 2-3 turns. Override with RUN_DURATION=N for longer runs. + RUN_DURATION="${RUN_DURATION:-60}" + echo "Letting devnet run for ${RUN_DURATION} seconds..." + sleep "$RUN_DURATION" +fi + +# Step 6: Analyze results +echo -e "${BLUE}[6/6] Analyzing results...${NC}" +echo "" + +# Quick status check +echo "=== Quick Status ===" +echo "" + +# Check each node +for node in zeam_0 ream_0 lantern_0 ethlambda_0 gean_0; do + if docker ps --format "{{.Names}}" | grep -q "^$node$"; then + echo -e "${GREEN}✓${NC} $node: Running" + else + echo -e "${RED}✗${NC} $node: Not running" + fi +done +echo "" + +# Check gean specifics +echo "=== gean Status ===" +echo "" + +# Get latest chain status +LATEST_STATUS=$(docker logs gean_0 2>&1 | grep "CHAIN STATUS" | tail -1 || echo "No chain status found") +echo "$LATEST_STATUS" + +# Latest finalized +LATEST_FIN=$(docker logs gean_0 2>&1 | grep "Latest Finalized:" | tail -1 || echo "") +[[ -n "$LATEST_FIN" ]] && echo "$LATEST_FIN" +echo "" + +# Count blocks +BLOCKS_PROPOSED=$(count_in_logs gean_0 "\[validator\] proposed block") +echo "Blocks proposed: $BLOCKS_PROPOSED" + +# Max attestations per block (regression check) +MAX_ATTS=$(docker logs gean_0 2>&1 | grep -oE "attestations=[0-9]+" | grep -oE "[0-9]+" | sort -n | tail -1) 
+MAX_ATTS=${MAX_ATTS:-0} +echo "Max attestations per block: $MAX_ATTS" +if [[ "$MAX_ATTS" -gt 30 ]]; then + echo -e " ${RED}⚠ WARNING: attestations > 30 — possible block bloat regression${NC}" +fi + +# Count errors +ERROR_COUNT=$(count_in_logs gean_0 "ERROR") +if [[ "$ERROR_COUNT" -eq 0 ]]; then + echo -e "Errors: ${GREEN}$ERROR_COUNT${NC}" +else + echo -e "Errors: ${RED}$ERROR_COUNT${NC}" +fi + +# Critical regression check: MessageTooLarge / oversized blocks +SIZE_ERRORS=$(count_in_logs gean_0 "MessageTooLarge|exceeds max") +if [[ "$SIZE_ERRORS" -eq 0 ]]; then + echo -e "Oversized block errors: ${GREEN}0${NC}" +else + echo -e "Oversized block errors: ${RED}$SIZE_ERRORS${NC} ${RED}⚠ REGRESSION${NC}" +fi +echo "" + +# Sync stats (if sync test was run) +if [[ "$WITH_SYNC_TEST" == "true" ]]; then + echo "=== Sync Activity ===" + echo "" + + BATCHED=$(count_in_logs gean_0 "batched fetch starting") + QUEUED=$(count_in_logs gean_0 "queueing missing block") + EXHAUSTED=$(count_in_logs gean_0 "fetch exhausted for root") + + echo "Batched fetches issued: $BATCHED" + echo "Roots queued for fetch: $QUEUED" + echo "Fetches exhausted: $EXHAUSTED" + echo "" +fi + +# Final verdict +# +# PASSED = no errors AND no size regressions AND attestation count is bounded. +# (We don't require BLOCKS_PROPOSED > 0 because gean might not have +# reached its proposer slot yet on a short run.) +# FAILED = oversized block / message-too-large regression detected. +# CHECK = errors present but no clear regression — needs human inspection. 
+echo "=== Test Result ===" +echo "" +if [[ "$SIZE_ERRORS" -gt 0 ]] || [[ "$MAX_ATTS" -gt 30 ]]; then + echo -e "${RED}✗ FAILED${NC} - Block bloat regression detected" +elif [[ "$ERROR_COUNT" -eq 0 ]]; then + if [[ "$BLOCKS_PROPOSED" -gt 0 ]]; then + echo -e "${GREEN}✓ PASSED${NC} - Devnet running successfully (gean proposed $BLOCKS_PROPOSED block(s))" + else + echo -e "${GREEN}✓ PASSED${NC} - Devnet healthy, no errors (gean had no proposer slot in this run)" + fi +else + echo -e "${YELLOW}⚠ CHECK LOGS${NC} - $ERROR_COUNT error(s) detected, no regression — inspect logs" +fi +echo "" + +# Next steps +echo "=== Next Steps ===" +echo "" +echo "Check detailed logs:" +echo " docker logs gean_0 2>&1 | less" +echo "" +echo "Run log analysis:" +echo " cd $GEAN_ROOT" +echo " .claude/skills/devnet-log-review/scripts/analyze-logs.sh" +echo "" +echo "Stop devnet:" +echo " $SCRIPT_DIR/cleanup.sh" +echo "" + +# Keep devnet running +echo -e "${YELLOW}Devnet is still running. Stop it when done testing.${NC}" diff --git a/.dockerignore b/.dockerignore index 89140cc..d30f160 100644 --- a/.dockerignore +++ b/.dockerignore @@ -1,17 +1,17 @@ -# Rust/Cargo artifacts -xmss/leansig-ffi/target/ - -# LeanSpec artifacts -leanSpec/ - -# Binaries -bin/ - -# Data -data/ -keys/ -*.key -*.enr - -# CLAUDE.md (project-specific, not needed in container) -CLAUDE.md \ No newline at end of file +.git +bin +data +testnet +output.log +leanSpec +lean-quickstart +crypto/xmss/target +.claude +.gocache +*.log +devnet3_sync_test_20260407 +logs +config.yaml +nodes.yaml +node*.key +keys diff --git a/.github/workflows/build-test.yml b/.github/workflows/build-test.yml index a63fe6e..3bb6c53 100644 --- a/.github/workflows/build-test.yml +++ b/.github/workflows/build-test.yml @@ -7,7 +7,7 @@ on: branches: [main] env: - GO_VERSION: "1.24" + GO_VERSION: "1.25.9" RUST_VERSION: "1.88" jobs: @@ -35,8 +35,8 @@ jobs: path: | ~/.cargo/registry ~/.cargo/git - xmss/leansig-ffi/target - key: cargo-${{ runner.os }}-${{ 
hashFiles('xmss/leansig-ffi/Cargo.lock') }} + xmss/rust/target + key: cargo-${{ runner.os }}-${{ hashFiles('xmss/rust/Cargo.lock') }} restore-keys: | cargo-${{ runner.os }}- @@ -47,7 +47,4 @@ jobs: run: make build - name: Run unit tests - run: make unit-test - - - name: Run race detector tests - run: make test-race + run: make test diff --git a/.github/workflows/lint-format.yml b/.github/workflows/lint-format.yml index 3dd7f3e..90f312e 100644 --- a/.github/workflows/lint-format.yml +++ b/.github/workflows/lint-format.yml @@ -5,7 +5,7 @@ on: branches: [main] env: - GO_VERSION: "1.24" + GO_VERSION: "1.25.9" RUST_VERSION: "1.88" jobs: @@ -60,15 +60,19 @@ jobs: path: | ~/.cargo/registry ~/.cargo/git - xmss/leansig-ffi/target - key: cargo-lint-${{ runner.os }}-${{ hashFiles('xmss/leansig-ffi/Cargo.lock') }} + xmss/rust/target + key: cargo-lint-${{ runner.os }}-${{ hashFiles('xmss/rust/Cargo.lock') }} restore-keys: | cargo-lint-${{ runner.os }}- - name: Check Rust formatting - working-directory: xmss/leansig-ffi - run: cargo fmt --check + working-directory: xmss/rust + run: | + rustup component add --toolchain 1.90.0-x86_64-unknown-linux-gnu rustfmt + cargo fmt --check - name: Run Clippy - working-directory: xmss/leansig-ffi - run: cargo clippy -- -D warnings -A clippy::missing_safety_doc + working-directory: xmss/rust + run: | + rustup component add --toolchain 1.90.0-x86_64-unknown-linux-gnu clippy + cargo clippy -- -D warnings -A clippy::missing_safety_doc diff --git a/.github/workflows/security-audit.yml b/.github/workflows/security-audit.yml index 9ef1507..f3d4a4a 100644 --- a/.github/workflows/security-audit.yml +++ b/.github/workflows/security-audit.yml @@ -6,15 +6,15 @@ on: paths: - "go.mod" - "go.sum" - - "xmss/leansig-ffi/Cargo.toml" - - "xmss/leansig-ffi/Cargo.lock" + - "xmss/rust/Cargo.toml" + - "xmss/rust/Cargo.lock" schedule: # Runs every Monday at 08:00 UTC - cron: "0 8 * * 1" workflow_dispatch: env: - GO_VERSION: "1.24" + GO_VERSION: "1.25.9" 
RUST_VERSION: "1.88" jobs: @@ -53,5 +53,5 @@ jobs: run: cargo install --locked cargo-audit - name: Run cargo audit - working-directory: xmss/leansig-ffi + working-directory: xmss/rust run: cargo audit diff --git a/.github/workflows/spec-tests.yml b/.github/workflows/spec-tests.yml index a05bb36..2cb7b46 100644 --- a/.github/workflows/spec-tests.yml +++ b/.github/workflows/spec-tests.yml @@ -10,7 +10,7 @@ on: - "Makefile" env: - GO_VERSION: "1.24" + GO_VERSION: "1.25.9" RUST_VERSION: "1.88" jobs: @@ -38,8 +38,8 @@ jobs: path: | ~/.cargo/registry ~/.cargo/git - xmss/leansig-ffi/target - key: cargo-${{ runner.os }}-${{ hashFiles('xmss/leansig-ffi/Cargo.lock') }} + xmss/rust/target + key: cargo-${{ runner.os }}-${{ hashFiles('xmss/rust/Cargo.lock') }} restore-keys: | cargo-${{ runner.os }}- @@ -47,4 +47,4 @@ jobs: uses: astral-sh/setup-uv@v6 - name: Run spec tests - run: make spec-test + run: make test-spec diff --git a/.gitignore b/.gitignore index 0b000f5..1653325 100644 --- a/.gitignore +++ b/.gitignore @@ -1,31 +1,23 @@ -# Binaries +# Build artifacts bin/ gean -*.exe -# Test artifacts -coverage.out -coverage.html +# Runtime data +data/ -# Go cache -.gocache/ -.uv-cache/ -/leanSpec +# Local testnet config (generated by keygen) +testnet/ -# IDE -.idea/ -.vscode/ -*.swp +# Spec fixtures (cloned separately) +leanSpec/ -# Keys -keys/ -node*.key -*.enr +# Lean-quickstart (cloned separately) +lean-quickstart/ -# Local-only devnet artifacts (generated by `make run-setup`) -config.yaml -nodes.yaml +# Logs +*.log +output.log +devnet.log -# Data -data/ -finalized_states.ssz +# Rust build artifacts +xmss/rust/target/ diff --git a/CLAUDE.md b/CLAUDE.md deleted file mode 100644 index be88291..0000000 --- a/CLAUDE.md +++ /dev/null @@ -1,75 +0,0 @@ -# CLAUDE.md - -This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository. 
- -## Project Overview - -Gean is a Go consensus client for Lean Ethereum — a complete redesign of Ethereum's consensus layer focused on security, decentralization, and finality in seconds. It implements LMD GHOST fork-choice, state transitions, and uses XMSS post-quantum signatures via Rust FFI. - -## Prerequisites - -- **Go** 1.24.6+ -- **Rust** 1.87+ (for XMSS FFI library in `xmss/leansig-ffi/`) -- **uv** (astral.sh/uv) — only needed to generate leanSpec test fixtures - -## Build & Test Commands - -```sh -make build # Build FFI library + gean binary + keygen → bin/ -make spec-test # Consensus spectests (clones leanSpec, generates fixtures, skips sig verify) -make unit-test # All Go unit tests with signature verification -make test-race # Race condition detection -make lint # go vet + staticcheck -make fmt # go fmt ./... -``` - -Run a single test or package: -```sh -go test -count=1 ./chain/forkchoice/... -go test -count=1 -run TestName ./package/... -``` - -Spectests use the build tag `skip_sig_verify` to bypass XMSS signature verification for speed. The FFI library (`make ffi`) must be built before running any tests. - -## Architecture - -The node starts at `cmd/gean/main.go`, which loads genesis config, bootnodes, and validator assignments, then delegates to `node/lifecycle.go` for initialization. - -**Consensus (`chain/`)** -- `forkchoice/` — LMD GHOST fork-choice: block processing, attestation weighting, canonical head selection -- `statetransition/` — State machine that processes blocks and attestations, advances epochs - -**Node orchestration (`node/`)** -- `lifecycle.go` — Initialization: genesis state, P2P host, gossipsub, discovery, validator keys, metrics -- `ticker.go` — Main event loop: slot ticker fires 4 intervals per slot (1s each, 4s slots). 
Advances fork-choice time, syncs peers, dispatches validator duties -- `validator.go` — Validator duties by interval: propose (0), attest (1), aggregate (2) -- `handler.go` — Gossip subscription and request/response handler registration -- `sync.go` — Peer sync protocol -- `clock.go` — Slot and interval timing relative to genesis - -**Networking (`network/`)** -- `host.go` — libp2p host with QUIC transport -- `gossipsub/` — Pub/sub for blocks and attestations; SSZ-encoded messages -- `p2p/` — Peer discovery via discv5, ENR parsing -- `reqresp/` — Request/response protocols (status, block sync) using Snappy framing - -**Cryptography (`xmss/`)** -- `leansig/` — CGo bindings for XMSS post-quantum signatures -- `leansig-ffi/` — Rust FFI library wrapping leanSig -- Devnet-1 instantiation: `SIGTopLevelTargetSumLifetime32Dim64Base8` - -**Data types (`types/`)** — Consensus state, blocks, attestations, checkpoints. All types implement SSZ encoding. - -**Storage (`storage/`)** — Interface with in-memory implementation (`memory/`). Thread-safe block and state storage. - -**Config (`config/`)** — Genesis state initialization, validator registry loading, bootnode configuration. Loaded from `config.yaml`, `validators.yaml`, `nodes.yaml`. - -**Observability (`observability/`)** — Structured logging with component tags and color output. Prometheus metrics for fork-choice, attestations, state transitions, validators, and network. 
- -## Design Principles - -- Readable over clever — explicit naming, linear control flow -- Minimal dependencies — fewer imports to audit -- No premature abstraction — concrete types until duplication is real -- Flat and direct — avoid deep package hierarchies -- Concurrency only at boundaries (networking, event loops); core consensus logic is sequential and deterministic diff --git a/Dockerfile b/Dockerfile index 27d2949..70b094d 100644 --- a/Dockerfile +++ b/Dockerfile @@ -1,46 +1,59 @@ -# Rust Builder for XMSS FFI crates -FROM rust:alpine AS rust-builder -RUN apk add --no-cache musl-dev +# Build stage: Rust FFI + Go binary +FROM golang:1.25-bookworm AS builder -WORKDIR /build -COPY xmss/leansig-ffi xmss/leansig-ffi/ -COPY xmss/leanmultisig-ffi xmss/leanmultisig-ffi/ +# Install Rust 1.90.0 (pinned for leansig/leanMultisig compatibility) +RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y --default-toolchain 1.90.0 +ENV PATH="/root/.cargo/bin:${PATH}" -WORKDIR /build/xmss/leansig-ffi -RUN cargo build --release +# Install build dependencies +RUN apt-get update && apt-get install -y --no-install-recommends \ + build-essential \ + pkg-config \ + ca-certificates \ + && rm -rf /var/lib/apt/lists/* -WORKDIR /build/xmss/leanmultisig-ffi -RUN cargo build --release +WORKDIR /app -# Go Builder for gean -FROM golang:1.24-alpine AS go-builder -RUN apk add --no-cache git build-base +# Copy Rust FFI dependencies first for better caching +COPY xmss/rust/ xmss/rust/ -WORKDIR /build +# Build Rust FFI libraries +RUN cd xmss/rust && cargo build --release --locked -# Copy Go modules manifests +# Copy Go module files for dependency caching COPY go.mod go.sum ./ RUN go mod download -# Copy Go source code +# Copy all source code COPY . . 
-# Copy Rust compiled static library and headers -# leansig.go expects the header in ../leansig-ffi/include and the lib in ../leansig-ffi/target/release/deps/ -COPY --from=rust-builder /build/xmss/leansig-ffi/target/release/deps/libleansig_ffi.a xmss/leansig-ffi/target/release/deps/ -COPY --from=rust-builder /build/xmss/leansig-ffi/include xmss/leansig-ffi/include/ -# leanmultisig.go expects the header in ../leanmultisig-ffi/include and the lib in ../leanmultisig-ffi/target/release/deps/ -COPY --from=rust-builder /build/xmss/leanmultisig-ffi/target/release/deps/libleanmultisig_ffi.a xmss/leanmultisig-ffi/target/release/deps/ -COPY --from=rust-builder /build/xmss/leanmultisig-ffi/include xmss/leanmultisig-ffi/include/ +# Build Go binaries +ARG GIT_COMMIT=unknown +ARG GIT_BRANCH=unknown +RUN mkdir -p bin && \ + go build -o bin/gean ./cmd/gean && \ + go build -o bin/keygen ./cmd/keygen -# Build Go binary including CGO binding -RUN CGO_ENABLED=1 go build -o /build/gean ./cmd/gean +# Runtime stage +FROM ubuntu:24.04 AS runtime +WORKDIR /app -# Runtime minimal image -FROM alpine:3.21 +LABEL org.opencontainers.image.source=https://github.com/geanlabs/gean +LABEL org.opencontainers.image.description="Go Ethereum Lean Consensus Client" +LABEL org.opencontainers.image.licenses="MIT" -# libgcc is needed for cgo execution -RUN apk add --no-cache ca-certificates libgcc -COPY --from=go-builder /build/gean /usr/local/bin/gean +ARG GIT_COMMIT=unknown +ARG GIT_BRANCH=unknown +LABEL org.opencontainers.image.revision=$GIT_COMMIT +LABEL org.opencontainers.image.ref.name=$GIT_BRANCH -ENTRYPOINT ["gean"] +# Copy binaries +COPY --from=builder /app/bin/gean /usr/local/bin/ +COPY --from=builder /app/bin/keygen /usr/local/bin/ + +# 9000/udp - P2P QUIC +# 5052 - API +# 5054 - Prometheus metrics +EXPOSE 9000/udp 5052 5054 + +ENTRYPOINT ["/usr/local/bin/gean"] diff --git a/LICENSE b/LICENSE deleted file mode 100644 index f49e7da..0000000 --- a/LICENSE +++ /dev/null @@ -1,21 +0,0 @@ -MIT 
License - -Copyright (c) 2025 Gean - -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. \ No newline at end of file diff --git a/Makefile b/Makefile index 38e2906..d3bcefa 100644 --- a/Makefile +++ b/Makefile @@ -1,128 +1,131 @@ -.PHONY: build ffi spec-test unit-test test-race lint fmt clean docker-build refresh-genesis-time run-setup run-setup-if-missing run run-quic run-devnet run-node-1 run-node-2 help leanSpec leanSpec/fixtures +.PHONY: help build ffi test-ffi test test-spec test-all lint fmt sszgen clean tidy docker-build run-devnet run-setup run run-node1 run-node2 -VERSION := $(shell git describe --tags --always --dirty 2>/dev/null || echo "dev") +VERSION ?= $(shell git describe --tags --always --dirty 2>/dev/null || echo "dev") +GIT_COMMIT := $(shell git rev-parse HEAD 2>/dev/null || echo "unknown") +GIT_BRANCH := $(shell git rev-parse --abbrev-ref HEAD 2>/dev/null || echo "unknown") -# Force Go build cache into the repo to avoid sandboxed $HOME cache paths. 
-export GOCACHE := $(CURDIR)/.gocache +TESTNET_DIR ?= testnet +NUM_VALIDATORS ?= 5 +NUM_NODES ?= 3 -ffi: - @cd xmss/leansig-ffi && cargo +nightly build --release --locked - @cd xmss/leanmultisig-ffi && cargo +nightly build --release --locked +help: ## Show help for each Makefile recipe + @grep -E '^[a-zA-Z0-9_-]+:.*?## .*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}' -build: ffi +ffi: ## Build XMSS FFI glue libraries (hashsig-glue + multisig-glue) + @cd xmss/rust && cargo build --release --locked + +build: ffi ## Build gean and keygen binaries @mkdir -p bin - @go build -ldflags "-X github.com/geanlabs/gean/node.Version=$(VERSION)" -o bin/gean ./cmd/gean + @go build -o bin/gean ./cmd/gean @go build -o bin/keygen ./cmd/keygen -# Run the spectests with the leanSpec fixtures, skipping signature verification for faster test execution -spec-test: ffi leanSpec/fixtures - go test -tags skip_sig_verify -count=1 ./spectests/... +test: ## Run unit tests (excludes crypto FFI and spec tests) + go test $(shell go list ./... | grep -v '/xmss$$' | grep -v '/spectests$$' | grep -v '/cmd/') -v -count=1 + +test-ffi: ffi ## Run XMSS crypto FFI tests (builds FFI first) + go test ./xmss/ -v -count=1 -# Run the unit tests, which include signature verification and thus take longer to execute -unit-test: ffi - go test ./... -count=1 +test-spec: leanSpec/fixtures ## Run spec fixture tests only (fast, excludes xmss FFI) + go test ./spectests/ -count=1 -tags=spectests -test-race: ffi - go test -race ./... +test-all: leanSpec/fixtures ## Run all tests including spec fixtures and xmss FFI (slow) + go test ./... -v -count=1 -tags=spectests -lint: +lint: ## Run linters for go & rust go vet ./... - @which staticcheck > /dev/null 2>&1 && staticcheck ./... || echo "staticcheck not installed, skipping" - -fmt: - go fmt ./... 
-
-clean:
-	rm -rf bin
-	go clean
-
-docker-build:
-	docker build -t gean:$(VERSION) -t ghcr.io/geanlabs/gean:devnet3 .
-
-# Resolve the directory this Makefile lives in
-MAKEFILE_DIR := $(dir $(abspath $(lastword $(MAKEFILE_LIST))))
-CONFIG := $(MAKEFILE_DIR)config.yaml
-
-RUN_SETUP_NODES ?= 3
-RUN_SETUP_VALIDATORS ?= 5
-RUN_SETUP_IP ?= 127.0.0.1
-RUN_SETUP_BASE_PORT ?= 9000
-
-refresh-genesis-time:
-	@NEW_TIME=$$(($$(date +%s) + 30)); \
-	if [ "$$(uname -s)" = "Darwin" ]; then \
-		sed -i '' "s/^GENESIS_TIME:.*/GENESIS_TIME: $$NEW_TIME/" $(CONFIG); \
-	else \
-		sed -i "s/^GENESIS_TIME:.*/GENESIS_TIME: $$NEW_TIME/" $(CONFIG); \
-	fi; \
-	echo "Updated GENESIS_TIME to $$NEW_TIME in $(CONFIG)"
-
-run-setup: build
-	@set -eu; \
-	NOW=$$(date +%s); \
-	echo "GENESIS_TIME: $$NOW" > config.yaml; \
-	./bin/keygen -validators $(RUN_SETUP_VALIDATORS) -keys-dir keys -print-yaml >> config.yaml; \
-	go run ./scripts/gen_node_keys -nodes $(RUN_SETUP_NODES) -ip $(RUN_SETUP_IP) -base-port $(RUN_SETUP_BASE_PORT) -out nodes.yaml 1>/dev/null; \
-	$(MAKE) refresh-genesis-time 1>/dev/null; \
-	echo "Generated local devnet artifacts: config.yaml, nodes.yaml, keys/, node*.key"
-
-run-setup-if-missing:
-	@set -eu; \
-	missing=0; \
-	[ -f config.yaml ] || missing=1; \
-	[ -f nodes.yaml ] || missing=1; \
-	i=0; \
-	while [ $$i -lt $(RUN_SETUP_NODES) ]; do \
-		[ -f node$$i.key ] || missing=1; \
-		i=$$(($$i + 1)); \
-	done; \
-	[ -f keys/validator_0_pk.ssz ] || missing=1; \
-	[ -f keys/validator_0_sk.ssz ] || missing=1; \
-	if [ $$missing -eq 0 ]; then \
-		echo "Using existing local devnet artifacts (config.yaml, nodes.yaml). Run 'make run-setup' to regenerate."; \
-	else \
-		$(MAKE) run-setup; \
-	fi
-
-run: build refresh-genesis-time run-setup-if-missing
-	@./bin/gean --genesis config.yaml --bootnodes nodes.yaml --validator-registry-path validators.yaml --validator-keys keys --node-id node0 --listen-addr /ip4/0.0.0.0/tcp/9000 --node-key node0.key --data-dir data/node0 --is-aggregator
-
-run-devnet:
-	@if [ ! -d "../lean-quickstart" ]; then \
-		echo "Cloning lean-quickstart..."; \
-		git clone https://github.com/blockblaz/lean-quickstart.git ../lean-quickstart; \
-	fi
-	$(MAKE) docker-build
-	cd ../lean-quickstart && NETWORK_DIR=local-devnet ./spin-node.sh --node gean_0 --generateGenesis --metrics
-
-run-node-1: run-setup-if-missing
-	@./bin/gean --genesis config.yaml --bootnodes nodes.yaml --validator-registry-path validators.yaml --validator-keys keys --node-id node1 --listen-addr /ip4/0.0.0.0/tcp/9001 --node-key node1.key --data-dir data/node1 --discovery-port 9001 --api-port 5053 --metrics-port 8081
-
-
-
-run-node-2: run-setup-if-missing
-	@./bin/gean --genesis config.yaml --bootnodes nodes.yaml --validator-registry-path validators.yaml --validator-keys keys --node-id node2 --listen-addr /ip4/0.0.0.0/tcp/9002 --node-key node2.key --data-dir data/node2 --discovery-port 9002 --api-port 5054 --metrics-port 8082
-
-# The commit hash of the leanSpec repository to use for testing and fixtures
-LEAN_SPEC_COMMIT_HASH := 8b7636bb8a95fe4bec414cc4c24e74079e6256b6
-
-# A file to track which commit of the leanSpec fixtures have been generated, to avoid unnecessary regeneration
-LEAN_SPEC_FIXTURE_STAMP := leanSpec/.fixtures-commit
-
-# Clone the leanSpec repository if it doesn't exist, and checkout the specified commit
-leanSpec:
-	@if [ ! -d "leanSpec/.git" ]; then \
-		git clone https://github.com/leanEthereum/leanSpec.git --single-branch leanSpec; \
-	fi
-	@cd leanSpec && CURRENT_COMMIT=$$(git rev-parse HEAD) && \
-	if [ "$$CURRENT_COMMIT" != "$(LEAN_SPEC_COMMIT_HASH)" ]; then \
-		git fetch --all --tags --prune && git checkout $(LEAN_SPEC_COMMIT_HASH); \
-	fi
-
-# Generate the leanSpec fixtures if they are not already generated for the specified commit
-leanSpec/fixtures: leanSpec
-	@CURRENT_FIXTURE_COMMIT=$$(cat $(LEAN_SPEC_FIXTURE_STAMP) 2>/dev/null || true); \
-	if [ "$$CURRENT_FIXTURE_COMMIT" != "$(LEAN_SPEC_COMMIT_HASH)" ] || [ ! -d "leanSpec/fixtures/consensus" ]; then \
-		cd leanSpec && uv run fill --fork=Devnet --layer=consensus --clean -o fixtures && \
-		echo "$(LEAN_SPEC_COMMIT_HASH)" > .fixtures-commit; \
-	fi
+	cd xmss/rust && cargo fmt --check
+	cd xmss/rust && cargo clippy -- -D warnings -A clippy::missing_safety_doc
+
+fmt: ## Format all Go code
+	gofmt -w .
+	cd xmss/rust && cargo fmt
+
+sszgen: ## Regenerate SSZ encoding files from struct tags
+	@rm -f types/*_encoding.go
+	sszgen --path pkg/types --objs ChainConfig --output types/config_encoding.go
+	sszgen --path pkg/types --objs Checkpoint --output types/checkpoint_encoding.go
+	sszgen --path pkg/types --objs Validator --output types/validator_encoding.go
+	sszgen --path pkg/types --objs AttestationData,Attestation,SignedAttestation,AggregatedAttestation,SignedAggregatedAttestation --exclude-objs Checkpoint --output types/attestation_encoding.go
+	sszgen --path pkg/types --objs BlockHeader,BlockBody,Block,BlockWithAttestation,AggregatedSignatureProof,BlockSignatures,SignedBlockWithAttestation --exclude-objs Checkpoint,AttestationData,Attestation,AggregatedAttestation,AggregatedSignatureProof --output types/block_encoding.go
+	sszgen --path pkg/types --objs State --exclude-objs ChainConfig,Checkpoint,Validator,BlockHeader --output types/state_encoding.go
+
+clean: ## Remove build artifacts and generated files
+	rm -rf bin data
+	rm -f types/*_encoding.go
+	cd xmss/rust && cargo clean
+
+tidy: ## Tidy Go module dependencies
+	go mod tidy
+
+# --- Local testnet ---
+
+run-setup: build ## Generate testnet config + XMSS keys (first run only, refreshes genesis time)
+	@bin/keygen --validators $(NUM_VALIDATORS) --nodes $(NUM_NODES) --output $(TESTNET_DIR)
+
+run: build ## Run node0 (aggregator) — requires make run-setup first
+	@rm -rf data/node0
+	@bin/keygen --validators $(NUM_VALIDATORS) --nodes $(NUM_NODES) --output $(TESTNET_DIR)
+	@bin/gean \
+		--custom-network-config-dir $(TESTNET_DIR) \
+		--node-key $(TESTNET_DIR)/node0.key \
+		--node-id node0 \
+		--data-dir data/node0 \
+		--is-aggregator \
+		--gossipsub-port 9000 \
+		--api-port 5052 \
+		--metrics-port 8080
+
+run-node1: build ## Run node1 on port 9001
+	@rm -rf data/node1
+	@bin/gean \
+		--custom-network-config-dir $(TESTNET_DIR) \
+		--node-key $(TESTNET_DIR)/node1.key \
+		--node-id node1 \
+		--data-dir data/node1 \
+		--gossipsub-port 9001 \
+		--api-port 5053 \
+		--metrics-port 8081
+
+run-node2: build ## Run node2 on port 9002
+	@rm -rf data/node2
+	@bin/gean \
+		--custom-network-config-dir $(TESTNET_DIR) \
+		--node-key $(TESTNET_DIR)/node2.key \
+		--node-id node2 \
+		--data-dir data/node2 \
+		--gossipsub-port 9002 \
+		--api-port 5054 \
+		--metrics-port 8082
+
+# --- leanSpec fixtures ---
+
+LEAN_SPEC_COMMIT_HASH := be853180d21aa36d6401b8c1541aa6fcaad5008d
+
+leanSpec: ## Clone leanSpec at devnet-3 commit
+	git clone https://github.com/leanEthereum/leanSpec.git --single-branch
+	cd leanSpec && git checkout $(LEAN_SPEC_COMMIT_HASH)
+
+leanSpec/fixtures: leanSpec ## Generate consensus test fixtures from leanSpec
+	cd leanSpec && uv run fill --fork devnet --scheme=prod -o fixtures
+
+# --- Docker ---
+
+DOCKER_TAG ?= local
+
+docker-build: ## Build Docker image
+	docker build \
+		--build-arg GIT_COMMIT=$(GIT_COMMIT) \
+		--build-arg GIT_BRANCH=$(GIT_BRANCH) \
+		-t gean:$(VERSION) \
+		-t ghcr.io/geanlabs/gean:devnet3 .
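The `run`, `run-node1`, and `run-node2` targets above stagger each node's ports by its index: gossip 9000+i, API 5052+i, metrics 8080+i. A small sketch of that layout, assuming (not guaranteed by the Makefile) that the same +1-per-node pattern would continue for additional nodes:

```shell
# Print the port layout implied by the run-node targets.
for i in 0 1 2; do
  echo "node$i gossip=$((9000 + i)) api=$((5052 + i)) metrics=$((8080 + i))"
done
```

The first line printed is `node0 gossip=9000 api=5052 metrics=8080`, matching the flags passed to node0 above.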
+
+# --- Multi-client devnet ---
+
+lean-quickstart: ## Clone lean-quickstart for local devnet
+	git clone https://github.com/blockblaz/lean-quickstart.git --depth 1 --single-branch
+
+run-devnet: docker-build lean-quickstart ## Run local multi-client devnet
+	@echo "Starting local devnet with gean client (\"$(DOCKER_TAG)\" tag)."
+	@cd lean-quickstart \
+	&& NETWORK_DIR=local-devnet ./spin-node.sh --node all --generateGenesis --metrics > ../devnet.log 2>&1
+
diff --git a/README.md b/README.md
index 6038a32..d6b9479 100644
--- a/README.md
+++ b/README.md
@@ -1,16 +1,11 @@
 # gean
 
-A Go consensus client for Lean Ethereum, built around the idea that protocol simplicity is a security property.
+A Go consensus client for [Lean Ethereum](https://github.com/leanEthereum/leanSpec), built around the idea that protocol simplicity is a security property.
 
 ## Philosophy
 
 A consensus client should be something a developer can read, understand, and verify without needing to trust a small class of experts. If you can't inspect it end-to-end, it's not fully yours.
 
-## What is Lean Consensus
-
-A complete redesign of Ethereum's consensus layer, hardened for security, decentralization, and finality in seconds.
-
-
 ## Design approach
 
 - **Readable over clever.** Code is written so that someone unfamiliar with the codebase can follow it. Naming is explicit. Control flow is linear where possible.
@@ -21,157 +16,202 @@ A complete redesign of Ethereum's consensus layer, hardened for security, decent
 
 ## Current status
 
-gean currently targets **Lean Consensus devnet-3** (single attestation committee).
+gean targets **Lean Consensus devnet-3** (single attestation committee).
 | Network | Status | Spec pin |
 |---------|--------|----------|
-| devnet-3 | Active development | `leanSpec@8b7636b` |
+| devnet-3 | Active | `leanSpec@be85318` |
 
 ## Prerequisites
 
-- **Go** 1.24.6+
-- **Rust** 1.87+ (for the leanSig FFI library under `xmss/leansig-ffi/`)
+- **Go** 1.25+
+- **Rust** 1.90.0 (for the XMSS FFI libraries under `xmss/rust/`)
 - **uv** ([astral.sh/uv](https://docs.astral.sh/uv/)) — needed to generate leanSpec test fixtures
+- **Docker** (for multi-client devnet)
 
-## Getting started
+## Build
 
 ```sh
-# Build (includes FFI library)
+# Build Rust FFI libraries + Go binary
 make build
 
-# Run consensus spectests (builds FFI + fixtures)
-make spec-test
+# Build Docker image
+make docker-build
+```
 
-# Run Go unit tests
-make unit-test
+## Local Testnet (Self-Interop)
 
-# Lint
-make lint
+Run a 3-node local testnet with 5 validators:
 
-# Generate local devnet artifacts (overwrites):
-# - config.yaml (genesis time = now, genesis validators)
-# - nodes.yaml (bootnodes derived from node*.key peer IDs)
-# - keys/ (validator keypairs)
-# - node*.key (node identity keys)
+```sh
+# First-time setup: generate XMSS keys + config
 make run-setup
 
-# Run
+# Terminal 1: node0 (aggregator)
 make run
+
+# Terminal 2: node1
+make run-node1
+
+# Terminal 3: node2
+make run-node2
 ```
 
-To run multiple local nodes:
+### Ports
 
-```sh
-# Terminal 1
-make run
+| Node | P2P (QUIC/UDP) | API | Metrics |
+|-------|----------------|------|---------|
+| node0 | 9000 | 5052 | 8080 |
-# Terminal 2
-make run-node-1
+| node1 | 9001 | 5053 | 8081 |
+| node2 | 9002 | 5054 | 8082 |
 
-# Terminal 3
-make run-node-2
-```
+### Checkpoint Sync
 
-Notes:
-- `config.yaml` and `nodes.yaml` are **local-only**, generated by `make run-setup`, and are gitignored.
+Restart a node using checkpoint sync from another running node:
-- `make run` uses existing `config.yaml`/`nodes.yaml` if present (but always refreshes `GENESIS_TIME` to `NOW + 30s`); run `make run-setup` to force regeneration.
+```sh
+rm -rf data/node1
+bin/gean \
+  --custom-network-config-dir testnet \
+  --node-key testnet/node1.key \
+  --node-id node1 \
+  --data-dir data/node1 \
+  --gossipsub-port 9001 \
+  --api-port 5053 \
+  --metrics-port 8081 \
+  --checkpoint-sync-url http://127.0.0.1:5052/lean/v0/states/finalized
+```
 
-## leanSpec fixtures and spectests (devnet-3)
+## Multi-Client Devnet
 
-`make spec-test` is the primary consensus-conformance entry point. It bootstraps leanSpec fixtures and runs spectests in a signature-skip lane.
+gean is part of the [lean-quickstart](https://github.com/blockblaz/lean-quickstart) multi-client devnet tooling.
 
 ```sh
-# Generate/update fixtures from pinned leanSpec commit
-make leanSpec/fixtures
+# Build Docker image and start devnet
+make run-devnet
+```
 
-# Verify leanSpec pin used for fixtures
-git -C leanSpec rev-parse HEAD
-cat leanSpec/.fixtures-commit
+### Multi-client testing skills
 
-# Run only consensus spectests (fork-choice + state-transition)
-make spec-test
+For repeatable testing against zeam, ream, lantern, and ethlambda, gean ships
+three Claude Code skills under [`.claude/skills/`](.claude/skills/README.md).
+The most useful entry points are exposed as `make` targets:
 
-# Run Go unit tests across packages
-make unit-test
-```
+```sh
+# Build current branch + run a 5-client devnet test (most common)
+make devnet-test
 
-Notes:
-- Fixtures are generated under `leanSpec/fixtures`.
-- `leanSpec/` is a local working directory and is gitignored.
-- Fixture generation uses `uv run fill --fork=Devnet --layer=consensus --clean -o fixtures` pinned by `LEAN_SPEC_COMMIT_HASH` in `Makefile`.
+# Same as above plus a sync recovery test (pause peers, then resume)
+make devnet-test-sync
 
-## Metrics and Grafana
+# Inspect what's running right now
+make devnet-status
 
-gean exposes Prometheus metrics at `/metrics` when `--metrics-port` is enabled.
+
+# Stop the devnet and restore configs
-```sh
-./bin/gean \
-  --genesis config.yaml \
-  --bootnodes nodes.yaml \
-  --validator-registry-path validators.yaml \
-  --validator-keys keys \
-  --node-id node0 \
-  --metrics-port 8080
+make devnet-cleanup
+
+# Run a one-off devnet for 120s and dump every client's logs to the repo root
+make devnet-run
+
+# Analyze .log files in the current directory (gean + peer clients)
+make devnet-analyze
 ```
 
-Grafana assets for gean are provided at:
+`make devnet-test` automatically watches for two regressions: oversized blocks
+(`MessageTooLarge` / `exceeds max`) and excessive attestations per block
-- `observability/grafana/client-dashboard.json` (dashboard import)
-- `observability/grafana/prometheus-scrape.example.yml` (scrape config example)
+(> 30). If either fires, the test exits with `✗ FAILED`.
 
-Dashboard notes:
+See [`.claude/skills/README.md`](.claude/skills/README.md) for the full
+overview of each skill (`devnet-log-review`, `devnet-runner`, `test-pr-devnet`).
 
-- Datasource UID is hardcoded to `feyrb1q11ge0wa`.
-- Panels filter targets using the `Gean Job` variable (`$gean_job`), populated from Prometheus `job` labels.
+## API
 
-## API
+gean exposes a lightweight HTTP API on two separate ports:
 
-gean exposes a lightweight HTTP API (standard library only) for Lean endpoints.
+**API server** (default `:5052`):
+| Endpoint | Description |
+|----------|-------------|
+| `GET /lean/v0/health` | Health check |
+| `GET /lean/v0/states/finalized` | Latest finalized state (SSZ) |
+| `GET /lean/v0/checkpoints/justified` | Justified checkpoint (JSON) |
+| `GET /lean/v0/fork_choice` | Fork choice tree (JSON) |
 
-Flags:
-- `--api-host` (default `0.0.0.0`)
-- `--api-port` (default `5058`, set to `0` to disable)
-- `--api-enabled` (default `true`)
+**Metrics server** (default `:5054`):
 
-Example:
+| Endpoint | Description |
+|----------|-------------|
+| `GET /metrics` | Prometheus metrics |
+
+## Tests
 
 ```sh
-curl http://localhost:5058/lean/v0/health
-curl http://localhost:5058/lean/v0/fork_choice
-curl -o finalized.ssz http://localhost:5058/lean/v0/states/finalized
+# Unit tests (no FFI required)
+make test
+
+# FFI/crypto tests (builds FFI first)
+make test-ffi
+
+# leanSpec fixture tests (requires fixtures)
+make leanSpec/fixtures
+make test-spec
 ```
 
-Checkpoint sync example:
+### Spec Test Coverage
+
+| Suite | Fixtures | Description |
+|-------|----------|-------------|
+| State Transition | 14 | Block processing, genesis validation |
+| Fork Choice | 27 | Head selection, reorgs, tiebreakers, aggregation, finalization |
+| Signature Verification | 8 | Proposer signatures, attestation aggregation, invalid cases |
+| **Total** | **49** | **All passing** |
+
+### leanSpec Fixtures
+
+Consensus conformance tests use fixtures generated from the pinned leanSpec commit:
 
 ```sh
-curl -I http://127.0.0.1:5058/lean/v0/states/finalized
+# Generate/update fixtures
+make leanSpec/fixtures
 
-./bin/gean \
-  --genesis config.yaml \
-  --bootnodes nodes.yaml \
-  --validator-registry-path validators.yaml \
-  --validator-keys keys \
-  --node-id node1 \
-  --listen-addr /ip4/0.0.0.0/tcp/9001 \
-  --node-key node1.key \
-  --data-dir data/node1 \
-  --discovery-port 9001 \
-  --api-port 5053 \
-  --metrics-port 8081 \
-  --checkpoint-sync-url http://127.0.0.1:5058/lean/v0/states/finalized
+# Verify pin
+git -C leanSpec rev-parse HEAD
 ```
 
-See `api/README.md` for full route details.
+Fixtures are generated under `leanSpec/fixtures/`. The `leanSpec/` directory is local and gitignored.
 
-## Running in a devnet
+## CLI Flags
 
-gean is part of the [lean-quickstart](https://github.com/blockblaz/lean-quickstart) multi-client devnet tooling.
+```
+--custom-network-config-dir   Config directory (required)
+--gossipsub-port              P2P listen port, QUIC/UDP (default: 9000)
+--http-address                Bind address for API + metrics (default: 127.0.0.1)
+--api-port                    API server port (default: 5052)
+--metrics-port                Metrics server port (default: 5054)
+--node-key                    Path to hex-encoded secp256k1 private key (required)
+--node-id                     Node identifier, e.g. gean_0 (required)
+--checkpoint-sync-url         URL for checkpoint sync (optional)
+--is-aggregator               Enable attestation aggregation
+--attestation-committee-count Number of attestation subnets (default: 1)
+--data-dir                    Pebble database directory (default: ./data)
+```
 
-## Acknowledgements
+## Architecture
 
-- [Lean Ethereum](https://github.com/leanEthereum)
-- [ethlambda](https://github.com/lambdaclass/ethlambda)
+- **Single-writer node** goroutine with select on tick + gossip channels
+- **3SF-mini fork choice** with LMD GHOST head selection (proto-array)
+- **XMSS post-quantum signatures** via Rust FFI (leansig/leanMultisig)
+- **Pebble** (CockroachDB's Go-native LSM) for persistent storage
+- **GossipSub v1.1** with anonymous message signing
+- **Req-resp** protocols: Status + BlocksByRoot with snappy framed encoding
+- **5-interval slot structure** (800ms each, 4s total): propose, attest, aggregate, safe-target, accept
+- **43 Prometheus metrics** matching the [leanMetrics](https://github.com/leanEthereum/leanMetrics) standard
+
+## Acknowledgements
+- [Lean Ethereum](https://github.com/leanEthereum)
+- [ethlambda](https://github.com/lambdaclass/ethlambda)
+- [zeam](https://github.com/blockblaz/zeam)
 
 ## License
diff --git a/SECURITY.md b/SECURITY.md
deleted file mode 100644
index e2e0c53..0000000
--- a/SECURITY.md
+++ /dev/null
@@ -1,5 +0,0 @@
-# Security Policy
-
-## Report a Vulnerability
-
-Contact [developerlongs@gmail.com](mailto:developerlongs@gmail.com).
\ No newline at end of file
diff --git a/api/README.md b/api/README.md
deleted file mode 100644
index 210c1f0..0000000
--- a/api/README.md
+++ /dev/null
@@ -1,92 +0,0 @@
-# gean API
-
-Lightweight HTTP API for Lean endpoints using Go's standard library.
-
-## Base URL
-
-`http://<host>:<port>` (defaults to `http://0.0.0.0:5058`)
-
-## Routes
-
-### `GET /lean/v0/health`
-
-**Purpose:** Liveness probe.
-
-**Response (200, application/json):**
-```json
-{"status":"healthy","service":"lean-rpc-api"}
-```
-
----
-
-### `GET /lean/v0/states/finalized`
-
-**Purpose:** Fetch finalized state as raw SSZ bytes.
-
-**Response (200, application/octet-stream):**
-- Body: SSZ bytes of the finalized `State`.
-
-**Common responses:**
-- `404`: Finalized state not available yet.
-- `503`: Store not initialized.
-
-**Example:**
-```sh
-curl -o finalized_states.ssz http://localhost:5058/lean/v0/states/finalized
-```
-
----
-
-### `GET /lean/v0/checkpoints/justified`
-
-**Purpose:** Latest justified checkpoint.
-
-**Response (200, application/json):**
-```json
-{"slot":56,"root":"0x..."}
-```
-
-**Common responses:**
-- `503`: Store not initialized.
-
----
-
-### `GET /lean/v0/fork_choice`
-
-**Purpose:** Fork choice snapshot for monitoring.
-
-**Response (200, application/json):**
-```json
-{
-  "nodes": [
-    {
-      "root": "0x...",
-      "slot": 62,
-      "parent_root": "0x...",
-      "proposer_index": 2,
-      "weight": 5
-    }
-  ],
-  "head": "0x...",
-  "justified": {"slot": 63, "root": "0x..."},
-  "finalized": {"slot": 62, "root": "0x..."},
-  "safe_target": "0x...",
-  "validator_count": 5
-}
-```
-
-**Common responses:**
-- `503`: Store not initialized.
-
-## Configuration
-
-Enable and configure via CLI flags:
-
-- `--api-host` (default `0.0.0.0`)
-- `--api-port` (default `5058`, `0` disables)
-- `--api-enabled` (default `true`)
-
-## Notes
-
-- `/states/finalized` is binary output. Use `curl -o` to avoid terminal warnings.
-- `weight` reflects current fork choice vote weight based on latest known attestations.
diff --git a/api/handlers/checkpoints.go b/api/handlers/checkpoints.go
deleted file mode 100644
index eb6b153..0000000
--- a/api/handlers/checkpoints.go
+++ /dev/null
@@ -1,31 +0,0 @@
-package handlers
-
-import (
-	"encoding/json"
-	"net/http"
-
-	apitypes "github.com/geanlabs/gean/api/types"
-	"github.com/geanlabs/gean/chain/forkchoice"
-)
-
-// JustifiedCheckpoint returns a handler for the justified checkpoint endpoint.
-func JustifiedCheckpoint(storeGetter func() *forkchoice.Store) http.HandlerFunc {
-	return func(w http.ResponseWriter, r *http.Request) {
-		if r.Method != http.MethodGet {
-			w.WriteHeader(http.StatusMethodNotAllowed)
-			return
-		}
-		store := storeGetter()
-		if store == nil {
-			http.Error(w, "Store not initialized", http.StatusServiceUnavailable)
-			return
-		}
-
-		snap := store.ForkChoiceSnapshot()
-		w.Header().Set("Content-Type", "application/json")
-		_ = json.NewEncoder(w).Encode(apitypes.CheckpointResponse{
-			Slot: snap.Justified.Slot,
-			Root: hex32(snap.Justified.Root),
-		})
-	}
-}
diff --git a/api/handlers/fork_choice.go b/api/handlers/fork_choice.go
deleted file mode 100644
index e21e081..0000000
--- a/api/handlers/fork_choice.go
+++ /dev/null
@@ -1,54 +0,0 @@
-package handlers
-
-import (
-	"encoding/json"
-	"net/http"
-
-	apitypes "github.com/geanlabs/gean/api/types"
-	"github.com/geanlabs/gean/chain/forkchoice"
-)
-
-// ForkChoice returns a handler for the fork choice endpoint.
-func ForkChoice(storeGetter func() *forkchoice.Store) http.HandlerFunc {
-	return func(w http.ResponseWriter, r *http.Request) {
-		if r.Method != http.MethodGet {
-			w.WriteHeader(http.StatusMethodNotAllowed)
-			return
-		}
-		store := storeGetter()
-		if store == nil {
-			http.Error(w, "Store not initialized", http.StatusServiceUnavailable)
-			return
-		}
-
-		snap := store.ForkChoiceSnapshot()
-		nodes := make([]apitypes.ForkChoiceNode, 0, len(snap.Nodes))
-		for _, n := range snap.Nodes {
-			nodes = append(nodes, apitypes.ForkChoiceNode{
-				Root:          hex32(n.Root),
-				Slot:          n.Slot,
-				ParentRoot:    hex32(n.ParentRoot),
-				ProposerIndex: n.ProposerIndex,
-				Weight:        n.Weight,
-			})
-		}
-
-		resp := apitypes.ForkChoiceResponse{
-			Nodes: nodes,
-			Head:  hex32(snap.Head),
-			Justified: apitypes.CheckpointResponse{
-				Slot: snap.Justified.Slot,
-				Root: hex32(snap.Justified.Root),
-			},
-			Finalized: apitypes.CheckpointResponse{
-				Slot: snap.Finalized.Slot,
-				Root: hex32(snap.Finalized.Root),
-			},
-			SafeTarget:     hex32(snap.SafeTarget),
-			ValidatorCount: snap.ValidatorCount,
-		}
-
-		w.Header().Set("Content-Type", "application/json")
-		_ = json.NewEncoder(w).Encode(resp)
-	}
-}
diff --git a/api/handlers/health.go b/api/handlers/health.go
deleted file mode 100644
index 1319295..0000000
--- a/api/handlers/health.go
+++ /dev/null
@@ -1,28 +0,0 @@
-package handlers
-
-import (
-	"encoding/json"
-	"net/http"
-
-	apitypes "github.com/geanlabs/gean/api/types"
-)
-
-const (
-	healthStatus  = "healthy"
-	healthService = "lean-rpc-api"
-)
-
-// Health returns a handler for the health endpoint.
-func Health() http.HandlerFunc {
-	return func(w http.ResponseWriter, r *http.Request) {
-		if r.Method != http.MethodGet {
-			w.WriteHeader(http.StatusMethodNotAllowed)
-			return
-		}
-		w.Header().Set("Content-Type", "application/json")
-		_ = json.NewEncoder(w).Encode(apitypes.HealthResponse{
-			Status:  healthStatus,
-			Service: healthService,
-		})
-	}
-}
diff --git a/api/handlers/states.go b/api/handlers/states.go
deleted file mode 100644
index 791426b..0000000
--- a/api/handlers/states.go
+++ /dev/null
@@ -1,35 +0,0 @@
-package handlers
-
-import (
-	"net/http"
-
-	"github.com/geanlabs/gean/chain/forkchoice"
-)
-
-// FinalizedState returns a handler for the finalized state endpoint.
-func FinalizedState(storeGetter func() *forkchoice.Store) http.HandlerFunc {
-	return func(w http.ResponseWriter, r *http.Request) {
-		if r.Method != http.MethodGet {
-			w.WriteHeader(http.StatusMethodNotAllowed)
-			return
-		}
-		store := storeGetter()
-		if store == nil {
-			http.Error(w, "Store not initialized", http.StatusServiceUnavailable)
-			return
-		}
-
-		sszBytes, ok, err := store.FinalizedStateSSZ()
-		if err != nil {
-			http.Error(w, "Encoding failed", http.StatusInternalServerError)
-			return
-		}
-		if !ok {
-			http.Error(w, "Finalized state not available", http.StatusNotFound)
-			return
-		}
-
-		w.Header().Set("Content-Type", "application/octet-stream")
-		_, _ = w.Write(sszBytes)
-	}
-}
diff --git a/api/handlers/util.go b/api/handlers/util.go
deleted file mode 100644
index a55ff01..0000000
--- a/api/handlers/util.go
+++ /dev/null
@@ -1,7 +0,0 @@
-package handlers
-
-import "encoding/hex"
-
-func hex32(root [32]byte) string {
-	return "0x" + hex.EncodeToString(root[:])
-}
diff --git a/api/httprest/router.go b/api/httprest/router.go
deleted file mode 100644
index 85d9c13..0000000
--- a/api/httprest/router.go
+++ /dev/null
@@ -1,18 +0,0 @@
-package httprest
-
-import (
-	"net/http"
-
-	"github.com/geanlabs/gean/api/handlers"
-	"github.com/geanlabs/gean/chain/forkchoice"
-)
-
-// NewMux constructs the HTTP router for the Lean API.
-func NewMux(storeGetter func() *forkchoice.Store) *http.ServeMux {
-	mux := http.NewServeMux()
-	mux.HandleFunc(HealthPath, handlers.Health())
-	mux.HandleFunc(FinalizedStatePath, handlers.FinalizedState(storeGetter))
-	mux.HandleFunc(JustifiedPath, handlers.JustifiedCheckpoint(storeGetter))
-	mux.HandleFunc(ForkChoicePath, handlers.ForkChoice(storeGetter))
-	return mux
-}
diff --git a/api/httprest/routes.go b/api/httprest/routes.go
deleted file mode 100644
index 9ed0cd6..0000000
--- a/api/httprest/routes.go
+++ /dev/null
@@ -1,8 +0,0 @@
-package httprest
-
-const (
-	HealthPath         = "/lean/v0/health"
-	FinalizedStatePath = "/lean/v0/states/finalized"
-	JustifiedPath      = "/lean/v0/checkpoints/justified"
-	ForkChoicePath     = "/lean/v0/fork_choice"
-)
diff --git a/api/server.go b/api/server.go
new file mode 100644
index 0000000..4206335
--- /dev/null
+++ b/api/server.go
@@ -0,0 +1,107 @@
+package api
+
+import (
+	"encoding/json"
+	"fmt"
+	"net"
+	"net/http"
+
+	"github.com/geanlabs/gean/logger"
+	"github.com/geanlabs/gean/node"
+	"github.com/geanlabs/gean/types"
+	"github.com/prometheus/client_golang/prometheus/promhttp"
+)
+
+// StartAPIServer starts the API server on the given address.
+func StartAPIServer(address string, s *node.ConsensusStore) error {
+	mux := http.NewServeMux()
+
+	mux.HandleFunc("GET /lean/v0/health", handleHealth)
+	mux.HandleFunc("GET /lean/v0/states/finalized", handleFinalizedState(s))
+	mux.HandleFunc("GET /lean/v0/checkpoints/justified", handleJustifiedCheckpoint(s))
+	mux.HandleFunc("GET /lean/v0/fork_choice", handleForkChoice(s))
+
+	listener, err := net.Listen("tcp", address)
+	if err != nil {
+		return fmt.Errorf("api listen: %w", err)
+	}
+
+	logger.Info(logger.Network, "api server listening on %s", address)
+	return http.Serve(listener, mux)
+}
+
+// StartMetricsServer starts the metrics server on the given address.
+func StartMetricsServer(address string) error {
+	mux := http.NewServeMux()
+	mux.Handle("GET /metrics", promhttp.Handler())
+
+	listener, err := net.Listen("tcp", address)
+	if err != nil {
+		return fmt.Errorf("metrics listen: %w", err)
+	}
+
+	logger.Info(logger.Network, "metrics server listening on %s", address)
+	return http.Serve(listener, mux)
+}
+
+// handleHealth returns a simple health check.
+func handleHealth(w http.ResponseWriter, r *http.Request) {
+	w.Header().Set("Content-Type", "application/json")
+	w.Write([]byte(`{"status":"healthy","service":"gean"}`))
+}
+
+// handleFinalizedState returns the finalized state as SSZ bytes.
+// Zeros state_root in latest_block_header for canonical post-state form.
+func handleFinalizedState(s *node.ConsensusStore) http.HandlerFunc {
+	return func(w http.ResponseWriter, r *http.Request) {
+		finalized := s.LatestFinalized()
+		state := s.GetState(finalized.Root)
+		if state == nil {
+			http.Error(w, "finalized state not available", http.StatusServiceUnavailable)
+			return
+		}
+
+		// Zero state_root to match canonical post-state representation.
+
+		state.LatestBlockHeader.StateRoot = types.ZeroRoot
+
+		data, err := state.MarshalSSZ()
+		if err != nil {
+			http.Error(w, "ssz marshal failed", http.StatusInternalServerError)
+			return
+		}
+
+		w.Header().Set("Content-Type", "application/octet-stream")
+		w.Write(data)
+	}
+}
+
+// handleJustifiedCheckpoint returns the justified checkpoint as JSON.
+func handleJustifiedCheckpoint(s *node.ConsensusStore) http.HandlerFunc {
+	return func(w http.ResponseWriter, r *http.Request) {
+		cp := s.LatestJustified()
+		w.Header().Set("Content-Type", "application/json")
+		json.NewEncoder(w).Encode(map[string]interface{}{
+			"slot": cp.Slot,
+			"root": fmt.Sprintf("0x%x", cp.Root),
+		})
+	}
+}
+
+// handleForkChoice returns fork choice info as JSON.
+func handleForkChoice(s *node.ConsensusStore) http.HandlerFunc {
+	return func(w http.ResponseWriter, r *http.Request) {
+		head := s.Head()
+		justified := s.LatestJustified()
+		finalized := s.LatestFinalized()
+		safeTarget := s.SafeTarget()
+
+		w.Header().Set("Content-Type", "application/json")
+		json.NewEncoder(w).Encode(map[string]interface{}{
+			"head":        fmt.Sprintf("0x%x", head),
+			"justified":   map[string]interface{}{"slot": justified.Slot, "root": fmt.Sprintf("0x%x", justified.Root)},
+			"finalized":   map[string]interface{}{"slot": finalized.Slot, "root": fmt.Sprintf("0x%x", finalized.Root)},
+			"safe_target": fmt.Sprintf("0x%x", safeTarget),
+		})
+	}
+}
diff --git a/api/server/config.go b/api/server/config.go
deleted file mode 100644
index 52de1fa..0000000
--- a/api/server/config.go
+++ /dev/null
@@ -1,8 +0,0 @@
-package server
-
-// Config holds API server configuration.
-type Config struct {
-	Host    string
-	Port    int
-	Enabled bool
-}
diff --git a/api/server/server.go b/api/server/server.go
deleted file mode 100644
index 6cbdd73..0000000
--- a/api/server/server.go
+++ /dev/null
@@ -1,87 +0,0 @@
-package server
-
-import (
-	"context"
-	"errors"
-	"fmt"
-	"log/slog"
-	"net"
-	"net/http"
-	"syscall"
-	"time"
-
-	"github.com/geanlabs/gean/api/httprest"
-	"github.com/geanlabs/gean/chain/forkchoice"
-	"github.com/geanlabs/gean/observability/logging"
-)
-
-// StoreGetter returns the current forkchoice store.
-type StoreGetter func() *forkchoice.Store
-
-// Server is a lightweight HTTP API server.
-type Server struct {
-	cfg         Config
-	storeGetter StoreGetter
-	httpServer  *http.Server
-	log         *slog.Logger
-}
-
-// New constructs a new API server.
-func New(cfg Config, storeGetter StoreGetter) *Server {
-	return &Server{
-		cfg:         cfg,
-		storeGetter: storeGetter,
-		log:         logging.NewComponentLogger(logging.CompAPI),
-	}
-}
-
-// Start launches the HTTP server in the background.
-func (s *Server) Start() error {
-	if !s.cfg.Enabled {
-		return nil
-	}
-	if s.httpServer != nil {
-		return nil
-	}
-
-	addr := fmt.Sprintf("%s:%d", s.cfg.Host, s.cfg.Port)
-	mux := httprest.NewMux(s.storeGetter)
-
-	ln, err := net.Listen("tcp", addr)
-	if err != nil {
-		if errors.Is(err, syscall.EADDRINUSE) {
-			s.log.Warn("api server already running; skipping", "addr", addr)
-			return nil
-		}
-		return err
-	}
-
-	s.httpServer = &http.Server{
-		Addr:         addr,
-		Handler:      mux,
-		ReadTimeout:  5 * time.Second,
-		WriteTimeout: 10 * time.Second,
-		IdleTimeout:  60 * time.Second,
-	}
-
-	go func() {
-		if err := s.httpServer.Serve(ln); err != nil && !errors.Is(err, http.ErrServerClosed) {
-			s.log.Error("api server error", "err", err)
-		}
-	}()
-
-	s.log.Info("api server started", "addr", addr)
-	return nil
-}
-
-// Stop gracefully shuts down the HTTP server.
-func (s *Server) Stop() {
-	if s.httpServer == nil {
-		return
-	}
-	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
-	defer cancel()
-	_ = s.httpServer.Shutdown(ctx)
-	s.httpServer = nil
-	s.log.Info("api server stopped")
-}
diff --git a/api/types/types.go b/api/types/types.go
deleted file mode 100644
index ff4439e..0000000
--- a/api/types/types.go
+++ /dev/null
@@ -1,32 +0,0 @@
-package types
-
-// HealthResponse is the JSON response for the health endpoint.
-type HealthResponse struct {
-	Status  string `json:"status"`
-	Service string `json:"service"`
-}
-
-// CheckpointResponse is the JSON response for checkpoint endpoints.
-type CheckpointResponse struct {
-	Slot uint64 `json:"slot"`
-	Root string `json:"root"`
-}
-
-// ForkChoiceNode is a JSON-safe fork choice node response.
-type ForkChoiceNode struct {
-	Root          string `json:"root"`
-	Slot          uint64 `json:"slot"`
-	ParentRoot    string `json:"parent_root"`
-	ProposerIndex uint64 `json:"proposer_index"`
-	Weight        int    `json:"weight"`
-}
-
-// ForkChoiceResponse is the JSON response for the fork choice endpoint.
-type ForkChoiceResponse struct {
-	Nodes          []ForkChoiceNode   `json:"nodes"`
-	Head           string             `json:"head"`
-	Justified      CheckpointResponse `json:"justified"`
-	Finalized      CheckpointResponse `json:"finalized"`
-	SafeTarget     string             `json:"safe_target"`
-	ValidatorCount uint64             `json:"validator_count"`
-}
diff --git a/chain/forkchoice/aggregated_payloads.go b/chain/forkchoice/aggregated_payloads.go
deleted file mode 100644
index 3d2cf81..0000000
--- a/chain/forkchoice/aggregated_payloads.go
+++ /dev/null
@@ -1,90 +0,0 @@
-package forkchoice
-
-import (
-	"bytes"
-
-	"github.com/geanlabs/gean/types"
-)
-
-type aggregatedPayload struct {
-	data   *types.AttestationData
-	proofs []*types.AggregatedSignatureProof
-}
-
-func makeAttestationDataKey(data *types.AttestationData) ([32]byte, bool) {
-	if data == nil {
-		return [32]byte{}, false
-	}
-	root, err := data.HashTreeRoot()
-	if err != nil {
-		return [32]byte{}, false
-	}
-	return root, true
-}
-
-func sameAggregatedProof(a, b *types.AggregatedSignatureProof) bool {
-	if a == nil || b == nil {
-		return false
-	}
-	return bytes.Equal(a.Participants, b.Participants) && bytes.Equal(a.ProofData, b.ProofData)
-}
-
-func addAggregatedPayload(dst map[[32]byte]aggregatedPayload, data *types.AttestationData, proof *types.AggregatedSignatureProof) {
-	if data == nil || proof == nil {
-		return
-	}
-	key, ok := makeAttestationDataKey(data)
-	if !ok {
-		return
-	}
-
-	payload := dst[key]
-	if payload.data == nil {
-		payload.data = data
-	}
-	for _, existing := range payload.proofs {
-		if sameAggregatedProof(existing, proof) {
-			dst[key] = payload
-			return
-		}
-	}
-	payload.proofs = append(payload.proofs, cloneAggregatedSignatureProof(proof))
-	dst[key] = payload
-}
-
-func mergeAggregatedPayloads(dst map[[32]byte]aggregatedPayload, src map[[32]byte]aggregatedPayload) map[[32]byte]aggregatedPayload {
-	if dst == nil {
-		dst = make(map[[32]byte]aggregatedPayload)
-	}
-	for _, payload := range src {
-		if payload.data == nil {
-			continue
-		}
-		for _, proof := range payload.proofs {
-			addAggregatedPayload(dst, payload.data, proof)
-		}
-	}
-	return dst
-}
-
-func extractAttestationsFromAggregatedPayloads(payloads map[[32]byte]aggregatedPayload) map[uint64]*types.SignedAttestation {
-	attestations := make(map[uint64]*types.SignedAttestation)
-	for _, payload := range payloads {
-		if payload.data == nil {
-			continue
-		}
-		for _, proof := range payload.proofs {
-			if proof == nil {
-				continue
-			}
-			for _, vid := range bitlistToValidatorIDs(proof.Participants) {
-				sa := &types.SignedAttestation{ValidatorID: vid, Message: payload.data}
-				existing := attestations[vid]
-				if existing == nil || existing.Message == nil || existing.Message.Slot < payload.data.Slot {
-					attestations[vid] = sa
-				}
-			}
-		}
-	}
-	return attestations
-}
diff --git a/chain/forkchoice/aggregation.go b/chain/forkchoice/aggregation.go
deleted file mode 100644
index dce8280..0000000
--- a/chain/forkchoice/aggregation.go
+++ /dev/null
@@ -1,515 +0,0 @@
-package forkchoice
-
-import (
-	"bytes"
-	"fmt"
-	"sort"
-	"time"
-
-	"github.com/geanlabs/gean/chain/statetransition"
-	"github.com/geanlabs/gean/observability/metrics"
-	"github.com/geanlabs/gean/types"
-	"github.com/geanlabs/gean/xmss/leanmultisig"
-)
-
-// aggregationInput holds pre-collected data for a single aggregation group,
-// extracted while holding the lock so the FFI call can run without it.
-type aggregationInput struct {
-	root       [32]byte
-	data       *types.AttestationData
-	bits       []byte
-	signerIDs  []uint64
-	pubkeys    [][]byte
-	signatures [][]byte
-	// cachedProof is non-nil if a reusable proof was found in the cache.
-	cachedProof *types.AggregatedSignatureProof
-}
-
-// AggregateCommitteeSignatures collects gossip signatures, builds aggregated
-// proofs, and returns SignedAggregatedAttestation objects ready for publishing.
-// Called by aggregators at interval 2.
-// -// The mutex is released during the expensive leanmultisig.Aggregate() FFI calls -// to prevent blocking block processing, attestation handling, and time advances. -// This matches zeam's pattern of using a separate signatures_mutex from the -// forkchoice lock (forkchoice.zig:308). -func (c *Store) AggregateCommitteeSignatures() ([]*types.SignedAggregatedAttestation, error) { - // Phase 1: Collect inputs while holding the lock. - inputs, err := c.collectAggregationInputs() - if err != nil { - return nil, err - } - if len(inputs) == 0 { - return nil, nil - } - - // Phase 2: Build proofs WITHOUT the lock — this is the expensive part. - results, err := buildAggregationProofs(inputs) - if err != nil { - return nil, fmt.Errorf("build aggregated proofs: %w", err) - } - - // Phase 3: Store results and build output while holding the lock. - return c.storeAggregationResults(results) -} - -// collectAggregationInputs extracts attestation data and signatures from the -// gossip cache while holding the lock. Clears consumed gossip signatures. -func (c *Store) collectAggregationInputs() ([]aggregationInput, error) { - c.mu.Lock() - defer c.mu.Unlock() - - headState, ok := c.storage.GetState(c.head) - if !ok { - return nil, fmt.Errorf("head state not found") - } - - // Collect full attestations from gossip signatures. - var attestations []*types.SignedAttestation - for key, stored := range c.gossipSignatures { - if stored.data == nil { - continue - } - attestations = append(attestations, &types.SignedAttestation{ - ValidatorID: key.validatorID, - Message: stored.data, - Signature: stored.signature, - }) - } - - if len(attestations) == 0 { - return nil, nil - } - - // Group by attestation data root, keep latest per validator. 
- grouped := make(map[[32]byte]map[uint64]*types.SignedAttestation) - dataByRoot := make(map[[32]byte]*types.AttestationData) - for _, sa := range attestations { - if sa == nil || sa.Message == nil { - continue - } - dataRoot, err := sa.Message.HashTreeRoot() - if err != nil { - return nil, fmt.Errorf("hash attestation data: %w", err) - } - if _, ok := grouped[dataRoot]; !ok { - grouped[dataRoot] = make(map[uint64]*types.SignedAttestation) - dataByRoot[dataRoot] = sa.Message - } - existing, ok := grouped[dataRoot][sa.ValidatorID] - if !ok || existing == nil || (existing.Message != nil && existing.Message.Slot < sa.Message.Slot) { - grouped[dataRoot][sa.ValidatorID] = sa - } - } - - if len(grouped) == 0 { - return nil, nil - } - - // Sort roots for deterministic ordering. - roots := make([][32]byte, 0, len(grouped)) - for root := range grouped { - roots = append(roots, root) - } - sort.Slice(roots, func(i, j int) bool { - return bytes.Compare(roots[i][:], roots[j][:]) < 0 - }) - - // Build inputs for each group, checking the proof cache while we have the lock. - var inputs []aggregationInput - for _, root := range roots { - group := grouped[root] - validatorIDs := make([]uint64, 0, len(group)) - for validatorID := range group { - validatorIDs = append(validatorIDs, validatorID) - } - sort.Slice(validatorIDs, func(i, j int) bool { return validatorIDs[i] < validatorIDs[j] }) - if len(validatorIDs) == 0 { - continue - } - - data := dataByRoot[root] - bits := makeAggregationBits(validatorIDs) - - // Check for a cached proof before collecting signatures. - if cached := c.findReusableAggregatedProof(data, validatorIDs, bits); cached != nil { - inputs = append(inputs, aggregationInput{ - root: root, - data: data, - bits: bits, - signerIDs: validatorIDs, - cachedProof: cached, - }) - continue - } - - // Collect pubkeys and signatures for this group. 
- signerIDs := make([]uint64, 0, len(validatorIDs)) - pubkeys := make([][]byte, 0, len(validatorIDs)) - signatures := make([][]byte, 0, len(validatorIDs)) - for _, validatorID := range validatorIDs { - if validatorID >= uint64(len(headState.Validators)) { - return nil, fmt.Errorf("validator index out of range: %d", validatorID) - } - pubkey := headState.Validators[validatorID].Pubkey - sa := group[validatorID] - if sa == nil || sa.Message == nil { - continue - } - - signature := sa.Signature - key, keyOK := makeSignatureKey(validatorID, data) - if keyOK { - if cached, ok := c.gossipSignatures[key]; ok && (!hasNonZeroSignature(signature) || cached.slot >= sa.Message.Slot) { - signature = cached.signature - } - } - if !hasNonZeroSignature(signature) { - continue - } - - signerIDs = append(signerIDs, validatorID) - pubkeys = append(pubkeys, pubkey[:]) - sig := make([]byte, len(signature)) - copy(sig, signature[:]) - signatures = append(signatures, sig) - } - if len(signerIDs) == 0 { - continue - } - - // Rebuild bits for actual signers (may differ from validatorIDs if some had zero sigs). - bits = makeAggregationBits(signerIDs) - - // Check cache again with actual signer set. - if cached := c.findReusableAggregatedProof(data, signerIDs, bits); cached != nil { - inputs = append(inputs, aggregationInput{ - root: root, - data: data, - bits: bits, - signerIDs: signerIDs, - cachedProof: cached, - }) - continue - } - - inputs = append(inputs, aggregationInput{ - root: root, - data: data, - bits: bits, - signerIDs: signerIDs, - pubkeys: pubkeys, - signatures: signatures, - }) - } - - // Clear consumed gossip signatures (spec: remove aggregated entries). - c.gossipSignatures = make(map[signatureKey]storedSignature) - - return inputs, nil -} - -// buildAggregationProofs runs the expensive leanmultisig.Aggregate() FFI calls -// WITHOUT holding the fork-choice mutex. This is the critical change that -// prevents consensus stalls during proof building. 
-func buildAggregationProofs(inputs []aggregationInput) ([]aggregationInput, error) { - proverReady := false - - for i := range inputs { - inp := &inputs[i] - if inp.cachedProof != nil { - log.Info("attestation aggregate proof reused (leanMultisig)", - "slot", inp.data.Slot, - "participants", len(inp.signerIDs), - "proof_size", fmt.Sprintf("%d bytes", len(inp.cachedProof.ProofData)), - ) - continue - } - - if !proverReady { - leanmultisig.SetupProver() - proverReady = true - } - - buildStart := time.Now() - proofData, err := leanmultisig.Aggregate(inp.pubkeys, inp.signatures, inp.root, uint32(inp.data.Slot)) - metrics.PQSigSignaturesBuildingTime.Observe(time.Since(buildStart).Seconds()) - if err != nil { - return nil, fmt.Errorf("aggregate signatures for slot %d: %w", inp.data.Slot, err) - } - - inp.cachedProof = &types.AggregatedSignatureProof{ - Participants: append([]byte(nil), inp.bits...), - ProofData: proofData, - } - log.Info("attestation aggregate proof built (leanMultisig)", - "slot", inp.data.Slot, - "participants", len(inp.signerIDs), - "proof_size", fmt.Sprintf("%d bytes", len(proofData)), - ) - } - - return inputs, nil -} - -// storeAggregationResults stores built proofs in the cache and assembles the -// final output while holding the lock. -func (c *Store) storeAggregationResults(inputs []aggregationInput) ([]*types.SignedAggregatedAttestation, error) { - c.mu.Lock() - defer c.mu.Unlock() - - result := make([]*types.SignedAggregatedAttestation, 0, len(inputs)) - for _, inp := range inputs { - if inp.cachedProof == nil { - continue - } - - // Store in cache for future reuse. 
- for _, validatorID := range inp.signerIDs { - c.storeAggregatedPayloadLocked(validatorID, inp.data, inp.cachedProof) - } - - result = append(result, &types.SignedAggregatedAttestation{ - Data: inp.data, - Proof: inp.cachedProof, - }) - metrics.PQSigAggregatedSignaturesTotal.Inc() - metrics.PQSigAttestationsInAggregatedTotal.Add(float64(len(inp.signerIDs))) - } - - return result, nil -} - -// buildAggregatedAttestationsFromSignedLocked builds aggregated attestations -// and proofs from a set of signed attestations. Must be called with c.mu held. -// Used by block production (produce.go) which already holds the lock and needs -// proofs synchronously before publishing the block. -func (c *Store) buildAggregatedAttestationsFromSignedLocked( - state *types.State, - attestations []*types.SignedAttestation, -) ([]*types.AggregatedAttestation, []*types.AggregatedSignatureProof, error) { - if len(attestations) == 0 { - return []*types.AggregatedAttestation{}, []*types.AggregatedSignatureProof{}, nil - } - - grouped := make(map[[32]byte]map[uint64]*types.SignedAttestation) - dataByRoot := make(map[[32]byte]*types.AttestationData) - for _, sa := range attestations { - if sa == nil || sa.Message == nil { - continue - } - dataRoot, err := sa.Message.HashTreeRoot() - if err != nil { - return nil, nil, fmt.Errorf("hash attestation data: %w", err) - } - if _, ok := grouped[dataRoot]; !ok { - grouped[dataRoot] = make(map[uint64]*types.SignedAttestation) - dataByRoot[dataRoot] = sa.Message - } - existing, ok := grouped[dataRoot][sa.ValidatorID] - if !ok || existing == nil || (existing.Message != nil && existing.Message.Slot < sa.Message.Slot) { - grouped[dataRoot][sa.ValidatorID] = sa - } - } - - if len(grouped) == 0 { - return []*types.AggregatedAttestation{}, []*types.AggregatedSignatureProof{}, nil - } - - roots := make([][32]byte, 0, len(grouped)) - for root := range grouped { - roots = append(roots, root) - } - sort.Slice(roots, func(i, j int) bool { - return 
bytes.Compare(roots[i][:], roots[j][:]) < 0 - }) - - aggregatedAttestations := make([]*types.AggregatedAttestation, 0, len(roots)) - attestationProofs := make([]*types.AggregatedSignatureProof, 0, len(roots)) - proverReady := false - for _, root := range roots { - group := grouped[root] - validatorIDs := make([]uint64, 0, len(group)) - for validatorID := range group { - validatorIDs = append(validatorIDs, validatorID) - } - sort.Slice(validatorIDs, func(i, j int) bool { return validatorIDs[i] < validatorIDs[j] }) - if len(validatorIDs) == 0 { - continue - } - - data := dataByRoot[root] - bits := makeAggregationBits(validatorIDs) - if cached := c.findReusableAggregatedProof(data, validatorIDs, bits); cached != nil { - aggregatedAttestations = append(aggregatedAttestations, &types.AggregatedAttestation{ - AggregationBits: bits, - Data: data, - }) - attestationProofs = append(attestationProofs, cached) - continue - } - - signerIDs := make([]uint64, 0, len(validatorIDs)) - pubkeys := make([][]byte, 0, len(validatorIDs)) - signatures := make([][]byte, 0, len(validatorIDs)) - for _, validatorID := range validatorIDs { - if validatorID >= uint64(len(state.Validators)) { - return nil, nil, fmt.Errorf("validator index out of range: %d", validatorID) - } - pubkey := state.Validators[validatorID].Pubkey - sa := group[validatorID] - if sa == nil || sa.Message == nil { - continue - } - signature := sa.Signature - key, keyOK := makeSignatureKey(validatorID, data) - if keyOK { - if cached, ok := c.gossipSignatures[key]; ok && (!hasNonZeroSignature(signature) || cached.slot >= sa.Message.Slot) { - signature = cached.signature - } - } - if !hasNonZeroSignature(signature) { - continue - } - signerIDs = append(signerIDs, validatorID) - pubkeys = append(pubkeys, pubkey[:]) - sig := make([]byte, len(signature)) - copy(sig, signature[:]) - signatures = append(signatures, sig) - } - if len(signerIDs) == 0 { - continue - } - - bits = makeAggregationBits(signerIDs) - proof := 
c.findReusableAggregatedProof(data, signerIDs, bits) - if proof == nil { - if !proverReady { - leanmultisig.SetupProver() - proverReady = true - } - proofData, err := leanmultisig.Aggregate(pubkeys, signatures, root, uint32(data.Slot)) - if err != nil { - return nil, nil, fmt.Errorf("aggregate signatures: %w", err) - } - proof = &types.AggregatedSignatureProof{ - Participants: append([]byte(nil), bits...), - ProofData: proofData, - } - for _, validatorID := range signerIDs { - c.storeAggregatedPayloadLocked(validatorID, data, proof) - } - } - - aggregatedAttestations = append(aggregatedAttestations, &types.AggregatedAttestation{ - AggregationBits: bits, - Data: data, - }) - attestationProofs = append(attestationProofs, proof) - metrics.PQSigAggregatedSignaturesTotal.Inc() - metrics.PQSigAttestationsInAggregatedTotal.Add(float64(len(signerIDs))) - } - - return aggregatedAttestations, attestationProofs, nil -} - -func bitlistToValidatorIDs(bits []byte) []uint64 { - numBits := uint64(statetransition.BitlistLen(bits)) - validatorIDs := make([]uint64, 0, numBits) - for i := uint64(0); i < numBits; i++ { - if statetransition.GetBit(bits, i) { - validatorIDs = append(validatorIDs, i) - } - } - return validatorIDs -} - -func bitlistsEqual(a, b []byte) bool { - aLen := statetransition.BitlistLen(a) - bLen := statetransition.BitlistLen(b) - if aLen != bLen { - return false - } - for i := 0; i < aLen; i++ { - idx := uint64(i) - if statetransition.GetBit(a, idx) != statetransition.GetBit(b, idx) { - return false - } - } - return true -} - -func makeAggregationBits(validatorIDs []uint64) []byte { - if len(validatorIDs) == 0 { - return []byte{0x01} - } - maxValidatorID := validatorIDs[len(validatorIDs)-1] - bits := statetransition.MakeBitlist(maxValidatorID + 1) - for _, validatorID := range validatorIDs { - bits = statetransition.SetBit(bits, validatorID, true) - } - return bits -} - -func hasNonZeroSignature(signature [types.XMSSSignatureSize]byte) bool { - for _, b := range 
signature { - if b != 0 { - return true - } - } - return false -} - -func (c *Store) findReusableAggregatedProof( - data *types.AttestationData, - validatorIDs []uint64, - participants []byte, -) *types.AggregatedSignatureProof { - if data == nil || len(validatorIDs) == 0 { - return nil - } - - firstKey, ok := makeSignatureKey(validatorIDs[0], data) - if !ok { - return nil - } - candidates := c.aggregatedPayloads[firstKey] - - for _, candidate := range candidates { - if candidate.proof == nil { - continue - } - if !bitlistsEqual(candidate.proof.Participants, participants) { - continue - } - - matchAll := true - for _, validatorID := range validatorIDs[1:] { - key, keyOK := makeSignatureKey(validatorID, data) - if !keyOK || !containsCachedProof(c.aggregatedPayloads[key], candidate.proof) { - matchAll = false - break - } - } - if matchAll { - return cloneAggregatedSignatureProof(candidate.proof) - } - } - - return nil -} - -func containsCachedProof(list []storedAggregatedPayload, target *types.AggregatedSignatureProof) bool { - for _, candidate := range list { - if candidate.proof == nil { - continue - } - if !bitlistsEqual(candidate.proof.Participants, target.Participants) { - continue - } - if bytes.Equal(candidate.proof.ProofData, target.ProofData) { - return true - } - } - return false -} diff --git a/chain/forkchoice/aggregation_test.go b/chain/forkchoice/aggregation_test.go deleted file mode 100644 index e15d1bb..0000000 --- a/chain/forkchoice/aggregation_test.go +++ /dev/null @@ -1,58 +0,0 @@ -package forkchoice - -import ( - "testing" - - "github.com/geanlabs/gean/chain/statetransition" -) - -func TestBitlistToValidatorIDs(t *testing.T) { - bits := statetransition.MakeBitlist(6) - bits = statetransition.SetBit(bits, 0, true) - bits = statetransition.SetBit(bits, 2, true) - bits = statetransition.SetBit(bits, 5, true) - - got := bitlistToValidatorIDs(bits) - want := []uint64{0, 2, 5} - - if len(got) != len(want) { - t.Fatalf("unexpected length: got %d, want 
%d", len(got), len(want)) - } - for i := range want { - if got[i] != want[i] { - t.Fatalf("unexpected validator id at %d: got %d, want %d", i, got[i], want[i]) - } - } -} - -func TestBitlistsEqual(t *testing.T) { - a := statetransition.MakeBitlist(4) - a = statetransition.SetBit(a, 1, true) - a = statetransition.SetBit(a, 3, true) - - b := statetransition.MakeBitlist(4) - b = statetransition.SetBit(b, 1, true) - b = statetransition.SetBit(b, 3, true) - - if !bitlistsEqual(a, b) { - t.Fatal("expected bitlists to be equal by bit values") - } - - b = statetransition.SetBit(b, 2, true) - if bitlistsEqual(a, b) { - t.Fatal("expected bitlists with different set bits to be unequal") - } -} - -func TestBitlistsEqual_DifferentBitlistLengths(t *testing.T) { - onlyZero := statetransition.MakeBitlist(1) - onlyZero = statetransition.SetBit(onlyZero, 0, true) - - zeroAndOne := statetransition.MakeBitlist(2) - zeroAndOne = statetransition.SetBit(zeroAndOne, 0, true) - zeroAndOne = statetransition.SetBit(zeroAndOne, 1, true) - - if bitlistsEqual(onlyZero, zeroAndOne) { - t.Fatal("expected bitlists with different lengths/participants to be unequal") - } -} diff --git a/chain/forkchoice/api_snapshot.go b/chain/forkchoice/api_snapshot.go deleted file mode 100644 index c531573..0000000 --- a/chain/forkchoice/api_snapshot.go +++ /dev/null @@ -1,128 +0,0 @@ -package forkchoice - -import ( - "sort" - - "github.com/geanlabs/gean/types" -) - -// ForkChoiceNode is a lightweight view of a block for API responses. -type ForkChoiceNode struct { - Root [32]byte - Slot uint64 - ParentRoot [32]byte - ProposerIndex uint64 - Weight int -} - -// ForkChoiceSnapshot is a read-only snapshot of fork choice state. -type ForkChoiceSnapshot struct { - Nodes []ForkChoiceNode - Head [32]byte - SafeTarget [32]byte - Justified types.Checkpoint - Finalized types.Checkpoint - ValidatorCount uint64 -} - -// ForkChoiceSnapshot returns a consistent fork choice snapshot for API responses. 
-func (c *Store) ForkChoiceSnapshot() ForkChoiceSnapshot { - c.mu.Lock() - defer c.mu.Unlock() - - blocks := c.allKnownBlockSummaries() - weights := computeBlockWeights(blocks, c.latestKnownAttestations) - - finalized := types.Checkpoint{} - if c.latestFinalized != nil { - finalized = *c.latestFinalized - } - justified := types.Checkpoint{} - if c.latestJustified != nil { - justified = *c.latestJustified - } - - nodes := make([]ForkChoiceNode, 0, len(blocks)) - for root, block := range blocks { - if block.Slot < finalized.Slot { - continue - } - nodes = append(nodes, ForkChoiceNode{ - Root: root, - Slot: block.Slot, - ParentRoot: block.ParentRoot, - ProposerIndex: block.ProposerIndex, - Weight: weights[root], - }) - } - - sort.Slice(nodes, func(i, j int) bool { - if nodes[i].Slot != nodes[j].Slot { - return nodes[i].Slot < nodes[j].Slot - } - return hashGreater(nodes[i].Root, nodes[j].Root) - }) - - validatorCount := uint64(0) - if headState, ok := c.storage.GetState(c.head); ok && headState != nil { - validatorCount = uint64(len(headState.Validators)) - } - - return ForkChoiceSnapshot{ - Nodes: nodes, - Head: c.head, - SafeTarget: c.safeTarget, - Justified: justified, - Finalized: finalized, - ValidatorCount: validatorCount, - } -} - -// FinalizedStateSSZ returns SSZ bytes for the latest finalized state. -// ok is false when the finalized state is not available. 
-func (c *Store) FinalizedStateSSZ() ([]byte, bool, error) { - c.mu.Lock() - if c.latestFinalized == nil { - c.mu.Unlock() - return nil, false, nil - } - root := c.latestFinalized.Root - c.mu.Unlock() - - state, ok := c.storage.GetState(root) - if !ok || state == nil { - return nil, false, nil - } - - sszBytes, err := state.MarshalSSZ() - if err != nil { - return nil, true, err - } - return sszBytes, true, nil -} - -func computeBlockWeights(blocks map[[32]byte]blockSummary, latestAttestations map[uint64]*types.SignedAttestation) map[[32]byte]int { - weights := make(map[[32]byte]int, len(blocks)) - for _, sa := range latestAttestations { - if sa == nil || sa.Message == nil || sa.Message.Head == nil { - continue - } - headRoot := sa.Message.Head.Root - if _, ok := blocks[headRoot]; !ok { - continue - } - blockHash := headRoot - for { - b, ok := blocks[blockHash] - if !ok { - break - } - weights[blockHash]++ - if b.Slot == 0 { - break - } - blockHash = b.ParentRoot - } - } - return weights -} diff --git a/chain/forkchoice/attestation.go b/chain/forkchoice/attestation.go deleted file mode 100644 index fe6cfb2..0000000 --- a/chain/forkchoice/attestation.go +++ /dev/null @@ -1,308 +0,0 @@ -package forkchoice - -import ( - "fmt" - "time" - - "github.com/geanlabs/gean/observability/metrics" - "github.com/geanlabs/gean/types" - "github.com/geanlabs/gean/xmss/leanmultisig" -) - -// ProcessAttestation processes an attestation from a block or for direct forkchoice inclusion. -func (c *Store) ProcessAttestation(sa *types.SignedAttestation) { - c.mu.Lock() - defer c.mu.Unlock() - - if c.NowFn != nil { - c.advanceTimeLockedMillis(c.NowFn(), false) - } - - c.processAttestationLocked(sa, false) - if c.isAggregator { - c.storeGossipSignatureLocked(sa) - } -} - -// ProcessSubnetAttestation processes an individual attestation from the subnet gossip topic. 
-// It validates and stores the gossip signature for aggregation, but does NOT update -// latestNewAttestations or latestKnownAttestations (no direct forkchoice influence). -func (c *Store) ProcessSubnetAttestation(sa *types.SignedAttestation) { - c.mu.Lock() - defer c.mu.Unlock() - - if c.NowFn != nil { - c.advanceTimeLockedMillis(c.NowFn(), false) - } - - c.processSubnetAttestationLocked(sa) -} - -func (c *Store) processSubnetAttestationLocked(sa *types.SignedAttestation) { - start := time.Now() - defer func() { - metrics.AttestationValidationTime.Observe(time.Since(start).Seconds()) - }() - - data := sa.Message - if data == nil { - metrics.AttestationsInvalid.WithLabelValues("subnet").Inc() - return - } - - if reason := c.validateAttestationData(data); reason != "" { - log.Debug("subnet attestation rejected", "reason", reason, "slot", data.Slot, "validator", sa.ValidatorID) - if !isTransientAttestationRejection(reason) { - metrics.AttestationsInvalid.WithLabelValues("subnet").Inc() - } - return - } - - // Verify signature. - if c.shouldVerifySignatures() { - if err := c.verifyAttestationSignature(sa); err != nil { - metrics.AttestationsInvalid.WithLabelValues("subnet").Inc() - return - } - } - - // Future attestation guard. - currentSlot := c.time / types.IntervalsPerSlot - if data.Slot > currentSlot { - return - } - - // Store gossip signature for aggregation — only if this node is an aggregator. 
- if c.isAggregator { - c.storeGossipSignatureLocked(sa) - } - metrics.AttestationsValid.WithLabelValues("subnet").Inc() -} - -func (c *Store) processAttestationLocked(sa *types.SignedAttestation, isFromBlock bool) { - start := time.Now() - defer func() { - metrics.AttestationValidationTime.Observe(time.Since(start).Seconds()) - }() - sourceLabel := "gossip" - if isFromBlock { - sourceLabel = "block" - } - - data := sa.Message - validatorID := sa.ValidatorID - - if data == nil { - metrics.AttestationsInvalid.WithLabelValues(sourceLabel).Inc() - return - } - - if reason := c.validateAttestationData(data); reason != "" { - log.Debug("attestation rejected", "reason", reason, "slot", data.Slot, "validator", validatorID) - // Unknown/future references are common during gossip races and sync lag. - // Keep invalid metric for deterministic/protocol-invalid cases. - if !isTransientAttestationRejection(reason) { - metrics.AttestationsInvalid.WithLabelValues(sourceLabel).Inc() - } - return - } - - // Verify signature (skip for on-chain attestations; already verified in ProcessBlock). - if !isFromBlock && c.shouldVerifySignatures() { - if err := c.verifyAttestationSignature(sa); err != nil { - metrics.AttestationsInvalid.WithLabelValues(sourceLabel).Inc() - return - } - } - - if isFromBlock { - // On-chain: update known attestations if this is newer. - existing, ok := c.latestKnownAttestations[validatorID] - if !ok || existing == nil || existing.Message == nil || existing.Message.Slot < data.Slot { - c.latestKnownAttestations[validatorID] = sa - } - // Remove from new attestations if superseded. - newAtt, ok := c.latestNewAttestations[validatorID] - if ok && newAtt != nil && newAtt.Message != nil && newAtt.Message.Slot <= data.Slot { - delete(c.latestNewAttestations, validatorID) - } - } else { - // Network gossip attestation processing — used by aggregated attestation path. 
- currentSlot := c.time / types.IntervalsPerSlot - if data.Slot > currentSlot { - return - } - - // Update new attestations for forkchoice consideration. - existing, ok := c.latestNewAttestations[validatorID] - if !ok || existing == nil || existing.Message == nil || existing.Message.Slot < data.Slot { - c.latestNewAttestations[validatorID] = sa - } - if c.isAggregator { - c.storeGossipSignatureLocked(sa) - } - } - - metrics.AttestationsValid.WithLabelValues(sourceLabel).Inc() -} - -func isTransientAttestationRejection(reason string) bool { - switch reason { - case "source block unknown", "target block unknown", "head block unknown", "attestation too far in future": - return true - default: - return false - } -} - -// verifyAttestationSignature verifies the XMSS signature on the attestation. -func (c *Store) verifyAttestationSignature(sa *types.SignedAttestation) error { - headState, ok := c.storage.GetState(c.head) - if !ok { - return fmt.Errorf("head state not found") - } - - return c.verifyAttestationSignatureWithState(headState, sa.ValidatorID, sa.Message, sa.Signature) -} - -// validateAttestationData performs attestation validation checks. -// Returns an empty string if valid, or a rejection reason. -func (c *Store) validateAttestationData(data *types.AttestationData) string { - if data == nil || data.Source == nil || data.Target == nil || data.Head == nil { - return "incomplete attestation data" - } - - // Availability check: source, target, and head blocks must exist. - sourceBlock, ok := c.lookupBlockSummary(data.Source.Root) - if !ok { - return "source block unknown" - } - targetBlock, ok := c.lookupBlockSummary(data.Target.Root) - if !ok { - return "target block unknown" - } - if _, ok := c.lookupBlockSummary(data.Head.Root); !ok { - return "head block unknown" - } - - // Topology check. 
- if sourceBlock.Slot > targetBlock.Slot { - return "source slot > target slot" - } - if data.Source.Slot > data.Target.Slot { - return "source slot > target slot" - } - - // Consistency check. - if sourceBlock.Slot != data.Source.Slot { - return "source checkpoint slot mismatch" - } - if targetBlock.Slot != data.Target.Slot { - return "target checkpoint slot mismatch" - } - - // Time check. - currentSlot := c.time / types.IntervalsPerSlot - if data.Slot > currentSlot+1 { - return "attestation too far in future" - } - - return "" -} - -// ProcessAggregatedAttestation processes an aggregated attestation from the aggregation gossip topic. -// It verifies the aggregated proof, expands participants into per-validator votes for forkchoice, -// and caches the proof for proposer reuse in block building. -func (c *Store) ProcessAggregatedAttestation(saa *types.SignedAggregatedAttestation) { - c.mu.Lock() - defer c.mu.Unlock() - - if c.NowFn != nil { - c.advanceTimeLockedMillis(c.NowFn(), false) - } - - c.processAggregatedAttestationLocked(saa) -} - -func (c *Store) processAggregatedAttestationLocked(saa *types.SignedAggregatedAttestation) { - start := time.Now() - defer func() { - metrics.AttestationValidationTime.Observe(time.Since(start).Seconds()) - }() - - if saa == nil || saa.Data == nil || saa.Proof == nil { - metrics.AttestationsInvalid.WithLabelValues("aggregation").Inc() - return - } - - data := saa.Data - proof := saa.Proof - - // Validate attestation data references. - if reason := c.validateAttestationData(data); reason != "" { - log.Debug("aggregated attestation rejected", "reason", reason, "slot", data.Slot) - if !isTransientAttestationRejection(reason) { - metrics.AttestationsInvalid.WithLabelValues("aggregation").Inc() - } - return - } - - // Extract validators from participants bitlist. 
- validatorIDs := bitlistToValidatorIDs(proof.Participants) - if len(validatorIDs) == 0 { - metrics.AttestationsInvalid.WithLabelValues("aggregation").Inc() - return - } - - // Verify aggregated proof signature. - if c.shouldVerifySignatures() { - headState, ok := c.storage.GetState(c.head) - if !ok { - log.Warn("head state not found for aggregated proof verification") - metrics.AttestationsInvalid.WithLabelValues("aggregation").Inc() - return - } - - pubkeys := make([][]byte, 0, len(validatorIDs)) - for _, vid := range validatorIDs { - if vid >= uint64(len(headState.Validators)) { - metrics.AttestationsInvalid.WithLabelValues("aggregation").Inc() - return - } - pubkey := headState.Validators[vid].Pubkey - pubkeys = append(pubkeys, pubkey[:]) - } - - messageRoot, err := data.HashTreeRoot() - if err != nil { - metrics.AttestationsInvalid.WithLabelValues("aggregation").Inc() - return - } - - leanmultisig.SetupVerifier() - verifyStart := time.Now() - if err := leanmultisig.VerifyAggregated(pubkeys, messageRoot, proof.ProofData, uint32(data.Slot)); err != nil { - metrics.PQSigAggregatedVerificationTime.Observe(time.Since(verifyStart).Seconds()) - metrics.PQSigAggregatedInvalidTotal.Inc() - log.Warn("aggregated attestation proof invalid", "slot", data.Slot, "participants", len(validatorIDs), "err", err) - metrics.AttestationsInvalid.WithLabelValues("aggregation").Inc() - return - } - metrics.PQSigAggregatedVerificationTime.Observe(time.Since(verifyStart).Seconds()) - metrics.PQSigAggregatedValidTotal.Inc() - log.Info("aggregated attestation proof verified", "slot", data.Slot, "participants", len(validatorIDs)) - } - - // Store into the aggregated payloads buffer. - // Attestations are expanded into per-validator votes during acceptNewAttestationsLocked. 
- addAggregatedPayload(c.latestNewAggregatedPayloads, data, proof) - metrics.LatestNewAggregatedPayloads.Set(float64(len(c.latestNewAggregatedPayloads))) - - // Also cache per-validator proof in aggregatedPayloads for proposer reuse. - for _, vid := range validatorIDs { - c.storeAggregatedPayloadLocked(vid, data, proof) - } - - metrics.AttestationsValid.WithLabelValues("aggregation").Inc() - log.Debug("processed aggregated attestation", "slot", data.Slot, "participants", len(validatorIDs)) -} diff --git a/chain/forkchoice/block.go b/chain/forkchoice/block.go deleted file mode 100644 index 1f0c7c4..0000000 --- a/chain/forkchoice/block.go +++ /dev/null @@ -1,211 +0,0 @@ -package forkchoice - -import ( - "fmt" - "time" - - "github.com/geanlabs/gean/chain/statetransition" - "github.com/geanlabs/gean/observability/metrics" - "github.com/geanlabs/gean/types" - "github.com/geanlabs/gean/xmss/leanmultisig" - "github.com/geanlabs/gean/xmss/leansig" -) - -func (c *Store) verifyAttestationSignatureWithState( - state *types.State, - validatorID uint64, - data *types.AttestationData, - sig [3112]byte, -) error { - if data == nil { - return fmt.Errorf("attestation data is nil") - } - valID := validatorID - if valID >= uint64(len(state.Validators)) { - return fmt.Errorf("invalid validator index %d", valID) - } - pubkey := state.Validators[valID].Pubkey - - messageRoot, err := data.HashTreeRoot() - if err != nil { - return fmt.Errorf("failed to hash attestation message: %w", err) - } - - signingSlot := uint32(data.Slot) - - verifyStart := time.Now() - if err := leansig.Verify(pubkey[:], signingSlot, messageRoot, sig[:]); err != nil { - metrics.PQSigAttestationVerificationTime.Observe(time.Since(verifyStart).Seconds()) - metrics.PQSigAttestationSignaturesInvalidTotal.Inc() - log.Warn("attestation signature invalid", "slot", data.Slot, "validator", valID, "err", err) - return fmt.Errorf("signature verification failed: %w", err) - } - 
metrics.PQSigAttestationVerificationTime.Observe(time.Since(verifyStart).Seconds()) - metrics.PQSigAttestationSignaturesValidTotal.Inc() - log.Info("attestation signature verified (XMSS)", "slot", data.Slot, "validator", valID, "sig_size", fmt.Sprintf("%d bytes", len(sig))) - return nil -} - -// ProcessBlock processes a new signed block envelope and updates chain state. -// Attestation processing follows leanSpec on_block ordering: -// 1. State transition on the bare block. -// 2. Process body attestations as on-chain votes (is_from_block=true). -// 3. Update head. -// 4. Process proposer attestation as gossip vote (is_from_block=false). -func (c *Store) ProcessBlock(envelope *types.SignedBlockWithAttestation) error { - start := time.Now() - c.mu.Lock() - defer c.mu.Unlock() - - if c.NowFn != nil { - c.advanceTimeLockedMillis(c.NowFn(), false) - } - - if envelope == nil || envelope.Message == nil || envelope.Message.Block == nil { - return fmt.Errorf("invalid block envelope") - } - block := envelope.Message.Block - blockHash, _ := block.HashTreeRoot() - - if _, ok := c.storage.GetBlock(blockHash); ok { - return nil // already known - } - - parentState, ok := c.storage.GetState(block.ParentRoot) - if !ok { - return fmt.Errorf("parent state not found for %x", block.ParentRoot) - } - - stStart := time.Now() - state, err := statetransition.StateTransition(parentState, block) - metrics.StateTransitionTime.Observe(time.Since(stStart).Seconds()) - if err != nil { - return fmt.Errorf("state_transition: %w", err) - } - - // Validate signature container shape. 
- numBodyAtts := len(block.Body.Attestations) - if len(envelope.Signature.AttestationSignatures) != numBodyAtts { - return fmt.Errorf( - "attestation signature proof count mismatch: got %d, want %d", - len(envelope.Signature.AttestationSignatures), - numBodyAtts, - ) - } - if envelope.Message.ProposerAttestation == nil || envelope.Message.ProposerAttestation.Data == nil { - return fmt.Errorf("missing proposer attestation") - } - - // Step 1b: Verify signatures (skipped when skip_sig_verify build tag is set). - if c.shouldVerifySignatures() { - leanmultisig.SetupVerifier() - - // Verify aggregated body attestations and their matching proofs. - for i, aggregated := range block.Body.Attestations { - if aggregated == nil || aggregated.Data == nil { - return fmt.Errorf("invalid body attestation at index %d", i) - } - proof := envelope.Signature.AttestationSignatures[i] - if proof == nil { - return fmt.Errorf("missing attestation signature proof at index %d", i) - } - if !bitlistsEqual(aggregated.AggregationBits, proof.Participants) { - return fmt.Errorf("participants mismatch for attestation %d", i) - } - - validatorIDs := bitlistToValidatorIDs(aggregated.AggregationBits) - if len(validatorIDs) == 0 { - return fmt.Errorf("empty aggregated attestation participants at index %d", i) - } - - pubkeys := make([][]byte, 0, len(validatorIDs)) - for _, validatorID := range validatorIDs { - if validatorID >= uint64(len(parentState.Validators)) { - return fmt.Errorf("invalid participant index %d at attestation %d", validatorID, i) - } - pubkey := parentState.Validators[validatorID].Pubkey - pubkeys = append(pubkeys, pubkey[:]) - } - - messageRoot, err := aggregated.Data.HashTreeRoot() - if err != nil { - return fmt.Errorf("hash aggregated attestation data %d: %w", i, err) - } - verifyStart := time.Now() - if err := leanmultisig.VerifyAggregated(pubkeys, messageRoot, proof.ProofData, uint32(aggregated.Data.Slot)); err != nil { - 
metrics.PQSigAggregatedVerificationTime.Observe(time.Since(verifyStart).Seconds()) - metrics.PQSigAggregatedInvalidTotal.Inc() - return fmt.Errorf("verify aggregated proof %d: %w", i, err) - } - metrics.PQSigAggregatedVerificationTime.Observe(time.Since(verifyStart).Seconds()) - metrics.PQSigAggregatedValidTotal.Inc() - log.Info( - "attestation aggregate proof verified (leanMultisig)", - "slot", aggregated.Data.Slot, - "participants", len(validatorIDs), - "proof_size", fmt.Sprintf("%d bytes", len(proof.ProofData)), - ) - } - - // Verify proposer signature (always individual XMSS). - proposerAtt := envelope.Message.ProposerAttestation - if err := c.verifyAttestationSignatureWithState( - parentState, - proposerAtt.ValidatorID, - proposerAtt.Data, - envelope.Signature.ProposerSignature, - ); err != nil { - return fmt.Errorf("invalid proposer attestation signature: %w", err) - } - } - - c.storage.PutBlock(blockHash, block) - c.storage.PutSignedBlock(blockHash, envelope) - c.storage.PutState(blockHash, state) - - // Update justified checkpoint from this block's post-state (monotonic). - if state.LatestJustified.Slot > c.latestJustified.Slot { - c.latestJustified = state.LatestJustified - } - // Update finalized checkpoint from this block's post-state (monotonic). - if state.LatestFinalized.Slot > c.latestFinalized.Slot { - c.latestFinalized = state.LatestFinalized - metrics.FinalizationsTotal.WithLabelValues("success").Inc() - metrics.LatestFinalizedSlot.Set(float64(state.LatestFinalized.Slot)) - c.pruneOnFinalization() - } - - // Step 2: Process body attestations as on-chain votes. 
- for i, aggregated := range block.Body.Attestations { - if aggregated == nil || aggregated.Data == nil { - continue - } - proof := envelope.Signature.AttestationSignatures[i] - addAggregatedPayload(c.latestKnownAggregatedPayloads, aggregated.Data, proof) - for _, validatorID := range bitlistToValidatorIDs(aggregated.AggregationBits) { - sa := &types.SignedAttestation{ - ValidatorID: validatorID, - Message: aggregated.Data, - } - c.processAttestationLocked(sa, true) - c.storeAggregatedPayloadLocked(validatorID, aggregated.Data, proof) - } - } - - // Step 3: Update head. - c.updateHeadLocked() - - // Step 4: Process proposer attestation as gossip vote (is_from_block=false). - proposerSA := &types.SignedAttestation{ - ValidatorID: envelope.Message.ProposerAttestation.ValidatorID, - Message: envelope.Message.ProposerAttestation.Data, - Signature: envelope.Signature.ProposerSignature, - } - c.processAttestationLocked(proposerSA, false) - if c.isAggregator { - c.storeGossipSignatureLocked(proposerSA) - } - - metrics.ForkChoiceBlockProcessingTime.Observe(time.Since(start).Seconds()) - return nil -} diff --git a/chain/forkchoice/ghost.go b/chain/forkchoice/ghost.go deleted file mode 100644 index e18fc3c..0000000 --- a/chain/forkchoice/ghost.go +++ /dev/null @@ -1,95 +0,0 @@ -package forkchoice - -import "github.com/geanlabs/gean/types" - -// GetForkChoiceHead uses LMD GHOST to find the head block from a given root. -func GetForkChoiceHead( - blocks map[[32]byte]blockSummary, - root [32]byte, - latestAttestations map[uint64]*types.SignedAttestation, - minScore int, -) [32]byte { - // Start at earliest block if root is zero hash. - if root == types.ZeroHash { - var earliest [32]byte - minSlot := uint64(^uint64(0)) - for h, b := range blocks { - if b.Slot < minSlot { - minSlot = b.Slot - earliest = h - } - } - root = earliest - } - - rootBlock, ok := blocks[root] - if !ok { - return root - } - rootSlot := rootBlock.Slot - - // Count votes for each block. 
Votes for descendants count toward ancestors. - voteWeights := make(map[[32]byte]int) - for _, sa := range latestAttestations { - if sa == nil || sa.Message == nil || sa.Message.Head == nil { - continue - } - headRoot := sa.Message.Head.Root - if _, ok := blocks[headRoot]; !ok { - continue - } - blockHash := headRoot - for { - b, exists := blocks[blockHash] - if !exists || b.Slot <= rootSlot { - break - } - voteWeights[blockHash]++ - blockHash = b.ParentRoot - } - } - - // Build children mapping for blocks above min score. - childrenMap := make(map[[32]byte][][32]byte) - for blockHash := range blocks { - block := blocks[blockHash] - if voteWeights[blockHash] >= minScore { - childrenMap[block.ParentRoot] = append(childrenMap[block.ParentRoot], blockHash) - } - } - - // Walk down tree, choosing child with most votes. - // Tiebreak: highest slot, then largest hash (lexicographic). - current := root - for { - children := childrenMap[current] - if len(children) == 0 { - return current - } - - best := children[0] - bestWeight := voteWeights[best] - bestSlot := blocks[best].Slot - for _, c := range children[1:] { - w := voteWeights[c] - s := blocks[c].Slot - if w > bestWeight || (w == bestWeight && s > bestSlot) || (w == bestWeight && s == bestSlot && hashGreater(c, best)) { - best = c - bestWeight = w - bestSlot = s - } - } - current = best - } -} -func hashGreater(a, b [32]byte) bool { - for i := 0; i < 32; i++ { - if a[i] > b[i] { - return true - } - if a[i] < b[i] { - return false - } - } - return false -} diff --git a/chain/forkchoice/helpers.go b/chain/forkchoice/helpers.go deleted file mode 100644 index 6f284a1..0000000 --- a/chain/forkchoice/helpers.go +++ /dev/null @@ -1,17 +0,0 @@ -package forkchoice - -import "github.com/geanlabs/gean/types" - -func containsAttestation(list []*types.Attestation, att *types.Attestation) bool { - for _, existing := range list { - if existing.ValidatorID == att.ValidatorID && - existing.Data.Slot == att.Data.Slot { - return true - 
} - } - return false -} - -func ceilDiv(a, b uint64) uint64 { - return (a + b - 1) / b -} diff --git a/chain/forkchoice/produce.go b/chain/forkchoice/produce.go deleted file mode 100644 index f775f90..0000000 --- a/chain/forkchoice/produce.go +++ /dev/null @@ -1,275 +0,0 @@ -package forkchoice - -import ( - "fmt" - "sort" - - "github.com/geanlabs/gean/chain/statetransition" - "github.com/geanlabs/gean/types" -) - -// Signer abstracts the signing capability (XMSS or mock). -type Signer interface { - Sign(signingSlot uint32, message [32]byte) ([]byte, error) -} - -// GetProposalHead returns the head for block proposal at the given slot. -func (c *Store) GetProposalHead(slot uint64) [32]byte { - c.mu.Lock() - defer c.mu.Unlock() - slotTime := c.genesisTime + slot*types.SecondsPerSlot - c.advanceTimeLocked(slotTime, true) - c.acceptNewAttestationsLocked() - return c.head -} - -// GetVoteTarget calculates the target checkpoint for validator votes. -func (c *Store) GetVoteTarget() (*types.Checkpoint, error) { - c.mu.Lock() - defer c.mu.Unlock() - return c.getVoteTargetLocked() -} - -func (c *Store) getVoteTargetLocked() (*types.Checkpoint, error) { - targetRoot := c.head - - // Walk back up to JustificationLookback steps if safe target is newer. - safeBlock, safeOK := c.lookupBlockSummary(c.safeTarget) - for i := 0; i < types.JustificationLookback; i++ { - tBlock, ok := c.lookupBlockSummary(targetRoot) - if ok && safeOK && tBlock.Slot > safeBlock.Slot { - targetRoot = tBlock.ParentRoot - } - } - - // Ensure target is in justifiable slot range. 
- for { - tBlock, ok := c.lookupBlockSummary(targetRoot) - if !ok { - break - } - if types.IsJustifiableAfter(tBlock.Slot, c.latestFinalized.Slot) { - break - } - targetRoot = tBlock.ParentRoot - } - - tBlock, ok := c.lookupBlockSummary(targetRoot) - if !ok { - return nil, fmt.Errorf("vote target block not found") - } - - // Ensure target is at or after the source (latest_justified) to maintain invariant: - // source.slot <= target.slot. This prevents creating invalid attestations where - // source slot exceeds target slot. If the calculated target is older than - // latest_justified, use latest_justified instead. - if tBlock.Slot < c.latestJustified.Slot { - return c.latestJustified, nil - } - - return &types.Checkpoint{Root: targetRoot, Slot: tBlock.Slot}, nil -} - -// ProduceBlock creates a new devnet-2 signed block envelope for the given slot. -func (c *Store) ProduceBlock(slot, validatorIndex uint64, signer Signer) (*types.SignedBlockWithAttestation, error) { - c.mu.Lock() - defer c.mu.Unlock() - - if !statetransition.IsProposer(validatorIndex, slot, c.numValidators) { - return nil, fmt.Errorf("validator %d is not proposer for slot %d", validatorIndex, slot) - } - - slotTime := c.genesisTime + slot*types.SecondsPerSlot - c.advanceTimeLocked(slotTime, true) - c.acceptNewAttestationsLocked() - headRoot := c.head - - headState, ok := c.storage.GetState(headRoot) - if !ok { - return nil, fmt.Errorf("head state not found") - } - - advancedState, err := statetransition.ProcessSlots(headState, slot) - if err != nil { - return nil, err - } - - selectedByValidator := make(map[uint64]*types.SignedAttestation) - selected := make([]*types.SignedAttestation, 0, len(c.latestKnownAttestations)) - - // Fixed-point collection: include votes whose source matches post-state justified. 
- for { - aggregatedAttestations, _, err := c.buildAggregatedAttestationsFromSignedLocked(headState, selected) - if err != nil { - return nil, err - } - - candidate := &types.Block{ - Slot: slot, - ProposerIndex: validatorIndex, - ParentRoot: headRoot, - StateRoot: types.ZeroHash, - Body: &types.BlockBody{Attestations: aggregatedAttestations}, - } - - postState, err := statetransition.ProcessBlock(advancedState, candidate) - if err != nil { - return nil, err - } - - added := false - for _, sa := range c.latestKnownAttestations { - if sa == nil || sa.Message == nil || sa.Message.Source == nil || sa.Message.Head == nil { - continue - } - if _, ok := c.storage.GetBlock(sa.Message.Head.Root); !ok { - continue - } - if sa.Message.Source.Root != postState.LatestJustified.Root || - sa.Message.Source.Slot != postState.LatestJustified.Slot { - continue - } - - existing, ok := selectedByValidator[sa.ValidatorID] - if ok && existing != nil && existing.Message != nil && existing.Message.Slot >= sa.Message.Slot { - continue - } - selectedByValidator[sa.ValidatorID] = sa - added = true - } - - if !added { - break - } - selected = orderedSignedAttestations(selectedByValidator) - } - - finalAttestations, attestationProofs, err := c.buildAggregatedAttestationsFromSignedLocked(headState, selected) - if err != nil { - return nil, err - } - - finalBlock := &types.Block{ - Slot: slot, - ProposerIndex: validatorIndex, - ParentRoot: headRoot, - StateRoot: types.ZeroHash, - Body: &types.BlockBody{Attestations: finalAttestations}, - } - finalState, err := statetransition.ProcessBlock(advancedState, finalBlock) - if err != nil { - return nil, err - } - stateRoot, _ := finalState.HashTreeRoot() - finalBlock.StateRoot = stateRoot - - blockHash, _ := finalBlock.HashTreeRoot() - voteTarget, err := c.getVoteTargetLocked() - if err != nil { - return nil, fmt.Errorf("vote target: %w", err) - } - - proposerAtt := &types.Attestation{ - ValidatorID: validatorIndex, - Data: &types.AttestationData{ - 
Slot: slot, - Head: &types.Checkpoint{Root: blockHash, Slot: slot}, - Target: voteTarget, - Source: &types.Checkpoint{Root: c.latestJustified.Root, Slot: c.latestJustified.Slot}, - }, - } - - envelope := &types.SignedBlockWithAttestation{ - Message: &types.BlockWithAttestation{ - Block: finalBlock, - ProposerAttestation: proposerAtt, - }, - Signature: types.BlockSignatures{ - AttestationSignatures: attestationProofs, - }, - } - - messageRoot, err := proposerAtt.Data.HashTreeRoot() - if err != nil { - return nil, fmt.Errorf("hash proposer attestation data: %w", err) - } - sig, err := signer.Sign(uint32(proposerAtt.Data.Slot), messageRoot) - if err != nil { - return nil, fmt.Errorf("sign proposer attestation: %w", err) - } - copy(envelope.Signature.ProposerSignature[:], sig) - - return envelope, nil -} - -// ProduceAttestation produces a signed attestation for the given slot and validator. -func (c *Store) ProduceAttestation(slot, validatorIndex uint64, signer Signer) (*types.SignedAttestation, error) { - c.mu.Lock() - defer c.mu.Unlock() - - // Advance and accept before voting (matches leanSpec produce_attestation_vote). - slotTime := c.genesisTime + slot*types.SecondsPerSlot - c.advanceTimeLocked(slotTime, true) - c.acceptNewAttestationsLocked() - headRoot := c.head - - headBlock, ok := c.storage.GetBlock(headRoot) - if !ok { - return nil, fmt.Errorf("head block not found") - } - - headCheckpoint := &types.Checkpoint{Root: headRoot, Slot: headBlock.Slot} - targetCheckpoint, err := c.getVoteTargetLocked() - if err != nil { - return nil, fmt.Errorf("vote target: %w", err) - } - - // Cannot produce valid attestation if target is strictly before the source. - // target == source is valid (e.g. genesis bootstrap where both are slot 0). 
- if targetCheckpoint.Slot < c.latestJustified.Slot { - return nil, fmt.Errorf("cannot produce valid attestation: target slot %d < source slot %d", - targetCheckpoint.Slot, c.latestJustified.Slot) - } - - data := &types.AttestationData{ - Slot: slot, - Head: headCheckpoint, - Target: targetCheckpoint, - Source: &types.Checkpoint{Root: c.latestJustified.Root, Slot: c.latestJustified.Slot}, - } - - messageRoot, err := data.HashTreeRoot() - if err != nil { - return nil, fmt.Errorf("hash attestation data: %w", err) - } - sig, err := signer.Sign(uint32(data.Slot), messageRoot) - if err != nil { - return nil, fmt.Errorf("sign attestation: %w", err) - } - - var sigBytes [3112]byte - copy(sigBytes[:], sig) - - return &types.SignedAttestation{ - ValidatorID: validatorIndex, - Message: data, - Signature: sigBytes, - }, nil -} - -func orderedSignedAttestations(indexed map[uint64]*types.SignedAttestation) []*types.SignedAttestation { - if len(indexed) == 0 { - return nil - } - validatorIDs := make([]uint64, 0, len(indexed)) - for validatorID := range indexed { - validatorIDs = append(validatorIDs, validatorID) - } - sort.Slice(validatorIDs, func(i, j int) bool { return validatorIDs[i] < validatorIDs[j] }) - - out := make([]*types.SignedAttestation, 0, len(indexed)) - for _, validatorID := range validatorIDs { - out = append(out, indexed[validatorID]) - } - return out -} diff --git a/chain/forkchoice/prune.go b/chain/forkchoice/prune.go deleted file mode 100644 index 0c28726..0000000 --- a/chain/forkchoice/prune.go +++ /dev/null @@ -1,226 +0,0 @@ -package forkchoice - -import ( - "github.com/geanlabs/gean/types" -) - -// Storage retention limits aligned with ethlambda (store.rs:83-92). -const ( - // blocksToKeep is ~1 day of block history at 4-second slots (86400/4). - blocksToKeep = 21_600 - - // statesToKeep is ~3.3 hours of state history at 4-second slots (12000/4). 
- statesToKeep = 3_000 - - // maxKnownAggregatedPayloads caps the known aggregated payloads map to - // prevent unbounded growth during stalled finalization. Matches - // ethlambda's AGGREGATED_PAYLOAD_CAP. - maxKnownAggregatedPayloads = 4096 - - // maxAggregatedPayloadKeys caps the aggregatedPayloads proof cache to - // prevent unbounded key growth. - maxAggregatedPayloadKeys = 8192 - - // periodicPruningInterval is the number of slots between periodic - // pruning passes. Acts as a safety net when finalization stalls. - // Matches zeam FORKCHOICE_PRUNING_INTERVAL_SLOTS (constants.zig:22). - periodicPruningInterval = 7200 - - // periodicPruningLagThreshold defines how far finalization must lag - // behind the current slot before periodic pruning kicks in. Prevents - // unnecessary pruning work when finalization is healthy. - // Matches zeam's guard: finalized.slot + 2*7200 < current_slot. - periodicPruningLagThreshold = 2 * periodicPruningInterval -) - -// pruneOnFinalization removes data that can no longer influence fork choice -// after finalization advances. Matches leanSpec prune_stale_attestation_data() -// (store.py:228-268). -// -// Must be called with c.mu held. -func (c *Store) pruneOnFinalization() { - finalizedSlot := c.latestFinalized.Slot - - c.pruneStaleAttestationData(finalizedSlot) - c.pruneAggregatedPayloadsCache(finalizedSlot) - c.pruneStorage(finalizedSlot) -} - -// pruneStaleAttestationData removes aggregated payload entries where the -// attestation target slot is at or before the finalized slot. Matches -// leanSpec store.py:245-268 which filters by target.slot > finalized_slot. 
-func (c *Store) pruneStaleAttestationData(finalizedSlot uint64) { - for key, payload := range c.latestKnownAggregatedPayloads { - if payload.data != nil && payload.data.Target != nil && payload.data.Target.Slot <= finalizedSlot { - delete(c.latestKnownAggregatedPayloads, key) - } - } - for key, payload := range c.latestNewAggregatedPayloads { - if payload.data != nil && payload.data.Target != nil && payload.data.Target.Slot <= finalizedSlot { - delete(c.latestNewAggregatedPayloads, key) - } - } -} - -// pruneAggregatedPayloadsCache removes signature cache entries for -// attestation data at or before the finalized slot. -func (c *Store) pruneAggregatedPayloadsCache(finalizedSlot uint64) { - for key, entries := range c.aggregatedPayloads { - if len(entries) > 0 && entries[0].slot <= finalizedSlot { - delete(c.aggregatedPayloads, key) - } - } - for key, stored := range c.gossipSignatures { - if stored.slot <= finalizedSlot { - delete(c.gossipSignatures, key) - } - } -} - -// pruneStorage removes blocks and states that are below the finalized slot -// and not on the canonical chain. Uses retention limits for blocks and states. -// -// Uses ForEachBlock to iterate without copying the full block map. -func (c *Store) pruneStorage(finalizedSlot uint64) { - if finalizedSlot == 0 { - return - } - - // Collect canonical chain roots by walking from head to finalized root. - canonical := make(map[[32]byte]struct{}) - current := c.head - for { - canonical[current] = struct{}{} - block, ok := c.storage.GetBlock(current) - if !ok { - break - } - if block.Slot <= finalizedSlot { - break - } - current = block.ParentRoot - } - // Always keep finalized and justified roots. - canonical[c.latestFinalized.Root] = struct{}{} - canonical[c.latestJustified.Root] = struct{}{} - - // State retention: prune canonical states older than this threshold. - // Matches ethlambda STATES_TO_KEEP (~3.3 hours). 
- var pruneStatesBelow uint64 - if finalizedSlot > statesToKeep { - pruneStatesBelow = finalizedSlot - statesToKeep - } - - // Single pass: collect roots to delete. We cannot delete during - // ForEachBlock iteration (bolt doesn't allow mutation during View tx). - var deleteRoots [][32]byte - var deleteStateOnlyRoots [][32]byte - - c.storage.ForEachBlock(func(root [32]byte, block *types.Block) bool { - if block.Slot >= finalizedSlot { - return true // keep: at or above finalized - } - - if _, ok := canonical[root]; !ok { - // Non-canonical block below finalized: delete everything. - deleteRoots = append(deleteRoots, root) - return true - } - - // Canonical block below finalized: keep block, but prune old states. - if pruneStatesBelow > 0 && block.Slot < pruneStatesBelow { - if root != c.latestFinalized.Root && root != c.latestJustified.Root { - deleteStateOnlyRoots = append(deleteStateOnlyRoots, root) - } - } - return true - }) - - for _, root := range deleteRoots { - c.storage.DeleteBlock(root) - c.storage.DeleteSignedBlock(root) - c.storage.DeleteState(root) - } - for _, root := range deleteStateOnlyRoots { - c.storage.DeleteState(root) - } - - if len(deleteRoots) > 0 || len(deleteStateOnlyRoots) > 0 { - log.Info("pruned storage on finalization", - "finalized_slot", finalizedSlot, - "blocks_deleted", len(deleteRoots), - "states_deleted", len(deleteRoots)+len(deleteStateOnlyRoots), - ) - } -} - -// enforcePayloadCap evicts the oldest entries from latestKnownAggregatedPayloads -// when the map exceeds maxKnownAggregatedPayloads. This bounds memory even when -// finalization stalls. Matches ethlambda's FIFO PayloadBuffer pattern. 
-func (c *Store) enforcePayloadCap() { - for len(c.latestKnownAggregatedPayloads) > maxKnownAggregatedPayloads { - var oldestKey [32]byte - oldestSlot := uint64(^uint64(0)) - found := false - for key, payload := range c.latestKnownAggregatedPayloads { - if payload.data != nil && payload.data.Target != nil && payload.data.Target.Slot < oldestSlot { - oldestSlot = payload.data.Target.Slot - oldestKey = key - found = true - } - } - if !found { - break - } - delete(c.latestKnownAggregatedPayloads, oldestKey) - } -} - -// maybePeriodicPruneLocked runs a pruning pass every periodicPruningInterval -// slots when finalization is lagging. This is a safety net that prevents -// unbounded memory growth even if finalization stalls for an extended period. -// Matches zeam's periodic pruning pattern (chain.zig:302-326). -// -// Must be called with c.mu held. -func (c *Store) maybePeriodicPruneLocked() { - currentSlot := c.time / types.IntervalsPerSlot - if currentSlot == 0 || currentSlot%periodicPruningInterval != 0 { - return - } - - finalizedSlot := c.latestFinalized.Slot - if finalizedSlot+periodicPruningLagThreshold >= currentSlot { - return // finalization is healthy, no need for periodic pruning - } - - log.Warn("finalization lagging, running periodic pruning", - "current_slot", currentSlot, - "finalized_slot", finalizedSlot, - "lag", currentSlot-finalizedSlot, - ) - - c.pruneStaleAttestationData(finalizedSlot) - c.pruneAggregatedPayloadsCache(finalizedSlot) - c.pruneStorage(finalizedSlot) -} - -// enforceAggregatedPayloadsCacheCap bounds the aggregatedPayloads proof cache -// keys to prevent unbounded growth independent of finalization. 
-func (c *Store) enforceAggregatedPayloadsCacheCap() { - for len(c.aggregatedPayloads) > maxAggregatedPayloadKeys { - var oldestKey signatureKey - oldestSlot := uint64(^uint64(0)) - found := false - for key, entries := range c.aggregatedPayloads { - if len(entries) > 0 && entries[0].slot < oldestSlot { - oldestSlot = entries[0].slot - oldestKey = key - found = true - } - } - if !found { - break - } - delete(c.aggregatedPayloads, oldestKey) - } -} diff --git a/chain/forkchoice/sig_verify.go b/chain/forkchoice/sig_verify.go deleted file mode 100644 index 943efe1..0000000 --- a/chain/forkchoice/sig_verify.go +++ /dev/null @@ -1,5 +0,0 @@ -//go:build !skip_sig_verify - -package forkchoice - -func (c *Store) shouldVerifySignatures() bool { return true } diff --git a/chain/forkchoice/sig_verify_skip.go b/chain/forkchoice/sig_verify_skip.go deleted file mode 100644 index 19a408c..0000000 --- a/chain/forkchoice/sig_verify_skip.go +++ /dev/null @@ -1,5 +0,0 @@ -//go:build skip_sig_verify - -package forkchoice - -func (c *Store) shouldVerifySignatures() bool { return false } diff --git a/chain/forkchoice/signature_cache.go b/chain/forkchoice/signature_cache.go deleted file mode 100644 index 874cb69..0000000 --- a/chain/forkchoice/signature_cache.go +++ /dev/null @@ -1,103 +0,0 @@ -package forkchoice - -import ( - "bytes" - - "github.com/geanlabs/gean/observability/metrics" - "github.com/geanlabs/gean/types" -) - -type signatureKey struct { - validatorID uint64 - dataRoot [32]byte -} - -type storedSignature struct { - slot uint64 - data *types.AttestationData - signature [types.XMSSSignatureSize]byte -} - -type storedAggregatedPayload struct { - slot uint64 - proof *types.AggregatedSignatureProof -} - -func makeSignatureKey(validatorID uint64, data *types.AttestationData) (signatureKey, bool) { - if data == nil { - return signatureKey{}, false - } - dataRoot, err := data.HashTreeRoot() - if err != nil { - return signatureKey{}, false - } - return signatureKey{ - validatorID: 
validatorID, - dataRoot: dataRoot, - }, true -} - -func cloneAggregatedSignatureProof(proof *types.AggregatedSignatureProof) *types.AggregatedSignatureProof { - if proof == nil { - return nil - } - return &types.AggregatedSignatureProof{ - Participants: append([]byte(nil), proof.Participants...), - ProofData: append([]byte(nil), proof.ProofData...), - } -} - -func (c *Store) storeGossipSignatureLocked(sa *types.SignedAttestation) { - if sa == nil || sa.Message == nil { - return - } - key, ok := makeSignatureKey(sa.ValidatorID, sa.Message) - if !ok { - return - } - existing, exists := c.gossipSignatures[key] - if !exists || existing.slot <= sa.Message.Slot { - c.gossipSignatures[key] = storedSignature{ - slot: sa.Message.Slot, - data: sa.Message, - signature: sa.Signature, - } - } - metrics.GossipSignaturesCount.Set(float64(len(c.gossipSignatures))) -} - -func (c *Store) storeAggregatedPayloadLocked( - validatorID uint64, - data *types.AttestationData, - proof *types.AggregatedSignatureProof, -) { - if proof == nil || data == nil { - return - } - key, ok := makeSignatureKey(validatorID, data) - if !ok { - return - } - entry := storedAggregatedPayload{ - slot: data.Slot, - proof: cloneAggregatedSignatureProof(proof), - } - - existing := c.aggregatedPayloads[key] - for _, current := range existing { - if current.proof == nil { - continue - } - if bytes.Equal(current.proof.Participants, proof.Participants) && - bytes.Equal(current.proof.ProofData, proof.ProofData) { - return - } - } - - existing = append(existing, entry) - const maxProofsPerKey = 8 - if len(existing) > maxProofsPerKey { - existing = existing[len(existing)-maxProofsPerKey:] - } - c.aggregatedPayloads[key] = existing -} diff --git a/chain/forkchoice/store.go b/chain/forkchoice/store.go deleted file mode 100644 index 94a8843..0000000 --- a/chain/forkchoice/store.go +++ /dev/null @@ -1,297 +0,0 @@ -package forkchoice - -import ( - "fmt" - "sync" - - "github.com/geanlabs/gean/observability/logging" - 
"github.com/geanlabs/gean/storage" - "github.com/geanlabs/gean/types" -) - -var log = logging.NewComponentLogger(logging.CompForkChoice) - -type blockSummary struct { - Slot uint64 - ParentRoot [32]byte - ProposerIndex uint64 -} - -// Store tracks chain state and validator votes for the LMD GHOST algorithm. -type Store struct { - mu sync.Mutex - - time uint64 - genesisTime uint64 - numValidators uint64 - head [32]byte - safeTarget [32]byte - - latestJustified *types.Checkpoint - latestFinalized *types.Checkpoint - storage storage.Store - checkpointRoots map[[32]byte]blockSummary - isAggregator bool - - latestKnownAttestations map[uint64]*types.SignedAttestation - latestNewAttestations map[uint64]*types.SignedAttestation - latestKnownAggregatedPayloads map[[32]byte]aggregatedPayload - latestNewAggregatedPayloads map[[32]byte]aggregatedPayload - gossipSignatures map[signatureKey]storedSignature - aggregatedPayloads map[signatureKey][]storedAggregatedPayload - - NowFn func() uint64 -} - -// ChainStatus is a snapshot of the fork choice head and checkpoint state. -type ChainStatus struct { - Head [32]byte - HeadSlot uint64 - JustifiedRoot [32]byte - JustifiedSlot uint64 - FinalizedRoot [32]byte - FinalizedSlot uint64 -} - -// GetStatus returns a consistent snapshot of the chain head and checkpoints. -func (c *Store) GetStatus() ChainStatus { - c.mu.Lock() - defer c.mu.Unlock() - headSlot := uint64(0) - if head, ok := c.lookupBlockSummary(c.head); ok { - headSlot = head.Slot - } - return ChainStatus{ - Head: c.head, - HeadSlot: headSlot, - JustifiedRoot: c.latestJustified.Root, - JustifiedSlot: c.latestJustified.Slot, - FinalizedRoot: c.latestFinalized.Root, - FinalizedSlot: c.latestFinalized.Slot, - } -} - -// NumValidators returns the number of validators in the store. -func (c *Store) NumValidators() uint64 { - return c.numValidators -} - -// GetBlock retrieves a block by its root hash. 
-func (c *Store) GetBlock(root [32]byte) (*types.Block, bool) { - return c.storage.GetBlock(root) -} - -// GetSignedBlock retrieves a signed block envelope by its root hash. -func (c *Store) GetSignedBlock(root [32]byte) (*types.SignedBlockWithAttestation, bool) { - return c.storage.GetSignedBlock(root) -} - -// HasState returns true if the state for the given block root exists. -// This is used by sync to verify chain connectivity: ProcessBlock requires -// the parent state, not just the parent block, to succeed. -func (c *Store) HasState(root [32]byte) bool { - _, ok := c.storage.GetState(root) - return ok -} - -// GetKnownAttestation returns the latest known attestation for a validator. -func (c *Store) GetKnownAttestation(validator uint64) (*types.SignedAttestation, bool) { - c.mu.Lock() - defer c.mu.Unlock() - sa, ok := c.latestKnownAttestations[validator] - return sa, ok -} - -// GetNewAttestation returns the latest new (pending) attestation for a validator. -func (c *Store) GetNewAttestation(validator uint64) (*types.SignedAttestation, bool) { - c.mu.Lock() - defer c.mu.Unlock() - sa, ok := c.latestNewAttestations[validator] - return sa, ok -} - -// SetIsAggregator configures whether this store's node acts as an aggregator. -func (c *Store) SetIsAggregator(isAggregator bool) { - c.mu.Lock() - defer c.mu.Unlock() - c.isAggregator = isAggregator -} - -// RestoreFromDB reconstructs a fork-choice store from persisted blocks and states. -// It finds the highest-slot block as the head and derives checkpoints from its state. -// Returns nil if the database is empty. -func RestoreFromDB(store storage.Store) *Store { - allBlocks := store.GetAllBlocks() - if len(allBlocks) == 0 { - return nil - } - - // Find the block with the highest slot (chain head). 
- var headRoot [32]byte - var headBlock *types.Block - for root, blk := range allBlocks { - if headBlock == nil || blk.Slot > headBlock.Slot { - headRoot = root - headBlock = blk - } - } - - headState, ok := store.GetState(headRoot) - if !ok { - return nil - } - - return &Store{ - time: headBlock.Slot * types.IntervalsPerSlot, - genesisTime: headState.Config.GenesisTime, - numValidators: uint64(len(headState.Validators)), - head: headRoot, - safeTarget: headState.LatestFinalized.Root, - latestJustified: headState.LatestJustified, - latestFinalized: headState.LatestFinalized, - storage: store, - checkpointRoots: buildCheckpointRootIndex(headState, headRoot), - latestKnownAttestations: make(map[uint64]*types.SignedAttestation), - latestNewAttestations: make(map[uint64]*types.SignedAttestation), - latestKnownAggregatedPayloads: make(map[[32]byte]aggregatedPayload), - latestNewAggregatedPayloads: make(map[[32]byte]aggregatedPayload), - gossipSignatures: make(map[signatureKey]storedSignature), - aggregatedPayloads: make(map[signatureKey][]storedAggregatedPayload), - } -} - -// NewStore initializes a store from an anchor state and block. 
-func NewStore(state *types.State, anchorBlock *types.Block, store storage.Store) *Store { - stateRoot, _ := state.HashTreeRoot() - if anchorBlock.StateRoot != stateRoot { - panic(fmt.Sprintf("anchor block state root mismatch: block=%x state=%x", anchorBlock.StateRoot, stateRoot)) - } - - anchorRoot, _ := anchorBlock.HashTreeRoot() - - store.PutBlock(anchorRoot, anchorBlock) - store.PutSignedBlock(anchorRoot, &types.SignedBlockWithAttestation{ - Message: &types.BlockWithAttestation{Block: anchorBlock}, - }) - store.PutState(anchorRoot, state) - - return &Store{ - time: anchorBlock.Slot * types.IntervalsPerSlot, - genesisTime: state.Config.GenesisTime, - numValidators: uint64(len(state.Validators)), - head: anchorRoot, - safeTarget: anchorRoot, - latestJustified: &types.Checkpoint{Root: anchorRoot, Slot: anchorBlock.Slot}, - latestFinalized: &types.Checkpoint{Root: anchorRoot, Slot: anchorBlock.Slot}, - storage: store, - checkpointRoots: nil, - latestKnownAttestations: make(map[uint64]*types.SignedAttestation), - latestNewAttestations: make(map[uint64]*types.SignedAttestation), - latestKnownAggregatedPayloads: make(map[[32]byte]aggregatedPayload), - latestNewAggregatedPayloads: make(map[[32]byte]aggregatedPayload), - gossipSignatures: make(map[signatureKey]storedSignature), - aggregatedPayloads: make(map[signatureKey][]storedAggregatedPayload), - } -} - -// NewStoreFromCheckpointState initializes a store from a checkpoint state. -// The checkpoint state is expected to have a latest block header whose -// hash-tree-root matches anchorRoot after the header state root has been set. 
-func NewStoreFromCheckpointState(state *types.State, anchorRoot [32]byte, store storage.Store) *Store { - anchorHeader := state.LatestBlockHeader - anchorBlock := &types.Block{ - Slot: anchorHeader.Slot, - ProposerIndex: anchorHeader.ProposerIndex, - ParentRoot: anchorHeader.ParentRoot, - StateRoot: anchorHeader.StateRoot, - Body: emptyCheckpointBody(), - } - - store.PutBlock(anchorRoot, anchorBlock) - store.PutSignedBlock(anchorRoot, &types.SignedBlockWithAttestation{ - Message: &types.BlockWithAttestation{Block: anchorBlock}, - }) - store.PutState(anchorRoot, state) - - return &Store{ - time: state.Slot * types.IntervalsPerSlot, - genesisTime: state.Config.GenesisTime, - numValidators: uint64(len(state.Validators)), - head: anchorRoot, - safeTarget: state.LatestFinalized.Root, - latestJustified: &types.Checkpoint{Root: state.LatestJustified.Root, Slot: state.LatestJustified.Slot}, - latestFinalized: &types.Checkpoint{Root: state.LatestFinalized.Root, Slot: state.LatestFinalized.Slot}, - storage: store, - checkpointRoots: buildCheckpointRootIndex(state, anchorRoot), - latestKnownAttestations: make(map[uint64]*types.SignedAttestation), - latestNewAttestations: make(map[uint64]*types.SignedAttestation), - latestKnownAggregatedPayloads: make(map[[32]byte]aggregatedPayload), - latestNewAggregatedPayloads: make(map[[32]byte]aggregatedPayload), - gossipSignatures: make(map[signatureKey]storedSignature), - aggregatedPayloads: make(map[signatureKey][]storedAggregatedPayload), - } -} - -func emptyCheckpointBody() *types.BlockBody { - return &types.BlockBody{Attestations: []*types.AggregatedAttestation{}} -} - -func summarizeBlock(block *types.Block) blockSummary { - return blockSummary{ - Slot: block.Slot, - ParentRoot: block.ParentRoot, - ProposerIndex: block.ProposerIndex, - } -} - -func buildCheckpointRootIndex(state *types.State, anchorRoot [32]byte) map[[32]byte]blockSummary { - if state == nil || state.LatestBlockHeader == nil { - return nil - } - - refs := 
make(map[[32]byte]blockSummary, len(state.HistoricalBlockHashes)+1) - lastNonZeroRoot := types.ZeroHash - for slot, root := range state.HistoricalBlockHashes { - if root == types.ZeroHash { - continue - } - refs[root] = blockSummary{ - Slot: uint64(slot), - ParentRoot: lastNonZeroRoot, - } - lastNonZeroRoot = root - } - - refs[anchorRoot] = blockSummary{ - Slot: state.LatestBlockHeader.Slot, - ParentRoot: state.LatestBlockHeader.ParentRoot, - ProposerIndex: state.LatestBlockHeader.ProposerIndex, - } - return refs -} - -func (c *Store) lookupBlockSummary(root [32]byte) (blockSummary, bool) { - if block, ok := c.storage.GetBlock(root); ok { - return summarizeBlock(block), true - } - if c.checkpointRoots == nil { - return blockSummary{}, false - } - summary, ok := c.checkpointRoots[root] - return summary, ok -} - -func (c *Store) allKnownBlockSummaries() map[[32]byte]blockSummary { - summaries := make(map[[32]byte]blockSummary, len(c.checkpointRoots)) - // Iterate storage without copying the full block map. 
- c.storage.ForEachBlock(func(root [32]byte, block *types.Block) bool { - summaries[root] = summarizeBlock(block) - return true - }) - for root, summary := range c.checkpointRoots { - if _, ok := summaries[root]; !ok { - summaries[root] = summary - } - } - return summaries -} diff --git a/chain/forkchoice/store_test.go b/chain/forkchoice/store_test.go deleted file mode 100644 index 4570ac7..0000000 --- a/chain/forkchoice/store_test.go +++ /dev/null @@ -1,95 +0,0 @@ -package forkchoice - -import ( - "testing" - - "github.com/geanlabs/gean/storage/memory" - "github.com/geanlabs/gean/types" -) - -func TestNewStoreFromCheckpointState(t *testing.T) { - state := makeCheckpointState() - anchorRoot := prepareCheckpointStateForStore(t, state) - store := memory.New() - - fc := NewStoreFromCheckpointState(state, anchorRoot, store) - - if _, ok := store.GetBlock(state.LatestJustified.Root); ok { - t.Fatal("expected no stored placeholder for justified checkpoint root") - } - if _, ok := store.GetBlock(state.LatestFinalized.Root); ok { - t.Fatal("expected no stored placeholder for finalized checkpoint root") - } - - if proposalHead := fc.GetProposalHead(state.Slot + 1); proposalHead != anchorRoot { - t.Fatalf("proposal head = %x, want %x", proposalHead, anchorRoot) - } - - target, err := fc.GetVoteTarget() - if err != nil { - t.Fatalf("GetVoteTarget returned error: %v", err) - } - if target.Root != state.LatestJustified.Root { - t.Fatalf("vote target root = %x, want %x", target.Root, state.LatestJustified.Root) - } - - valid := fc.validateAttestationData(&types.AttestationData{ - Slot: 3, - Head: &types.Checkpoint{Root: anchorRoot, Slot: 3}, - Source: &types.Checkpoint{Root: state.LatestJustified.Root, Slot: state.LatestJustified.Slot}, - Target: &types.Checkpoint{Root: anchorRoot, Slot: 3}, - }) - if valid != "" { - t.Fatalf("validateAttestationData returned %q, want success", valid) - } -} - -func prepareCheckpointStateForStore(t *testing.T, state *types.State) [32]byte { - 
t.Helper() - - originalStateRoot := state.LatestBlockHeader.StateRoot - state.LatestBlockHeader.StateRoot = types.ZeroHash - stateRoot, err := state.HashTreeRoot() - if err != nil { - t.Fatalf("HashTreeRoot returned error: %v", err) - } - state.LatestBlockHeader.StateRoot = originalStateRoot - - prepared := state.Copy() - prepared.LatestBlockHeader.StateRoot = stateRoot - anchorRoot, err := prepared.LatestBlockHeader.HashTreeRoot() - if err != nil { - t.Fatalf("HashTreeRoot header returned error: %v", err) - } - *state = *prepared - return anchorRoot -} - -func makeCheckpointState() *types.State { - emptyBody := &types.BlockBody{Attestations: []*types.AggregatedAttestation{}} - bodyRoot, _ := emptyBody.HashTreeRoot() - - validators := []*types.Validator{ - {Index: 0, Pubkey: [52]byte{0x01}}, - {Index: 1, Pubkey: [52]byte{0x02}}, - } - - return &types.State{ - Config: &types.Config{GenesisTime: 1234}, - Slot: 3, - LatestBlockHeader: &types.BlockHeader{ - Slot: 3, - ProposerIndex: 0, - ParentRoot: [32]byte{0x11}, - StateRoot: types.ZeroHash, - BodyRoot: bodyRoot, - }, - LatestJustified: &types.Checkpoint{Root: [32]byte{0x11}, Slot: 2}, - LatestFinalized: &types.Checkpoint{Root: [32]byte{0x22}, Slot: 1}, - HistoricalBlockHashes: [][32]byte{{0x33}, {0x22}, {0x11}}, - JustifiedSlots: []byte{0x01}, - Validators: validators, - JustificationsRoots: [][32]byte{}, - JustificationsValidators: []byte{0x01}, - } -} diff --git a/chain/forkchoice/time.go b/chain/forkchoice/time.go deleted file mode 100644 index 3b605a3..0000000 --- a/chain/forkchoice/time.go +++ /dev/null @@ -1,211 +0,0 @@ -package forkchoice - -import ( - "github.com/geanlabs/gean/observability/logging" - "github.com/geanlabs/gean/observability/metrics" - "github.com/geanlabs/gean/types" -) - -// AdvanceTime advances the chain to the given unix time in seconds. -// -// This wrapper is kept for second-based callers. 
-func (c *Store) AdvanceTime(unixSeconds uint64, hasProposal bool) { - c.mu.Lock() - defer c.mu.Unlock() - c.advanceTimeLocked(unixSeconds, hasProposal) -} - -// AdvanceTimeMillis advances the chain to the given unix time in milliseconds. -func (c *Store) AdvanceTimeMillis(unixMillis uint64, hasProposal bool) { - c.mu.Lock() - defer c.mu.Unlock() - c.advanceTimeLockedMillis(unixMillis, hasProposal) -} - -func (c *Store) advanceTimeLocked(unixSeconds uint64, hasProposal bool) { - c.advanceTimeLockedMillis(unixSeconds*1000, hasProposal) -} - -func (c *Store) advanceTimeLockedMillis(unixMillis uint64, hasProposal bool) { - genesisTimeMillis := c.genesisTime * 1000 - if unixMillis <= genesisTimeMillis { - return - } - tickInterval := (unixMillis - genesisTimeMillis) / types.MillisecondsPerInterval - for c.time < tickInterval { - shouldSignal := hasProposal && (c.time+1) == tickInterval - c.tickIntervalLocked(shouldSignal) - } -} - -// TickInterval advances by one interval and performs interval-specific actions. -func (c *Store) TickInterval(hasProposal bool) { - c.mu.Lock() - defer c.mu.Unlock() - c.tickIntervalLocked(hasProposal) -} - -func (c *Store) tickIntervalLocked(hasProposal bool) { - c.time++ - currentInterval := c.time % types.IntervalsPerSlot - - switch currentInterval { - case 0: - if hasProposal { - c.acceptNewAttestationsLocked() - } - // Periodic pruning safety net: prune stale data when finalization - // is lagging, even if pruneOnFinalization hasn't been triggered. - // Runs every periodicPruningInterval slots. Matches zeam's - // FORKCHOICE_PRUNING_INTERVAL_SLOTS pattern (constants.zig:22). - c.maybePeriodicPruneLocked() - case 1: - // Validator voting interval — no action. - case 2: - // Committee aggregation interval — handled outside the store. - case 3: - c.updateSafeTargetLocked() - case 4: - c.acceptNewAttestationsLocked() - } -} - -// AcceptNewAttestations moves pending attestations to known and updates head. 
-func (c *Store) AcceptNewAttestations() { - c.mu.Lock() - defer c.mu.Unlock() - c.acceptNewAttestationsLocked() -} - -func (c *Store) acceptNewAttestationsLocked() { - // Expand aggregated payloads into per-validator votes. - newAggAttestations := extractAttestationsFromAggregatedPayloads(c.latestNewAggregatedPayloads) - for vid, sa := range newAggAttestations { - existing, ok := c.latestNewAttestations[vid] - if !ok || existing == nil || existing.Message == nil || existing.Message.Slot < sa.Message.Slot { - c.latestNewAttestations[vid] = sa - } - } - c.latestKnownAggregatedPayloads = mergeAggregatedPayloads(c.latestKnownAggregatedPayloads, c.latestNewAggregatedPayloads) - c.latestNewAggregatedPayloads = make(map[[32]byte]aggregatedPayload) - metrics.LatestNewAggregatedPayloads.Set(0) - - // Move new → known and update head. - for id, sa := range c.latestNewAttestations { - c.latestKnownAttestations[id] = sa - } - c.latestNewAttestations = make(map[uint64]*types.SignedAttestation) - - // Enforce caps to bound memory even when finalization stalls. 
- c.enforcePayloadCap() - c.enforceAggregatedPayloadsCacheCap() - - metrics.LatestKnownAggregatedPayloads.Set(float64(len(c.latestKnownAggregatedPayloads))) - c.updateHeadLocked() -} - -func (c *Store) updateHeadLocked() { - oldHead := c.head - c.head = GetForkChoiceHead(c.allKnownBlockSummaries(), c.latestJustified.Root, c.latestKnownAttestations, 0) - - if oldHead == c.head { - return - } - - var oldSlot, newSlot uint64 - if s, ok := c.lookupBlockSummary(oldHead); ok { - oldSlot = s.Slot - } - if s, ok := c.lookupBlockSummary(c.head); ok { - newSlot = s.Slot - } - - if depth, reorged := c.reorgDepth(oldHead, c.head); reorged { - metrics.ForkChoiceReorgsTotal.Inc() - metrics.ForkChoiceReorgDepth.Observe(float64(depth)) - log.Warn("fork choice reorg detected", - "old_head_slot", oldSlot, - "old_head_root", logging.LongHash(oldHead), - "new_head_slot", newSlot, - "new_head_root", logging.LongHash(c.head), - "depth", depth, - ) - } - log.Info("fork choice head updated", - "head_slot", newSlot, - "head_root", logging.LongHash(c.head), - "previous_head_slot", oldSlot, - "previous_head_root", logging.LongHash(oldHead), - "justified_slot", c.latestJustified.Slot, - "justified_root", logging.LongHash(c.latestJustified.Root), - "finalized_slot", c.latestFinalized.Slot, - "finalized_root", logging.LongHash(c.latestFinalized.Root), - ) -} - -// reorgDepth checks if a head change is a reorg (chain divergence, not a simple extension). -// Returns (depth, true) if reorg, (0, false) otherwise. -func (c *Store) reorgDepth(oldHead, newHead [32]byte) (uint64, bool) { - if oldHead == newHead { - return 0, false - } - - // Collect the full ancestor chain of the new head. If the old head is in - // this ancestry, the head change is a normal extension, not a reorg. 
- newHeadAncestors := make(map[[32]byte]struct{}) - current := newHead - for { - newHeadAncestors[current] = struct{}{} - if current == oldHead { - return 0, false - } - summary, ok := c.lookupBlockSummary(current) - if !ok { - return 0, false - } - if summary.Slot == 0 { - break - } - current = summary.ParentRoot - } - - // Walk back from the old head until we reach the common ancestor with the - // new head. The number of replaced blocks is the reorg depth. - current = oldHead - var depth uint64 - for { - if _, ok := newHeadAncestors[current]; ok { - return depth, true - } - summary, ok := c.lookupBlockSummary(current) - if !ok { - return 0, false - } - if summary.Slot == 0 { - break - } - current = summary.ParentRoot - depth++ - } - - return 0, false -} - -// UpdateSafeTarget finds the head with sufficient (2/3+) vote support. -func (c *Store) UpdateSafeTarget() { - c.mu.Lock() - defer c.mu.Unlock() - c.updateSafeTargetLocked() -} - -func (c *Store) updateSafeTargetLocked() { - minScore := int(ceilDiv(c.numValidators*2, 3)) - mergedPayloads := make(map[[32]byte]aggregatedPayload) - mergedPayloads = mergeAggregatedPayloads(mergedPayloads, c.latestKnownAggregatedPayloads) - mergedPayloads = mergeAggregatedPayloads(mergedPayloads, c.latestNewAggregatedPayloads) - attestations := extractAttestationsFromAggregatedPayloads(mergedPayloads) - c.safeTarget = GetForkChoiceHead(c.allKnownBlockSummaries(), c.latestJustified.Root, attestations, minScore) - if block, ok := c.lookupBlockSummary(c.safeTarget); ok { - metrics.SafeTargetSlot.Set(float64(block.Slot)) - } -} diff --git a/chain/statetransition/bitlist.go b/chain/statetransition/bitlist.go deleted file mode 100644 index 9ada8a2..0000000 --- a/chain/statetransition/bitlist.go +++ /dev/null @@ -1,104 +0,0 @@ -package statetransition - -// SSZ bitlist helpers. -// -// Bits are packed LSB-first into bytes. A sentinel '1' bit is appended -// after the last data bit to mark the length. 
The byte length is -// ceil((numBits + 1) / 8). - -// BitlistLen returns the number of data bits in an SSZ bitlist. -func BitlistLen(bl []byte) int { - if len(bl) == 0 { - return 0 - } - lastByte := bl[len(bl)-1] - if lastByte == 0 { - return 0 - } - msb := 0 - for b := lastByte; b > 0; b >>= 1 { - msb++ - } - return (len(bl)-1)*8 + msb - 1 -} - -// GetBit returns the value of bit at index idx in an SSZ bitlist. -func GetBit(bl []byte, idx uint64) bool { - byteIdx := idx / 8 - bitIdx := idx % 8 - if int(byteIdx) >= len(bl) { - return false - } - return (bl[byteIdx] & (1 << bitIdx)) != 0 -} - -// SetBit sets the value of bit at index idx in an SSZ bitlist. -func SetBit(bl []byte, idx uint64, val bool) []byte { - byteIdx := idx / 8 - bitIdx := idx % 8 - if int(byteIdx) >= len(bl) { - return bl - } - if val { - bl[byteIdx] |= 1 << bitIdx - } else { - bl[byteIdx] &^= 1 << bitIdx - } - return bl -} - -// AppendBit adds a new data bit to an SSZ bitlist, maintaining the sentinel. -func AppendBit(bl []byte, val bool) []byte { - n := BitlistLen(bl) - newLen := n + 1 - neededBytes := (newLen + 1 + 7) / 8 - - for len(bl) < neededBytes { - bl = append(bl, 0) - } - bl = bl[:neededBytes] - - // Clear old sentinel. - if n > 0 { - sentinelByte := n / 8 - sentinelBit := n % 8 - if sentinelByte < len(bl) { - bl[sentinelByte] &^= 1 << uint(sentinelBit) - } - } - - // Set the new data bit. - dataByte := n / 8 - dataBit := n % 8 - if val { - bl[dataByte] |= 1 << uint(dataBit) - } else { - bl[dataByte] &^= 1 << uint(dataBit) - } - - // Set new sentinel at position newLen. - sentinelByte := newLen / 8 - sentinelBit := newLen % 8 - bl[sentinelByte] |= 1 << uint(sentinelBit) - - return bl -} - -// MakeBitlist creates a zero-filled SSZ bitlist with numBits data bits -// and a sentinel bit at position numBits. 
-func MakeBitlist(numBits uint64) []byte { - if numBits == 0 { - return []byte{0x01} - } - numBytes := (numBits + 1 + 7) / 8 - bl := make([]byte, numBytes) - bl[numBits/8] |= 1 << (numBits % 8) - return bl -} - -// CloneBitlist returns a copy of an SSZ bitlist. -func CloneBitlist(src []byte) []byte { - dst := make([]byte, len(src)) - copy(dst, src) - return dst -} diff --git a/chain/statetransition/genesis.go b/chain/statetransition/genesis.go deleted file mode 100644 index 3e42e0a..0000000 --- a/chain/statetransition/genesis.go +++ /dev/null @@ -1,36 +0,0 @@ -package statetransition - -import ( - "github.com/geanlabs/gean/types" -) - -// GenerateGenesis creates a genesis state with the given parameters. -func GenerateGenesis(genesisTime uint64, validators []*types.Validator) *types.State { - config := &types.Config{ - GenesisTime: genesisTime, - } - - emptyBody := &types.BlockBody{Attestations: []*types.AggregatedAttestation{}} - bodyRoot, _ := emptyBody.HashTreeRoot() - - genesisHeader := &types.BlockHeader{ - Slot: 0, - ProposerIndex: 0, - ParentRoot: types.ZeroHash, - StateRoot: types.ZeroHash, - BodyRoot: bodyRoot, - } - - return &types.State{ - Config: config, - Slot: 0, - LatestBlockHeader: genesisHeader, - LatestJustified: &types.Checkpoint{Root: types.ZeroHash, Slot: 0}, - LatestFinalized: &types.Checkpoint{Root: types.ZeroHash, Slot: 0}, - HistoricalBlockHashes: [][32]byte{}, - JustifiedSlots: []byte{0x01}, // empty bitlist with sentinel - Validators: validators, - JustificationsRoots: [][32]byte{}, - JustificationsValidators: []byte{0x01}, // empty bitlist with sentinel - } -} diff --git a/chain/statetransition/justified_slots.go b/chain/statetransition/justified_slots.go deleted file mode 100644 index 03aa881..0000000 --- a/chain/statetransition/justified_slots.go +++ /dev/null @@ -1,69 +0,0 @@ -package statetransition - -// justifiedIndexAfter returns the relative justified-slots index for targetSlot. 
-// Slots at or before finalizedSlot are considered implicitly justified and have no index. -func justifiedIndexAfter(finalizedSlot, targetSlot uint64) (uint64, bool) { - if targetSlot <= finalizedSlot { - return 0, false - } - return targetSlot - finalizedSlot - 1, true -} - -// isSlotJustified checks justification status using the finalized-relative slot window. -func isSlotJustified(justifiedSlots []byte, finalizedSlot, targetSlot uint64) bool { - relativeIndex, ok := justifiedIndexAfter(finalizedSlot, targetSlot) - if !ok { - return true - } - if relativeIndex >= uint64(BitlistLen(justifiedSlots)) { - return false - } - return GetBit(justifiedSlots, relativeIndex) -} - -// extendJustifiedSlotsToSlot ensures justifiedSlots can represent targetSlot. -// New entries are initialized to false. -func extendJustifiedSlotsToSlot(justifiedSlots []byte, finalizedSlot, targetSlot uint64) []byte { - relativeIndex, ok := justifiedIndexAfter(finalizedSlot, targetSlot) - if !ok { - return CloneBitlist(justifiedSlots) - } - - out := CloneBitlist(justifiedSlots) - for uint64(BitlistLen(out)) <= relativeIndex { - out = AppendBit(out, false) - } - return out -} - -// setSlotJustified updates the justified status for targetSlot if it is tracked. -func setSlotJustified(justifiedSlots []byte, finalizedSlot, targetSlot uint64, value bool) []byte { - relativeIndex, ok := justifiedIndexAfter(finalizedSlot, targetSlot) - if !ok { - return CloneBitlist(justifiedSlots) - } - - out := CloneBitlist(justifiedSlots) - if relativeIndex >= uint64(BitlistLen(out)) { - return out - } - return SetBit(out, relativeIndex, value) -} - -// shiftJustifiedSlotsWindow drops delta entries from the head of the tracking window. 
-func shiftJustifiedSlotsWindow(justifiedSlots []byte, delta uint64) []byte { - if delta == 0 { - return CloneBitlist(justifiedSlots) - } - - currentLen := uint64(BitlistLen(justifiedSlots)) - if delta >= currentLen { - return []byte{0x01} - } - - out := []byte{0x01} - for i := delta; i < currentLen; i++ { - out = AppendBit(out, GetBit(justifiedSlots, i)) - } - return out -} diff --git a/chain/statetransition/process_attestations.go b/chain/statetransition/process_attestations.go deleted file mode 100644 index 978fc83..0000000 --- a/chain/statetransition/process_attestations.go +++ /dev/null @@ -1,212 +0,0 @@ -package statetransition - -import ( - "bytes" - "sort" - - "github.com/geanlabs/gean/observability/metrics" - "github.com/geanlabs/gean/types" -) - -// ProcessAttestations applies attestation votes and updates -// justification/finalization according to leanSpec 3SF-mini rules. -// -// Per-validator votes are tracked via justifications_roots (sorted list of -// block roots being voted on) and justifications_validators (flat bitlist -// where each root's validator votes are packed consecutively). -func ProcessAttestations(state *types.State, attestations []*types.AggregatedAttestation) *types.State { - numValidators := uint64(len(state.Validators)) - - // Deserialize justifications from SSZ form into working map. 
- justifications := make(map[[32]byte][]bool) - for i, root := range state.JustificationsRoots { - votes := make([]bool, numValidators) - for v := uint64(0); v < numValidators; v++ { - bitIdx := uint64(i)*numValidators + v - votes[v] = GetBit(state.JustificationsValidators, bitIdx) - } - justifications[root] = votes - } - - justifiedSlots := CloneBitlist(state.JustifiedSlots) - latestJustified := &types.Checkpoint{Root: state.LatestJustified.Root, Slot: state.LatestJustified.Slot} - latestFinalized := &types.Checkpoint{Root: state.LatestFinalized.Root, Slot: state.LatestFinalized.Slot} - finalizedSlot := latestFinalized.Slot - - // Map each known root to its latest materialized slot after the finalized boundary. - rootToSlot := make(map[[32]byte]uint64) - startSlot := finalizedSlot + 1 - for i := startSlot; i < uint64(len(state.HistoricalBlockHashes)); i++ { - root := state.HistoricalBlockHashes[i] - if prev, ok := rootToSlot[root]; !ok || i > prev { - rootToSlot[root] = i - } - } - - processVote := func(validatorID uint64, data *types.AttestationData) { - if data == nil || data.Source == nil || data.Target == nil { - return - } - - source := data.Source - target := data.Target - srcSlot := source.Slot - tgtSlot := target.Slot - - // Target must be after source (strict). - if tgtSlot <= srcSlot { - return - } - - // Source must be justified. Slots at/before finalized are implicitly justified. - if !isSlotJustified(justifiedSlots, finalizedSlot, srcSlot) { - return - } - - // Target must not already be justified. - if isSlotJustified(justifiedSlots, finalizedSlot, tgtSlot) { - return - } - - // Source root must match historical block hashes. - if srcSlot >= uint64(len(state.HistoricalBlockHashes)) || state.HistoricalBlockHashes[srcSlot] != source.Root { - return - } - - // Target root must match historical block hashes. 
- if tgtSlot >= uint64(len(state.HistoricalBlockHashes)) || state.HistoricalBlockHashes[tgtSlot] != target.Root { - return - } - - // Target must be justifiable after the dynamically updated finalized slot. - if !types.IsJustifiableAfter(tgtSlot, finalizedSlot) { - return - } - - // Validate validator ID. - if validatorID >= numValidators { - return - } - - // Record vote (idempotent — skip if already voted). - if _, ok := justifications[target.Root]; !ok { - justifications[target.Root] = make([]bool, numValidators) - } - if justifications[target.Root][validatorID] { - return - } - justifications[target.Root][validatorID] = true - - // Count votes for this target. - count := uint64(0) - for _, voted := range justifications[target.Root] { - if voted { - count++ - } - } - - // Supermajority: 3 * count >= 2 * numValidators. - if 3*count < 2*numValidators { - return - } - - // Justify target. - latestJustified = &types.Checkpoint{Root: target.Root, Slot: tgtSlot} - justifiedSlots = extendJustifiedSlotsToSlot(justifiedSlots, finalizedSlot, tgtSlot) - justifiedSlots = setSlotJustified(justifiedSlots, finalizedSlot, tgtSlot, true) - delete(justifications, target.Root) - - // Finalization: if no justifiable slot exists between source and target, - // then source becomes finalized. - hasJustifiableGap := false - for s := srcSlot + 1; s < tgtSlot; s++ { - if types.IsJustifiableAfter(s, finalizedSlot) { - hasJustifiableGap = true - break - } - } - if hasJustifiableGap { - metrics.FinalizationsTotal.WithLabelValues("error").Inc() - } else { - oldFinalizedSlot := finalizedSlot - latestFinalized = &types.Checkpoint{Root: source.Root, Slot: srcSlot} - finalizedSlot = latestFinalized.Slot - - // Rebase the justified-slots tracking window and prune stale pending votes. 
- if finalizedSlot > oldFinalizedSlot { - justifiedSlots = shiftJustifiedSlotsWindow(justifiedSlots, finalizedSlot-oldFinalizedSlot) - for root := range justifications { - slot, ok := rootToSlot[root] - if !ok || slot <= finalizedSlot { - delete(justifications, root) - } - } - } - } - } - - for _, aggregated := range attestations { - if aggregated == nil || aggregated.Data == nil { - continue - } - numBits := uint64(BitlistLen(aggregated.AggregationBits)) - for validatorID := uint64(0); validatorID < numBits; validatorID++ { - if !GetBit(aggregated.AggregationBits, validatorID) { - continue - } - processVote(validatorID, aggregated.Data) - } - } - - // Serialize justifications back to SSZ form. - sortedRoots := sortedJustificationRoots(justifications) - flatVotes := flattenVotes(sortedRoots, justifications, numValidators) - - out := state.Copy() - out.JustifiedSlots = justifiedSlots - out.LatestJustified = latestJustified - out.LatestFinalized = latestFinalized - out.JustificationsRoots = sortedRoots - out.JustificationsValidators = flatVotes - return out -} - -// sortedJustificationRoots returns the roots in deterministic (lexicographic) order. -func sortedJustificationRoots(justifications map[[32]byte][]bool) [][32]byte { - roots := make([][32]byte, 0, len(justifications)) - for root := range justifications { - roots = append(roots, root) - } - sort.Slice(roots, func(i, j int) bool { - return bytes.Compare(roots[i][:], roots[j][:]) < 0 - }) - return roots -} - -// flattenVotes serializes per-root validator votes into a single SSZ bitlist. -// For each root (in sortedRoots order), numValidators bits are appended. 
-func flattenVotes(sortedRoots [][32]byte, justifications map[[32]byte][]bool, numValidators uint64) []byte { - totalBits := uint64(len(sortedRoots)) * numValidators - if totalBits == 0 { - return []byte{0x01} // empty bitlist with sentinel - } - - numBytes := (totalBits + 1 + 7) / 8 // +1 for sentinel - bl := make([]byte, numBytes) - - bitPos := uint64(0) - for _, root := range sortedRoots { - votes := justifications[root] - for _, voted := range votes { - if voted { - bl[bitPos/8] |= 1 << (bitPos % 8) - } - bitPos++ - } - } - - // Set sentinel bit at position totalBits. - bl[totalBits/8] |= 1 << (totalBits % 8) - - return bl -} diff --git a/chain/statetransition/process_attestations_test.go b/chain/statetransition/process_attestations_test.go deleted file mode 100644 index 8ed75d8..0000000 --- a/chain/statetransition/process_attestations_test.go +++ /dev/null @@ -1,124 +0,0 @@ -package statetransition - -import ( - "testing" - - "github.com/geanlabs/gean/types" -) - -func TestProcessAttestationsAggregatedSupermajority(t *testing.T) { - sourceRoot := rootWithByte(0x11) - targetRoot := rootWithByte(0x22) - - state := &types.State{ - Config: &types.Config{GenesisTime: 0}, - Slot: 1, - LatestBlockHeader: &types.BlockHeader{}, - LatestJustified: &types.Checkpoint{Root: sourceRoot, Slot: 0}, - LatestFinalized: &types.Checkpoint{Root: sourceRoot, Slot: 0}, - HistoricalBlockHashes: [][32]byte{ - sourceRoot, - targetRoot, - }, - JustifiedSlots: bitlistFromBools(false), - Validators: makeValidators(3), - JustificationsRoots: [][32]byte{}, - JustificationsValidators: []byte{0x01}, - } - - bits := MakeBitlist(2) - bits = SetBit(bits, 0, true) - bits = SetBit(bits, 1, true) - - out := ProcessAttestations(state, []*types.AggregatedAttestation{ - { - AggregationBits: bits, - Data: &types.AttestationData{ - Slot: 1, - Head: &types.Checkpoint{Root: targetRoot, Slot: 1}, - Source: &types.Checkpoint{ - Root: sourceRoot, - Slot: 0, - }, - Target: &types.Checkpoint{ - Root: 
targetRoot, - Slot: 1, - }, - }, - }, - }) - - if out.LatestJustified.Slot != 1 || out.LatestJustified.Root != targetRoot { - t.Fatalf("latest justified mismatch: got slot=%d root=%x", out.LatestJustified.Slot, out.LatestJustified.Root) - } - if !isSlotJustified(out.JustifiedSlots, out.LatestFinalized.Slot, 1) { - t.Fatalf("target slot not marked justified: %08b", out.JustifiedSlots) - } -} - -func TestProcessAttestationsDeduplicatesValidatorVotes(t *testing.T) { - sourceRoot := rootWithByte(0x31) - targetRoot := rootWithByte(0x32) - - state := &types.State{ - Config: &types.Config{GenesisTime: 0}, - Slot: 1, - LatestBlockHeader: &types.BlockHeader{}, - LatestJustified: &types.Checkpoint{Root: sourceRoot, Slot: 0}, - LatestFinalized: &types.Checkpoint{Root: sourceRoot, Slot: 0}, - HistoricalBlockHashes: [][32]byte{ - sourceRoot, - targetRoot, - }, - JustifiedSlots: bitlistFromBools(false), - Validators: makeValidators(2), - JustificationsRoots: [][32]byte{}, - JustificationsValidators: []byte{0x01}, - } - - singleValidatorBitlist := func(validatorID uint64) []byte { - bits := MakeBitlist(validatorID + 1) - return SetBit(bits, validatorID, true) - } - - data := &types.AttestationData{ - Slot: 1, - Head: &types.Checkpoint{Root: targetRoot, Slot: 1}, - Source: &types.Checkpoint{Root: sourceRoot, Slot: 0}, - Target: &types.Checkpoint{Root: targetRoot, Slot: 1}, - } - - out := ProcessAttestations(state, []*types.AggregatedAttestation{ - {AggregationBits: singleValidatorBitlist(0), Data: data}, - {AggregationBits: singleValidatorBitlist(0), Data: data}, // duplicate voter - }) - - if out.LatestJustified.Slot != 0 || out.LatestJustified.Root != sourceRoot { - t.Fatalf("duplicate vote should not justify target: got slot=%d root=%x", out.LatestJustified.Slot, out.LatestJustified.Root) - } - if isSlotJustified(out.JustifiedSlots, out.LatestFinalized.Slot, 1) { - t.Fatalf("target slot should remain unjustified after duplicate vote: %08b", out.JustifiedSlots) - } -} - -func 
makeValidators(n int) []*types.Validator { - validators := make([]*types.Validator, n) - for i := 0; i < n; i++ { - validators[i] = &types.Validator{Index: uint64(i)} - } - return validators -} - -func bitlistFromBools(bits ...bool) []byte { - out := []byte{0x01} - for _, bit := range bits { - out = AppendBit(out, bit) - } - return out -} - -func rootWithByte(b byte) [32]byte { - var out [32]byte - out[0] = b - return out -} diff --git a/chain/statetransition/proposer.go b/chain/statetransition/proposer.go deleted file mode 100644 index 96ba005..0000000 --- a/chain/statetransition/proposer.go +++ /dev/null @@ -1,10 +0,0 @@ -package statetransition - -// IsProposer checks if a validator is the proposer for a given slot using -// round-robin selection: slot % numValidators == validatorIndex. -func IsProposer(validatorIndex, slot, numValidators uint64) bool { - if numValidators == 0 { - panic("numValidators must be > 0") - } - return slot%numValidators == validatorIndex -} diff --git a/chain/statetransition/transition.go b/chain/statetransition/transition.go deleted file mode 100644 index 6bd54d8..0000000 --- a/chain/statetransition/transition.go +++ /dev/null @@ -1,134 +0,0 @@ -package statetransition - -import ( - "fmt" - "time" - - "github.com/geanlabs/gean/observability/metrics" - "github.com/geanlabs/gean/types" -) - -// ProcessSlot performs per-slot maintenance. If the latest block header has -// a zero state_root, it caches the current state root into that header. -func ProcessSlot(state *types.State) *types.State { - if state.LatestBlockHeader.StateRoot == types.ZeroHash { - stateRoot, _ := state.HashTreeRoot() - out := state.Copy() - out.LatestBlockHeader.StateRoot = stateRoot - return out - } - return state -} - -// ProcessSlots advances the state through empty slots up to targetSlot. 
-func ProcessSlots(state *types.State, targetSlot uint64) (*types.State, error) { - if state.Slot >= targetSlot { - return nil, fmt.Errorf("target slot %d must be after current slot %d", targetSlot, state.Slot) - } - s := state - for s.Slot < targetSlot { - s = ProcessSlot(s) - out := s.Copy() - out.Slot = s.Slot + 1 - s = out - } - return s, nil -} - -// ProcessBlockHeader validates the block header and updates header-linked state. -func ProcessBlockHeader(state *types.State, block *types.Block) (*types.State, error) { - if block.Slot != state.Slot { - return nil, fmt.Errorf("block slot %d != state slot %d", block.Slot, state.Slot) - } - if block.Slot <= state.LatestBlockHeader.Slot { - return nil, fmt.Errorf("block slot %d <= latest header slot %d", block.Slot, state.LatestBlockHeader.Slot) - } - if !IsProposer(block.ProposerIndex, state.Slot, uint64(len(state.Validators))) { - return nil, fmt.Errorf("validator %d is not proposer for slot %d", block.ProposerIndex, state.Slot) - } - - expectedParent, _ := state.LatestBlockHeader.HashTreeRoot() - if block.ParentRoot != expectedParent { - return nil, fmt.Errorf("parent root mismatch") - } - - out := state.Copy() - parentRoot := block.ParentRoot - - // First block after genesis: mark genesis as justified and finalized. - if state.LatestBlockHeader.Slot == 0 { - out.LatestJustified = &types.Checkpoint{Root: parentRoot, Slot: state.LatestJustified.Slot} - out.LatestFinalized = &types.Checkpoint{Root: parentRoot, Slot: state.LatestFinalized.Slot} - } - - // Append parent root to historical hashes (already cloned by Copy). - out.HistoricalBlockHashes = append(out.HistoricalBlockHashes, parentRoot) - - // Fill empty slots between parent and this block. 
- numEmpty := block.Slot - state.LatestBlockHeader.Slot - 1 - for i := uint64(0); i < numEmpty; i++ { - out.HistoricalBlockHashes = append(out.HistoricalBlockHashes, types.ZeroHash) - } - - // Extend justified-slots tracking up to the last materialized slot - // (the parent slot). Tracking is relative to the finalized slot. - lastMaterializedSlot := block.Slot - 1 - out.JustifiedSlots = extendJustifiedSlotsToSlot(out.JustifiedSlots, out.LatestFinalized.Slot, lastMaterializedSlot) - - // Build new latest block header with zero state_root (filled on next process_slot). - bodyRoot, _ := block.Body.HashTreeRoot() - out.LatestBlockHeader = &types.BlockHeader{ - Slot: block.Slot, - ProposerIndex: block.ProposerIndex, - ParentRoot: block.ParentRoot, - BodyRoot: bodyRoot, - StateRoot: types.ZeroHash, - } - - return out, nil -} - -// ProcessBlock applies full block processing: header + attestations. -func ProcessBlock(state *types.State, block *types.Block) (*types.State, error) { - blockStart := time.Now() - - s, err := ProcessBlockHeader(state, block) - if err != nil { - return nil, err - } - attStart := time.Now() - s = ProcessAttestations(s, block.Body.Attestations) - - metrics.STFAttestationsProcessed.Add(float64(len(block.Body.Attestations))) - metrics.STFAttestationsProcessingTime.Observe(time.Since(attStart).Seconds()) - metrics.STFBlockProcessingTime.Observe(time.Since(blockStart).Seconds()) - return s, nil -} - -// StateTransition applies the complete state transition for a block. -// Signature verification must happen externally before calling this function. -func StateTransition(state *types.State, block *types.Block) (*types.State, error) { - // Process intermediate slots. 
- slotsStart := time.Now() - s, err := ProcessSlots(state, block.Slot) - if err != nil { - return nil, fmt.Errorf("process_slots: %w", err) - } - metrics.STFSlotsProcessed.Add(float64(block.Slot - state.Slot)) - metrics.STFSlotsProcessingTime.Observe(time.Since(slotsStart).Seconds()) - - // Process the block (header + attestations). - - s, err = ProcessBlock(s, block) - if err != nil { - return nil, fmt.Errorf("process_block: %w", err) - } - - // Validate state root. - computedRoot, _ := s.HashTreeRoot() - if block.StateRoot != computedRoot { - return nil, fmt.Errorf("invalid state root: expected %x, got %x", computedRoot, block.StateRoot) - } - - return s, nil -} diff --git a/checkpoint/checkpoint.go b/checkpoint/checkpoint.go new file mode 100644 index 0000000..962f743 --- /dev/null +++ b/checkpoint/checkpoint.go @@ -0,0 +1,145 @@ +package checkpoint + +import ( + "fmt" + "io" + "net/http" + "time" + + "github.com/geanlabs/gean/types" +) + +// Timeouts rs L9-13. +const ( + CheckpointConnectTimeout = 15 * time.Second + CheckpointReadTimeout = 15 * time.Second +) + +// FetchCheckpointState downloads and verifies a finalized state from a peer. 
+func FetchCheckpointState( + url string, + expectedGenesisTime uint64, + expectedValidators []*types.Validator, +) (*types.State, error) { + client := &http.Client{ + Timeout: CheckpointConnectTimeout + CheckpointReadTimeout, + } + + req, err := http.NewRequest("GET", url, nil) + if err != nil { + return nil, fmt.Errorf("create request: %w", err) + } + req.Header.Set("Accept", "application/octet-stream") + + resp, err := client.Do(req) + if err != nil { + return nil, fmt.Errorf("http request: %w", err) + } + defer resp.Body.Close() + + if resp.StatusCode != http.StatusOK { + return nil, fmt.Errorf("http status: %d", resp.StatusCode) + } + + body, err := io.ReadAll(resp.Body) + if err != nil { + return nil, fmt.Errorf("read response: %w", err) + } + + state := &types.State{} + if err := state.UnmarshalSSZ(body); err != nil { + return nil, fmt.Errorf("ssz decode: %w", err) + } + + if err := VerifyCheckpointState(state, expectedGenesisTime, expectedValidators); err != nil { + return nil, fmt.Errorf("verify: %w", err) + } + + return state, nil +} + +// VerifyCheckpointState runs all 12 validation checks. +func VerifyCheckpointState( + state *types.State, + expectedGenesisTime uint64, + expectedValidators []*types.Validator, +) error { + // 1. Slot != 0 + if state.Slot == 0 { + return fmt.Errorf("checkpoint state slot cannot be 0") + } + + // 2. Has validators + if len(state.Validators) == 0 { + return fmt.Errorf("checkpoint state has no validators") + } + + // 3. Genesis time matches + if state.Config.GenesisTime != expectedGenesisTime { + return fmt.Errorf("genesis time mismatch: expected %d, got %d", + expectedGenesisTime, state.Config.GenesisTime) + } + + // 4. Validator count matches + if len(state.Validators) != len(expectedValidators) { + return fmt.Errorf("validator count mismatch: expected %d, got %d", + len(expectedValidators), len(state.Validators)) + } + + // 5. 
Validator indices sequential + for i, v := range state.Validators { + if v.Index != uint64(i) { + return fmt.Errorf("validator at position %d has non-sequential index: expected %d, got %d", + i, i, v.Index) + } + } + + // 6. Validator pubkeys match + for i, v := range state.Validators { + if v.Pubkey != expectedValidators[i].Pubkey { + return fmt.Errorf("validator %d pubkey mismatch", i) + } + } + + // 7. Finalized slot <= state slot + if state.LatestFinalized.Slot > state.Slot { + return fmt.Errorf("finalized slot %d exceeds state slot %d", + state.LatestFinalized.Slot, state.Slot) + } + + // 8. Justified slot >= finalized slot + if state.LatestJustified.Slot < state.LatestFinalized.Slot { + return fmt.Errorf("justified slot %d precedes finalized slot %d", + state.LatestJustified.Slot, state.LatestFinalized.Slot) + } + + // 9. If justified == finalized slot, roots must match + if state.LatestJustified.Slot == state.LatestFinalized.Slot && + state.LatestJustified.Root != state.LatestFinalized.Root { + return fmt.Errorf("justified and finalized at same slot %d have different roots", + state.LatestJustified.Slot) + } + + // 10. Block header slot <= state slot + if state.LatestBlockHeader.Slot > state.Slot { + return fmt.Errorf("block header slot %d exceeds state slot %d", + state.LatestBlockHeader.Slot, state.Slot) + } + + // 11. If block header slot == finalized slot, roots must match + blockRoot, _ := state.LatestBlockHeader.HashTreeRoot() + if state.LatestBlockHeader.Slot == state.LatestFinalized.Slot && + blockRoot != state.LatestFinalized.Root { + return fmt.Errorf("block header at finalized slot %d has mismatched root", + state.LatestFinalized.Slot) + } + + // 12. 
If block header slot == justified slot, roots must match + if state.LatestBlockHeader.Slot == state.LatestJustified.Slot && + blockRoot != state.LatestJustified.Root { + return fmt.Errorf("block header at justified slot %d has mismatched root", + state.LatestJustified.Slot) + } + + return nil +} diff --git a/checkpoint/checkpoint_test.go b/checkpoint/checkpoint_test.go new file mode 100644 index 0000000..48a54bd --- /dev/null +++ b/checkpoint/checkpoint_test.go @@ -0,0 +1,160 @@ +package checkpoint + +import ( + "testing" + + "github.com/geanlabs/gean/types" +) + +func makeTestState(slot uint64, genesisTime uint64, numValidators int) *types.State { + validators := make([]*types.Validator, numValidators) + for i := 0; i < numValidators; i++ { + validators[i] = &types.Validator{ + Pubkey: [types.PubkeySize]byte{byte(i + 1)}, + Index: uint64(i), + } + } + + header := &types.BlockHeader{Slot: slot} + + return &types.State{ + Config: &types.ChainConfig{GenesisTime: genesisTime}, + Slot: slot, + LatestBlockHeader: header, + LatestJustified: &types.Checkpoint{Slot: slot - 2}, + LatestFinalized: &types.Checkpoint{Slot: slot - 5}, + Validators: validators, + JustifiedSlots: types.NewBitlistSSZ(0), + JustificationsValidators: types.NewBitlistSSZ(0), + } +} + +func TestVerifyCheckpointStateValid(t *testing.T) { + state := makeTestState(100, 1000, 3) + expectedValidators := state.Validators + + err := VerifyCheckpointState(state, 1000, expectedValidators) + if err != nil { + t.Fatalf("should pass: %v", err) + } +} + +func TestVerifyCheckpointStateSlotZero(t *testing.T) { + state := makeTestState(0, 1000, 3) + err := VerifyCheckpointState(state, 1000, state.Validators) + if err == nil { + t.Fatal("should fail: slot is 0") + } +} + +func TestVerifyCheckpointStateNoValidators(t *testing.T) { + state := makeTestState(100, 1000, 0) + err := VerifyCheckpointState(state, 1000, nil) + if err == nil { + t.Fatal("should fail: no validators") + } +} + +func 
TestVerifyCheckpointStateGenesisTimeMismatch(t *testing.T) { + state := makeTestState(100, 1000, 3) + err := VerifyCheckpointState(state, 9999, state.Validators) + if err == nil { + t.Fatal("should fail: genesis time mismatch") + } +} + +func TestVerifyCheckpointStateValidatorCountMismatch(t *testing.T) { + state := makeTestState(100, 1000, 3) + twoValidators := state.Validators[:2] + err := VerifyCheckpointState(state, 1000, twoValidators) + if err == nil { + t.Fatal("should fail: validator count mismatch") + } +} + +func TestVerifyCheckpointStateNonSequentialIndex(t *testing.T) { + state := makeTestState(100, 1000, 3) + state.Validators[1].Index = 99 // break sequential + err := VerifyCheckpointState(state, 1000, state.Validators) + if err == nil { + t.Fatal("should fail: non-sequential index") + } +} + +func TestVerifyCheckpointStatePubkeyMismatch(t *testing.T) { + state := makeTestState(100, 1000, 3) + // Different expected validators. + expected := make([]*types.Validator, 3) + for i := 0; i < 3; i++ { + expected[i] = &types.Validator{ + Pubkey: [types.PubkeySize]byte{byte(i + 100)}, // different + Index: uint64(i), + } + } + err := VerifyCheckpointState(state, 1000, expected) + if err == nil { + t.Fatal("should fail: pubkey mismatch") + } +} + +func TestVerifyCheckpointStateFinalizedExceedsState(t *testing.T) { + state := makeTestState(100, 1000, 3) + state.LatestFinalized.Slot = 200 // > state.Slot + err := VerifyCheckpointState(state, 1000, state.Validators) + if err == nil { + t.Fatal("should fail: finalized exceeds state") + } +} + +func TestVerifyCheckpointStateJustifiedPrecedesFinalized(t *testing.T) { + state := makeTestState(100, 1000, 3) + state.LatestJustified.Slot = 90 + state.LatestFinalized.Slot = 95 // justified < finalized + err := VerifyCheckpointState(state, 1000, state.Validators) + if err == nil { + t.Fatal("should fail: justified precedes finalized") + } +} + +func TestVerifyCheckpointStateJustifiedFinalizedRootMismatch(t *testing.T) { + 
state := makeTestState(100, 1000, 3) + state.LatestJustified.Slot = 50 + state.LatestFinalized.Slot = 50 + state.LatestJustified.Root = [32]byte{1} + state.LatestFinalized.Root = [32]byte{2} // different roots at same slot + err := VerifyCheckpointState(state, 1000, state.Validators) + if err == nil { + t.Fatal("should fail: root mismatch at same slot") + } +} + +func TestVerifyCheckpointStateBlockHeaderExceedsState(t *testing.T) { + state := makeTestState(100, 1000, 3) + state.LatestBlockHeader.Slot = 200 // > state.Slot + err := VerifyCheckpointState(state, 1000, state.Validators) + if err == nil { + t.Fatal("should fail: block header exceeds state") + } +} + +func TestVerifyCheckpointStateBlockHeaderFinalizedRootMismatch(t *testing.T) { + state := makeTestState(100, 1000, 3) + state.LatestBlockHeader.Slot = 50 + state.LatestFinalized.Slot = 50 + state.LatestFinalized.Root = [32]byte{99} // wrong root + err := VerifyCheckpointState(state, 1000, state.Validators) + if err == nil { + t.Fatal("should fail: block header root mismatch at finalized slot") + } +} + +func TestVerifyCheckpointStateBlockHeaderJustifiedRootMismatch(t *testing.T) { + state := makeTestState(100, 1000, 3) + state.LatestBlockHeader.Slot = 90 + state.LatestJustified.Slot = 90 + state.LatestJustified.Root = [32]byte{99} // wrong root + err := VerifyCheckpointState(state, 1000, state.Validators) + if err == nil { + t.Fatal("should fail: block header root mismatch at justified slot") + } +} diff --git a/cmd/gean/main.go b/cmd/gean/main.go index aab47c6..25b8a9e 100644 --- a/cmd/gean/main.go +++ b/cmd/gean/main.go @@ -3,166 +3,256 @@ package main import ( "context" "flag" - "io" - "log" - "log/slog" + "fmt" "os" "os/signal" - "strconv" + "path/filepath" "syscall" "time" - "github.com/geanlabs/gean/config" + "github.com/geanlabs/gean/api" + "github.com/geanlabs/gean/checkpoint" + "github.com/geanlabs/gean/forkchoice" + "github.com/geanlabs/gean/genesis" + "github.com/geanlabs/gean/logger" 
"github.com/geanlabs/gean/node" - "github.com/geanlabs/gean/observability/logging" + "github.com/geanlabs/gean/p2p" + "github.com/geanlabs/gean/storage" + "github.com/geanlabs/gean/types" + "github.com/geanlabs/gean/xmss" ) func main() { - genesisPath := flag.String("genesis", "", "Path to config.yaml") - bootnodesPath := flag.String("bootnodes", "", "Path to nodes.yaml") - validatorsPath := flag.String("validator-registry-path", "", "Path to validators.yaml") - nodeID := flag.String("node-id", "", "Node name (index into validators.yaml)") - nodeKey := flag.String("node-key", "", "Path to secp256k1 private key file") - validatorKeys := flag.String("validator-keys", "", "Path to directory containing validator keys") - listenAddr := flag.String("listen-addr", "/ip4/0.0.0.0/udp/9000/quic-v1", "QUIC listen address") - metricsPort := flag.Int("metrics-port", 8080, "Prometheus metrics port (0 = disabled)") - apiHost := flag.String("api-host", "0.0.0.0", "API server host") - apiPort := flag.Int("api-port", 5058, "API server port (0 = disabled)") - apiEnabled := flag.Bool("api-enabled", true, "Enable API server") - discoveryPort := flag.Int("discovery-port", 9000, "Discovery v5 UDP port") - dataDir := flag.String("data-dir", ".", "Data directory for node database and keys") - checkpointSyncURL := flag.String("checkpoint-sync-url", "", "URL to fetch finalized checkpoint state from for checkpoint sync") - devnetID := flag.String("devnet-id", "devnet0", "Devnet identifier for gossip topics") - isAggregator := flag.Bool("is-aggregator", false, "Enable aggregator role for this node") - attCommCount := flag.Int("attestation-committee-count", 1, "Number of attestation committees (must be 1 for devnet-3)") - logLevel := flag.String("log-level", "info", "Log level (debug, info, warn, error)") + // CLI flags rs L46-79. 
+ configDir := flag.String("custom-network-config-dir", "", "Config directory (required)") + gossipPort := flag.Int("gossipsub-port", 9000, "P2P listen port (QUIC/UDP)") + httpAddr := flag.String("http-address", "127.0.0.1", "Bind address for API + metrics") + apiPort := flag.Int("api-port", 5052, "API server port") + metricsPort := flag.Int("metrics-port", 5054, "Metrics server port") + nodeKey := flag.String("node-key", "", "Path to hex-encoded secp256k1 private key (required)") + nodeID := flag.String("node-id", "", "Node identifier, e.g. gean_0 (required)") + checkpointURL := flag.String("checkpoint-sync-url", "", "URL for checkpoint sync (optional)") + isAggregator := flag.Bool("is-aggregator", false, "Enable attestation aggregation") + committeeCount := flag.Uint64("attestation-committee-count", 1, "Number of attestation subnets") + _ = flag.String("aggregate-subnet-ids", "", "Comma-separated subnet IDs (requires --is-aggregator)") + dataDir := flag.String("data-dir", "./data", "Pebble database directory") + flag.Parse() - // Initialize structured logger and suppress noisy stdlib log output (quic-go, etc.). - logging.Init(parseLevel(*logLevel)) - log.SetOutput(io.Discard) + // Validate required flags. 
+ if *configDir == "" || *nodeKey == "" || *nodeID == "" { + fmt.Fprintln(os.Stderr, "required flags: --custom-network-config-dir, --node-key, --node-id") + flag.Usage() + os.Exit(1) + } + if *committeeCount < 1 { + fmt.Fprintln(os.Stderr, "--attestation-committee-count must be >= 1") + os.Exit(1) + } - logger := logging.NewComponentLogger(logging.CompNode) + logger.Info(logger.Node, "gean consensus client starting") - if *genesisPath == "" { - logger.Error("--genesis flag is required") + // --- Load configuration --- + + configPath := filepath.Join(*configDir, "config.yaml") + bootnodePath := filepath.Join(*configDir, "nodes.yaml") + validatorsPath := filepath.Join(*configDir, "annotated_validators.yaml") + keysDir := filepath.Join(*configDir, "hash-sig-keys") + + genesisConfig, err := genesis.LoadGenesisConfig(configPath) + if err != nil { + logger.Error(logger.Node, "load genesis config: %v", err) os.Exit(1) } + logger.Info(logger.Node, "genesis: time=%d validators=%d", genesisConfig.GenesisTime, len(genesisConfig.GenesisValidators)) - if *attCommCount != 1 { - logger.Error("--attestation-committee-count must be 1 for devnet-3", "value", *attCommCount) + // Load bootnodes. + bootnodes, err := p2p.LoadBootnodes(bootnodePath) + if err != nil { + logger.Error(logger.Node, "load bootnodes: %v", err) os.Exit(1) } + logger.Info(logger.Node, "bootnodes: %d loaded", len(bootnodes)) - // Print banner first. - logging.Banner(node.Version) - - // Load genesis config. - genCfg, err := config.LoadGenesisConfig(*genesisPath) + // Load validator keys. 
+ keyManager, err := xmss.LoadValidatorKeys(validatorsPath, keysDir, *nodeID) if err != nil { - logger.Error("failed to load genesis config", "err", err) + logger.Error(logger.Node, "load validator keys: %v", err) os.Exit(1) } - logger.Info("genesis config loaded", - "genesis_time", genCfg.GenesisTime, - "validators", len(genCfg.Validators), - ) + defer keyManager.Close() + logger.Info(logger.Node, "validators: %d keys loaded for %s", len(keyManager.ValidatorIDs()), *nodeID) - if genCfg.GenesisTime < uint64(time.Now().Unix()) { - logger.Warn("genesis time is in the past", "genesis_time", genCfg.GenesisTime, "now", time.Now().Unix()) - } + // --- Initialize storage --- - // Load bootnodes. - var bootnodes []string - if *bootnodesPath != "" { - bootnodes, err = config.LoadBootnodes(*bootnodesPath) - if err != nil { - logger.Error("failed to load bootnodes", "err", err) - os.Exit(1) - } - if len(bootnodes) > 0 { - logger.Info("bootnodes loaded", "count", len(bootnodes)) - } + absDataDir, _ := filepath.Abs(*dataDir) + os.MkdirAll(absDataDir, 0755) + logger.Info(logger.Node, "storage: %s", absDataDir) + + backend, err := storage.NewPebbleBackend(absDataDir) + if err != nil { + logger.Error(logger.Node, "open pebble: %v", err) + os.Exit(1) } + defer backend.Close() + + s := node.NewConsensusStore(backend) + + // --- Initialize state (DB restore, checkpoint sync, or genesis) --- + + genesisValidators := genesisConfig.Validators() + + // Check if DB already has a valid head state (restart case). + existingHead := s.Head() + existingHeader := s.GetBlockHeader(existingHead) + existingState := s.GetState(existingHead) - // Load validator assignments. - var validatorIDs []uint64 - if *validatorsPath != "" && *nodeID != "" { - reg, err := config.LoadValidators(*validatorsPath) + if existingHeader != nil && existingState != nil && existingHeader.Slot > 0 { + // DB has valid state — restore from it. 
+ logger.Info(logger.Node, "restoring from database: slot=%d head=%x justified=%d finalized=%d", + existingHeader.Slot, existingHead, + s.LatestJustified().Slot, s.LatestFinalized().Slot) + } else if *checkpointURL != "" { + // Checkpoint sync. + logger.Info(logger.Sync, "checkpoint sync: %s", *checkpointURL) + state, err := checkpoint.FetchCheckpointState(*checkpointURL, genesisConfig.GenesisTime, genesisValidators) if err != nil { - logger.Error("failed to load validators", "err", err) + logger.Error(logger.Sync, "checkpoint sync failed: %v", err) os.Exit(1) } - if err := reg.Validate(uint64(len(genCfg.Validators))); err != nil { - logger.Error("invalid validator config", "err", err) - os.Exit(1) - } - validatorIDs = reg.GetValidatorIndices(*nodeID) - if len(validatorIDs) == 0 { - logger.Warn("no validators found for node", "node_id", *nodeID) - } else { - logger.Info("validator duties loaded", - "node_id", *nodeID, - "validators", strconv.Itoa(len(validatorIDs)), - ) - } + stateRoot, _ := state.HashTreeRoot() + header := state.LatestBlockHeader + blockRoot, _ := header.HashTreeRoot() + logger.Info(logger.Sync, "checkpoint sync: slot=%d finalized_root=%x justified_root=%x head_root=%x parent_root=%x state_root=%x", + state.Slot, state.LatestFinalized.Root, state.LatestJustified.Root, blockRoot, header.ParentRoot, stateRoot) + initStoreFromState(s, state) + } else { + // Genesis. 
+ logger.Info(logger.Node, "initializing from genesis") + genesisState := genesisConfig.GenesisState() + initStoreFromState(s, genesisState) } - if *apiPort == 0 { - *apiEnabled = false - } - nodeCfg := node.Config{ - GenesisTime: genCfg.GenesisTime, - Validators: genCfg.Validators, - ListenAddr: *listenAddr, - NodeKeyPath: *nodeKey, - Bootnodes: bootnodes, - ValidatorIDs: validatorIDs, - ValidatorKeysDir: *validatorKeys, - MetricsPort: *metricsPort, - DiscoveryPort: *discoveryPort, - DataDir: *dataDir, - CheckpointSyncURL: *checkpointSyncURL, - DevnetID: *devnetID, - IsAggregator: *isAggregator, - APIHost: *apiHost, - APIPort: *apiPort, - APIEnabled: *apiEnabled, - } + // --- Initialize fork choice --- + + headRoot := s.Head() + headHeader := s.GetBlockHeader(headRoot) + fc := forkchoice.New(headHeader.Slot, headRoot) - n, err := node.New(nodeCfg) + // --- Initialize P2P --- + + ctx, cancel := context.WithCancel(context.Background()) + defer cancel() + + p2pHost, err := p2p.NewHost(ctx, *nodeKey, *gossipPort, *committeeCount) if err != nil { - logger.Error("failed to initialize node", "err", err) + logger.Error(logger.Network, "create p2p host: %v", err) os.Exit(1) } - defer n.Close() + defer p2pHost.Close() - ctx, cancel := context.WithCancel(context.Background()) - defer cancel() + logger.Info(logger.Network, "p2p: peer_id=%s listen_port=%d", p2pHost.PeerID(), *gossipPort) + + // Connect to bootnodes. + p2pHost.ConnectBootnodes(ctx, bootnodes) + p2pHost.StartBootnodeRedial(ctx, bootnodes) + + // --- Initialize engine --- + + n := node.New(s, fc, p2pHost, keyManager, *isAggregator, *committeeCount) + + // Register P2P stream handlers. 
+ p2pHost.RegisterReqRespHandlers( + func() *p2p.StatusMessage { + finalized := s.LatestFinalized() + return &p2p.StatusMessage{ + FinalizedRoot: finalized.Root, + FinalizedSlot: finalized.Slot, + HeadRoot: s.Head(), + HeadSlot: s.HeadSlot(), + } + }, + func(root [32]byte) *types.SignedBlockWithAttestation { + return s.GetSignedBlock(root) + }, + ) + + // Wire gossip handlers — P2P pushes to engine channels. + p2pHost.StartGossipListeners(n) + + // Start engine goroutine. + go n.Run(ctx) + + // --- Start HTTP servers --- + + apiAddr := fmt.Sprintf("%s:%d", *httpAddr, *apiPort) + metricsAddr := fmt.Sprintf("%s:%d", *httpAddr, *metricsPort) + + go func() { + if err := api.StartAPIServer(apiAddr, s); err != nil { + logger.Error(logger.Node, "api server error: %v", err) + } + }() - // Handle signals. - sigCh := make(chan os.Signal, 1) - signal.Notify(sigCh, syscall.SIGINT, syscall.SIGTERM) go func() { - <-sigCh - cancel() + if err := api.StartMetricsServer(metricsAddr); err != nil { + logger.Error(logger.Node, "metrics server error: %v", err) + } }() - if err := n.Run(ctx); err != nil { - logger.Error("node exited with error", "err", err) - os.Exit(1) - } + logger.Info(logger.Node, "gean started: api=%s metrics=%s aggregator=%v", apiAddr, metricsAddr, *isAggregator) + + // --- Wait for shutdown --- + + sigCh := make(chan os.Signal, 1) + signal.Notify(sigCh, syscall.SIGINT, syscall.SIGTERM) + <-sigCh + + logger.Info(logger.Node, "shutting down...") + cancel() + // Give engine goroutine time to exit before deferred backend.Close() runs. + time.Sleep(500 * time.Millisecond) } -func parseLevel(s string) slog.Level { - switch s { - case "debug": - return slog.LevelDebug - case "warn": - return slog.LevelWarn - case "error": - return slog.LevelError - default: - return slog.LevelInfo +// initStoreFromState initializes the consensus store from an anchor state. 
+// +// The anchor state becomes the new latest justified AND latest finalized +// checkpoint — both pointing at the served block at header.Slot. This +// matches the standard checkpoint sync convention: the bootstrapping node +// trusts the served state as the new finalization anchor and starts forward +// sync from there. +// +// Note: state.LatestJustified and state.LatestFinalized inside the served +// state point to EARLIER slots (the finalization status from when the block +// was processed). We deliberately do NOT use those — the served block IS +// the new anchor, regardless of what its internal pointers say. +func initStoreFromState(s *node.ConsensusStore, state *types.State) { + // Compute anchor block root from header. + stateRoot, _ := state.HashTreeRoot() + header := state.LatestBlockHeader + + // Fill state_root if zero (canonical post-state form from checkpoint server). + if header.StateRoot == types.ZeroRoot { + header.StateRoot = stateRoot } + blockRoot, _ := header.HashTreeRoot() + + // Anchor checkpoint: both justified and finalized point at the served block. + anchor := &types.Checkpoint{Root: blockRoot, Slot: header.Slot} + + // Store metadata. + s.SetConfig(state.Config) + s.SetHead(blockRoot) + s.SetSafeTarget(blockRoot) + s.SetLatestJustified(anchor) + s.SetLatestFinalized(anchor) + s.SetTime(0) + + // Store block header and state. + s.InsertBlockHeader(blockRoot, header) + s.InsertState(blockRoot, state) + s.InsertLiveChainEntry(state.Slot, blockRoot, header.ParentRoot) + + logger.Info(logger.Store, "store initialized from anchor: slot=%d head=%x parent_root=%x state_root=%x", + header.Slot, blockRoot, header.ParentRoot, stateRoot) } diff --git a/cmd/keygen/main.go b/cmd/keygen/main.go index 0c744cd..22f9e3e 100644 --- a/cmd/keygen/main.go +++ b/cmd/keygen/main.go @@ -1,62 +1,261 @@ package main +// Keygen generates all config files needed to run a standalone gean devnet. 
+// +// First run: generates XMSS keys, node keys, and all config files (~40s per validator). +// Subsequent runs: skips key generation, only refreshes config.yaml with new genesis time. +// +// Usage: +// go run ./cmd/keygen --validators 5 --nodes 3 --output testnet + +// #cgo linux LDFLAGS: -L${SRCDIR}/../../xmss/rust/target/release -lhashsig_glue -lmultisig_glue -lm -ldl -lpthread +// #cgo darwin LDFLAGS: -L${SRCDIR}/../../xmss/rust/target/release -lhashsig_glue -lmultisig_glue -lm -ldl -lpthread -framework CoreFoundation -framework SystemConfiguration -framework Security +// #include <stdint.h> +// #include <stdlib.h> +// typedef struct KeyPair KeyPair; +// typedef struct PublicKey PublicKey; +// typedef struct PrivateKey PrivateKey; +// +// KeyPair* hashsig_keypair_generate(const char* seed_phrase, +// size_t activation_epoch, size_t num_active_epochs); +// void hashsig_keypair_free(KeyPair* keypair); +// const PublicKey* hashsig_keypair_get_public_key(const KeyPair* keypair); +// const PrivateKey* hashsig_keypair_get_private_key(const KeyPair* keypair); +// size_t hashsig_public_key_to_bytes(const PublicKey* public_key, uint8_t* buffer, size_t buffer_len); +// size_t hashsig_private_key_to_bytes(const PrivateKey* private_key, uint8_t* buffer, size_t buffer_len); +import "C" + import ( + "crypto/rand" "encoding/hex" + "encoding/json" "flag" "fmt" + "log" "os" "path/filepath" + "time" + "unsafe" - "github.com/geanlabs/gean/xmss/leansig" + libp2pcrypto "github.com/libp2p/go-libp2p/core/crypto" + "github.com/libp2p/go-libp2p/core/peer" + + "github.com/geanlabs/gean/types" ) +// manifest stores generated key info so we can skip regeneration.
+type manifest struct { + Validators []validatorInfo `json:"validators"` + Nodes []nodeInfo `json:"nodes"` +} + +type validatorInfo struct { + Index int `json:"index"` + PubkeyHex string `json:"pubkey_hex"` + SkFile string `json:"sk_file"` +} + +type nodeInfo struct { + KeyFile string `json:"key_file"` + PeerID string `json:"peer_id"` +} + func main() { - count := flag.Int("validators", 5, "Number of keys to generate") - outDir := flag.String("keys-dir", "keys", "Output directory for keys") - printYAML := flag.Bool("print-yaml", false, "Print GENESIS_VALIDATORS yaml to stdout") + numValidators := flag.Int("validators", 5, "Number of validators to generate") + numNodes := flag.Int("nodes", 3, "Number of nodes") + outputDir := flag.String("output", "testnet", "Output directory") + basePort := flag.Int("base-port", 9000, "Base P2P port (incremented per node)") + flag.Parse() - if err := os.MkdirAll(*outDir, 0755); err != nil { - fmt.Fprintf(os.Stderr, "failed to create output directory: %v\n", err) - os.Exit(1) + if *numValidators < 1 || *numNodes < 1 { + log.Fatal("need at least 1 validator and 1 node") } - var pubkeys []string + os.MkdirAll(*outputDir, 0755) + keysDir := filepath.Join(*outputDir, "hash-sig-keys") + os.MkdirAll(keysDir, 0755) - fmt.Fprintf(os.Stderr, "Generating %d keys in %s...\n", *count, *outDir) - for i := 0; i < *count; i++ { - // Deterministic seed based on index - seed := uint64(i) - // Activation epoch 0, active for 256 epochs - kp, err := leansig.GenerateKeypair(seed, 0, 256) - if err != nil { - fmt.Fprintf(os.Stderr, "failed to generate keypair %d: %v\n", i, err) - os.Exit(1) + manifestPath := filepath.Join(*outputDir, "manifest.json") + + // Try to load existing manifest (skip key generation if valid). 
+ var m manifest + if existing, err := loadManifest(manifestPath); err == nil && + len(existing.Validators) == *numValidators && + len(existing.Nodes) == *numNodes && + keysExist(keysDir, existing.Validators) && + nodeKeysExist(*outputDir, existing.Nodes) { + + log.Printf("keys already exist (%d validators, %d nodes) — skipping generation", + len(existing.Validators), len(existing.Nodes)) + m = *existing + } else { + // Generate fresh keys. + m = generateKeys(*numValidators, *numNodes, *outputDir, keysDir, *basePort) + saveManifest(manifestPath, &m) + } + + // Always refresh config.yaml with fresh genesis time (30 seconds from now). + genesisTime := uint64(time.Now().Unix()) + 30 + writeConfigYAML(*outputDir, genesisTime, m.Validators) + writeAnnotatedValidatorsYAML(*outputDir, m.Validators, *numNodes) + writeNodesYAML(*outputDir, m.Nodes, *basePort) + + log.Println("---") + log.Printf("output: %s", *outputDir) + log.Printf("genesis time: %d (in 30 seconds: %s)", genesisTime, + time.Unix(int64(genesisTime), 0).Format(time.RFC3339)) + log.Printf("validators: %d, nodes: %d", len(m.Validators), len(m.Nodes)) + log.Println("") + log.Println("run immediately:") + log.Printf(" bin/gean --custom-network-config-dir %s --node-key %s/node0.key --node-id node0 --is-aggregator --data-dir data/node0", + *outputDir, *outputDir) +} + +func generateKeys(numValidators, numNodes int, outputDir, keysDir string, basePort int) manifest { + var m manifest + + // Generate XMSS validator keys. 
+ log.Printf("generating %d XMSS validator keys (this takes ~40s per key)...", numValidators) + for i := 0; i < numValidators; i++ { + seed := fmt.Sprintf("gean-testnet-validator-%d-%d", i, time.Now().UnixNano()) + log.Printf(" generating validator %d/%d...", i+1, numValidators) + + cSeed := C.CString(seed) + kp := C.hashsig_keypair_generate(cSeed, C.size_t(0), C.size_t(1<<18)) + C.free(unsafe.Pointer(cSeed)) + if kp == nil { + log.Fatalf("key generation failed for validator %d", i) } - defer kp.Free() - pkPath := filepath.Join(*outDir, fmt.Sprintf("validator_%d_pk.ssz", i)) - skPath := filepath.Join(*outDir, fmt.Sprintf("validator_%d_sk.ssz", i)) + // Serialize pubkey. + var pkBuf [256]byte + pkLen := C.hashsig_public_key_to_bytes( + C.hashsig_keypair_get_public_key(kp), + (*C.uint8_t)(unsafe.Pointer(&pkBuf[0])), + C.size_t(len(pkBuf)), + ) + if pkLen == 0 || int(pkLen) != types.PubkeySize { + log.Fatalf("pubkey serialization failed for validator %d", i) + } - if err := leansig.SaveKeypair(kp, pkPath, skPath); err != nil { - fmt.Fprintf(os.Stderr, "failed to save keypair %d: %v\n", i, err) - os.Exit(1) + // Serialize private key. 
+ skBuf := make([]byte, 10*1024*1024) + skLen := C.hashsig_private_key_to_bytes( + C.hashsig_keypair_get_private_key(kp), + (*C.uint8_t)(unsafe.Pointer(&skBuf[0])), + C.size_t(len(skBuf)), + ) + if skLen == 0 { + log.Fatalf("private key serialization failed for validator %d", i) } - pkBytes, err := kp.PublicKeyBytes() - if err != nil { - fmt.Fprintf(os.Stderr, "failed to get public key bytes %d: %v\n", i, err) - os.Exit(1) + C.hashsig_keypair_free(kp) + + skFile := fmt.Sprintf("validator_%d_sk.ssz", i) + skPath := filepath.Join(keysDir, skFile) + os.WriteFile(skPath, skBuf[:skLen], 0600) + + pubkeyHex := hex.EncodeToString(pkBuf[:pkLen]) + m.Validators = append(m.Validators, validatorInfo{ + Index: i, + PubkeyHex: pubkeyHex, + SkFile: skFile, + }) + + log.Printf(" validator %d: pubkey=%s...%s sk=%d bytes", + i, pubkeyHex[:8], pubkeyHex[len(pubkeyHex)-8:], skLen) + } + + // Generate node keys. + log.Printf("generating %d node keys...", numNodes) + for i := 0; i < numNodes; i++ { + keyBytes := make([]byte, 32) + rand.Read(keyBytes) + keyHex := hex.EncodeToString(keyBytes) + + keyFile := fmt.Sprintf("node%d.key", i) + keyPath := filepath.Join(outputDir, keyFile) + os.WriteFile(keyPath, []byte(keyHex), 0600) + + privKey, _ := libp2pcrypto.UnmarshalSecp256k1PrivateKey(keyBytes) + peerID, _ := peer.IDFromPrivateKey(privKey) + + m.Nodes = append(m.Nodes, nodeInfo{ + KeyFile: keyFile, + PeerID: peerID.String(), + }) + log.Printf(" node%d: peer_id=%s", i, peerID) + } + + return m +} + +func writeConfigYAML(outputDir string, genesisTime uint64, validators []validatorInfo) { + yaml := fmt.Sprintf("GENESIS_TIME: %d\nGENESIS_VALIDATORS:\n", genesisTime) + for _, v := range validators { + yaml += fmt.Sprintf(" - \"%s\"\n", v.PubkeyHex) + } + os.WriteFile(filepath.Join(outputDir, "config.yaml"), []byte(yaml), 0644) +} + +func writeAnnotatedValidatorsYAML(outputDir string, validators []validatorInfo, numNodes int) { + nodeValidators := make(map[int][]validatorInfo) + for _, v := 
range validators { + nodeIdx := v.Index % numNodes + nodeValidators[nodeIdx] = append(nodeValidators[nodeIdx], v) + } + yaml := "" + for i := 0; i < numNodes; i++ { + yaml += fmt.Sprintf("node%d:\n", i) + for _, v := range nodeValidators[i] { + yaml += fmt.Sprintf(" - index: %d\n pubkey_hex: %s\n privkey_file: %s\n", + v.Index, v.PubkeyHex, v.SkFile) } - pubkeys = append(pubkeys, hex.EncodeToString(pkBytes)) + } + os.WriteFile(filepath.Join(outputDir, "annotated_validators.yaml"), []byte(yaml), 0644) +} + +func writeNodesYAML(outputDir string, nodes []nodeInfo, basePort int) { + yaml := "" + for i, node := range nodes { + port := basePort + i + yaml += fmt.Sprintf("- \"/ip4/127.0.0.1/udp/%d/quic-v1/p2p/%s\"\n", port, node.PeerID) + } + os.WriteFile(filepath.Join(outputDir, "nodes.yaml"), []byte(yaml), 0644) +} + +func loadManifest(path string) (*manifest, error) { + data, err := os.ReadFile(path) + if err != nil { + return nil, err + } + var m manifest + if err := json.Unmarshal(data, &m); err != nil { + return nil, err + } + return &m, nil +} + +func saveManifest(path string, m *manifest) { + data, _ := json.MarshalIndent(m, "", " ") + os.WriteFile(path, data, 0644) +} - fmt.Fprintf(os.Stderr, "Generated keypair %d\n", i) +func keysExist(keysDir string, validators []validatorInfo) bool { + for _, v := range validators { + if _, err := os.Stat(filepath.Join(keysDir, v.SkFile)); err != nil { + return false + } } + return true +} - if *printYAML { - fmt.Println("GENESIS_VALIDATORS:") - for _, pk := range pubkeys { - fmt.Printf(" - \"0x%s\"\n", pk) +func nodeKeysExist(outputDir string, nodes []nodeInfo) bool { + for _, n := range nodes { + if _, err := os.Stat(filepath.Join(outputDir, n.KeyFile)); err != nil { + return false } } + return true } diff --git a/config/genesis.go b/config/genesis.go deleted file mode 100644 index b101c81..0000000 --- a/config/genesis.go +++ /dev/null @@ -1,60 +0,0 @@ -package config - -import ( - "encoding/hex" - "fmt" - "os" - "strings" - 
- "github.com/geanlabs/gean/types" - "gopkg.in/yaml.v3" -) - -// GenesisConfig represents the parsed config.yaml for genesis. -type GenesisConfig struct { - GenesisTime uint64 `yaml:"GENESIS_TIME"` - Validators []*types.Validator // populated from GENESIS_VALIDATORS -} - -// rawGenesisConfig is the on-disk YAML shape. -type rawGenesisConfig struct { - GenesisTime uint64 `yaml:"GENESIS_TIME"` - GenesisValidators []string `yaml:"GENESIS_VALIDATORS"` -} - -// LoadGenesisConfig loads and parses a genesis config YAML file. -func LoadGenesisConfig(path string) (*GenesisConfig, error) { - data, err := os.ReadFile(path) - if err != nil { - return nil, fmt.Errorf("read config: %w", err) - } - - var raw rawGenesisConfig - if err := yaml.Unmarshal(data, &raw); err != nil { - return nil, fmt.Errorf("parse config: %w", err) - } - - if len(raw.GenesisValidators) == 0 { - return nil, fmt.Errorf("GENESIS_VALIDATORS must not be empty") - } - - validators := make([]*types.Validator, len(raw.GenesisValidators)) - for i, hexStr := range raw.GenesisValidators { - hexStr = strings.TrimPrefix(hexStr, "0x") - pubkeyBytes, err := hex.DecodeString(hexStr) - if err != nil { - return nil, fmt.Errorf("invalid pubkey hex at index %d: %w", i, err) - } - if len(pubkeyBytes) != 52 { - return nil, fmt.Errorf("pubkey at index %d is %d bytes, want 52", i, len(pubkeyBytes)) - } - var pubkey [52]byte - copy(pubkey[:], pubkeyBytes) - validators[i] = &types.Validator{Pubkey: pubkey, Index: uint64(i)} - } - - return &GenesisConfig{ - GenesisTime: raw.GenesisTime, - Validators: validators, - }, nil -} diff --git a/config/genesis_config_test.go b/config/genesis_config_test.go deleted file mode 100644 index 4f5bed0..0000000 --- a/config/genesis_config_test.go +++ /dev/null @@ -1,111 +0,0 @@ -package config_test - -import ( - "os" - "path/filepath" - "testing" - - "github.com/geanlabs/gean/config" -) - -func TestLoadGenesisConfigParsesValidators(t *testing.T) { - yaml := ` -GENESIS_TIME: 1704085200 
-GENESIS_VALIDATORS: - - "e2a03c16122c7e0f940e2301aa460c54a2e1e8343968bb2782f26636f051e65ec589c858b9c7980b276ebe550056b23f0bdc3b5a" - - "0767e65924063f79ae92ee1953685f06718b1756cc665a299bd61b4b82055e377237595d9a27887421b5233d09a50832db2f303d" - - "d4355005bc37f76f390dcd2bcc51677d8c6ab44e0cc64913fb84ad459789a31105bd9a69afd2690ffd737d22ec6e3b31d47a642f" -` - path := writeTempYAML(t, yaml) - cfg, err := config.LoadGenesisConfig(path) - if err != nil { - t.Fatalf("LoadGenesisConfig: %v", err) - } - - if cfg.GenesisTime != 1704085200 { - t.Fatalf("GenesisTime = %d, want 1704085200", cfg.GenesisTime) - } - if len(cfg.Validators) != 3 { - t.Fatalf("len(Validators) = %d, want 3", len(cfg.Validators)) - } - for i, v := range cfg.Validators { - if v.Index != uint64(i) { - t.Errorf("Validators[%d].Index = %d, want %d", i, v.Index, i) - } - if v.Pubkey == [52]byte{} { - t.Errorf("Validators[%d].Pubkey is zero", i) - } - } - - // First byte of first pubkey should be 0xe2. - if cfg.Validators[0].Pubkey[0] != 0xe2 { - t.Errorf("Validators[0].Pubkey[0] = %x, want e2", cfg.Validators[0].Pubkey[0]) - } -} - -func TestLoadGenesisConfigAccepts0xPrefix(t *testing.T) { - yaml := ` -GENESIS_TIME: 1000 -GENESIS_VALIDATORS: - - "0xe2a03c16122c7e0f940e2301aa460c54a2e1e8343968bb2782f26636f051e65ec589c858b9c7980b276ebe550056b23f0bdc3b5a" -` - path := writeTempYAML(t, yaml) - cfg, err := config.LoadGenesisConfig(path) - if err != nil { - t.Fatalf("LoadGenesisConfig: %v", err) - } - if len(cfg.Validators) != 1 { - t.Fatalf("len(Validators) = %d, want 1", len(cfg.Validators)) - } - if cfg.Validators[0].Pubkey[0] != 0xe2 { - t.Errorf("Validators[0].Pubkey[0] = %x, want e2", cfg.Validators[0].Pubkey[0]) - } -} - -func TestLoadGenesisConfigRejectsEmptyValidators(t *testing.T) { - yaml := ` -GENESIS_TIME: 1000 -GENESIS_VALIDATORS: [] -` - path := writeTempYAML(t, yaml) - _, err := config.LoadGenesisConfig(path) - if err == nil { - t.Fatal("expected error for empty validators") - } -} - -func 
TestLoadGenesisConfigRejectsWrongPubkeyLength(t *testing.T) { - yaml := ` -GENESIS_TIME: 1000 -GENESIS_VALIDATORS: - - "aabbcc" -` - path := writeTempYAML(t, yaml) - _, err := config.LoadGenesisConfig(path) - if err == nil { - t.Fatal("expected error for wrong pubkey length") - } -} - -func TestLoadGenesisConfigRejectsInvalidHex(t *testing.T) { - yaml := ` -GENESIS_TIME: 1000 -GENESIS_VALIDATORS: - - "zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz" -` - path := writeTempYAML(t, yaml) - _, err := config.LoadGenesisConfig(path) - if err == nil { - t.Fatal("expected error for invalid hex") - } -} - -func writeTempYAML(t *testing.T, content string) string { - t.Helper() - dir := t.TempDir() - path := filepath.Join(dir, "config.yaml") - if err := os.WriteFile(path, []byte(content), 0644); err != nil { - t.Fatal(err) - } - return path -} diff --git a/config/nodes.go b/config/nodes.go deleted file mode 100644 index 738b0aa..0000000 --- a/config/nodes.go +++ /dev/null @@ -1,43 +0,0 @@ -package config - -import ( - "fmt" - "os" - - "gopkg.in/yaml.v3" -) - -// bootnodeEntry represents a bootnode with named fields (legacy format). -type bootnodeEntry struct { - Multiaddr string `yaml:"multiaddr"` -} - -// LoadBootnodes loads a nodes.yaml file and returns raw bootnode strings. -// Supports both formats: -// - Legacy: [{multiaddr: "/ip4/..."}] -// - ENR: ["enr:-IW4Q..."] -func LoadBootnodes(path string) ([]string, error) { - data, err := os.ReadFile(path) - if err != nil { - return nil, fmt.Errorf("read nodes: %w", err) - } - - // Try legacy struct format first. - var entries []bootnodeEntry - if err := yaml.Unmarshal(data, &entries); err == nil && len(entries) > 0 && entries[0].Multiaddr != "" { - out := make([]string, 0, len(entries)) - for _, e := range entries { - if e.Multiaddr != "" { - out = append(out, e.Multiaddr) - } - } - return out, nil - } - - // Fall back to plain string list (ENR or multiaddr strings). 
- var strs []string - if err := yaml.Unmarshal(data, &strs); err != nil { - return nil, fmt.Errorf("parse nodes: %w", err) - } - return strs, nil -} diff --git a/config/validators.go b/config/validators.go deleted file mode 100644 index c2c094d..0000000 --- a/config/validators.go +++ /dev/null @@ -1,74 +0,0 @@ -package config - -import ( - "fmt" - "os" - - "gopkg.in/yaml.v3" -) - -// ValidatorAssignment maps a node name to its validator indices. -type ValidatorAssignment struct { - NodeName string `yaml:"node_name"` - Validators []uint64 `yaml:"validators"` -} - -// ValidatorRegistry is the parsed validators.yaml. -type ValidatorRegistry struct { - Assignments []ValidatorAssignment `yaml:"assignments"` -} - -// LoadValidators loads and parses a validators.yaml file. -func LoadValidators(path string) (*ValidatorRegistry, error) { - data, err := os.ReadFile(path) - if err != nil { - return nil, fmt.Errorf("read validators: %w", err) - } - - var nodeMap map[string][]uint64 - if err := yaml.Unmarshal(data, &nodeMap); err == nil && len(nodeMap) > 0 { - reg := &ValidatorRegistry{} - for name, indices := range nodeMap { - reg.Assignments = append(reg.Assignments, ValidatorAssignment{ - NodeName: name, - Validators: indices, - }) - } - return reg, nil - } - - // Fall back to legacy struct format. - var reg ValidatorRegistry - if err := yaml.Unmarshal(data, ®); err != nil { - return nil, fmt.Errorf("parse validators: %w", err) - } - - return ®, nil -} - -// Validate checks for overlapping assignments and out-of-range indices. 
-func (r *ValidatorRegistry) Validate(numGenesisValidators uint64) error { - seen := make(map[uint64]string) - for _, a := range r.Assignments { - for _, idx := range a.Validators { - if idx >= numGenesisValidators { - return fmt.Errorf("validator %d in %s out of range (genesis has %d)", idx, a.NodeName, numGenesisValidators) - } - if prev, ok := seen[idx]; ok { - return fmt.Errorf("validator %d assigned to both %s and %s", idx, prev, a.NodeName) - } - seen[idx] = a.NodeName - } - } - return nil -} - -// GetValidatorIndices returns the validator indices for a given node name. -func (r *ValidatorRegistry) GetValidatorIndices(nodeName string) []uint64 { - for _, a := range r.Assignments { - if a.NodeName == nodeName { - return a.Validators - } - } - return nil -} diff --git a/config/validators_test.go b/config/validators_test.go deleted file mode 100644 index f3c3ed2..0000000 --- a/config/validators_test.go +++ /dev/null @@ -1,136 +0,0 @@ -package config - -import ( - "os" - "path/filepath" - "strings" - "testing" -) - -func TestValidateHappyPath(t *testing.T) { - reg := &ValidatorRegistry{ - Assignments: []ValidatorAssignment{ - {NodeName: "node-a", Validators: []uint64{0, 1}}, - {NodeName: "node-b", Validators: []uint64{2, 3}}, - }, - } - if err := reg.Validate(5); err != nil { - t.Fatalf("expected nil, got %v", err) - } -} - -func TestValidateOutOfRange(t *testing.T) { - reg := &ValidatorRegistry{ - Assignments: []ValidatorAssignment{ - {NodeName: "node-a", Validators: []uint64{0, 5}}, - }, - } - err := reg.Validate(5) - if err == nil { - t.Fatal("expected error for out-of-range validator index") - } - if !strings.Contains(err.Error(), "out of range") { - t.Fatalf("expected 'out of range' in error, got: %v", err) - } -} - -func TestValidateOverlap(t *testing.T) { - reg := &ValidatorRegistry{ - Assignments: []ValidatorAssignment{ - {NodeName: "node-a", Validators: []uint64{0, 1}}, - {NodeName: "node-b", Validators: []uint64{1, 2}}, - }, - } - err := reg.Validate(5) 
- if err == nil { - t.Fatal("expected error for overlapping validator assignments") - } - if !strings.Contains(err.Error(), "assigned to both") { - t.Fatalf("expected 'assigned to both' in error, got: %v", err) - } -} - -func TestValidateEmptyAssignments(t *testing.T) { - reg := &ValidatorRegistry{ - Assignments: []ValidatorAssignment{}, - } - if err := reg.Validate(5); err != nil { - t.Fatalf("expected nil for empty assignments, got %v", err) - } -} - -func TestValidateZeroGenesisValidators(t *testing.T) { - reg := &ValidatorRegistry{ - Assignments: []ValidatorAssignment{ - {NodeName: "node-a", Validators: []uint64{0}}, - }, - } - err := reg.Validate(0) - if err == nil { - t.Fatal("expected error when numGenesisValidators is 0") - } - if !strings.Contains(err.Error(), "out of range") { - t.Fatalf("expected 'out of range' in error, got: %v", err) - } -} - -func TestGetValidatorIndicesKnownNode(t *testing.T) { - reg := &ValidatorRegistry{ - Assignments: []ValidatorAssignment{ - {NodeName: "node-a", Validators: []uint64{0, 1}}, - {NodeName: "node-b", Validators: []uint64{2, 3}}, - }, - } - got := reg.GetValidatorIndices("node-b") - if len(got) != 2 || got[0] != 2 || got[1] != 3 { - t.Fatalf("expected [2, 3], got %v", got) - } -} - -func TestGetValidatorIndicesUnknownNode(t *testing.T) { - reg := &ValidatorRegistry{ - Assignments: []ValidatorAssignment{ - {NodeName: "node-a", Validators: []uint64{0, 1}}, - }, - } - got := reg.GetValidatorIndices("node-z") - if got != nil { - t.Fatalf("expected nil for unknown node, got %v", got) - } -} - -func TestLoadValidatorsFlatMap(t *testing.T) { - yaml := "ream_0:\n - 0\n - 1\nzeam_0:\n - 2\n - 3\n" - path := filepath.Join(t.TempDir(), "validators.yaml") - if err := os.WriteFile(path, []byte(yaml), 0644); err != nil { - t.Fatal(err) - } - reg, err := LoadValidators(path) - if err != nil { - t.Fatalf("LoadValidators: %v", err) - } - got := reg.GetValidatorIndices("ream_0") - if len(got) != 2 || got[0] != 0 || got[1] != 1 { - 
t.Fatalf("expected [0, 1] for ream_0, got %v", got) - } - got = reg.GetValidatorIndices("zeam_0") - if len(got) != 2 || got[0] != 2 || got[1] != 3 { - t.Fatalf("expected [2, 3] for zeam_0, got %v", got) - } -} - -func TestLoadValidatorsLegacy(t *testing.T) { - yaml := "assignments:\n - node_name: node0\n validators: [0, 1]\n" - path := filepath.Join(t.TempDir(), "validators.yaml") - if err := os.WriteFile(path, []byte(yaml), 0644); err != nil { - t.Fatal(err) - } - reg, err := LoadValidators(path) - if err != nil { - t.Fatalf("LoadValidators: %v", err) - } - got := reg.GetValidatorIndices("node0") - if len(got) != 2 || got[0] != 0 || got[1] != 1 { - t.Fatalf("expected [0, 1] for node0, got %v", got) - } -} diff --git a/forkchoice/forkchoice.go b/forkchoice/forkchoice.go new file mode 100644 index 0000000..c316999 --- /dev/null +++ b/forkchoice/forkchoice.go @@ -0,0 +1,132 @@ +package forkchoice + +// ForkChoice wraps a ProtoArray and VoteStore for LMD GHOST head selection. +type ForkChoice struct { + Array *ProtoArray + Votes *VoteStore +} + +// New creates a ForkChoice initialized with an anchor block. +func New(anchorSlot uint64, anchorRoot [32]byte) *ForkChoice { + return &ForkChoice{ + Array: NewProtoArray(anchorSlot, anchorRoot), + Votes: NewVoteStore(), + } +} + +// OnBlock registers a new block. +func (fc *ForkChoice) OnBlock(slot uint64, root, parentRoot [32]byte) { + fc.Array.OnBlock(slot, root, parentRoot) +} + +// UpdateHead computes the LMD GHOST head using known attestations. +// Returns the head root. +func (fc *ForkChoice) UpdateHead(justifiedRoot [32]byte) [32]byte { + deltas := ComputeDeltas(fc.Array.Len(), fc.Votes, true) + fc.Array.ApplyScoreChanges(deltas, 0) + return fc.Array.FindHead(justifiedRoot) +} + +// UpdateSafeTarget computes the head using a 2/3 supermajority threshold. +// Uses all attestations (both known and new merged) — fromKnown=false reads LatestNew +// which at call time should contain the merged pool. 
+func (fc *ForkChoice) UpdateSafeTarget(justifiedRoot [32]byte, numValidators uint64) [32]byte {
+ minScore := int64((2*numValidators + 2) / 3) // ceil(2n/3)
+ deltas := ComputeDeltas(fc.Array.Len(), fc.Votes, false)
+ fc.Array.ApplyScoreChanges(deltas, minScore)
+ return fc.Array.FindHead(justifiedRoot)
+}
+
+// Prune removes nodes below the finalized root.
+func (fc *ForkChoice) Prune(finalizedRoot [32]byte) {
+ fc.Array.Prune(finalizedRoot)
+}
+
+// NodeIndex returns the proto-array index for a root, or -1 if not found.
+func (fc *ForkChoice) NodeIndex(root [32]byte) int {
+ if idx, ok := fc.Array.indices[root]; ok {
+ return idx
+ }
+ return -1
+}
+
+// GetCanonicalAnalysis identifies canonical and non-canonical roots relative to an anchor.
+// Returns (canonical, nonCanonical) where canonical[0] is the anchor root.
+// Walks the proto-array tree to separate canonical from non-canonical blocks.
+func (fc *ForkChoice) GetCanonicalAnalysis(anchorRoot [32]byte) (canonical, nonCanonical [][32]byte) {
+ anchorIdx, ok := fc.Array.indices[anchorRoot]
+ if !ok {
+ return nil, nil
+ }
+
+ // Phase 1: collect the anchor and all of its descendants. A single forward
+ // pass suffices because a node's Parent index is always smaller than its own.
+ canonicalSet := make(map[[32]byte]bool)
+
+ // Walk from anchor forwards: a node is canonical if its parent is canonical. 
+ canonicalSet[fc.Array.nodes[anchorIdx].Root] = true + for i := anchorIdx + 1; i < len(fc.Array.nodes); i++ { + node := &fc.Array.nodes[i] + if node.Parent >= anchorIdx { + parentRoot := fc.Array.nodes[node.Parent].Root + if canonicalSet[parentRoot] { + canonicalSet[node.Root] = true + } + } + } + + // Phase 2: Segregate into canonical (at/below anchor slot) and non-canonical. + anchorSlot := fc.Array.nodes[anchorIdx].Slot + + for i := anchorIdx; i < len(fc.Array.nodes); i++ { + node := &fc.Array.nodes[i] + if canonicalSet[node.Root] { + if node.Slot <= anchorSlot { + canonical = append(canonical, node.Root) + } + // Descendants above anchor slot are kept (still live) + } else { + nonCanonical = append(nonCanonical, node.Root) + } + } + + return canonical, nonCanonical +} + +// GetCanonicalAncestorAtDepth returns the canonical block at depth steps back from head. +// Walks parent pointers from head backwards by depth steps. +func (fc *ForkChoice) GetCanonicalAncestorAtDepth(depth int) (root [32]byte, slot uint64, ok bool) { + if len(fc.Array.nodes) == 0 { + return [32]byte{}, 0, false + } + + // Start from the last node (head) and walk back. 
+ idx := len(fc.Array.nodes) - 1 + remaining := depth + if idx < remaining { + idx = 0 + remaining = 0 + } + + for remaining > 0 && idx > 0 { + parentIdx := fc.Array.nodes[idx].Parent + if parentIdx < 0 { + break + } + idx = parentIdx + remaining-- + } + + node := &fc.Array.nodes[idx] + return node.Root, node.Slot, true +} diff --git a/forkchoice/forkchoice_test.go b/forkchoice/forkchoice_test.go new file mode 100644 index 0000000..03839c1 --- /dev/null +++ b/forkchoice/forkchoice_test.go @@ -0,0 +1,326 @@ +package forkchoice + +import ( + "testing" + + "github.com/geanlabs/gean/types" +) + +func root(b byte) [32]byte { + var r [32]byte + r[0] = b + return r +} + +func makeAttData(headRoot [32]byte, slot uint64) *types.AttestationData { + return &types.AttestationData{ + Slot: slot, + Head: &types.Checkpoint{Root: headRoot, Slot: slot}, + Target: &types.Checkpoint{}, + Source: &types.Checkpoint{}, + } +} + +// --- Spec implementation tests (rs tests) --- + +func TestSpecComputeBlockWeights(t *testing.T) { + // Chain: root_a (slot 0) -> root_b (slot 1) -> root_c (slot 2) + rootA, rootB, rootC := root(1), root(2), root(3) + blocks := map[[32]byte]BlockInfo{ + rootA: {Slot: 0, ParentRoot: [32]byte{}}, + rootB: {Slot: 1, ParentRoot: rootA}, + rootC: {Slot: 2, ParentRoot: rootB}, + } + attestations := map[uint64]*types.AttestationData{ + 0: makeAttData(rootC, 2), + 1: makeAttData(rootB, 1), + } + + weights := SpecComputeBlockWeights(0, blocks, attestations) + + // rootC: 1 vote (validator 0) + if weights[rootC] != 1 { + t.Fatalf("rootC weight: expected 1, got %d", weights[rootC]) + } + // rootB: 2 votes (validator 0 walks through + validator 1 direct) + if weights[rootB] != 2 { + t.Fatalf("rootB weight: expected 2, got %d", weights[rootB]) + } + // rootA: at slot 0 = start_slot, not counted + if weights[rootA] != 0 { + t.Fatalf("rootA weight: expected 0, got %d", weights[rootA]) + } +} + +func TestSpecComputeBlockWeightsEmpty(t *testing.T) { + weights := 
SpecComputeBlockWeights(0, nil, nil) + if len(weights) != 0 { + t.Fatal("expected empty weights") + } +} + +func TestSpecLMDGhostLinearChain(t *testing.T) { + rootA, rootB, rootC := root(1), root(2), root(3) + blocks := map[[32]byte]BlockInfo{ + rootA: {Slot: 0, ParentRoot: [32]byte{}}, + rootB: {Slot: 1, ParentRoot: rootA}, + rootC: {Slot: 2, ParentRoot: rootB}, + } + attestations := map[uint64]*types.AttestationData{ + 0: makeAttData(rootC, 2), + } + + head, _ := SpecComputeLMDGhostHead(rootA, blocks, attestations, 0) + if head != rootC { + t.Fatalf("expected rootC, got %x", head[:4]) + } +} + +func TestSpecLMDGhostForkHeavier(t *testing.T) { + rootA, rootB, rootC := root(1), root(2), root(3) + blocks := map[[32]byte]BlockInfo{ + rootA: {Slot: 0, ParentRoot: [32]byte{}}, + rootB: {Slot: 1, ParentRoot: rootA}, + rootC: {Slot: 1, ParentRoot: rootA}, + } + // 2 votes for rootB, 1 for rootC -> rootB wins + attestations := map[uint64]*types.AttestationData{ + 0: makeAttData(rootB, 1), + 1: makeAttData(rootB, 1), + 2: makeAttData(rootC, 1), + } + + head, _ := SpecComputeLMDGhostHead(rootA, blocks, attestations, 0) + if head != rootB { + t.Fatalf("expected rootB (heavier), got %x", head[:4]) + } +} + +func TestSpecLMDGhostTiebreakLexicographic(t *testing.T) { + rootA := root(1) + rootB := root(2) // smaller + rootC := root(3) // larger -> wins tiebreak + blocks := map[[32]byte]BlockInfo{ + rootA: {Slot: 0, ParentRoot: [32]byte{}}, + rootB: {Slot: 1, ParentRoot: rootA}, + rootC: {Slot: 1, ParentRoot: rootA}, + } + // Equal weight: 1 vote each -> lexicographic tiebreak, rootC > rootB + attestations := map[uint64]*types.AttestationData{ + 0: makeAttData(rootB, 1), + 1: makeAttData(rootC, 1), + } + + head, _ := SpecComputeLMDGhostHead(rootA, blocks, attestations, 0) + if head != rootC { + t.Fatalf("expected rootC (lexicographic tiebreak), got %x", head[:4]) + } +} + +// --- Proto-array tests --- + +func TestProtoArrayLinearChain(t *testing.T) { + rootA, rootB, rootC := 
root(1), root(2), root(3) + fc := New(0, rootA) + fc.OnBlock(1, rootB, rootA) + fc.OnBlock(2, rootC, rootB) + + // Validator 0 attests to rootC + fc.Votes.SetKnown(0, fc.NodeIndex(rootC), 2, makeAttData(rootC, 2)) + + head := fc.UpdateHead(rootA) + if head != rootC { + t.Fatalf("expected rootC, got %x", head[:4]) + } +} + +func TestProtoArrayForkHeavier(t *testing.T) { + rootA, rootB, rootC := root(1), root(2), root(3) + fc := New(0, rootA) + fc.OnBlock(1, rootB, rootA) + fc.OnBlock(1, rootC, rootA) + + // 2 votes for rootB, 1 for rootC + fc.Votes.SetKnown(0, fc.NodeIndex(rootB), 1, makeAttData(rootB, 1)) + fc.Votes.SetKnown(1, fc.NodeIndex(rootB), 1, makeAttData(rootB, 1)) + fc.Votes.SetKnown(2, fc.NodeIndex(rootC), 1, makeAttData(rootC, 1)) + + head := fc.UpdateHead(rootA) + if head != rootB { + t.Fatalf("expected rootB (heavier), got %x", head[:4]) + } +} + +func TestProtoArrayTiebreakLexicographic(t *testing.T) { + rootA := root(1) + rootB := root(2) + rootC := root(3) // larger -> wins + fc := New(0, rootA) + fc.OnBlock(1, rootB, rootA) + fc.OnBlock(1, rootC, rootA) + + fc.Votes.SetKnown(0, fc.NodeIndex(rootB), 1, makeAttData(rootB, 1)) + fc.Votes.SetKnown(1, fc.NodeIndex(rootC), 1, makeAttData(rootC, 1)) + + head := fc.UpdateHead(rootA) + if head != rootC { + t.Fatalf("expected rootC (tiebreak), got %x", head[:4]) + } +} + +func TestProtoArrayNoAttestations(t *testing.T) { + rootA := root(1) + fc := New(0, rootA) + head := fc.UpdateHead(rootA) + if head != rootA { + t.Fatalf("expected rootA with no attestations, got %x", head[:4]) + } +} + +func TestProtoArrayVoteChange(t *testing.T) { + rootA, rootB, rootC := root(1), root(2), root(3) + fc := New(0, rootA) + fc.OnBlock(1, rootB, rootA) + fc.OnBlock(1, rootC, rootA) + + // Initially vote for rootB + fc.Votes.SetKnown(0, fc.NodeIndex(rootB), 1, makeAttData(rootB, 1)) + head := fc.UpdateHead(rootA) + if head != rootB { + t.Fatalf("expected rootB initially, got %x", head[:4]) + } + + // Change vote to rootC + 
fc.Votes.SetKnown(0, fc.NodeIndex(rootC), 1, makeAttData(rootC, 1)) + head = fc.UpdateHead(rootA) + if head != rootC { + t.Fatalf("expected rootC after vote change, got %x", head[:4]) + } +} + +func TestProtoArrayPrune(t *testing.T) { + rootA, rootB, rootC := root(1), root(2), root(3) + fc := New(0, rootA) + fc.OnBlock(1, rootB, rootA) + fc.OnBlock(2, rootC, rootB) + + if fc.Array.Len() != 3 { + t.Fatalf("expected 3 nodes, got %d", fc.Array.Len()) + } + + fc.Prune(rootB) + + if fc.Array.Len() != 2 { + t.Fatalf("expected 2 nodes after prune, got %d", fc.Array.Len()) + } + if fc.NodeIndex(rootA) != -1 { + t.Fatal("rootA should be pruned") + } + if fc.NodeIndex(rootB) < 0 { + t.Fatal("rootB should still exist") + } +} + +func TestProtoArrayDeepChain(t *testing.T) { + roots := make([][32]byte, 10) + for i := range roots { + roots[i] = root(byte(i + 1)) + } + fc := New(0, roots[0]) + for i := 1; i < 10; i++ { + fc.OnBlock(uint64(i), roots[i], roots[i-1]) + } + + // Attest to tip + fc.Votes.SetKnown(0, fc.NodeIndex(roots[9]), 9, makeAttData(roots[9], 9)) + head := fc.UpdateHead(roots[0]) + if head != roots[9] { + t.Fatalf("expected root[9], got %x", head[:4]) + } +} + +// --- Debug oracle: verify proto-array matches spec --- + +func TestDebugOracleLinearChain(t *testing.T) { + rootA, rootB, rootC := root(1), root(2), root(3) + + // Spec + blocks := map[[32]byte]BlockInfo{ + rootA: {Slot: 0, ParentRoot: [32]byte{}}, + rootB: {Slot: 1, ParentRoot: rootA}, + rootC: {Slot: 2, ParentRoot: rootB}, + } + attestations := map[uint64]*types.AttestationData{ + 0: makeAttData(rootC, 2), + 1: makeAttData(rootB, 1), + } + specHead, _ := SpecComputeLMDGhostHead(rootA, blocks, attestations, 0) + + // Proto-array + fc := New(0, rootA) + fc.OnBlock(1, rootB, rootA) + fc.OnBlock(2, rootC, rootB) + fc.Votes.SetKnown(0, fc.NodeIndex(rootC), 2, makeAttData(rootC, 2)) + fc.Votes.SetKnown(1, fc.NodeIndex(rootB), 1, makeAttData(rootB, 1)) + protoHead := fc.UpdateHead(rootA) + + if specHead != 
protoHead { + t.Fatalf("ORACLE MISMATCH: spec=%x proto=%x", specHead[:4], protoHead[:4]) + } +} + +func TestDebugOracleFork(t *testing.T) { + rootA, rootB, rootC := root(1), root(2), root(3) + + blocks := map[[32]byte]BlockInfo{ + rootA: {Slot: 0, ParentRoot: [32]byte{}}, + rootB: {Slot: 1, ParentRoot: rootA}, + rootC: {Slot: 1, ParentRoot: rootA}, + } + attestations := map[uint64]*types.AttestationData{ + 0: makeAttData(rootB, 1), + 1: makeAttData(rootB, 1), + 2: makeAttData(rootC, 1), + } + specHead, _ := SpecComputeLMDGhostHead(rootA, blocks, attestations, 0) + + fc := New(0, rootA) + fc.OnBlock(1, rootB, rootA) + fc.OnBlock(1, rootC, rootA) + fc.Votes.SetKnown(0, fc.NodeIndex(rootB), 1, makeAttData(rootB, 1)) + fc.Votes.SetKnown(1, fc.NodeIndex(rootB), 1, makeAttData(rootB, 1)) + fc.Votes.SetKnown(2, fc.NodeIndex(rootC), 1, makeAttData(rootC, 1)) + protoHead := fc.UpdateHead(rootA) + + if specHead != protoHead { + t.Fatalf("ORACLE MISMATCH: spec=%x proto=%x", specHead[:4], protoHead[:4]) + } +} + +func TestDebugOracleTiebreak(t *testing.T) { + rootA := root(1) + rootB := root(2) + rootC := root(3) + + blocks := map[[32]byte]BlockInfo{ + rootA: {Slot: 0, ParentRoot: [32]byte{}}, + rootB: {Slot: 1, ParentRoot: rootA}, + rootC: {Slot: 1, ParentRoot: rootA}, + } + attestations := map[uint64]*types.AttestationData{ + 0: makeAttData(rootB, 1), + 1: makeAttData(rootC, 1), + } + specHead, _ := SpecComputeLMDGhostHead(rootA, blocks, attestations, 0) + + fc := New(0, rootA) + fc.OnBlock(1, rootB, rootA) + fc.OnBlock(1, rootC, rootA) + fc.Votes.SetKnown(0, fc.NodeIndex(rootB), 1, makeAttData(rootB, 1)) + fc.Votes.SetKnown(1, fc.NodeIndex(rootC), 1, makeAttData(rootC, 1)) + protoHead := fc.UpdateHead(rootA) + + if specHead != protoHead { + t.Fatalf("ORACLE MISMATCH: spec=%x proto=%x", specHead[:4], protoHead[:4]) + } +} diff --git a/forkchoice/protoarray.go b/forkchoice/protoarray.go new file mode 100644 index 0000000..37cd665 --- /dev/null +++ b/forkchoice/protoarray.go 
@@ -0,0 +1,192 @@
+package forkchoice
+
+import "bytes"
+
+// ProtoNode is a single block in the proto-array tree.
+type ProtoNode struct {
+	Slot           uint64
+	Root           [32]byte
+	ParentRoot     [32]byte
+	Parent         int   // index into nodes, -1 if none
+	Weight         int64 // accumulated attestation weight
+	BestChild      int   // index, -1 if none
+	BestDescendant int   // index, -1 if none
+}
+
+// ProtoArray is a flat array representing the block tree for O(n) fork choice.
+type ProtoArray struct {
+	nodes   []ProtoNode
+	indices map[[32]byte]int // root -> index
+}
+
+// NewProtoArray creates a proto-array with an anchor block.
+func NewProtoArray(anchorSlot uint64, anchorRoot [32]byte) *ProtoArray {
+	pa := &ProtoArray{
+		indices: make(map[[32]byte]int),
+	}
+	pa.nodes = append(pa.nodes, ProtoNode{
+		Slot:           anchorSlot,
+		Root:           anchorRoot,
+		ParentRoot:     [32]byte{},
+		Parent:         -1,
+		Weight:         0,
+		BestChild:      -1,
+		BestDescendant: -1,
+	})
+	pa.indices[anchorRoot] = 0
+	return pa
+}
+
+// OnBlock registers a new block in the proto-array.
+func (pa *ProtoArray) OnBlock(slot uint64, root, parentRoot [32]byte) {
+	if _, exists := pa.indices[root]; exists {
+		return // already registered
+	}
+	nodeIndex := len(pa.nodes)
+	parentIdx := -1
+	if idx, ok := pa.indices[parentRoot]; ok {
+		parentIdx = idx
+	}
+
+	pa.nodes = append(pa.nodes, ProtoNode{
+		Slot:           slot,
+		Root:           root,
+		ParentRoot:     parentRoot,
+		Parent:         parentIdx,
+		Weight:         0,
+		BestChild:      -1,
+		BestDescendant: -1,
+	})
+	pa.indices[root] = nodeIndex
+}
+
+// ApplyScoreChanges propagates weight deltas backward through the array
+// and recalculates bestChild/bestDescendant.
+func (pa *ProtoArray) ApplyScoreChanges(deltas []int64, cutoffWeight int64) {
+	if len(deltas) != len(pa.nodes) {
+		return // malformed input: delta vector must match node count
+	}
+
+	// Pass 1: iterate backward, apply deltas and propagate to parents.
+	for i := len(pa.nodes) - 1; i >= 0; i-- {
+		pa.nodes[i].Weight += deltas[i]
+		if pa.nodes[i].Parent >= 0 {
+			deltas[pa.nodes[i].Parent] += deltas[i]
+		}
+	}
+
+	// Pass 2: iterate backward, recalculate bestChild and bestDescendant.
+	for i := len(pa.nodes) - 1; i >= 0; i-- {
+		parentIdx := pa.nodes[i].Parent
+		if parentIdx < 0 {
+			continue
+		}
+
+		// This node's best descendant, or itself if it meets the cutoff.
+		nodeBestDesc := pa.nodes[i].BestDescendant
+		if nodeBestDesc < 0 && pa.nodes[i].Weight >= cutoffWeight {
+			nodeBestDesc = i
+		}
+
+		parent := &pa.nodes[parentIdx]
+		shouldUpdate := false
+
+		if parent.BestChild == i {
+			// Already the best child; just refresh the descendant if it changed.
+			if parent.BestDescendant != nodeBestDesc {
+				shouldUpdate = true
+			}
+		} else if parent.BestChild >= 0 {
+			bestChild := &pa.nodes[parent.BestChild]
+			if bestChild.Weight < pa.nodes[i].Weight {
+				shouldUpdate = true
+			} else if bestChild.Weight == pa.nodes[i].Weight {
+				// Tie-break: lexicographically larger root wins (leanSpec-compatible).
+				if bytes.Compare(bestChild.Root[:], pa.nodes[i].Root[:]) < 0 {
+					shouldUpdate = true
+				}
+			}
+		} else {
+			// No best child yet.
+			shouldUpdate = true
+		}
+
+		if shouldUpdate {
+			parent.BestChild = i
+			parent.BestDescendant = nodeBestDesc
+		}
+	}
+}
+
+// FindHead returns the head root by walking the bestDescendant chain from justifiedRoot.
+func (pa *ProtoArray) FindHead(justifiedRoot [32]byte) [32]byte {
+	idx, ok := pa.indices[justifiedRoot]
+	if !ok {
+		return justifiedRoot
+	}
+	bestDesc := pa.nodes[idx].BestDescendant
+	if bestDesc < 0 {
+		return justifiedRoot
+	}
+	return pa.nodes[bestDesc].Root
+}
+
+// FindHeadWithThreshold is like FindHead but with a minimum weight cutoff.
+// Used for safe target computation (2/3 threshold).
+func (pa *ProtoArray) FindHeadWithThreshold(justifiedRoot [32]byte, minScore int64) [32]byte { + return pa.FindHead(justifiedRoot) // cutoff applied during ApplyScoreChanges +} + +// Prune removes all nodes below the finalized root. +func (pa *ProtoArray) Prune(finalizedRoot [32]byte) { + finalizedIdx, ok := pa.indices[finalizedRoot] + if !ok || finalizedIdx == 0 { + return + } + + // Remove pruned nodes from indices. + for i := 0; i < finalizedIdx; i++ { + delete(pa.indices, pa.nodes[i].Root) + } + + // Shift nodes. + pa.nodes = pa.nodes[finalizedIdx:] + + // Rebuild indices. + newIndices := make(map[[32]byte]int, len(pa.nodes)) + for i := range pa.nodes { + newIndices[pa.nodes[i].Root] = i + // Adjust parent pointers. + if pa.nodes[i].Parent >= 0 { + pa.nodes[i].Parent -= finalizedIdx + if pa.nodes[i].Parent < 0 { + pa.nodes[i].Parent = -1 + } + } + if pa.nodes[i].BestChild >= 0 { + pa.nodes[i].BestChild -= finalizedIdx + if pa.nodes[i].BestChild < 0 { + pa.nodes[i].BestChild = -1 + } + } + if pa.nodes[i].BestDescendant >= 0 { + pa.nodes[i].BestDescendant -= finalizedIdx + if pa.nodes[i].BestDescendant < 0 { + pa.nodes[i].BestDescendant = -1 + } + } + } + pa.indices = newIndices +} + +// Len returns the number of nodes. +func (pa *ProtoArray) Len() int { + return len(pa.nodes) +} diff --git a/forkchoice/spec.go b/forkchoice/spec.go new file mode 100644 index 0000000..b215841 --- /dev/null +++ b/forkchoice/spec.go @@ -0,0 +1,119 @@ +package forkchoice + +import "github.com/geanlabs/gean/types" + +// Spec-compliant LMD GHOST implementation for testing. +// Used as debug oracle to validate proto-array produces identical results. + +// SpecComputeBlockWeights computes per-block attestation weights. +// For each attestation, walks backward from head through parent chain, +// incrementing weight for each block above startSlot. 
+func SpecComputeBlockWeights( + startSlot uint64, + blocks map[[32]byte]BlockInfo, + attestations map[uint64]*types.AttestationData, +) map[[32]byte]uint64 { + weights := make(map[[32]byte]uint64) + + for _, data := range attestations { + current := data.Head.Root + for { + info, ok := blocks[current] + if !ok || info.Slot <= startSlot { + break + } + weights[current]++ + current = info.ParentRoot + } + } + + return weights +} + +// SpecComputeLMDGhostHead computes the LMD GHOST head. +func SpecComputeLMDGhostHead( + startRoot [32]byte, + blocks map[[32]byte]BlockInfo, + attestations map[uint64]*types.AttestationData, + minScore uint64, +) ([32]byte, map[[32]byte]uint64) { + if len(blocks) == 0 { + return startRoot, nil + } + + // If start root is zero, use the block with the lowest slot. + if startRoot == [32]byte{} { + var minSlot uint64 = ^uint64(0) + for root, info := range blocks { + if info.Slot < minSlot { + minSlot = info.Slot + startRoot = root + } + } + } + + startInfo, ok := blocks[startRoot] + if !ok { + return startRoot, nil + } + startSlot := startInfo.Slot + + weights := SpecComputeBlockWeights(startSlot, blocks, attestations) + + // Build children map, filtering by min_score. + children := make(map[[32]byte][][32]byte) + for root, info := range blocks { + if info.ParentRoot == [32]byte{} { + continue + } + if minScore > 0 { + w := weights[root] + if w < minScore { + continue + } + } + children[info.ParentRoot] = append(children[info.ParentRoot], root) + } + + // Greedy descent: pick best child (most weight, then lexicographic). + head := startRoot + for { + kids, ok := children[head] + if !ok || len(kids) == 0 { + break + } + best := kids[0] + bestWeight := weights[best] + for _, kid := range kids[1:] { + w := weights[kid] + if w > bestWeight { + best = kid + bestWeight = w + } else if w == bestWeight { + // Lexicographic tiebreak: larger root wins. 
+				if rootGreaterThan(kid, best) {
+					best = kid
+					bestWeight = w
+				}
+			}
+		}
+		head = best
+	}
+
+	return head, weights
+}
+
+// BlockInfo is the minimal block data for spec fork choice.
+type BlockInfo struct {
+	Slot       uint64
+	ParentRoot [32]byte
+}
+
+func rootGreaterThan(a, b [32]byte) bool {
+	for i := 0; i < 32; i++ {
+		if a[i] != b[i] {
+			return a[i] > b[i]
+		}
+	}
+	return false
+}
diff --git a/forkchoice/votes.go b/forkchoice/votes.go
new file mode 100644
index 0000000..40cac55
--- /dev/null
+++ b/forkchoice/votes.go
@@ -0,0 +1,93 @@
+package forkchoice
+
+import "github.com/geanlabs/gean/types"
+
+// VoteTracker tracks per-validator attestation targets for delta computation.
+type VoteTracker struct {
+	AppliedIndex int // index of last applied vote, -1 if none
+	LatestKnown  *VoteTarget
+	LatestNew    *VoteTarget
+}
+
+// VoteTarget is a resolved attestation pointing to a proto-array index.
+type VoteTarget struct {
+	Index int // proto-array node index
+	Slot  uint64
+	Data  *types.AttestationData
+}
+
+// VoteStore holds per-validator vote trackers.
+type VoteStore struct {
+	Votes map[uint64]*VoteTracker // validator_id -> tracker
+}
+
+// NewVoteStore creates an empty vote store.
+func NewVoteStore() *VoteStore {
+	return &VoteStore{Votes: make(map[uint64]*VoteTracker)}
+}
+
+// SetKnown records a known (on-chain) attestation for a validator.
+func (vs *VoteStore) SetKnown(validatorID uint64, nodeIndex int, slot uint64, data *types.AttestationData) {
+	tracker := vs.getOrCreate(validatorID)
+	tracker.LatestKnown = &VoteTarget{Index: nodeIndex, Slot: slot, Data: data}
+}
+
+// SetNew records a new (gossip-received) attestation for a validator.
+func (vs *VoteStore) SetNew(validatorID uint64, nodeIndex int, slot uint64, data *types.AttestationData) {
+	tracker := vs.getOrCreate(validatorID)
+	tracker.LatestNew = &VoteTarget{Index: nodeIndex, Slot: slot, Data: data}
+}
+
+// PromoteNewToKnown moves all new votes to known.
+func (vs *VoteStore) PromoteNewToKnown() {
+	for _, tracker := range vs.Votes {
+		if tracker.LatestNew != nil {
+			tracker.LatestKnown = tracker.LatestNew
+			tracker.LatestNew = nil
+		}
+	}
+}
+
+func (vs *VoteStore) getOrCreate(validatorID uint64) *VoteTracker {
+	t, ok := vs.Votes[validatorID]
+	if !ok {
+		t = &VoteTracker{AppliedIndex: -1}
+		vs.Votes[validatorID] = t
+	}
+	return t
+}
+
+// ComputeDeltas computes weight deltas from vote changes.
+//
+// For each validator:
+//   - Remove weight from the previously applied index (if any)
+//   - Add weight to the current target index (from the known or new pool)
+//
+// Each validator has weight 1.
+func ComputeDeltas(numNodes int, votes *VoteStore, fromKnown bool) []int64 {
+	deltas := make([]int64, numNodes)
+
+	for _, tracker := range votes.Votes {
+		// Remove previous vote.
+		if tracker.AppliedIndex >= 0 && tracker.AppliedIndex < numNodes {
+			deltas[tracker.AppliedIndex]--
+		}
+		tracker.AppliedIndex = -1
+
+		// Apply current vote.
+		var target *VoteTarget
+		if fromKnown {
+			target = tracker.LatestKnown
+		} else {
+			target = tracker.LatestNew
+		}
+
+		// Skip unresolved targets (Index is -1 when the attested root is unknown).
+		if target != nil && target.Index >= 0 && target.Index < numNodes {
+			deltas[target.Index]++
+			tracker.AppliedIndex = target.Index
+		}
+	}
+
+	return deltas
+}
diff --git a/genesis/config.go b/genesis/config.go
new file mode 100644
index 0000000..b7d99b8
--- /dev/null
+++ b/genesis/config.go
@@ -0,0 +1,82 @@
+package genesis
+
+import (
+	"encoding/hex"
+	"fmt"
+	"os"
+	"strings"
+
+	"github.com/geanlabs/gean/types"
+	"gopkg.in/yaml.v3"
+)
+
+// GenesisConfig is parsed from config.yaml.
+type GenesisConfig struct {
+	GenesisTime       uint64   `yaml:"GENESIS_TIME"`
+	GenesisValidators []string `yaml:"GENESIS_VALIDATORS"`
+}
+
+// Validators converts hex pubkey strings to typed Validators with sequential indices.
+func (gc *GenesisConfig) Validators() []*types.Validator { + validators := make([]*types.Validator, len(gc.GenesisValidators)) + for i, hexStr := range gc.GenesisValidators { + hexStr = strings.TrimPrefix(strings.TrimSpace(hexStr), "0x") + pkBytes, err := hex.DecodeString(hexStr) + if err != nil || len(pkBytes) != types.PubkeySize { + panic(fmt.Sprintf("GENESIS_VALIDATORS[%d] invalid: %s", i, hexStr)) + } + var pubkey [types.PubkeySize]byte + copy(pubkey[:], pkBytes) + validators[i] = &types.Validator{ + Pubkey: pubkey, + Index: uint64(i), + } + } + return validators +} + +// GenesisState creates the genesis state from config. +func (gc *GenesisConfig) GenesisState() *types.State { + validators := gc.Validators() + + // Genesis block header with empty body root. + emptyBody := &types.BlockBody{} + bodyRoot, _ := emptyBody.HashTreeRoot() + + return &types.State{ + Config: &types.ChainConfig{GenesisTime: gc.GenesisTime}, + Slot: 0, + LatestBlockHeader: &types.BlockHeader{ + Slot: 0, + ProposerIndex: 0, + ParentRoot: types.ZeroRoot, + StateRoot: types.ZeroRoot, + BodyRoot: bodyRoot, + }, + LatestJustified: &types.Checkpoint{Root: types.ZeroRoot, Slot: 0}, + LatestFinalized: &types.Checkpoint{Root: types.ZeroRoot, Slot: 0}, + Validators: validators, + JustifiedSlots: types.NewBitlistSSZ(0), + JustificationsValidators: types.NewBitlistSSZ(0), + } +} + +// LoadGenesisConfig reads and parses config.yaml. 
+func LoadGenesisConfig(path string) (*GenesisConfig, error) { + data, err := os.ReadFile(path) + if err != nil { + return nil, fmt.Errorf("read config.yaml: %w", err) + } + var config GenesisConfig + if err := yaml.Unmarshal(data, &config); err != nil { + return nil, fmt.Errorf("parse config.yaml: %w", err) + } + if config.GenesisTime == 0 { + return nil, fmt.Errorf("GENESIS_TIME is 0 or missing") + } + if len(config.GenesisValidators) == 0 { + return nil, fmt.Errorf("GENESIS_VALIDATORS is empty") + } + return &config, nil +} diff --git a/genesis/genesis_test.go b/genesis/genesis_test.go new file mode 100644 index 0000000..7c90a5f --- /dev/null +++ b/genesis/genesis_test.go @@ -0,0 +1,82 @@ +package genesis + +import ( + "os" + "testing" + + "github.com/geanlabs/gean/types" +) + +const testConfigYAML = `GENESIS_TIME: 1770407233 +GENESIS_VALIDATORS: + - "cd323f232b34ab26d6db7402c886e74ca81cfd3a0c659d2fe022356f25592f7d2d25ca7b19604f5a180037046cf2a02e1da4a800" + - "b7b0f72e24801b02bda64073cb4de6699a416b37dfead227d7ca3922647c940fa03e4c012e8a0e656b731934aeac124a5337e333" + - "8d9cbc508b20ef43e165f8559c1bdd18aaeda805ef565a4f9ffd6e4fbed01c05e143e305017847445859650d6dd06e6efb3f8410" +` + +func TestLoadGenesisConfig(t *testing.T) { + tmpFile := t.TempDir() + "/config.yaml" + os.WriteFile(tmpFile, []byte(testConfigYAML), 0644) + + config, err := LoadGenesisConfig(tmpFile) + if err != nil { + t.Fatalf("load: %v", err) + } + if config.GenesisTime != 1770407233 { + t.Fatalf("genesis time: expected 1770407233, got %d", config.GenesisTime) + } + if len(config.GenesisValidators) != 3 { + t.Fatalf("validators: expected 3, got %d", len(config.GenesisValidators)) + } +} + +func TestValidators(t *testing.T) { + tmpFile := t.TempDir() + "/config.yaml" + os.WriteFile(tmpFile, []byte(testConfigYAML), 0644) + + config, _ := LoadGenesisConfig(tmpFile) + validators := config.Validators() + + if len(validators) != 3 { + t.Fatalf("expected 3 validators, got %d", len(validators)) + } + for i, 
v := range validators { + if v.Index != uint64(i) { + t.Fatalf("validator %d index: expected %d, got %d", i, i, v.Index) + } + if v.Pubkey == [types.PubkeySize]byte{} { + t.Fatalf("validator %d has zero pubkey", i) + } + } +} + +func TestGenesisState(t *testing.T) { + tmpFile := t.TempDir() + "/config.yaml" + os.WriteFile(tmpFile, []byte(testConfigYAML), 0644) + + config, _ := LoadGenesisConfig(tmpFile) + state := config.GenesisState() + + if state.Slot != 0 { + t.Fatalf("genesis slot should be 0, got %d", state.Slot) + } + if state.Config.GenesisTime != 1770407233 { + t.Fatal("genesis time mismatch") + } + if len(state.Validators) != 3 { + t.Fatalf("expected 3 validators, got %d", len(state.Validators)) + } + if !types.IsZeroRoot(state.LatestJustified.Root) { + t.Fatal("justified root should be zero at genesis") + } + if !types.IsZeroRoot(state.LatestFinalized.Root) { + t.Fatal("finalized root should be zero at genesis") + } +} + +func TestLoadGenesisConfigMissing(t *testing.T) { + _, err := LoadGenesisConfig("/nonexistent/config.yaml") + if err == nil { + t.Fatal("should error on missing file") + } +} diff --git a/go.mod b/go.mod index 32c6b22..24dc590 100644 --- a/go.mod +++ b/go.mod @@ -1,47 +1,53 @@ module github.com/geanlabs/gean -go 1.24.6 - -toolchain go1.24.12 +go 1.25.7 require ( - github.com/ethereum/go-ethereum v1.17.0 + github.com/cockroachdb/pebble v1.1.5 github.com/ferranbt/fastssz v1.0.0 github.com/golang/snappy v1.0.0 - github.com/libp2p/go-libp2p v0.46.0 + github.com/libp2p/go-libp2p v0.48.0 github.com/libp2p/go-libp2p-pubsub v0.15.0 github.com/multiformats/go-multiaddr v0.16.0 - github.com/prometheus/client_golang v1.22.0 gopkg.in/yaml.v3 v3.0.1 ) require ( - github.com/ProjectZKM/Ziren/crates/go-runtime/zkvm_runtime v0.0.0-20251001021608-1fe7b43fc4d6 // indirect - github.com/StackExchange/wmi v1.2.1 // indirect + filippo.io/bigmod v0.1.1-0.20260103110540-f8a47775ebe5 // indirect + filippo.io/keygen v0.0.0-20260114151900-8e2790ea4c5b // indirect 
+ github.com/DataDog/zstd v1.4.5 // indirect github.com/benbjohnson/clock v1.3.5 // indirect github.com/beorn7/perks v1.0.1 // indirect github.com/cespare/xxhash/v2 v2.3.0 // indirect + github.com/cockroachdb/errors v1.11.3 // indirect + github.com/cockroachdb/fifo v0.0.0-20240606204812-0bbfbd93a7ce // indirect + github.com/cockroachdb/logtags v0.0.0-20230118201751-21c54148d20b // indirect + github.com/cockroachdb/redact v1.1.5 // indirect + github.com/cockroachdb/tokenbucket v0.0.0-20230807174530-cc333fc44b06 // indirect github.com/davidlazar/go-crypto v0.0.0-20200604182044-b73af7476f6c // indirect - github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0 // indirect + github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.1 // indirect + github.com/dunglas/httpsfv v1.1.0 // indirect github.com/emicklei/dot v1.6.2 // indirect github.com/flynn/noise v1.1.0 // indirect - github.com/go-ole/go-ole v1.3.0 // indirect + github.com/getsentry/sentry-go v0.27.0 // indirect github.com/gogo/protobuf v1.3.2 // indirect github.com/google/uuid v1.6.0 // indirect github.com/gorilla/websocket v1.5.3 // indirect github.com/hashicorp/golang-lru/v2 v2.0.7 // indirect - github.com/holiman/uint256 v1.3.2 // indirect github.com/huin/goupnp v1.3.0 // indirect github.com/ipfs/go-cid v0.5.0 // indirect github.com/jackpal/go-nat-pmp v1.0.2 // indirect github.com/jbenet/go-temp-err-catcher v0.1.0 // indirect + github.com/klauspost/compress v1.18.0 // indirect github.com/klauspost/cpuid/v2 v2.2.10 // indirect github.com/koron/go-ssdp v0.0.6 // indirect + github.com/kr/pretty v0.3.1 // indirect + github.com/kr/text v0.2.0 // indirect github.com/libp2p/go-buffer-pool v0.1.0 // indirect github.com/libp2p/go-flow-metrics v0.2.0 // indirect github.com/libp2p/go-libp2p-asn-util v0.4.1 // indirect github.com/libp2p/go-msgio v0.3.0 // indirect - github.com/libp2p/go-netroute v0.3.0 // indirect + github.com/libp2p/go-netroute v0.4.0 // indirect github.com/libp2p/go-reuseport v0.4.0 // indirect 
github.com/libp2p/go-yamux/v5 v5.0.1 // indirect github.com/marten-seemann/tcp v0.0.0-20210406111302-dfbc87cc63fd // indirect @@ -63,52 +69,48 @@ require ( github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect github.com/pbnjay/memory v0.0.0-20210728143218-7b4eea64cf58 // indirect github.com/pion/datachannel v1.5.10 // indirect - github.com/pion/dtls/v2 v2.2.12 // indirect - github.com/pion/dtls/v3 v3.0.6 // indirect + github.com/pion/dtls/v3 v3.1.2 // indirect github.com/pion/ice/v4 v4.0.10 // indirect github.com/pion/interceptor v0.1.40 // indirect - github.com/pion/logging v0.2.3 // indirect + github.com/pion/logging v0.2.4 // indirect github.com/pion/mdns/v2 v2.0.7 // indirect github.com/pion/randutil v0.1.0 // indirect - github.com/pion/rtcp v1.2.15 // indirect + github.com/pion/rtcp v1.2.16 // indirect github.com/pion/rtp v1.8.19 // indirect github.com/pion/sctp v1.8.39 // indirect - github.com/pion/sdp/v3 v3.0.13 // indirect + github.com/pion/sdp/v3 v3.0.18 // indirect github.com/pion/srtp/v3 v3.0.6 // indirect - github.com/pion/stun v0.6.1 // indirect - github.com/pion/stun/v3 v3.0.0 // indirect - github.com/pion/transport/v2 v2.2.10 // indirect + github.com/pion/stun/v3 v3.1.1 // indirect github.com/pion/transport/v3 v3.0.7 // indirect + github.com/pion/transport/v4 v4.0.1 // indirect github.com/pion/turn/v4 v4.0.2 // indirect github.com/pion/webrtc/v4 v4.1.2 // indirect + github.com/pkg/errors v0.9.1 // indirect + github.com/prometheus/client_golang v1.22.0 // indirect github.com/prometheus/client_model v0.6.2 // indirect github.com/prometheus/common v0.64.0 // indirect github.com/prometheus/procfs v0.16.1 // indirect github.com/quic-go/qpack v0.6.0 // indirect - github.com/quic-go/quic-go v0.57.1 // indirect - github.com/quic-go/webtransport-go v0.9.0 // indirect - github.com/shirou/gopsutil v3.21.4-0.20210419000835-c7a38de76ee5+incompatible // indirect + github.com/quic-go/quic-go v0.59.0 // indirect + 
github.com/quic-go/webtransport-go v0.10.0 // indirect + github.com/rogpeppe/go-internal v1.14.1 // indirect github.com/spaolacci/murmur3 v1.1.0 // indirect - github.com/syndtr/goleveldb v1.0.1-0.20210819022825-2ae1ddf74ef7 // indirect - github.com/tklauser/go-sysconf v0.3.12 // indirect - github.com/tklauser/numcpus v0.6.1 // indirect github.com/wlynxg/anet v0.0.5 // indirect - go.etcd.io/bbolt v1.4.3 // indirect go.uber.org/dig v1.19.0 // indirect go.uber.org/fx v1.24.0 // indirect go.uber.org/mock v0.5.2 // indirect go.uber.org/multierr v1.11.0 // indirect go.uber.org/zap v1.27.0 // indirect - golang.org/x/crypto v0.44.0 // indirect + golang.org/x/crypto v0.48.0 // indirect golang.org/x/exp v0.0.0-20250606033433-dcc06ee1d476 // indirect - golang.org/x/mod v0.29.0 // indirect - golang.org/x/net v0.47.0 // indirect - golang.org/x/sync v0.18.0 // indirect - golang.org/x/sys v0.39.0 // indirect - golang.org/x/telemetry v0.0.0-20251008203120-078029d740a8 // indirect - golang.org/x/text v0.31.0 // indirect + golang.org/x/mod v0.32.0 // indirect + golang.org/x/net v0.50.0 // indirect + golang.org/x/sync v0.19.0 // indirect + golang.org/x/sys v0.41.0 // indirect + golang.org/x/telemetry v0.0.0-20260109210033-bd525da824e2 // indirect + golang.org/x/text v0.34.0 // indirect golang.org/x/time v0.12.0 // indirect - golang.org/x/tools v0.38.0 // indirect + golang.org/x/tools v0.41.0 // indirect google.golang.org/protobuf v1.36.11 // indirect gopkg.in/yaml.v2 v2.4.0 // indirect lukechampine.com/blake3 v1.4.1 // indirect diff --git a/go.sum b/go.sum index f963408..45fdaf5 100644 --- a/go.sum +++ b/go.sum @@ -1,52 +1,56 @@ -github.com/ProjectZKM/Ziren/crates/go-runtime/zkvm_runtime v0.0.0-20251001021608-1fe7b43fc4d6 h1:1zYrtlhrZ6/b6SAjLSfKzWtdgqK0U+HtH/VcBWh1BaU= -github.com/ProjectZKM/Ziren/crates/go-runtime/zkvm_runtime v0.0.0-20251001021608-1fe7b43fc4d6/go.mod h1:ioLG6R+5bUSO1oeGSDxOV3FADARuMoytZCSX6MEMQkI= -github.com/StackExchange/wmi v1.2.1 
h1:VIkavFPXSjcnS+O8yTq7NI32k0R5Aj+v39y29VYDOSA= -github.com/StackExchange/wmi v1.2.1/go.mod h1:rcmrprowKIVzvc+NUiLncP2uuArMWLCbu9SBzvHz7e8= +filippo.io/bigmod v0.1.1-0.20260103110540-f8a47775ebe5 h1:JA0fFr+kxpqTdxR9LOBiTWpGNchqmkcsgmdeJZRclZ0= +filippo.io/bigmod v0.1.1-0.20260103110540-f8a47775ebe5/go.mod h1:OjOXDNlClLblvXdwgFFOQFJEocLhhtai8vGLy0JCZlI= +filippo.io/keygen v0.0.0-20260114151900-8e2790ea4c5b h1:REI1FbdW71yO56Are4XAxD+OS/e+BQsB3gE4mZRQEXY= +filippo.io/keygen v0.0.0-20260114151900-8e2790ea4c5b/go.mod h1:9nnw1SlYHYuPSo/3wjQzNjSbeHlq2NsKo5iEtfJPWP0= +github.com/DataDog/zstd v1.4.5 h1:EndNeuB0l9syBZhut0wns3gV1hL8zX8LIu6ZiVHWLIQ= +github.com/DataDog/zstd v1.4.5/go.mod h1:1jcaCB/ufaK+sKp1NBhlGmpz41jOoPQ35bpF36t7BBo= github.com/benbjohnson/clock v1.3.5 h1:VvXlSJBzZpA/zum6Sj74hxwYI2DIxRWuNIoXAzHZz5o= github.com/benbjohnson/clock v1.3.5/go.mod h1:J11/hYXuz8f4ySSvYwY0FKfm+ezbsZBKZxNJlLklBHA= github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM= github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw= +github.com/canonical/go-sp800.90a-drbg v0.0.0-20210314144037-6eeb1040d6c3 h1:oe6fCvaEpkhyW3qAicT0TnGtyht/UrgvOwMcEgLb7Aw= +github.com/canonical/go-sp800.90a-drbg v0.0.0-20210314144037-6eeb1040d6c3/go.mod h1:qdP0gaj0QtgX2RUZhnlVrceJ+Qln8aSlDyJwelLLFeM= github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs= github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= -github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/cockroachdb/datadriven v1.0.3-0.20230413201302-be42291fc80f h1:otljaYPt5hWxV3MUfO5dFPFiOXg9CyG5/kCfayTqsJ4= +github.com/cockroachdb/datadriven v1.0.3-0.20230413201302-be42291fc80f/go.mod h1:a9RdTaap04u637JoCzcUoIcDmvwSUtcUFtT/C3kJlTU= +github.com/cockroachdb/errors v1.11.3 h1:5bA+k2Y6r+oz/6Z/RFlNeVCesGARKuC6YymtcDrbC/I= +github.com/cockroachdb/errors v1.11.3/go.mod 
h1:m4UIW4CDjx+R5cybPsNrRbreomiFqt8o1h1wUVazSd8= +github.com/cockroachdb/fifo v0.0.0-20240606204812-0bbfbd93a7ce h1:giXvy4KSc/6g/esnpM7Geqxka4WSqI1SZc7sMJFd3y4= +github.com/cockroachdb/fifo v0.0.0-20240606204812-0bbfbd93a7ce/go.mod h1:9/y3cnZ5GKakj/H4y9r9GTjCvAFta7KLgSHPJJYc52M= +github.com/cockroachdb/logtags v0.0.0-20230118201751-21c54148d20b h1:r6VH0faHjZeQy818SGhaone5OnYfxFR/+AzdY3sf5aE= +github.com/cockroachdb/logtags v0.0.0-20230118201751-21c54148d20b/go.mod h1:Vz9DsVWQQhf3vs21MhPMZpMGSht7O/2vFW2xusFUVOs= +github.com/cockroachdb/pebble v1.1.5 h1:5AAWCBWbat0uE0blr8qzufZP5tBjkRyy/jWe1QWLnvw= +github.com/cockroachdb/pebble v1.1.5/go.mod h1:17wO9el1YEigxkP/YtV8NtCivQDgoCyBg5c4VR/eOWo= +github.com/cockroachdb/redact v1.1.5 h1:u1PMllDkdFfPWaNGMyLD1+so+aq3uUItthCFqzwPJ30= +github.com/cockroachdb/redact v1.1.5/go.mod h1:BVNblN9mBWFyMyqK1k3AAiSxhvhfK2oOZZ2lK+dpvRg= +github.com/cockroachdb/tokenbucket v0.0.0-20230807174530-cc333fc44b06 h1:zuQyyAKVxetITBuuhv3BI9cMrmStnpT18zmgmTxunpo= +github.com/cockroachdb/tokenbucket v0.0.0-20230807174530-cc333fc44b06/go.mod h1:7nc4anLGjupUW/PeY5qiNYsdNXj7zopG+eqsS7To5IQ= +github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E= github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davidlazar/go-crypto v0.0.0-20200604182044-b73af7476f6c h1:pFUpOrbxDR6AkioZ1ySsx5yxlDQZ8stG2b88gTPxgJU= github.com/davidlazar/go-crypto v0.0.0-20200604182044-b73af7476f6c/go.mod h1:6UhI8N9EjYm1c2odKpFpAYeR8dsBeM7PtzQhRgxRr9U= github.com/decred/dcrd/crypto/blake256 v1.1.0 h1:zPMNGQCm0g4QTY27fOCorQW7EryeQ/U0x++OzVrdms8= github.com/decred/dcrd/crypto/blake256 v1.1.0/go.mod h1:2OfgNZ5wDpcsFmHmCK5gZTPcCXqlm2ArzUIkw9czNJo= -github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0 h1:NMZiJj8QnKe1LgsbDayM4UoHwbvwDRwnI3hwNaAHRnc= -github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0/go.mod 
h1:ZXNYxsqcloTdSy/rNShjYzMhyjf0LaoftYK0p+A3h40= +github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.1 h1:5RVFMOWjMyRy8cARdy79nAmgYw3hK/4HUq48LQ6Wwqo= +github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.1/go.mod h1:ZXNYxsqcloTdSy/rNShjYzMhyjf0LaoftYK0p+A3h40= +github.com/dunglas/httpsfv v1.1.0 h1:Jw76nAyKWKZKFrpMMcL76y35tOpYHqQPzHQiwDvpe54= +github.com/dunglas/httpsfv v1.1.0/go.mod h1:zID2mqw9mFsnt7YC3vYQ9/cjq30q41W+1AnDwH8TiMg= github.com/emicklei/dot v1.6.2 h1:08GN+DD79cy/tzN6uLCT84+2Wk9u+wvqP+Hkx/dIR8A= github.com/emicklei/dot v1.6.2/go.mod h1:DeV7GvQtIw4h2u73RKBkkFdvVAz0D9fzeJrgPW6gy/s= -github.com/ethereum/go-ethereum v1.17.0 h1:2D+1Fe23CwZ5tQoAS5DfwKFNI1HGcTwi65/kRlAVxes= -github.com/ethereum/go-ethereum v1.17.0/go.mod h1:2W3msvdosS/MCWytpqTcqgFiRYbTH59FxDJzqah120o= github.com/ferranbt/fastssz v1.0.0 h1:9EXXYsracSqQRBQiHeaVsG/KQeYblPf40hsQPb9Dzk8= github.com/ferranbt/fastssz v1.0.0/go.mod h1:Ea3+oeoRGGLGm5shYAeDgu6PGUlcvQhE2fILyD9+tGg= github.com/flynn/noise v1.1.0 h1:KjPQoQCEFdZDiP03phOvGi11+SVVhBG2wOWAorLsstg= github.com/flynn/noise v1.1.0/go.mod h1:xbMo+0i6+IGbYdJhF31t2eR1BIU0CYc12+BNAKwUTag= -github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo= -github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ= -github.com/fsnotify/fsnotify v1.6.0 h1:n+5WquG0fcWoWp6xPWfHdbskMCQaFnG6PfBrh1Ky4HY= -github.com/fsnotify/fsnotify v1.6.0/go.mod h1:sl3t1tCWJFWoRz9R8WJCbQihKKwmorjAbSClcnxKAGw= -github.com/go-ole/go-ole v1.2.5/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0= -github.com/go-ole/go-ole v1.3.0 h1:Dt6ye7+vXGIKZ7Xtk4s6/xVdGDQynvom7xCFEdWr6uE= -github.com/go-ole/go-ole v1.3.0/go.mod h1:5LS6F96DhAwUc7C+1HLexzMXY1xGRSryjyPPKW6zv78= +github.com/getsentry/sentry-go v0.27.0 h1:Pv98CIbtB3LkMWmXi4Joa5OOcwbmnX88sF5qbK3r3Ps= +github.com/getsentry/sentry-go v0.27.0/go.mod h1:lc76E2QywIyW8WuBnwl8Lc4bkmQH4+w1gwTf25trprY= +github.com/go-errors/errors v1.4.2 h1:J6MZopCL4uSllY1OfXM374weqZFFItUbrImctkmUxIA= 
+github.com/go-errors/errors v1.4.2/go.mod h1:sIVyrIiJhuEF+Pj9Ebtd6P/rEYROXFi3BopGUQ5a5Og= github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q= github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q= -github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= -github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8= -github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA= -github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs= -github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w= -github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0= -github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI= -github.com/golang/snappy v0.0.4/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q= github.com/golang/snappy v1.0.0 h1:Oy607GVXHs7RtbggtPBnr2RmDArIsAefDwvrdWvRhGs= github.com/golang/snappy v1.0.0/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q= -github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= -github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= -github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8= github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU= github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0= @@ -55,9 +59,6 @@ github.com/gorilla/websocket v1.5.3 h1:saDtZ6Pbx/0u+bgYQ3q96pZgCzfhKXGPqt7kZ72aN github.com/gorilla/websocket v1.5.3/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE= github.com/hashicorp/golang-lru/v2 v2.0.7 
h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k= github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM= -github.com/holiman/uint256 v1.3.2 h1:a9EgMPSC1AAaj1SZL5zIQD3WbwTuHrMGOerLjGmM/TA= -github.com/holiman/uint256 v1.3.2/go.mod h1:EOMSn4q6Nyt9P6efbI3bueV4e1b3dGlUCXeiRV4ng7E= -github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU= github.com/huin/goupnp v1.3.0 h1:UvLUlWDNpoUdYzb2TCn+MuTWtcjXKSza2n6CBdQ0xXc= github.com/huin/goupnp v1.3.0/go.mod h1:gnGPsThkYa7bFi/KWmEysQRf48l2dvR5bxr2OFckNX8= github.com/ipfs/go-cid v0.5.0 h1:goEKKhaGm0ul11IHA7I6p1GmKz8kEYniqFopaB5Otwg= @@ -81,14 +82,12 @@ github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY= github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE= -github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc= -github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw= github.com/libp2p/go-buffer-pool v0.1.0 h1:oK4mSFcQz7cTQIfqbe4MIj9gLW+mnanjyFtc6cdF0Y8= github.com/libp2p/go-buffer-pool v0.1.0/go.mod h1:N+vh8gMqimBzdKkSMVuydVDq+UV5QTWy5HSiZacSbPg= github.com/libp2p/go-flow-metrics v0.2.0 h1:EIZzjmeOE6c8Dav0sNv35vhZxATIXWZg6j/C08XmmDw= github.com/libp2p/go-flow-metrics v0.2.0/go.mod h1:st3qqfu8+pMfh+9Mzqb2GTiwrAGjIPszEjZmtksN8Jc= -github.com/libp2p/go-libp2p v0.46.0 h1:0T2yvIKpZ3DVYCuPOFxPD1layhRU486pj9rSlGWYnDM= -github.com/libp2p/go-libp2p v0.46.0/go.mod h1:TbIDnpDjBLa7isdgYpbxozIVPBTmM/7qKOJP4SFySrQ= +github.com/libp2p/go-libp2p v0.48.0 h1:h2BrLAgrj7X8bEN05K7qmrjpNHYA+6tnsGRdprjTnvo= +github.com/libp2p/go-libp2p v0.48.0/go.mod h1:Q1fBZNdmC2Hf82husCTfkKJVfHm2we5zk+NWmOGEmWk= github.com/libp2p/go-libp2p-asn-util v0.4.1 h1:xqL7++IKD9TBFMgnLPZR6/6iYhawHKHl950SO9L6n94= 
 github.com/libp2p/go-libp2p-asn-util v0.4.1/go.mod h1:d/NI6XZ9qxw67b4e+NgpQexCIiFYJjErASrYW4PFDN8=
 github.com/libp2p/go-libp2p-pubsub v0.15.0 h1:cG7Cng2BT82WttmPFMi50gDNV+58K626m/wR00vGL1o=
@@ -97,14 +96,14 @@ github.com/libp2p/go-libp2p-testing v0.12.0 h1:EPvBb4kKMWO29qP4mZGyhVzUyR25dvfUI
 github.com/libp2p/go-libp2p-testing v0.12.0/go.mod h1:KcGDRXyN7sQCllucn1cOOS+Dmm7ujhfEyXQL5lvkcPg=
 github.com/libp2p/go-msgio v0.3.0 h1:mf3Z8B1xcFN314sWX+2vOTShIE0Mmn2TXn3YCUQGNj0=
 github.com/libp2p/go-msgio v0.3.0/go.mod h1:nyRM819GmVaF9LX3l03RMh10QdOroF++NBbxAb0mmDM=
-github.com/libp2p/go-netroute v0.3.0 h1:nqPCXHmeNmgTJnktosJ/sIef9hvwYCrsLxXmfNks/oc=
-github.com/libp2p/go-netroute v0.3.0/go.mod h1:Nkd5ShYgSMS5MUKy/MU2T57xFoOKvvLR92Lic48LEyA=
+github.com/libp2p/go-netroute v0.4.0 h1:sZZx9hyANYUx9PZyqcgE/E1GUG3iEtTZHUEvdtXT7/Q=
+github.com/libp2p/go-netroute v0.4.0/go.mod h1:Nkd5ShYgSMS5MUKy/MU2T57xFoOKvvLR92Lic48LEyA=
 github.com/libp2p/go-reuseport v0.4.0 h1:nR5KU7hD0WxXCJbmw7r2rhRYruNRl2koHw8fQscQm2s=
 github.com/libp2p/go-reuseport v0.4.0/go.mod h1:ZtI03j/wO5hZVDFo2jKywN6bYKWLOy8Se6DrI2E1cLU=
 github.com/libp2p/go-yamux/v5 v5.0.1 h1:f0WoX/bEF2E8SbE4c/k1Mo+/9z0O4oC/hWEA+nfYRSg=
 github.com/libp2p/go-yamux/v5 v5.0.1/go.mod h1:en+3cdX51U0ZslwRdRLrvQsdayFt3TSUKvBGErzpWbU=
-github.com/marcopolo/simnet v0.0.1 h1:rSMslhPz6q9IvJeFWDoMGxMIrlsbXau3NkuIXHGJxfg=
-github.com/marcopolo/simnet v0.0.1/go.mod h1:WDaQkgLAjqDUEBAOXz22+1j6wXKfGlC5sD5XWt3ddOs=
+github.com/marcopolo/simnet v0.0.4 h1:50Kx4hS9kFGSRIbrt9xUS3NJX33EyPqHVmpXvaKLqrY=
+github.com/marcopolo/simnet v0.0.4/go.mod h1:tfQF1u2DmaB6WHODMtQaLtClEf3a296CKQLq5gAsIS0=
 github.com/marten-seemann/tcp v0.0.0-20210406111302-dfbc87cc63fd h1:br0buuQ854V8u83wA0rVZ8ttrq5CpaPZdvrK0LP2lOk=
 github.com/marten-seemann/tcp v0.0.0-20210406111302-dfbc87cc63fd/go.mod h1:QuCEs1Nt24+FYQEqAAncTDPJIuGs+LxK1MCiFL25pMU=
 github.com/miekg/dns v1.1.66 h1:FeZXOS3VCVsKnEAd+wBkjMC3D2K+ww66Cq3VnCINuJE=
@@ -148,61 +147,47 @@ github.com/multiformats/go-varint v0.0.7 h1:sWSGR+f/eu5ABZA2ZpYKBILXTTs9JWpdEM/n
 github.com/multiformats/go-varint v0.0.7/go.mod h1:r8PUYw/fD/SjBCiKOoDlGF6QawOELpZAu9eioSos/OU=
 github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
 github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
-github.com/nxadm/tail v1.4.4 h1:DQuhQpB1tVlglWS2hLQ5OV6B5r8aGxSrPc5Qo6uTN78=
-github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A=
-github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
-github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=
-github.com/onsi/ginkgo v1.14.0/go.mod h1:iSB4RoI2tjJc9BBv4NKIKWKya62Rps+oPG/Lv9klQyY=
-github.com/onsi/ginkgo v1.16.5 h1:8xi0RTUf59SOSfEtZMvwTvXYMzG4gV23XVHOZiXNtnE=
-github.com/onsi/ginkgo v1.16.5/go.mod h1:+E8gABHa3K6zRBolWtd+ROzc/U5bkGt0FwiG042wbpU=
-github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
-github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo=
-github.com/onsi/gomega v1.17.0 h1:9Luw4uT5HTjHTN8+aNcSThgH1vdXnmdJ8xIfZ4wyTRE=
-github.com/onsi/gomega v1.17.0/go.mod h1:HnhC7FXeEQY45zxNK3PPoIUhzk/80Xly9PcubAlGdZY=
 github.com/pbnjay/memory v0.0.0-20210728143218-7b4eea64cf58 h1:onHthvaw9LFnH4t2DcNVpwGmV9E1BkGknEliJkfwQj0=
 github.com/pbnjay/memory v0.0.0-20210728143218-7b4eea64cf58/go.mod h1:DXv8WO4yhMYhSNPKjeNKa5WY9YCIEBRbNzFFPJbWO6Y=
+github.com/pingcap/errors v0.11.4 h1:lFuQV/oaUMGcD2tqt+01ROSmJs75VG1ToEOkZIZ4nE4=
+github.com/pingcap/errors v0.11.4/go.mod h1:Oi8TUi2kEtXXLMJk9l1cGmz20kV3TaQ0usTwv5KuLY8=
 github.com/pion/datachannel v1.5.10 h1:ly0Q26K1i6ZkGf42W7D4hQYR90pZwzFOjTq5AuCKk4o=
 github.com/pion/datachannel v1.5.10/go.mod h1:p/jJfC9arb29W7WrxyKbepTU20CFgyx5oLo8Rs4Py/M=
-github.com/pion/dtls/v2 v2.2.7/go.mod h1:8WiMkebSHFD0T+dIU+UeBaoV7kDhOW5oDCzZ7WZ/F9s=
-github.com/pion/dtls/v2 v2.2.12 h1:KP7H5/c1EiVAAKUmXyCzPiQe5+bCJrpOeKg/L05dunk=
-github.com/pion/dtls/v2 v2.2.12/go.mod h1:d9SYc9fch0CqK90mRk1dC7AkzzpwJj6u2GU3u+9pqFE=
-github.com/pion/dtls/v3 v3.0.6 h1:7Hkd8WhAJNbRgq9RgdNh1aaWlZlGpYTzdqjy9x9sK2E=
-github.com/pion/dtls/v3 v3.0.6/go.mod h1:iJxNQ3Uhn1NZWOMWlLxEEHAN5yX7GyPvvKw04v9bzYU=
+github.com/pion/dtls/v3 v3.1.2 h1:gqEdOUXLtCGW+afsBLO0LtDD8GnuBBjEy6HRtyofZTc=
+github.com/pion/dtls/v3 v3.1.2/go.mod h1:Hw/igcX4pdY69z1Hgv5x7wJFrUkdgHwAn/Q/uo7YHRo=
 github.com/pion/ice/v4 v4.0.10 h1:P59w1iauC/wPk9PdY8Vjl4fOFL5B+USq1+xbDcN6gT4=
 github.com/pion/ice/v4 v4.0.10/go.mod h1:y3M18aPhIxLlcO/4dn9X8LzLLSma84cx6emMSu14FGw=
 github.com/pion/interceptor v0.1.40 h1:e0BjnPcGpr2CFQgKhrQisBU7V3GXK6wrfYrGYaU6Jq4=
 github.com/pion/interceptor v0.1.40/go.mod h1:Z6kqH7M/FYirg3frjGJ21VLSRJGBXB/KqaTIrdqnOic=
-github.com/pion/logging v0.2.2/go.mod h1:k0/tDVsRCX2Mb2ZEmTqNa7CWsQPc+YYCB7Q+5pahoms=
-github.com/pion/logging v0.2.3 h1:gHuf0zpoh1GW67Nr6Gj4cv5Z9ZscU7g/EaoC/Ke/igI=
-github.com/pion/logging v0.2.3/go.mod h1:z8YfknkquMe1csOrxK5kc+5/ZPAzMxbKLX5aXpbpC90=
+github.com/pion/logging v0.2.4 h1:tTew+7cmQ+Mc1pTBLKH2puKsOvhm32dROumOZ655zB8=
+github.com/pion/logging v0.2.4/go.mod h1:DffhXTKYdNZU+KtJ5pyQDjvOAh/GsNSyv1lbkFbe3so=
 github.com/pion/mdns/v2 v2.0.7 h1:c9kM8ewCgjslaAmicYMFQIde2H9/lrZpjBkN8VwoVtM=
 github.com/pion/mdns/v2 v2.0.7/go.mod h1:vAdSYNAT0Jy3Ru0zl2YiW3Rm/fJCwIeM0nToenfOJKA=
 github.com/pion/randutil v0.1.0 h1:CFG1UdESneORglEsnimhUjf33Rwjubwj6xfiOXBa3mA=
 github.com/pion/randutil v0.1.0/go.mod h1:XcJrSMMbbMRhASFVOlj/5hQial/Y8oH/HVo7TBZq+j8=
-github.com/pion/rtcp v1.2.15 h1:LZQi2JbdipLOj4eBjK4wlVoQWfrZbh3Q6eHtWtJBZBo=
-github.com/pion/rtcp v1.2.15/go.mod h1:jlGuAjHMEXwMUHK78RgX0UmEJFV4zUKOFHR7OP+D3D0=
+github.com/pion/rtcp v1.2.16 h1:fk1B1dNW4hsI78XUCljZJlC4kZOPk67mNRuQ0fcEkSo=
+github.com/pion/rtcp v1.2.16/go.mod h1:/as7VKfYbs5NIb4h6muQ35kQF/J0ZVNz2Z3xKoCBYOo=
 github.com/pion/rtp v1.8.19 h1:jhdO/3XhL/aKm/wARFVmvTfq0lC/CvN1xwYKmduly3c=
 github.com/pion/rtp v1.8.19/go.mod h1:bAu2UFKScgzyFqvUKmbvzSdPr+NGbZtv6UB2hesqXBk=
 github.com/pion/sctp v1.8.39 h1:PJma40vRHa3UTO3C4MyeJDQ+KIobVYRZQZ0Nt7SjQnE=
 github.com/pion/sctp v1.8.39/go.mod h1:cNiLdchXra8fHQwmIoqw0MbLLMs+f7uQ+dGMG2gWebE=
-github.com/pion/sdp/v3 v3.0.13 h1:uN3SS2b+QDZnWXgdr69SM8KB4EbcnPnPf2Laxhty/l4=
-github.com/pion/sdp/v3 v3.0.13/go.mod h1:88GMahN5xnScv1hIMTqLdu/cOcUkj6a9ytbncwMCq2E=
+github.com/pion/sdp/v3 v3.0.18 h1:l0bAXazKHpepazVdp+tPYnrsy9dfh7ZbT8DxesH5ZnI=
+github.com/pion/sdp/v3 v3.0.18/go.mod h1:ZREGo6A9ZygQ9XkqAj5xYCQtQpif0i6Pa81HOiAdqQ8=
 github.com/pion/srtp/v3 v3.0.6 h1:E2gyj1f5X10sB/qILUGIkL4C2CqK269Xq167PbGCc/4=
 github.com/pion/srtp/v3 v3.0.6/go.mod h1:BxvziG3v/armJHAaJ87euvkhHqWe9I7iiOy50K2QkhY=
-github.com/pion/stun v0.6.1 h1:8lp6YejULeHBF8NmV8e2787BogQhduZugh5PdhDyyN4=
-github.com/pion/stun v0.6.1/go.mod h1:/hO7APkX4hZKu/D0f2lHzNyvdkTGtIy3NDmLR7kSz/8=
-github.com/pion/stun/v3 v3.0.0 h1:4h1gwhWLWuZWOJIJR9s2ferRO+W3zA/b6ijOI6mKzUw=
-github.com/pion/stun/v3 v3.0.0/go.mod h1:HvCN8txt8mwi4FBvS3EmDghW6aQJ24T+y+1TKjB5jyU=
-github.com/pion/transport/v2 v2.2.1/go.mod h1:cXXWavvCnFF6McHTft3DWS9iic2Mftcz1Aq29pGcU5g=
-github.com/pion/transport/v2 v2.2.4/go.mod h1:q2U/tf9FEfnSBGSW6w5Qp5PFWRLRj3NjLhCCgpRK4p0=
-github.com/pion/transport/v2 v2.2.10 h1:ucLBLE8nuxiHfvkFKnkDQRYWYfp8ejf4YBOPfaQpw6Q=
-github.com/pion/transport/v2 v2.2.10/go.mod h1:sq1kSLWs+cHW9E+2fJP95QudkzbK7wscs8yYgQToO5E=
+github.com/pion/stun/v3 v3.1.1 h1:CkQxveJ4xGQjulGSROXbXq94TAWu8gIX2dT+ePhUkqw=
+github.com/pion/stun/v3 v3.1.1/go.mod h1:qC1DfmcCTQjl9PBaMa5wSn3x9IPmKxSdcCsxBcDBndM=
 github.com/pion/transport/v3 v3.0.7 h1:iRbMH05BzSNwhILHoBoAPxoB9xQgOaJk+591KC9P1o0=
 github.com/pion/transport/v3 v3.0.7/go.mod h1:YleKiTZ4vqNxVwh77Z0zytYi7rXHl7j6uPLGhhz9rwo=
+github.com/pion/transport/v4 v4.0.1 h1:sdROELU6BZ63Ab7FrOLn13M6YdJLY20wldXW2Cu2k8o=
+github.com/pion/transport/v4 v4.0.1/go.mod h1:nEuEA4AD5lPdcIegQDpVLgNoDGreqM/YqmEx3ovP4jM=
 github.com/pion/turn/v4 v4.0.2 h1:ZqgQ3+MjP32ug30xAbD6Mn+/K4Sxi3SdNOTFf+7mpps=
 github.com/pion/turn/v4 v4.0.2/go.mod h1:pMMKP/ieNAG/fN5cZiN4SDuyKsXtNTr0ccN7IToA1zs=
 github.com/pion/webrtc/v4 v4.1.2 h1:mpuUo/EJ1zMNKGE79fAdYNFZBX790KE7kQQpLMjjR54=
 github.com/pion/webrtc/v4 v4.1.2/go.mod h1:xsCXiNAmMEjIdFxAYU0MbB3RwRieJsegSB2JZsGN+8U=
+github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e/go.mod h1:pJLUxLENpZxwdsKMEsNbx1VGcRFpLqf3715MtcvvzbA=
+github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
+github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
 github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
 github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
 github.com/prometheus/client_golang v1.22.0 h1:rb93p9lokFEsctTys46VnV1kLCDpVZ0a/Y92Vm0Zc6Q=
@@ -217,39 +202,21 @@ github.com/prysmaticlabs/gohashtree v0.0.4-beta h1:H/EbCuXPeTV3lpKeXGPpEV9gsUpkq
 github.com/prysmaticlabs/gohashtree v0.0.4-beta/go.mod h1:BFdtALS+Ffhg3lGQIHv9HDWuHS8cTvHZzrHWxwOtGOs=
 github.com/quic-go/qpack v0.6.0 h1:g7W+BMYynC1LbYLSqRt8PBg5Tgwxn214ZZR34VIOjz8=
 github.com/quic-go/qpack v0.6.0/go.mod h1:lUpLKChi8njB4ty2bFLX2x4gzDqXwUpaO1DP9qMDZII=
-github.com/quic-go/quic-go v0.57.1 h1:25KAAR9QR8KZrCZRThWMKVAwGoiHIrNbT72ULHTuI10=
-github.com/quic-go/quic-go v0.57.1/go.mod h1:ly4QBAjHA2VhdnxhojRsCUOeJwKYg+taDlos92xb1+s=
-github.com/quic-go/webtransport-go v0.9.0 h1:jgys+7/wm6JarGDrW+lD/r9BGqBAmqY/ssklE09bA70=
-github.com/quic-go/webtransport-go v0.9.0/go.mod h1:4FUYIiUc75XSsF6HShcLeXXYZJ9AGwo/xh3L8M/P1ao=
+github.com/quic-go/quic-go v0.59.0 h1:OLJkp1Mlm/aS7dpKgTc6cnpynnD2Xg7C1pwL6vy/SAw=
+github.com/quic-go/quic-go v0.59.0/go.mod h1:upnsH4Ju1YkqpLXC305eW3yDZ4NfnNbmQRCMWS58IKU=
+github.com/quic-go/webtransport-go v0.10.0 h1:LqXXPOXuETY5Xe8ITdGisBzTYmUOy5eSj+9n4hLTjHI=
+github.com/quic-go/webtransport-go v0.10.0/go.mod h1:LeGIXr5BQKE3UsynwVBeQrU1TPrbh73MGoC6jd+V7ow=
+github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/fJaraNFVN+nFs=
 github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
 github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
-github.com/shirou/gopsutil v3.21.4-0.20210419000835-c7a38de76ee5+incompatible h1:Bn1aCHHRnjv4Bl16T8rcaFjYSrGrIZvpiGO6P3Q4GpU=
-github.com/shirou/gopsutil v3.21.4-0.20210419000835-c7a38de76ee5+incompatible/go.mod h1:5b4v6he4MtMOwMlS0TUMTu2PcXUg8+E1lC7eC3UO/RA=
 github.com/spaolacci/murmur3 v1.1.0 h1:7c1g84S4BPRrfL5Xrdp6fOJ206sU9y293DDHaoy0bLI=
 github.com/spaolacci/murmur3 v1.1.0/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
-github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
-github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
-github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
-github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
-github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
-github.com/stretchr/testify v1.8.3/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
-github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
 github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
 github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
-github.com/syndtr/goleveldb v1.0.1-0.20210819022825-2ae1ddf74ef7 h1:epCh84lMvA70Z7CTTCmYQn2CKbY8j86K7/FAIr141uY=
-github.com/syndtr/goleveldb v1.0.1-0.20210819022825-2ae1ddf74ef7/go.mod h1:q4W45IWZaF22tdD+VEXcAWRA037jwmWEB5VWYORlTpc=
-github.com/tklauser/go-sysconf v0.3.12 h1:0QaGUFOdQaIVdPgfITYzaTegZvdCjmYO52cSFAEVmqU=
-github.com/tklauser/go-sysconf v0.3.12/go.mod h1:Ho14jnntGE1fpdOqQEEaiKRpvIavV0hSfmBq8nJbHYI=
-github.com/tklauser/numcpus v0.6.1 h1:ng9scYS7az0Bk4OZLvrNXNSAO2Pxr1XXRAPyjhIx+Fk=
-github.com/tklauser/numcpus v0.6.1/go.mod h1:1XfjsgE2zo8GVw7POkMbHENHzVg3GzmoZ9fESEdAacY=
-github.com/wlynxg/anet v0.0.3/go.mod h1:eay5PRQr7fIVAMbTbchTnO9gG65Hg/uYGdc7mguHxoA=
 github.com/wlynxg/anet v0.0.5 h1:J3VJGi1gvo0JwZ/P1/Yc/8p63SoW98B5dHkYDmpgvvU=
 github.com/wlynxg/anet v0.0.5/go.mod h1:eay5PRQr7fIVAMbTbchTnO9gG65Hg/uYGdc7mguHxoA=
 github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
 github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
-github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
-go.etcd.io/bbolt v1.4.3 h1:dEadXpI6G79deX5prL3QRNP6JB8UxVkqo4UPnHaNXJo=
-go.etcd.io/bbolt v1.4.3/go.mod h1:tKQlpPaYCVFctUIgFKFnAlvbmB3tpy1vkTnDWohtc0E=
 go.uber.org/dig v1.19.0 h1:BACLhebsYdpQ7IROQ1AGPjrXcP5dF80U3gKoFzbaq/4=
 go.uber.org/dig v1.19.0/go.mod h1:Us0rSJiThwCv2GteUN0Q7OKvU7n5J4dxZ9JKUXozFdE=
 go.uber.org/fx v1.24.0 h1:wE8mruvpg2kiiL1Vqd0CC+tr0/24XIB10Iwp2lLWzkg=
@@ -268,122 +235,62 @@ golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8U
 golang.org/x/crypto v0.0.0-20200602180216-279210d13fed/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
 golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
 golang.org/x/crypto v0.0.0-20210322153248-0c34fe9e7dc2/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
-golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
-golang.org/x/crypto v0.8.0/go.mod h1:mRqEX+O9/h5TFCrQhkgjo2yKi0yYA+9ecGkdQoHrywE=
-golang.org/x/crypto v0.12.0/go.mod h1:NF0Gs7EO5K4qLn+Ylc+fih8BSTeIjAP05siRnAh98yw=
-golang.org/x/crypto v0.18.0/go.mod h1:R0j02AL6hcrfOiy9T4ZYp/rcWeMxM3L6QYxlOuEG1mg=
-golang.org/x/crypto v0.44.0 h1:A97SsFvM3AIwEEmTBiaxPPTYpDC47w720rdiiUvgoAU=
-golang.org/x/crypto v0.44.0/go.mod h1:013i+Nw79BMiQiMsOPcVCB5ZIJbYkerPrGnOa00tvmc=
+golang.org/x/crypto v0.48.0 h1:/VRzVqiRSggnhY7gNRxPauEQ5Drw9haKdM0jqfcCFts=
+golang.org/x/crypto v0.48.0/go.mod h1:r0kV5h3qnFPlQnBSrULhlsRfryS2pmewsg+XfMgkVos=
 golang.org/x/exp v0.0.0-20250606033433-dcc06ee1d476 h1:bsqhLWFR6G6xiQcb+JoGqdKdRU6WzPWmK8E0jxTjzo4=
 golang.org/x/exp v0.0.0-20250606033433-dcc06ee1d476/go.mod h1:3//PLf8L/X+8b4vuAfHzxeRUl04Adcb341+IGKfnqS8=
 golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
 golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
-golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
-golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
-golang.org/x/mod v0.29.0 h1:HV8lRxZC4l2cr3Zq1LvtOsi/ThTgWnUk/y64QSs8GwA=
-golang.org/x/mod v0.29.0/go.mod h1:NyhrlYXJ2H4eJiRy/WDBO6HMqZQ6q9nk4JzS3NuCK+w=
-golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/mod v0.32.0 h1:9F4d3PHLljb6x//jOyokMv3eX+YDeepZSEo3mFJy93c=
+golang.org/x/mod v0.32.0/go.mod h1:SgipZ/3h2Ci89DlEtEXWUk/HteuRin+HHhN+WbNhguU=
 golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
 golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
 golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
-golang.org/x/net v0.0.0-20200520004742-59133d7f0dd7/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
-golang.org/x/net v0.0.0-20200813134508-3edf25e44fcc/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
 golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
 golang.org/x/net v0.0.0-20210119194325-5f4716e94777/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
 golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
-golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
-golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
-golang.org/x/net v0.9.0/go.mod h1:d48xBJpPfHeWQsugry2m+kC02ZBRGRgulfHnEXEuWns=
-golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
-golang.org/x/net v0.14.0/go.mod h1:PpSgVXXLK0OxS0F31C1/tv6XNguvCrnXIDrFMspZIUI=
-golang.org/x/net v0.20.0/go.mod h1:z8BVo6PvndSri0LbOE3hAn0apkU+1YvI6E70E9jsnvY=
-golang.org/x/net v0.47.0 h1:Mx+4dIFzqraBXUugkia1OOvlD6LemFo1ALMHjrXDOhY=
-golang.org/x/net v0.47.0/go.mod h1:/jNxtkgq5yWUGYkaZGqo27cfGZ1c5Nen03aYrrKpVRU=
-golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/net v0.50.0 h1:ucWh9eiCGyDR3vtzso0WMQinm2Dnt8cFMuQa9K33J60=
+golang.org/x/net v0.50.0/go.mod h1:UgoSli3F/pBgdJBHCTc+tp3gmrU4XswgGRgtnwWTfyM=
 golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
-golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
-golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
-golang.org/x/sync v0.18.0 h1:kr88TuHDroi+UVf+0hZnirlk8o8T+4MrK6mr60WkH/I=
-golang.org/x/sync v0.18.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
-golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4=
+golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
 golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
 golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
-golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
-golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
-golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
-golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
-golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
-golang.org/x/sys v0.0.0-20200519105757-fe76b779f299/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20200602225109-6fdc65e7d980/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
-golang.org/x/sys v0.0.0-20200814200057-3d37ad5750ed/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
-golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.7.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.11.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.16.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
-golang.org/x/sys v0.39.0 h1:CvCKL8MeisomCi6qNZ+wbb0DN9E5AATixKsvNtMoMFk=
-golang.org/x/sys v0.39.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
-golang.org/x/telemetry v0.0.0-20251008203120-078029d740a8 h1:LvzTn0GQhWuvKH/kVRS3R3bVAsdQWI7hvfLHGgh9+lU=
-golang.org/x/telemetry v0.0.0-20251008203120-078029d740a8/go.mod h1:Pi4ztBfryZoJEkyFTI5/Ocsu2jXyDr6iSdgJiYE/uwE=
+golang.org/x/sys v0.41.0 h1:Ivj+2Cp/ylzLiEU89QhWblYnOE9zerudt9Ftecq2C6k=
+golang.org/x/sys v0.41.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
+golang.org/x/telemetry v0.0.0-20260109210033-bd525da824e2 h1:O1cMQHRfwNpDfDJerqRoE2oD+AFlyid87D40L/OkkJo=
+golang.org/x/telemetry v0.0.0-20260109210033-bd525da824e2/go.mod h1:b7fPSJ0pKZ3ccUh8gnTONJxhn3c/PS6tyzQvyqw4iA8=
 golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
-golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
-golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
-golang.org/x/term v0.7.0/go.mod h1:P32HKFT3hSsZrRxla30E9HqToFYAQPCMs/zFMBUFqPY=
-golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo=
-golang.org/x/term v0.11.0/go.mod h1:zC9APTIj3jG3FdV/Ons+XE1riIZXG4aZ4GTHiPZJPIU=
-golang.org/x/term v0.16.0/go.mod h1:yn7UURbUtPyrVJPGPq404EukNFxcm/foM+bV/bfcDsY=
 golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
-golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
 golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
-golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
-golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
-golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
-golang.org/x/text v0.12.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
-golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
-golang.org/x/text v0.31.0 h1:aC8ghyu4JhP8VojJ2lEHBnochRno1sgL6nEi9WGFGMM=
-golang.org/x/text v0.31.0/go.mod h1:tKRAlv61yKIjGGHX/4tP1LTbc13YSec1pxVEWXzfoeM=
+golang.org/x/text v0.34.0 h1:oL/Qq0Kdaqxa1KbNeMKwQq0reLCCaFtqu2eNuSeNHbk=
+golang.org/x/text v0.34.0/go.mod h1:homfLqTYRFyVYemLBFl5GgL/DWEiH5wcsQ5gSh1yziA=
 golang.org/x/time v0.12.0 h1:ScB/8o8olJvc+CQPWrK3fPZNfh7qgwCrY0zJmoEQLSE=
 golang.org/x/time v0.12.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg=
 golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
 golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
 golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
 golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
-golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
-golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
-golang.org/x/tools v0.38.0 h1:Hx2Xv8hISq8Lm16jvBZ2VQf+RLmbd7wVUsALibYI/IQ=
-golang.org/x/tools v0.38.0/go.mod h1:yEsQ/d/YK8cjh0L6rZlY8tgtlKiBNTL14pGDJPJpYQs=
+golang.org/x/tools v0.41.0 h1:a9b8iMweWG+S0OBnlU36rzLp20z1Rp10w+IY2czHTQc=
+golang.org/x/tools v0.41.0/go.mod h1:XSY6eDqxVNiYgezAVqqCeihT4j1U2CCsqvH3WhQpnlg=
 golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
 golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
 golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 h1:go1bK/D/BFZV2I8cIQd1NKEZ+0owSTG1fDTci4IqFcE=
 golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
-google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
-google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
-google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
-google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
-google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
-google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
 google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE=
 google.golang.org/protobuf v1.36.11/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
 gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
 gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
 gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
-gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
-gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
-gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
-gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
-gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
 gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
 gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
-gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
 gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
 gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
 lukechampine.com/blake3 v1.4.1 h1:I3Smz7gso8w4/TunLKec6K2fn+kyKtDxr/xcQEN84Wg=
diff --git a/logger/logger.go b/logger/logger.go
new file mode 100644
index 0000000..6171475
--- /dev/null
+++ b/logger/logger.go
@@ -0,0 +1,72 @@
+package logger
+
+import (
+	"fmt"
+	"os"
+	"time"
+)
+
+// ANSI color codes.
+const (
+	reset   = "\033[0m"
+	dim     = "\033[2m"
+	red     = "\033[31m"
+	green   = "\033[32m"
+	yellow  = "\033[33m"
+	blue    = "\033[34m"
+	magenta = "\033[35m"
+	cyan    = "\033[36m"
+	white   = "\033[37m"
+	bold    = "\033[1m"
+)
+
+// Quiet suppresses all log output when true. Used during tests.
+var Quiet bool
+
+// Component tags matching lean client conventions.
+const (
+	Chain      = "chain"
+	Validator  = "validator"
+	Gossip     = "gossip"
+	Network    = "network"
+	Signature  = "signature"
+	Forkchoice = "forkchoice"
+	Sync       = "sync"
+	Node       = "node"
+	State      = "state"
+	Store      = "store"
+)
+
+func timestamp() string {
+	return time.Now().Format("2006-01-02T15:04:05.000Z")
+}
+
+// Info logs an info-level message with a component tag.
+func Info(component, format string, args ...any) {
+	if Quiet {
+		return
+	}
+	msg := fmt.Sprintf(format, args...)
+	fmt.Fprintf(os.Stderr, "%s%s%s %s%sINFO%s %s[%s]%s %s\n",
+		dim, timestamp(), reset, bold, green, reset, cyan, component, reset, msg)
+}
+
+// Warn logs a warning-level message with a component tag.
+func Warn(component, format string, args ...any) {
+	if Quiet {
+		return
+	}
+	msg := fmt.Sprintf(format, args...)
+	fmt.Fprintf(os.Stderr, "%s%s%s %s%sWARN%s %s[%s]%s %s\n",
+		dim, timestamp(), reset, bold, yellow, reset, cyan, component, reset, msg)
+}
+
+// Error logs an error-level message with a component tag.
+func Error(component, format string, args ...any) {
+	if Quiet {
+		return
+	}
+	msg := fmt.Sprintf(format, args...)
+	fmt.Fprintf(os.Stderr, "%s%s%s %s%sERROR%s %s[%s]%s %s\n",
+		dim, timestamp(), reset, bold, red, reset, cyan, component, reset, msg)
+}
diff --git a/network/gossipsub/encoding.go b/network/gossipsub/encoding.go
deleted file mode 100644
index 3ac8b2a..0000000
--- a/network/gossipsub/encoding.go
+++ /dev/null
@@ -1,73 +0,0 @@
-package gossipsub
-
-import (
-	"context"
-	"crypto/sha256"
-	"encoding/binary"
-
-	"github.com/golang/snappy"
-	pubsub "github.com/libp2p/go-libp2p-pubsub"
-	pb "github.com/libp2p/go-libp2p-pubsub/pb"
-
-	"github.com/geanlabs/gean/types"
-)
-
-// Message domains for ID computation.
-var (
-	DomainValidSnappy   = []byte{0x01, 0x00, 0x00, 0x00}
-	DomainInvalidSnappy = []byte{0x00, 0x00, 0x00, 0x00}
-)
-
-// PublishBlock SSZ-encodes, snappy-compresses, and publishes a signed block.
-func PublishBlock(ctx context.Context, topic *pubsub.Topic, sb *types.SignedBlockWithAttestation) error {
-	data, err := sb.MarshalSSZ()
-	if err != nil {
-		return err
-	}
-	return topic.Publish(ctx, snappy.Encode(nil, data))
-}
-
-// PublishAttestation SSZ-encodes, snappy-compresses, and publishes a signed attestation.
-func PublishAttestation(ctx context.Context, topic *pubsub.Topic, sa *types.SignedAttestation) error {
-	data, err := sa.MarshalSSZ()
-	if err != nil {
-		return err
-	}
-	return topic.Publish(ctx, snappy.Encode(nil, data))
-}
-
-// PublishAggregatedAttestation SSZ-encodes, snappy-compresses, and publishes a signed aggregated attestation.
-func PublishAggregatedAttestation(ctx context.Context, topic *pubsub.Topic, saa *types.SignedAggregatedAttestation) error {
-	data, err := saa.MarshalSSZ()
-	if err != nil {
-		return err
-	}
-	return topic.Publish(ctx, snappy.Encode(nil, data))
-}
-
-// ComputeMessageID computes SHA256(domain + uint64_le(topic_len) + topic + data)[:20].
-func ComputeMessageID(pmsg *pb.Message) string {
-	topic := pmsg.GetTopic()
-	data := pmsg.GetData()
-
-	// Try snappy decompress to determine domain.
-	domain := DomainInvalidSnappy
-	msgData := data
-	if decoded, err := snappy.Decode(nil, data); err == nil {
-		domain = DomainValidSnappy
-		msgData = decoded
-	}
-
-	topicBytes := []byte(topic)
-	var topicLen [8]byte
-	binary.LittleEndian.PutUint64(topicLen[:], uint64(len(topicBytes)))
-
-	h := sha256.New()
-	h.Write(domain)
-	h.Write(topicLen[:])
-	h.Write(topicBytes)
-	h.Write(msgData)
-	digest := h.Sum(nil)
-
-	return string(digest[:20])
-}
diff --git a/network/gossipsub/gossip.go b/network/gossipsub/gossip.go
deleted file mode 100644
index 05e2919..0000000
--- a/network/gossipsub/gossip.go
+++ /dev/null
@@ -1,79 +0,0 @@
-package gossipsub
-
-import (
-	"context"
-	"fmt"
-	"time"
-
-	pubsub "github.com/libp2p/go-libp2p-pubsub"
-	"github.com/libp2p/go-libp2p/core/host"
-	"github.com/libp2p/go-libp2p/core/peer"
-)
-
-// Gossip topic names.
-const (
-	BlockTopicFmt             = "/leanconsensus/%s/block/ssz_snappy"
-	SubnetAttestationTopicFmt = "/leanconsensus/%s/attestation_%d/ssz_snappy"
-	AggregationTopicFmt       = "/leanconsensus/%s/aggregation/ssz_snappy"
-)
-
-// Topics holds subscribed gossipsub topics.
-type Topics struct {
-	Block             *pubsub.Topic
-	SubnetAttestation *pubsub.Topic
-	Aggregation       *pubsub.Topic
-}
-
-// NewGossipSub creates a configured gossipsub instance.
-// directPeers are always messaged regardless of mesh or subscription state (used for bootnodes).
-func NewGossipSub(ctx context.Context, h host.Host, directPeers []peer.AddrInfo) (*pubsub.PubSub, error) {
-	return pubsub.NewGossipSub(ctx, h,
-		pubsub.WithMessageSignaturePolicy(pubsub.StrictNoSign),
-		pubsub.WithNoAuthor(), // Omit author (From) and sequence number for anonymous mode compatibility
-		pubsub.WithGossipSubParams(pubsub.GossipSubParams{
-			D:                         8,
-			Dlo:                       6,
-			Dhi:                       12,
-			Dlazy:                     6,
-			HeartbeatInterval:         700 * time.Millisecond,
-			FanoutTTL:                 60 * time.Second,
-			HistoryLength:             6,
-			HistoryGossip:             3,
-			GossipFactor:              0.25,
-			PruneBackoff:              time.Minute,
-			UnsubscribeBackoff:        10 * time.Second,
-			Connectors:                8,
-			MaxPendingConnections:     128,
-			ConnectionTimeout:         30 * time.Second,
-			DirectConnectTicks:        300,
-			DirectConnectInitialDelay: time.Second,
-			OpportunisticGraftTicks:   60,
-			OpportunisticGraftPeers:   2,
-			GraftFloodThreshold:       10 * time.Second,
-			MaxIHaveLength:            5000,
-			MaxIHaveMessages:          10,
-			IWantFollowupTime:         3 * time.Second,
-		}),
-		pubsub.WithSeenMessagesTTL(24*time.Second),
-		pubsub.WithMessageIdFn(ComputeMessageID),
-		pubsub.WithFloodPublish(true),       // Send to all subscribed peers, bypassing mesh for small devnets
-		pubsub.WithDirectPeers(directPeers), // Always message bootnodes regardless of mesh/subscription state
-	)
-}
-
-// JoinTopics joins the devnet-3 block, subnet attestation, and aggregation gossip topics.
-func JoinTopics(ps *pubsub.PubSub, devnetID string, subnetID uint64) (*Topics, error) { - blockTopic, err := ps.Join(fmt.Sprintf(BlockTopicFmt, devnetID)) - if err != nil { - return nil, fmt.Errorf("join block topic: %w", err) - } - subnetAttTopic, err := ps.Join(fmt.Sprintf(SubnetAttestationTopicFmt, devnetID, subnetID)) - if err != nil { - return nil, fmt.Errorf("join subnet attestation topic: %w", err) - } - aggTopic, err := ps.Join(fmt.Sprintf(AggregationTopicFmt, devnetID)) - if err != nil { - return nil, fmt.Errorf("join aggregation topic: %w", err) - } - return &Topics{Block: blockTopic, SubnetAttestation: subnetAttTopic, Aggregation: aggTopic}, nil -} diff --git a/network/gossipsub/gossip_test.go b/network/gossipsub/gossip_test.go deleted file mode 100644 index 7a5927a..0000000 --- a/network/gossipsub/gossip_test.go +++ /dev/null @@ -1,107 +0,0 @@ -package gossipsub_test - -import ( - "crypto/sha256" - "encoding/binary" - "encoding/hex" - "testing" - - "github.com/golang/snappy" - pb "github.com/libp2p/go-libp2p-pubsub/pb" - - "github.com/geanlabs/gean/network/gossipsub" -) - -func TestComputeMessageID(t *testing.T) { - topicStr := "/leanconsensus/devnet0/block/ssz_snappy" - data := []byte("test data") - - // Snappy block-encode so ComputeMessageID's Decode succeeds (valid domain). 
- compressed := snappy.Encode(nil, data) - - // Expected: SHA256(DomainValidSnappy + le64(topicLen) + topic + decompressedData)[:20] - var topicLen [8]byte - binary.LittleEndian.PutUint64(topicLen[:], uint64(len(topicStr))) - - h := sha256.New() - h.Write(gossipsub.DomainValidSnappy) - h.Write(topicLen[:]) - h.Write([]byte(topicStr)) - h.Write(data) - expected := string(h.Sum(nil)[:20]) - - msg := &pb.Message{ - Topic: &topicStr, - Data: compressed, - } - - got := gossipsub.ComputeMessageID(msg) - if got != expected { - t.Fatalf("ComputeMessageID mismatch:\n got: %x\n expect: %x", []byte(got), []byte(expected)) - } -} - -func TestComputeMessageIDInvalidSnappy(t *testing.T) { - topicStr := "/leanconsensus/devnet0/block/ssz_snappy" - data := []byte("not valid snappy data") - - // Expected: SHA256(DomainInvalidSnappy + le64(topicLen) + topic + rawData)[:20] - var topicLen [8]byte - binary.LittleEndian.PutUint64(topicLen[:], uint64(len(topicStr))) - - h := sha256.New() - h.Write(gossipsub.DomainInvalidSnappy) - h.Write(topicLen[:]) - h.Write([]byte(topicStr)) - h.Write(data) - expected := string(h.Sum(nil)[:20]) - - msg := &pb.Message{ - Topic: &topicStr, - Data: data, - } - - got := gossipsub.ComputeMessageID(msg) - if got != expected { - t.Fatalf("ComputeMessageID mismatch for invalid snappy:\n got: %x\n expect: %x", []byte(got), []byte(expected)) - } -} - -// Test vectors from zeam (Zig client) at zeam/rust/src/libp2p_bridge.rs. 
- -func TestComputeMessageIDValidSnappyVectors(t *testing.T) { - // zeam test: snappy-compress "hello", topic "test" - // Expected: "2e40c861545cc5b46d2220062e7440b9190bc383" - compressed := snappy.Encode(nil, []byte("hello")) - topic := "test" - - msg := &pb.Message{ - Data: compressed, - Topic: &topic, - } - - id := gossipsub.ComputeMessageID(msg) - got := hex.EncodeToString([]byte(id)) - expected := "2e40c861545cc5b46d2220062e7440b9190bc383" - if got != expected { - t.Errorf("valid snappy message ID mismatch:\n got: %s\n want: %s", got, expected) - } -} - -func TestComputeMessageIDInvalidSnappyVectors(t *testing.T) { - // zeam test: raw "hello" (not snappy compressed), topic "test" - // Expected: "a7f41aaccd241477955c981714eb92244c2efc98" - topic := "test" - - msg := &pb.Message{ - Data: []byte("hello"), - Topic: &topic, - } - - id := gossipsub.ComputeMessageID(msg) - got := hex.EncodeToString([]byte(id)) - expected := "a7f41aaccd241477955c981714eb92244c2efc98" - if got != expected { - t.Errorf("invalid snappy message ID mismatch:\n got: %s\n want: %s", got, expected) - } -} diff --git a/network/gossipsub/handler.go b/network/gossipsub/handler.go deleted file mode 100644 index 07dad1f..0000000 --- a/network/gossipsub/handler.go +++ /dev/null @@ -1,147 +0,0 @@ -package gossipsub - -import ( - "context" - "time" - - "github.com/golang/snappy" - pubsub "github.com/libp2p/go-libp2p-pubsub" - - "github.com/geanlabs/gean/observability/logging" - "github.com/geanlabs/gean/types" -) - -var gossipLog = logging.NewComponentLogger(logging.CompGossip) - -// GossipHandler processes decoded gossip messages. -type GossipHandler struct { - OnBlock func(*types.SignedBlockWithAttestation) - OnAttestation func(*types.SignedAttestation) - OnAggregatedAttestation func(*types.SignedAggregatedAttestation) -} - -// SubscribeTopics subscribes to topics and dispatches messages to handler. 
-func SubscribeTopics(ctx context.Context, topics *Topics, handler *GossipHandler) error { - blockSub, err := topics.Block.Subscribe() - if err != nil { - gossipLog.Error("failed to subscribe to block topic", "err", err) - return err - } - attSub, err := topics.SubnetAttestation.Subscribe() - if err != nil { - gossipLog.Error("failed to subscribe to attestation topic", "err", err) - return err - } - aggSub, err := topics.Aggregation.Subscribe() - if err != nil { - gossipLog.Error("failed to subscribe to aggregation topic", "err", err) - return err - } - - gossipLog.Info("subscribed to gossip topics", - "block_topic", topics.Block.String(), - ) - go readBlockMessages(ctx, blockSub, topics.Block, handler) - go readAttestationMessages(ctx, attSub, topics.SubnetAttestation, handler) - go readAggregatedAttestationMessages(ctx, aggSub, handler) - return nil -} - -func readBlockMessages(ctx context.Context, sub *pubsub.Subscription, topic *pubsub.Topic, handler *GossipHandler) { - // Log mesh peers periodically to diagnose gossip issues. - meshLogTicker := time.NewTicker(12 * time.Second) - defer meshLogTicker.Stop() - - for { - select { - case <-meshLogTicker.C: - peers := topic.ListPeers() - gossipLog.Info("block topic mesh state", - "mesh_peers", len(peers), - "topic", topic.String(), - ) - default: - } - - msg, err := sub.Next(ctx) - if err != nil { - gossipLog.Error("block subscription ended", "err", err) - return - } - - // Log source peer to help debug mesh issues. 
- fromPeer := msg.ReceivedFrom.String() - - decoded, err := snappy.Decode(nil, msg.Data) - if err != nil { - gossipLog.Warn("failed to snappy decode block", "from", fromPeer, "err", err) - continue - } - block := new(types.SignedBlockWithAttestation) - if err := block.UnmarshalSSZ(decoded); err != nil { - gossipLog.Warn("failed to unmarshal block", "from", fromPeer, "err", err) - continue - } - - gossipLog.Debug("block message received", "from", fromPeer, "slot", block.Message.Block.Slot) - - if handler.OnBlock != nil { - handler.OnBlock(block) - } - } -} - -func readAttestationMessages(ctx context.Context, sub *pubsub.Subscription, topic *pubsub.Topic, handler *GossipHandler) { - meshLogTicker := time.NewTicker(12 * time.Second) - defer meshLogTicker.Stop() - - for { - select { - case <-meshLogTicker.C: - peers := topic.ListPeers() - gossipLog.Info("attestation topic mesh state", - "mesh_peers", len(peers), - "topic", topic.String(), - ) - default: - } - - msg, err := sub.Next(ctx) - if err != nil { - gossipLog.Error("attestation subscription ended", "err", err) - return - } - decoded, err := snappy.Decode(nil, msg.Data) - if err != nil { - continue - } - att := new(types.SignedAttestation) - if err := att.UnmarshalSSZ(decoded); err != nil { - continue - } - if handler.OnAttestation != nil { - handler.OnAttestation(att) - } - } -} - -func readAggregatedAttestationMessages(ctx context.Context, sub *pubsub.Subscription, handler *GossipHandler) { - for { - msg, err := sub.Next(ctx) - if err != nil { - gossipLog.Error("aggregation subscription ended", "err", err) - return - } - decoded, err := snappy.Decode(nil, msg.Data) - if err != nil { - continue - } - agg := new(types.SignedAggregatedAttestation) - if err := agg.UnmarshalSSZ(decoded); err != nil { - continue - } - if handler.OnAggregatedAttestation != nil { - handler.OnAggregatedAttestation(agg) - } - } -} diff --git a/network/host.go b/network/host.go deleted file mode 100644 index 911fc34..0000000 --- 
a/network/host.go +++ /dev/null @@ -1,257 +0,0 @@ -package network - -import ( - "context" - "crypto/rand" - "encoding/hex" - "errors" - "fmt" - "os" - "strings" - - "github.com/libp2p/go-libp2p" - pubsub "github.com/libp2p/go-libp2p-pubsub" - "github.com/libp2p/go-libp2p/core/control" - "github.com/libp2p/go-libp2p/core/crypto" - "github.com/libp2p/go-libp2p/core/host" - libp2pnetwork "github.com/libp2p/go-libp2p/core/network" - "github.com/libp2p/go-libp2p/core/peer" - rcmgr "github.com/libp2p/go-libp2p/p2p/host/resource-manager" - "github.com/multiformats/go-multiaddr" - - "github.com/geanlabs/gean/network/gossipsub" - "github.com/geanlabs/gean/network/p2p" - "github.com/geanlabs/gean/observability/logging" - "github.com/geanlabs/gean/observability/metrics" -) - -var netLog = logging.NewComponentLogger(logging.CompNetwork) - -// ErrUnsupportedKeyFormat is returned when a node key file cannot be parsed. -var ErrUnsupportedKeyFormat = errors.New("unsupported key format") - -// allowAllGater is a connection gater that allows all connections (devnet: no filtering). -type allowAllGater struct{} - -func (g *allowAllGater) InterceptPeerDial(p peer.ID) bool { return true } -func (g *allowAllGater) InterceptAddrDial(id peer.ID, m multiaddr.Multiaddr) bool { return true } -func (g *allowAllGater) InterceptAccept(cm libp2pnetwork.ConnMultiaddrs) bool { return true } -func (g *allowAllGater) InterceptSecured(d libp2pnetwork.Direction, id peer.ID, cm libp2pnetwork.ConnMultiaddrs) bool { - return true -} -func (g *allowAllGater) InterceptUpgraded(c libp2pnetwork.Conn) (bool, control.DisconnectReason) { - return true, 0 -} - -const nodeKeyFilePerms = 0600 - -// Host wraps a libp2p host with gossipsub and protocol handlers. -type Host struct { - P2P host.Host - PubSub *pubsub.PubSub - Ctx context.Context - Cancel context.CancelFunc -} - -// NewHost creates a libp2p host with QUIC transport and secp256k1 identity. 
-func NewHost(listenAddr string, nodeKeyPath string, bootnodes []string) (*Host, error) { - ctx, cancel := context.WithCancel(context.Background()) - - privKey, err := loadOrGenerateKey(nodeKeyPath) - if err != nil { - cancel() - return nil, fmt.Errorf("load key: %w", err) - } - - addr, err := multiaddr.NewMultiaddr(listenAddr) - if err != nil { - cancel() - return nil, fmt.Errorf("parse listen addr: %w", err) - } - - // Configure resource manager with no limits (for devnet compatibility) - rmgr, err := rcmgr.NewResourceManager(rcmgr.NewFixedLimiter(rcmgr.InfiniteLimits)) - if err != nil { - cancel() - return nil, fmt.Errorf("create resource manager: %w", err) - } - - h, err := libp2p.New( - libp2p.Identity(privKey), - libp2p.ListenAddrs(addr), - libp2p.DefaultTransports, - libp2p.ResourceManager(rmgr), - libp2p.ConnectionGater(&allowAllGater{}), - libp2p.DisableRelay(), - ) - if err != nil { - cancel() - return nil, fmt.Errorf("new host: %w", err) - } - - // Log actual listen addresses for debugging - for _, a := range h.Addrs() { - netLog.Info("listening on", "addr", a.String()) - } - - var directPeers []peer.AddrInfo - for _, addr := range bootnodes { - pi, err := parseBootnode(addr) - if err != nil || pi.ID == h.ID() { - continue - } - directPeers = append(directPeers, *pi) - } - - gs, err := gossipsub.NewGossipSub(ctx, h, directPeers) - if err != nil { - h.Close() - cancel() - return nil, fmt.Errorf("gossipsub: %w", err) - } - - // Register peer connection/disconnection notification handler for metrics and logging. 
- h.Network().Notify(&libp2pnetwork.NotifyBundle{ - ConnectedF: func(n libp2pnetwork.Network, conn libp2pnetwork.Conn) { - dir := "inbound" - if conn.Stat().Direction == libp2pnetwork.DirOutbound { - dir = "outbound" - } - metrics.PeerConnectionEventsTotal.WithLabelValues(dir, "success").Inc() - netLog.Info("peer connected", - "peer_id", conn.RemotePeer().String(), - "direction", dir, - "remote_addr", conn.RemoteMultiaddr().String(), - "peers", len(n.Peers()), - ) - }, - DisconnectedF: func(n libp2pnetwork.Network, conn libp2pnetwork.Conn) { - dir := "inbound" - if conn.Stat().Direction == libp2pnetwork.DirOutbound { - dir = "outbound" - } - metrics.PeerDisconnectionEventsTotal.WithLabelValues(dir, "remote_close").Inc() - netLog.Info("peer disconnected", - "peer_id", conn.RemotePeer().String(), - "direction", dir, - "remote_addr", conn.RemoteMultiaddr().String(), - "peers", len(n.Peers()), - ) - }, - }) - - return &Host{P2P: h, PubSub: gs, Ctx: ctx, Cancel: cancel}, nil -} - -// Close shuts down the host. -func (h *Host) Close() error { - h.Cancel() - return h.P2P.Close() -} - -// ConnectBootnodes dials the given addresses (multiaddr or ENR) and connects to them sequentially. 
-func ConnectBootnodes(ctx context.Context, h host.Host, addrs []string) { - for _, addr := range addrs { - pi, err := parseBootnode(addr) - if err != nil { - netLog.Warn("invalid bootnode", "addr", addr, "err", err) - continue - } - if pi.ID == h.ID() { - continue - } - - if err := h.Connect(ctx, *pi); err != nil { - result := "error" - if ctx.Err() != nil { - result = "timeout" - } - metrics.PeerConnectionEventsTotal.WithLabelValues("outbound", result).Inc() - netLog.Warn("failed to connect to bootnode", - "peer_id", pi.ID.String(), - "addr", addr, - "err", err, - ) - continue - } - netLog.Info("connected to bootnode", - "peer_id", pi.ID.String(), - "addr", addr, - ) - } -} - -func parseBootnode(addr string) (*peer.AddrInfo, error) { - if strings.HasPrefix(addr, "enr:") { - return p2p.ENRToAddrInfo(addr) - } - ma, err := multiaddr.NewMultiaddr(addr) - if err != nil { - return nil, err - } - return peer.AddrInfoFromP2pAddr(ma) -} - -// loadOrGenerateKey tries to read a node identity key from disk, or generates -// and saves a new one if it does not exist. -func loadOrGenerateKey(path string) (crypto.PrivKey, error) { - if path == "" { - priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader) - return priv, err - } - - if _, err := os.Stat(path); err == nil { - return loadKey(path) - } else if !os.IsNotExist(err) { - return nil, fmt.Errorf("stat key file: %w", err) - } - - return generateAndSaveKey(path) -} - -// loadKey reads an existing key from disk, attempting to decode it as protobuf -// or raw hex. -func loadKey(path string) (crypto.PrivKey, error) { - data, err := os.ReadFile(path) - if err != nil { - return nil, fmt.Errorf("read key file: %w", err) - } - - // Try protobuf format first (native libp2p format). - if priv, err := crypto.UnmarshalPrivateKey(data); err == nil { - return priv, nil - } - - // Fall back to hex-encoded raw secp256k1 key (generated by generate-genesis.sh). 
- hexStr := strings.TrimSpace(string(data)) - raw, hexErr := hex.DecodeString(hexStr) - if hexErr == nil && len(raw) == 32 { - priv, err := crypto.UnmarshalSecp256k1PrivateKey(raw) - if err != nil { - return nil, fmt.Errorf("unmarshal hex key: %w", err) - } - return priv, nil - } - - return nil, fmt.Errorf("%w in %s", ErrUnsupportedKeyFormat, path) -} - -// generateAndSaveKey creates a new secp256k1 private key and writes it -// safely to disk. -func generateAndSaveKey(path string) (crypto.PrivKey, error) { - priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader) - if err != nil { - return nil, fmt.Errorf("generate secp256k1 key: %w", err) - } - - raw, err := crypto.MarshalPrivateKey(priv) - if err != nil { - return nil, err - } - - if writeErr := os.WriteFile(path, raw, nodeKeyFilePerms); writeErr != nil { - return nil, fmt.Errorf("save key: %w", writeErr) - } - - return priv, nil -} diff --git a/network/p2p/discovery.go b/network/p2p/discovery.go deleted file mode 100644 index e786b33..0000000 --- a/network/p2p/discovery.go +++ /dev/null @@ -1,91 +0,0 @@ -package p2p - -import ( - "fmt" - "net" - - "github.com/ethereum/go-ethereum/p2p/discover" - "github.com/ethereum/go-ethereum/p2p/enode" - - "github.com/geanlabs/gean/observability/logging" -) - -// DiscoveryService manages peer discovery using Discv5. -type DiscoveryService struct { - manager *LocalNodeManager - udp *discover.UDPv5 - port int -} - -// NewDiscoveryService starts a Discv5 service. -func NewDiscoveryService(manager *LocalNodeManager, port int, bootnodes []string) (*DiscoveryService, error) { - log := logging.NewComponentLogger(logging.CompNetwork) - - // 1. Parse Bootnodes - var boots []*enode.Node - for _, url := range bootnodes { - if url == "" { - continue - } - node, err := enode.Parse(enode.ValidSchemes, url) - if err != nil { - log.Warn("invalid bootnode URL", "url", url, "err", err) - continue - } - boots = append(boots, node) - } - - // 2. 
Start UDP Listener - addr := fmt.Sprintf("0.0.0.0:%d", port) - localAddr, err := net.ResolveUDPAddr("udp", addr) - if err != nil { - return nil, fmt.Errorf("failed to resolve udp addr %s: %w", addr, err) - } - conn, err := net.ListenUDP("udp", localAddr) - if err != nil { - return nil, fmt.Errorf("failed to listen on udp %s: %w", addr, err) - } - - cfg := discover.Config{ - PrivateKey: manager.PrivateKey(), - Bootnodes: boots, - } - - // 3. Start Discovery - udp, err := discover.ListenV5(conn, manager.local, cfg) - if err != nil { - return nil, fmt.Errorf("failed to start discv5: %w", err) - } - - log.Info("discovery service started", - "enr", manager.Node().String(), - "id", manager.Node().ID().String(), - ) - - return &DiscoveryService{ - manager: manager, - udp: udp, - port: port, - }, nil -} - -func (s *DiscoveryService) Close() { - s.udp.Close() -} - -// LookupRandom finds random nodes in the DHT. -func (s *DiscoveryService) LookupRandom() []*enode.Node { - iter := s.udp.RandomNodes() - defer iter.Close() - - var nodes []*enode.Node - for i := 0; i < 16 && iter.Next(); i++ { - nodes = append(nodes, iter.Node()) - } - return nodes -} - -// Peers returns all nodes in the local table. -func (s *DiscoveryService) Peers() []*enode.Node { - return s.udp.AllNodes() -} diff --git a/network/p2p/enr.go b/network/p2p/enr.go deleted file mode 100644 index a02a508..0000000 --- a/network/p2p/enr.go +++ /dev/null @@ -1,159 +0,0 @@ -package p2p - -import ( - "crypto/ecdsa" - "fmt" - "net" - "os" - - "github.com/ethereum/go-ethereum/crypto" - "github.com/ethereum/go-ethereum/p2p/enode" - "github.com/ethereum/go-ethereum/p2p/enr" - libp2p_crypto "github.com/libp2p/go-libp2p/core/crypto" - "github.com/libp2p/go-libp2p/core/peer" - ma "github.com/multiformats/go-multiaddr" -) - -// LocalNodeManager manages the local node's ENR and identity. 
-type LocalNodeManager struct { - db *enode.DB - local *enode.LocalNode - privKey *ecdsa.PrivateKey -} - -// NewLocalNodeManager creates a new local node manager. -// It loads the node key from the given path (or generates one) and opens the node DB. -func NewLocalNodeManager(dbPath string, nodeKeyPath string, ip net.IP, udpPort int, tcpPort int, quicPort int) (*LocalNodeManager, error) { - // 1. Load or generate node key - privKey, err := loadOrGenerateNodeKey(nodeKeyPath) - if err != nil { - return nil, fmt.Errorf("failed to load node key: %w", err) - } - - // 2. Initialize Node DB - db, err := enode.OpenDB(dbPath) - if err != nil { - return nil, fmt.Errorf("failed to open node db: %w", err) - } - - // 3. Create Local Node - local := enode.NewLocalNode(db, privKey) - - // 4. Set ENR entries - local.Set(enr.IP(ip)) - local.Set(enr.UDP(udpPort)) - // We might use TCP for libp2p later, or just for compat - if tcpPort != 0 { - local.Set(enr.TCP(tcpPort)) - } - // Advertise QUIC port for libp2p QUIC transport (required for inbound connections) - if quicPort != 0 { - local.Set(enr.QUIC(quicPort)) - } - - return &LocalNodeManager{ - db: db, - local: local, - privKey: privKey, - }, nil -} - -func (m *LocalNodeManager) Node() *enode.Node { - return m.local.Node() -} - -// LocalNode exposes the underlying enode.LocalNode for setting ENR entries. -func (m *LocalNodeManager) LocalNode() *enode.LocalNode { - return m.local -} - -func (m *LocalNodeManager) Database() *enode.DB { - return m.db -} - -func (m *LocalNodeManager) PrivateKey() *ecdsa.PrivateKey { - return m.privKey -} - -func (m *LocalNodeManager) Close() { - m.db.Close() -} - -// ENRToAddrInfo parses an ENR string and returns a libp2p AddrInfo with a QUIC multiaddr. 
-func ENRToAddrInfo(enrStr string) (*peer.AddrInfo, error) { - node, err := enode.Parse(enode.ValidSchemes, enrStr) - if err != nil { - return nil, fmt.Errorf("parse enr: %w", err) - } - - ip := node.IP() - if ip == nil { - return nil, fmt.Errorf("enr has no IP") - } - - var quicPort enr.QUIC - if err := node.Record().Load(&quicPort); err != nil { - return nil, fmt.Errorf("enr has no quic port: %w", err) - } - - pubkey := node.Pubkey() - if pubkey == nil { - return nil, fmt.Errorf("enr has no public key") - } - compressed := crypto.CompressPubkey(pubkey) - libp2pKey, err := libp2p_crypto.UnmarshalSecp256k1PublicKey(compressed) - if err != nil { - return nil, fmt.Errorf("convert pubkey: %w", err) - } - pid, err := peer.IDFromPublicKey(libp2pKey) - if err != nil { - return nil, fmt.Errorf("derive peer id: %w", err) - } - - addr, err := ma.NewMultiaddr(fmt.Sprintf("/ip4/%s/udp/%d/quic-v1", ip, quicPort)) - if err != nil { - return nil, fmt.Errorf("build multiaddr: %w", err) - } - - return &peer.AddrInfo{ID: pid, Addrs: []ma.Multiaddr{addr}}, nil -} - -// loadOrGenerateNodeKey loads a secp256k1 key from file or generates a new one. 
-func loadOrGenerateNodeKey(path string) (*ecdsa.PrivateKey, error) { - if _, err := os.Stat(path); os.IsNotExist(err) { - key, err := crypto.GenerateKey() - if err != nil { - return nil, err - } - if err := crypto.SaveECDSA(path, key); err != nil { - return nil, err - } - return key, nil - } - key, err := crypto.LoadECDSA(path) - if err == nil { - return key, nil - } - - // Try loading as raw binary (32 bytes) or Libp2p marshaled key - data, err := os.ReadFile(path) - if err != nil { - return nil, fmt.Errorf("failed to read key file: %w", err) - } - - if len(data) == 32 { - return crypto.ToECDSA(data) - } - - // Try unmarshaling as Libp2p key - sk, err := libp2p_crypto.UnmarshalPrivateKey(data) - if err == nil { - raw, err := sk.Raw() - if err != nil { - return nil, fmt.Errorf("failed to get raw key bytes: %w", err) - } - return crypto.ToECDSA(raw) - } - - return nil, fmt.Errorf("invalid key format (hex, binary, or libp2p): %w", err) -} diff --git a/network/p2p/enr_aggregator.go b/network/p2p/enr_aggregator.go deleted file mode 100644 index bcca722..0000000 --- a/network/p2p/enr_aggregator.go +++ /dev/null @@ -1,30 +0,0 @@ -package p2p - -import ( - "io" - - "github.com/ethereum/go-ethereum/rlp" -) - -// AggregatorEntry advertises whether the node is an aggregator. -// ENR key: "is_aggregator" with value 0x01 (true) or 0x00 (false). 
-type AggregatorEntry bool - -func (e AggregatorEntry) ENRKey() string { return "is_aggregator" } - -func (e AggregatorEntry) EncodeRLP(w io.Writer) error { - var v byte - if e { - v = 0x01 - } - return rlp.Encode(w, v) -} - -func (e *AggregatorEntry) DecodeRLP(s *rlp.Stream) error { - var v byte - if err := s.Decode(&v); err != nil { - return err - } - *e = AggregatorEntry(v == 0x01) - return nil -} diff --git a/network/reqresp/client.go b/network/reqresp/client.go deleted file mode 100644 index 2616c79..0000000 --- a/network/reqresp/client.go +++ /dev/null @@ -1,183 +0,0 @@ -package reqresp - -import ( - "bytes" - "context" - "encoding/binary" - "errors" - "fmt" - "io" - - "github.com/libp2p/go-libp2p/core/host" - "github.com/libp2p/go-libp2p/core/peer" - "github.com/libp2p/go-libp2p/core/protocol" - - "github.com/geanlabs/gean/types" -) - -// ErrNoStatusResponse indicates that the remote peer closed the status stream -// without sending any response bytes. -var ErrNoStatusResponse = errors.New("status response missing") - -// RequestStatus sends a status request to a peer and returns their response. -func RequestStatus(ctx context.Context, h host.Host, pid peer.ID, status Status) (*Status, error) { - ctx, cancel := context.WithTimeout(ctx, reqRespTimeout) - defer cancel() - - s, err := h.NewStream(ctx, pid, protocol.ID(StatusProtocol)) - if err != nil { - return nil, fmt.Errorf("open stream: %w", err) - } - defer s.Close() - - if err := WriteStatus(s, status); err != nil { - return nil, fmt.Errorf("write status: %w", err) - } - if err := s.CloseWrite(); err != nil { - return nil, fmt.Errorf("close write: %w", err) - } - - firstByte, err := ReadResponseCode(s) - if err != nil { - if errors.Is(err, io.EOF) || errors.Is(err, io.ErrUnexpectedEOF) { - return nil, ErrNoStatusResponse - } - return nil, fmt.Errorf("read response code: %w", err) - } - - // Interop fallback: some peers may send status payloads without the - // response-code prefix. 
- if !isKnownResponseCode(firstByte) { - resp, err := ReadStatus(io.MultiReader(bytes.NewReader([]byte{firstByte}), s)) - if err != nil { - return nil, fmt.Errorf("read response (no status code mode): %w", err) - } - return &resp, nil - } - if firstByte != ResponseSuccess { - return nil, fmt.Errorf("peer returned error code %d", firstByte) - } - - resp, err := ReadStatus(s) - if err != nil { - return nil, fmt.Errorf("read response: %w", err) - } - return &resp, nil -} - -// RequestBlocksByRoot requests blocks by their roots from a peer. -func RequestBlocksByRoot(ctx context.Context, h host.Host, pid peer.ID, roots [][32]byte) ([]*types.SignedBlockWithAttestation, error) { - return requestBlocksByRootWithPayload(ctx, h, pid, encodeBlocksByRootRequest(roots)) -} - -func requestBlocksByRootWithPayload( - ctx context.Context, - h host.Host, - pid peer.ID, - payload []byte, -) ([]*types.SignedBlockWithAttestation, error) { - ctx, cancel := context.WithTimeout(ctx, reqRespTimeout) - defer cancel() - - s, err := h.NewStream(ctx, pid, protocol.ID(BlocksByRootProtocol), protocol.ID(BlocksByRootProtocolLegacy)) - if err != nil { - return nil, fmt.Errorf("open stream: %w", err) - } - defer s.Close() - - // Write pre-encoded request payload. - if err := WriteSnappyFrame(s, payload); err != nil { - return nil, fmt.Errorf("write roots: %w", err) - } - if err := s.CloseWrite(); err != nil { - return nil, fmt.Errorf("close write: %w", err) - } - - // Read block responses until EOF. Each response is prefixed with a status byte. - var blocks []*types.SignedBlockWithAttestation - firstCode, err := ReadResponseCode(s) - if err != nil { - if err == io.EOF { - return blocks, nil - } - return blocks, fmt.Errorf("read response code: %w", err) - } - - // Interop fallback: some peers stream raw snappy frames without - // per-chunk response codes. If the first byte is not a known response code, - // treat it as the first byte of the frame varint length prefix. 
- if !isKnownResponseCode(firstCode) { - blocks, err := readFramedBlocks(io.MultiReader(bytes.NewReader([]byte{firstCode}), s)) - if err != nil { - return nil, fmt.Errorf("read framed blocks (no status byte mode): %w", err) - } - return blocks, nil - } - - code := firstCode - for { - if code != ResponseSuccess { - return blocks, fmt.Errorf("peer returned blocks_by_root error code %d", code) - } - data, err := ReadSnappyFrame(s) - if err != nil { - return blocks, fmt.Errorf("read block: %w", err) - } - block := new(types.SignedBlockWithAttestation) - if err := block.UnmarshalSSZ(data); err == nil { - blocks = append(blocks, block) - } - - code, err = ReadResponseCode(s) - if err != nil { - if err == io.EOF { - break - } - return blocks, fmt.Errorf("read response code: %w", err) - } - } - return blocks, nil -} - -// encodeBlocksByRootRequest SSZ-encodes a BlocksByRootRequest container. -// The spec defines BlocksByRootRequest as a single-field Container: -// -// class BlocksByRootRequest(Container): -// roots: RequestedBlockRoots # SSZList[Bytes32] -// -// A variable-size field in an SSZ container is preceded by a 4-byte -// little-endian offset. With one field the offset is always 4. 
-// Wire layout: [offset=4 (4 bytes LE)][root_0 (32 bytes)]...[root_N (32 bytes)] -func encodeBlocksByRootRequest(roots [][32]byte) []byte { - out := make([]byte, 4+len(roots)*32) - binary.LittleEndian.PutUint32(out[:4], 4) - for i, r := range roots { - copy(out[4+i*32:], r[:]) - } - return out -} - -func readFramedBlocks(r io.Reader) ([]*types.SignedBlockWithAttestation, error) { - var blocks []*types.SignedBlockWithAttestation - for { - data, err := ReadSnappyFrame(r) - if err != nil { - if err == io.EOF { - break - } - return blocks, err - } - block := new(types.SignedBlockWithAttestation) - if err := block.UnmarshalSSZ(data); err == nil { - blocks = append(blocks, block) - } - } - return blocks, nil -} - -func isKnownResponseCode(code byte) bool { - return code == ResponseSuccess || - code == ResponseInvalidRequest || - code == ResponseServerError || - code == ResponseResourceUnavailable -} diff --git a/network/reqresp/codec.go b/network/reqresp/codec.go deleted file mode 100644 index f9ba627..0000000 --- a/network/reqresp/codec.go +++ /dev/null @@ -1,200 +0,0 @@ -package reqresp - -import ( - "bytes" - "encoding/binary" - "fmt" - "io" - - "github.com/golang/snappy" - - "github.com/geanlabs/gean/types" -) - -// ReadStatus reads and decodes a snappy-framed status message. -func ReadStatus(r io.Reader) (Status, error) { - data, err := ReadSnappyFrame(r) - if err != nil { - return Status{}, err - } - if len(data) != 80 { - return Status{}, fmt.Errorf("invalid status length: %d", len(data)) - } - finalized := &types.Checkpoint{Slot: binary.LittleEndian.Uint64(data[32:40])} - copy(finalized.Root[:], data[0:32]) - head := &types.Checkpoint{Slot: binary.LittleEndian.Uint64(data[72:80])} - copy(head.Root[:], data[40:72]) - return Status{Finalized: finalized, Head: head}, nil -} - -// WriteStatus encodes and writes a snappy-framed status message. 
-func WriteStatus(w io.Writer, status Status) error { - var buf [80]byte - copy(buf[0:32], status.Finalized.Root[:]) - binary.LittleEndian.PutUint64(buf[32:40], status.Finalized.Slot) - copy(buf[40:72], status.Head.Root[:]) - binary.LittleEndian.PutUint64(buf[72:80], status.Head.Slot) - return WriteSnappyFrame(w, buf[:]) -} - -func writeSignedBlock(w io.Writer, block *types.SignedBlockWithAttestation) error { - data, err := block.MarshalSSZ() - if err != nil { - return err - } - return WriteSnappyFrame(w, data) -} - -// readBlocksByRootRequest decodes a BlocksByRootRequest from the wire. -// The spec defines BlocksByRootRequest as an SSZ Container with one -// variable-size field, so the encoding is always: -// -// [offset=4 (4 bytes LE)][root_0 (32 bytes)]...[root_N (32 bytes)] -func readBlocksByRootRequest(r io.Reader) ([][32]byte, error) { - data, err := ReadSnappyFrame(r) - if err != nil { - return nil, err - } - if len(data) < 4 { - return nil, fmt.Errorf("BlocksByRootRequest too short: %d bytes", len(data)) - } - offset := binary.LittleEndian.Uint32(data[:4]) - if offset != 4 { - return nil, fmt.Errorf("BlocksByRootRequest invalid offset: got %d, want 4", offset) - } - rootsData := data[4:] - if len(rootsData)%32 != 0 { - return nil, fmt.Errorf("BlocksByRootRequest roots length %d is not a multiple of 32", len(rootsData)) - } - return decodeRootsRaw(rootsData) -} - -func decodeRootsRaw(data []byte) ([][32]byte, error) { - n := len(data) / 32 - if n > types.MaxRequestBlocks { - return nil, fmt.Errorf("too many roots: %d", n) - } - roots := make([][32]byte, n) - for i := range roots { - copy(roots[i][:], data[i*32:(i+1)*32]) - } - return roots, nil -} - -// ReadResponseCode reads a single response status byte. -func ReadResponseCode(r io.Reader) (byte, error) { - var buf [1]byte - _, err := io.ReadFull(r, buf[:]) - return buf[0], err -} - -// ReadSnappyFrame reads a varint-length-prefixed snappy frame encoded message. 
-// Wire format: varint(uncompressed_len) + snappy_framed(data) -// The varint encodes the expected uncompressed byte length. -func ReadSnappyFrame(r io.Reader) ([]byte, error) { - uncompressedLen, err := binary.ReadUvarint(byteReader{r}) - if err != nil { - return nil, err - } - if uncompressedLen > 10*1024*1024 { - return nil, fmt.Errorf("message too large: %d", uncompressedLen) - } - - framed, err := readSnappyFramedStream(r, int(uncompressedLen)) - if err != nil { - return nil, err - } - sr := snappy.NewReader(bytes.NewReader(framed)) - decoded, err := io.ReadAll(sr) - if err != nil { - return nil, fmt.Errorf("snappy frame decode: %w", err) - } - if len(decoded) != int(uncompressedLen) { - return nil, fmt.Errorf("decoded length mismatch: got %d want %d", len(decoded), uncompressedLen) - } - return decoded, nil -} - -// WriteSnappyFrame writes a varint-length-prefixed snappy frame encoded message. -// Wire format: varint(uncompressed_len) + snappy_framed(data) -// The varint encodes the uncompressed byte length. 
-func WriteSnappyFrame(w io.Writer, data []byte) error { - var buf bytes.Buffer - sw := snappy.NewBufferedWriter(&buf) - if _, err := sw.Write(data); err != nil { - return err - } - if err := sw.Close(); err != nil { - return err - } - var lenBuf [binary.MaxVarintLen64]byte - n := binary.PutUvarint(lenBuf[:], uint64(len(data))) - if _, err := w.Write(lenBuf[:n]); err != nil { - return err - } - _, err := w.Write(buf.Bytes()) - return err -} - -func readSnappyFramedStream(r io.Reader, expectedUncompressed int) ([]byte, error) { - var framed bytes.Buffer - produced := 0 - - for produced < expectedUncompressed { - var hdr [4]byte - if _, err := io.ReadFull(r, hdr[:]); err != nil { - return nil, fmt.Errorf("read snappy chunk header: %w", err) - } - chunkType := hdr[0] - chunkLen := int(hdr[1]) | int(hdr[2])<<8 | int(hdr[3])<<16 - if chunkLen < 0 || chunkLen > 1<<20 { - return nil, fmt.Errorf("invalid snappy chunk length: %d", chunkLen) - } - - chunk := make([]byte, chunkLen) - if _, err := io.ReadFull(r, chunk); err != nil { - return nil, fmt.Errorf("read snappy chunk payload: %w", err) - } - - framed.Write(hdr[:]) - framed.Write(chunk) - - switch chunkType { - case 0x00: // compressed data chunk - if chunkLen < 4 { - return nil, fmt.Errorf("compressed snappy chunk too short") - } - decodedLen, err := snappy.DecodedLen(chunk[4:]) - if err != nil { - return nil, fmt.Errorf("snappy decoded length: %w", err) - } - produced += decodedLen - case 0x01: // uncompressed data chunk - if chunkLen < 4 { - return nil, fmt.Errorf("uncompressed snappy chunk too short") - } - produced += chunkLen - 4 - case 0xff: // stream identifier - // no produced increment - default: - // 0x80-0xfe are skippable by spec; others are unsupported here. 
- if chunkType >= 0x80 { - // no produced increment - continue - } - return nil, fmt.Errorf("unknown unskippable snappy chunk type: 0x%02x", chunkType) - } - } - return framed.Bytes(), nil -} - -// byteReader wraps an io.Reader to implement io.ByteReader. -type byteReader struct { - io.Reader -} - -func (br byteReader) ReadByte() (byte, error) { - var buf [1]byte - _, err := io.ReadFull(br.Reader, buf[:]) - return buf[0], err -} diff --git a/network/reqresp/messages.go b/network/reqresp/messages.go deleted file mode 100644 index 7e69350..0000000 --- a/network/reqresp/messages.go +++ /dev/null @@ -1,36 +0,0 @@ -package reqresp - -import ( - "time" - - "github.com/geanlabs/gean/types" -) - -// Protocol IDs matching the leanSpec networking specification. -const ( - StatusProtocol = "/leanconsensus/req/status/1/ssz_snappy" - BlocksByRootProtocol = "/leanconsensus/req/blocks_by_root/1/ssz_snappy" - BlocksByRootProtocolLegacy = "/leanconsensus/req/lean_blocks_by_root/1/ssz_snappy" -) - -// Response status codes. -const ( - ResponseSuccess = 0x00 - ResponseInvalidRequest = 0x01 - ResponseServerError = 0x02 - ResponseResourceUnavailable = 0x03 -) - -const reqRespTimeout = 10 * time.Second - -// Status is the status message exchanged between peers. -type Status struct { - Finalized *types.Checkpoint - Head *types.Checkpoint -} - -// ReqRespHandler processes incoming request/response messages. 
-type ReqRespHandler struct { - OnStatus func(Status) Status - OnBlocksByRoot func([][32]byte) []*types.SignedBlockWithAttestation -} diff --git a/network/reqresp/protocol_test.go b/network/reqresp/protocol_test.go deleted file mode 100644 index e06207f..0000000 --- a/network/reqresp/protocol_test.go +++ /dev/null @@ -1,20 +0,0 @@ -package reqresp_test - -import ( - "testing" - - "github.com/geanlabs/gean/network/reqresp" -) - -func TestReqRespProtocolIDsMatchCrossClient(t *testing.T) { - if reqresp.StatusProtocol != "/leanconsensus/req/status/1/ssz_snappy" { - t.Fatalf("status protocol mismatch: got %q", reqresp.StatusProtocol) - } - // BlocksByRootProtocol must match the leanSpec-defined protocol ID. - if reqresp.BlocksByRootProtocol != "/leanconsensus/req/blocks_by_root/1/ssz_snappy" { - t.Fatalf("blocks_by_root protocol mismatch: got %q", reqresp.BlocksByRootProtocol) - } - if reqresp.BlocksByRootProtocolLegacy != "/leanconsensus/req/lean_blocks_by_root/1/ssz_snappy" { - t.Fatalf("blocks_by_root legacy protocol mismatch: got %q", reqresp.BlocksByRootProtocolLegacy) - } -} diff --git a/network/reqresp/reqresp_test.go b/network/reqresp/reqresp_test.go deleted file mode 100644 index e591862..0000000 --- a/network/reqresp/reqresp_test.go +++ /dev/null @@ -1,98 +0,0 @@ -package reqresp_test - -import ( - "bytes" - "testing" - - "github.com/geanlabs/gean/network/reqresp" - "github.com/geanlabs/gean/types" -) - -func TestStatusSSZRoundTrip(t *testing.T) { - var finalizedRoot, headRoot [32]byte - for i := range finalizedRoot { - finalizedRoot[i] = 0xaa - headRoot[i] = 0xbb - } - - in := reqresp.Status{ - Finalized: &types.Checkpoint{Root: finalizedRoot, Slot: 3}, - Head: &types.Checkpoint{Root: headRoot, Slot: 7}, - } - - var buf bytes.Buffer - if err := reqresp.WriteStatus(&buf, in); err != nil { - t.Fatalf("writeStatus: %v", err) - } - - out, err := reqresp.ReadStatus(&buf) - if err != nil { - t.Fatalf("readStatus: %v", err) - } - - if out.Finalized.Slot != 
in.Finalized.Slot || out.Finalized.Root != in.Finalized.Root { - t.Fatalf("finalized mismatch: got (%d,%x), want (%d,%x)", - out.Finalized.Slot, out.Finalized.Root, in.Finalized.Slot, in.Finalized.Root) - } - if out.Head.Slot != in.Head.Slot || out.Head.Root != in.Head.Root { - t.Fatalf("head mismatch: got (%d,%x), want (%d,%x)", - out.Head.Slot, out.Head.Root, in.Head.Slot, in.Head.Root) - } -} - -func TestResponseCodeRoundTrip(t *testing.T) { - var buf bytes.Buffer - - // Write success + status payload (simulates server response). - buf.WriteByte(reqresp.ResponseSuccess) - in := reqresp.Status{ - Finalized: &types.Checkpoint{Root: [32]byte{0x01}, Slot: 1}, - Head: &types.Checkpoint{Root: [32]byte{0x02}, Slot: 2}, - } - if err := reqresp.WriteStatus(&buf, in); err != nil { - t.Fatalf("writeStatus: %v", err) - } - - // Read back: code then payload (simulates client). - code, err := reqresp.ReadResponseCode(&buf) - if err != nil { - t.Fatalf("readResponseCode: %v", err) - } - if code != reqresp.ResponseSuccess { - t.Fatalf("expected success code 0x00, got 0x%02x", code) - } - out, err := reqresp.ReadStatus(&buf) - if err != nil { - t.Fatalf("readStatus: %v", err) - } - if out.Finalized.Slot != 1 || out.Head.Slot != 2 { - t.Fatal("status payload mismatch after response code") - } -} - -func TestResponseCodeError(t *testing.T) { - var buf bytes.Buffer - buf.WriteByte(reqresp.ResponseServerError) - - code, err := reqresp.ReadResponseCode(&buf) - if err != nil { - t.Fatalf("readResponseCode: %v", err) - } - if code != reqresp.ResponseServerError { - t.Fatalf("expected error code 0x02, got 0x%02x", code) - } -} - -func TestReadStatusRejectsInvalidLength(t *testing.T) { - for _, n := range []int{79, 81} { - var buf bytes.Buffer - payload := make([]byte, n) - if err := reqresp.WriteSnappyFrame(&buf, payload); err != nil { - t.Fatalf("writeSnappyFrame(%d): %v", n, err) - } - - if _, err := reqresp.ReadStatus(&buf); err == nil { - t.Fatalf("expected readStatus error for 
payload length %d", n) - } - } -} diff --git a/network/reqresp/server.go b/network/reqresp/server.go deleted file mode 100644 index d69a5b4..0000000 --- a/network/reqresp/server.go +++ /dev/null @@ -1,57 +0,0 @@ -package reqresp - -import ( - "github.com/libp2p/go-libp2p/core/host" - "github.com/libp2p/go-libp2p/core/network" -) - -// RegisterReqResp registers request/response protocol handlers. -func RegisterReqResp(h host.Host, handler *ReqRespHandler) { - h.SetStreamHandler(StatusProtocol, func(s network.Stream) { - defer s.Close() - handleStatus(s, handler) - }) - - bbr := func(s network.Stream) { - defer s.Close() - handleBlocksByRoot(s, handler) - } - h.SetStreamHandler(BlocksByRootProtocol, bbr) - h.SetStreamHandler(BlocksByRootProtocolLegacy, bbr) -} - -func handleStatus(s network.Stream, handler *ReqRespHandler) { - if handler.OnStatus == nil { - return - } - req, err := ReadStatus(s) - if err != nil { - return - } - resp := handler.OnStatus(req) - if _, err := s.Write([]byte{ResponseSuccess}); err != nil { - return - } - if err := WriteStatus(s, resp); err != nil { - return - } -} - -func handleBlocksByRoot(s network.Stream, handler *ReqRespHandler) { - if handler.OnBlocksByRoot == nil { - return - } - roots, err := readBlocksByRootRequest(s) - if err != nil { - return - } - blocks := handler.OnBlocksByRoot(roots) - for _, block := range blocks { - if _, err := s.Write([]byte{ResponseSuccess}); err != nil { - return - } - if err := writeSignedBlock(s, block); err != nil { - return - } - } -} diff --git a/node/block.go b/node/block.go new file mode 100644 index 0000000..f6abfb3 --- /dev/null +++ b/node/block.go @@ -0,0 +1,401 @@ +package node + +import ( + "context" + "time" + + "github.com/geanlabs/gean/logger" + "github.com/geanlabs/gean/p2p" + "github.com/geanlabs/gean/types" + "github.com/geanlabs/gean/xmss" +) + +// onBlock processes a received block using an iterative work queue. 
+func (e *Engine) onBlock(signedBlock *types.SignedBlockWithAttestation) { + oldFinalizedSlot := e.Store.LatestFinalized().Slot + + queue := []*types.SignedBlockWithAttestation{signedBlock} + + for len(queue) > 0 { + current := queue[0] + queue = queue[1:] + e.processOneBlock(current, &queue) + } + + // Prune AFTER the entire cascade completes — not mid-cascade. + newFinalized := e.Store.LatestFinalized() + if newFinalized.Slot > oldFinalizedSlot { + PruneOnFinalization(e.Store, e.FC, oldFinalizedSlot, newFinalized.Slot, newFinalized.Root) + e.discardFinalizedPending(newFinalized.Slot) + } +} + +func (e *Engine) processOneBlock(signedBlock *types.SignedBlockWithAttestation, queue *[]*types.SignedBlockWithAttestation) { + block := signedBlock.Block.Block + blockRoot, _ := block.HashTreeRoot() + parentRoot := block.ParentRoot + + // Skip if already processed. + if e.Store.HasState(blockRoot) { + return + } + + hasParent := e.Store.HasState(parentRoot) + logger.Info(logger.Chain, "processing block slot=%d block_root=0x%x has_parent=%t", block.Slot, blockRoot, hasParent) + + // Check if parent state exists. + if !hasParent { + // Check pending block cache limit. + if e.pendingBlockCount() >= MaxPendingBlocks { + logger.Warn(logger.Chain, "pending block cache full (%d), rejecting block slot=%d block_root=0x%x", + MaxPendingBlocks, block.Slot, blockRoot) + return + } + + // Compute depth: parent's depth + 1. + depth := 1 + if parentDepth, ok := e.PendingBlockDepths[parentRoot]; ok { + depth = parentDepth + 1 + } + + // Check depth limit. + if depth > MaxBlockFetchDepth { + logger.Warn(logger.Chain, "block fetch depth exceeded (%d > %d), discarding block slot=%d block_root=0x%x", + depth, MaxBlockFetchDepth, block.Slot, blockRoot) + return + } + + logger.Warn(logger.Chain, "block parent missing slot=%d block_root=0x%x parent_root=0x%x depth=%d, storing as pending", + block.Slot, blockRoot, parentRoot, depth) + + // Track depth. 
+ e.PendingBlockDepths[blockRoot] = depth + + // Resolve the actual missing ancestor by walking the chain. + missingRoot := parentRoot + for { + ancestor, ok := e.PendingBlockParents[missingRoot] + if !ok { + break + } + missingRoot = ancestor + } + + e.PendingBlockParents[blockRoot] = missingRoot + + // Store block in DB as pending (no LiveChain entry — invisible to fork choice). + e.Store.StorePendingBlock(blockRoot, signedBlock) + + // Track parent→child relationship in memory. + children, ok := e.PendingBlocks[parentRoot] + if !ok { + children = make(map[[32]byte]bool) + e.PendingBlocks[parentRoot] = children + } + children[blockRoot] = true + + // Walk up through DB: if missingRoot has a stored header, + // the actual missing block is further up. + for { + header := e.Store.GetBlockHeader(missingRoot) + if header == nil { + break // truly missing — request from network + } + if e.Store.HasState(header.ParentRoot) { + // Parent state available — load and enqueue for processing. + storedBlock := e.Store.GetSignedBlock(missingRoot) + if storedBlock != nil { + *queue = append(*queue, storedBlock) + } + return + } + // Block exists but parent state missing — register as pending. + pChildren, ok := e.PendingBlocks[header.ParentRoot] + if !ok { + pChildren = make(map[[32]byte]bool) + e.PendingBlocks[header.ParentRoot] = pChildren + } + pChildren[missingRoot] = true + e.PendingBlockParents[missingRoot] = header.ParentRoot + missingRoot = header.ParentRoot + } + + // Request the actual missing block from network via the fetch batcher. + if e.P2P != nil { + logger.Info(logger.Sync, "queueing missing block block_root=0x%x for batched fetch", missingRoot) + select { + case e.FetchRootCh <- missingRoot: + default: + logger.Warn(logger.Sync, "fetch root channel full, dropping request for 0x%x", missingRoot) + } + } + return + } + + // Parent exists — process the block. 
+ blockStart := time.Now() + err := OnBlock(e.Store, signedBlock, e.Keys.ValidatorIDs()) + ObserveBlockProcessingTime(time.Since(blockStart).Seconds()) + if err != nil { + logger.Error(logger.Chain, "block processing failed slot=%d block_root=0x%x: %v", block.Slot, blockRoot, err) + return + } + + // Register in fork choice. + e.FC.OnBlock(block.Slot, blockRoot, parentRoot) + + // Check for finalization advance. + finalized := e.Store.LatestFinalized() + if finalized.Slot > 0 { + e.FC.Prune(finalized.Root) + } + + // Update head BEFORE processing proposer attestation. + e.updateHead(false) + + // Process proposer attestation. + ProcessProposerAttestation(e.Store, signedBlock, true) + + // Clear depth tracking for this block (now processed). + delete(e.PendingBlockDepths, blockRoot) + + // Cascade: enqueue pending children for processing. + e.collectPendingChildren(blockRoot, queue) +} + +// collectPendingChildren moves pending children of parent into the work queue. +func (e *Engine) collectPendingChildren(parentRoot [32]byte, queue *[]*types.SignedBlockWithAttestation) { + childRoots, ok := e.PendingBlocks[parentRoot] + if !ok { + return + } + delete(e.PendingBlocks, parentRoot) + + logger.Info(logger.Chain, "processing %d pending children of parent_root=0x%x", len(childRoots), parentRoot) + + for childRoot := range childRoots { + delete(e.PendingBlockParents, childRoot) + delete(e.PendingBlockDepths, childRoot) + + childBlock := e.Store.GetSignedBlock(childRoot) + if childBlock == nil { + logger.Warn(logger.Chain, "pending block block_root=0x%x missing from DB, skipping", childRoot) + continue + } + *queue = append(*queue, childBlock) + } +} + +// pendingBlockCount returns the total number of pending blocks across all parents. +func (e *Engine) pendingBlockCount() int { + count := 0 + for _, children := range e.PendingBlocks { + count += len(children) + } + return count +} + +// discardFinalizedPending removes all pending blocks at or below the finalized slot. 
+// Their subtrees are also discarded since they can never be processed. +func (e *Engine) discardFinalizedPending(finalizedSlot uint64) { + discarded := 0 + + // Collect parent roots to discard. + var parentsToDiscard [][32]byte + for parentRoot, children := range e.PendingBlocks { + for childRoot := range children { + header := e.Store.GetBlockHeader(childRoot) + if header != nil && header.Slot <= finalizedSlot { + // This pending block is at/below finalized — discard entire subtree. + e.discardPendingSubtree(childRoot) + delete(children, childRoot) + discarded++ + } + } + if len(children) == 0 { + parentsToDiscard = append(parentsToDiscard, parentRoot) + } + } + + for _, parentRoot := range parentsToDiscard { + delete(e.PendingBlocks, parentRoot) + } + + if discarded > 0 { + logger.Info(logger.Store, "discarded %d finalized pending blocks (finalized_slot=%d)", discarded, finalizedSlot) + } +} + +// fetchBatchGracePeriod is how long the batcher waits for additional roots +// to coalesce after receiving the first one. +const fetchBatchGracePeriod = 50 * time.Millisecond + +// runFetchBatcher coalesces fetch requests from FetchRootCh into batches of +// up to MaxBlocksPerRequest roots, then fires a single batched fetch per batch. +// +// This drastically reduces network round-trips during catch-up: instead of +// 100 sequential requests for 100 missing blocks, we make ~10 requests with +// 10 roots each. The grace period (50ms) gives time for closely-spaced +// fetch needs to coalesce without delaying steady-state operation noticeably. +func (e *Engine) runFetchBatcher(ctx context.Context) { + for { + var batch [][32]byte + seen := make(map[[32]byte]bool) + + // Wait for the first root (blocks indefinitely). + select { + case <-ctx.Done(): + return + case root := <-e.FetchRootCh: + batch = append(batch, root) + seen[root] = true + } + + // Collect more roots within the grace period, up to MaxBlocksPerRequest. 
+ grace := time.After(fetchBatchGracePeriod) + gather: + for len(batch) < p2p.MaxBlocksPerRequest { + select { + case <-ctx.Done(): + return + case root := <-e.FetchRootCh: + if !seen[root] { + batch = append(batch, root) + seen[root] = true + } + case <-grace: + break gather + } + } + + e.fireBatchFetch(ctx, batch) + } +} + +// fireBatchFetch issues a batched blocks_by_root request and feeds the +// returned blocks back into the engine. Roots not delivered are reported +// as failed so their pending subtrees can be discarded. +func (e *Engine) fireBatchFetch(ctx context.Context, roots [][32]byte) { + if e.P2P == nil || len(roots) == 0 { + return + } + logger.Info(logger.Sync, "batched fetch starting count=%d", len(roots)) + blocks, missing, err := e.P2P.FetchBlocksByRootBatchWithRetry(ctx, roots) + if err != nil { + logger.Warn(logger.Sync, "batched fetch failed count=%d err=%v", len(roots), err) + } + for _, b := range blocks { + e.onBlock(b) + } + for _, r := range missing { + select { + case e.FailedRootCh <- r: + default: + logger.Warn(logger.Sync, "failed root channel full, dropping notification for 0x%x", r) + } + } +} + +// onFailedRoot discards pending blocks whose subtree depends on a root that +// no peer could serve after exhausting all fetch retries. +// +// We free memory by dropping the orphaned subtree, but we do NOT permanently +// blacklist the root — if a peer reconnects with the missing block later, or +// a new orphan arrives needing the same parent, gean will try fetching again.
+func (e *Engine) onFailedRoot(failedRoot [32]byte) { + children, ok := e.PendingBlocks[failedRoot] + if !ok { + return + } + delete(e.PendingBlocks, failedRoot) + + discarded := 0 + for childRoot := range children { + e.discardPendingSubtree(childRoot) + discarded++ + } + logger.Warn(logger.Sync, "fetch exhausted for root 0x%x, discarded %d pending child block(s)", failedRoot, discarded) +} + +// discardPendingSubtree recursively discards a pending block and all its descendants. +func (e *Engine) discardPendingSubtree(blockRoot [32]byte) { + delete(e.PendingBlockParents, blockRoot) + delete(e.PendingBlockDepths, blockRoot) + + children, ok := e.PendingBlocks[blockRoot] + if !ok { + return + } + delete(e.PendingBlocks, blockRoot) + + for childRoot := range children { + e.discardPendingSubtree(childRoot) + } +} + +// onGossipAttestation validates and stores an individual attestation. +func (e *Engine) onGossipAttestation(att *types.SignedAttestation) { + // Validate attestation data. + if err := ValidateAttestationData(e.Store, att.Data); err != nil { + return + } + + // Get validator pubkey from target state. + targetState := e.Store.GetState(att.Data.Target.Root) + if targetState == nil { + return + } + if att.ValidatorID >= uint64(len(targetState.Validators)) { + return + } + pubkey := targetState.Validators[att.ValidatorID].Pubkey + + // Verify XMSS signature. + dataRoot, _ := att.Data.HashTreeRoot() + slot := uint32(att.Data.Slot) + + IncPqSigAttestationSigsTotal() + verifyStart := time.Now() + valid, err := verifyAttestation(pubkey, slot, dataRoot, att.Signature) + ObservePqSigVerificationTime(time.Since(verifyStart).Seconds()) + if err != nil || !valid { + IncPqSigAttestationSigsInvalid() + IncAttestationsInvalid() + return + } + IncPqSigAttestationSigsValid() + IncAttestationsValid(1) + + // Parse signature to opaque C handle for aggregation. + sigHandle, parseErr := xmss.ParseSignature(att.Signature[:]) + + // Store for aggregation. 
+ logger.Info(logger.Gossip, "attestation verified: validator=%d slot=%d dataRoot=%x", att.ValidatorID, att.Data.Slot, dataRoot) + e.Store.GossipSignatures.InsertWithHandle(dataRoot, att.Data, att.ValidatorID, att.Signature, sigHandle, parseErr) +} + +// onGossipAggregatedAttestation validates and stores an aggregated attestation. +func (e *Engine) onGossipAggregatedAttestation(agg *types.SignedAggregatedAttestation) { + // Validate attestation data. + if err := ValidateAttestationData(e.Store, agg.Data); err != nil { + return + } + + // Verify aggregated proof. + if agg.Proof != nil && len(agg.Proof.ProofData) > 0 { + targetState := e.Store.GetState(agg.Data.Target.Root) + if targetState == nil { + return + } + + participantIDs := types.BitlistIndices(agg.Proof.Participants) + if err := verifyAggregatedProof(targetState, participantIDs, agg.Data, agg.Proof.ProofData); err != nil { + logger.Error(logger.Signature, "aggregated attestation verification failed: %v", err) + return + } + } + + // Store in new payloads. 
+ dataRoot, _ := agg.Data.HashTreeRoot() + e.Store.NewPayloads.Push(dataRoot, agg.Data, agg.Proof) +} diff --git a/node/checkpoint_sync.go b/node/checkpoint_sync.go deleted file mode 100644 index 3ed7f62..0000000 --- a/node/checkpoint_sync.go +++ /dev/null @@ -1,156 +0,0 @@ -package node - -import ( - "fmt" - "io" - "net/http" - "time" - - "github.com/geanlabs/gean/types" -) - -const checkpointSyncTimeout = 30 * time.Second - -func downloadCheckpointState(url string) (*types.State, error) { - req, err := http.NewRequest(http.MethodGet, url, nil) - if err != nil { - return nil, fmt.Errorf("build checkpoint request: %w", err) - } - req.Header.Set("Accept", "application/octet-stream") - - client := &http.Client{Timeout: checkpointSyncTimeout} - resp, err := client.Do(req) - if err != nil { - return nil, fmt.Errorf("download checkpoint state: %w", err) - } - defer resp.Body.Close() - - if resp.StatusCode != http.StatusOK { - return nil, fmt.Errorf("checkpoint endpoint returned HTTP %d", resp.StatusCode) - } - - payload, err := io.ReadAll(resp.Body) - if err != nil { - return nil, fmt.Errorf("read checkpoint response: %w", err) - } - - var state types.State - if err := state.UnmarshalSSZ(payload); err != nil { - return nil, fmt.Errorf("decode checkpoint state: %w", err) - } - - return &state, nil -} - -func verifyCheckpointState(state *types.State, genesisTime uint64, genesisValidators []*types.Validator) (*types.State, [32]byte, [32]byte, error) { - if state == nil { - return nil, types.ZeroHash, types.ZeroHash, fmt.Errorf("checkpoint state is nil") - } - if state.Config == nil { - return nil, types.ZeroHash, types.ZeroHash, fmt.Errorf("checkpoint state config is nil") - } - if state.LatestBlockHeader == nil { - return nil, types.ZeroHash, types.ZeroHash, fmt.Errorf("checkpoint latest block header is nil") - } - if state.LatestJustified == nil { - return nil, types.ZeroHash, types.ZeroHash, fmt.Errorf("checkpoint latest justified checkpoint is nil") - } - if 
state.LatestFinalized == nil { - return nil, types.ZeroHash, types.ZeroHash, fmt.Errorf("checkpoint latest finalized checkpoint is nil") - } - if state.Config.GenesisTime != genesisTime { - return nil, types.ZeroHash, types.ZeroHash, fmt.Errorf("genesis time mismatch: expected %d, got %d", genesisTime, state.Config.GenesisTime) - } - if len(state.Validators) == 0 { - return nil, types.ZeroHash, types.ZeroHash, fmt.Errorf("checkpoint state has no validators") - } - if len(state.Validators) != len(genesisValidators) { - return nil, types.ZeroHash, types.ZeroHash, fmt.Errorf("validator count mismatch: expected %d, got %d", len(genesisValidators), len(state.Validators)) - } - - for i := range genesisValidators { - if genesisValidators[i] == nil { - return nil, types.ZeroHash, types.ZeroHash, fmt.Errorf("genesis validator %d is nil", i) - } - if state.Validators[i] == nil { - return nil, types.ZeroHash, types.ZeroHash, fmt.Errorf("checkpoint validator %d is nil", i) - } - if state.Validators[i].Pubkey != genesisValidators[i].Pubkey { - return nil, types.ZeroHash, types.ZeroHash, fmt.Errorf("validator pubkey mismatch at index %d", i) - } - } - - preparedState := state.Copy() - originalStateRoot := preparedState.LatestBlockHeader.StateRoot - preparedState.LatestBlockHeader.StateRoot = types.ZeroHash - - stateRoot, err := preparedState.HashTreeRoot() - if err != nil { - return nil, types.ZeroHash, types.ZeroHash, fmt.Errorf("hash checkpoint state: %w", err) - } - if originalStateRoot != types.ZeroHash && originalStateRoot != stateRoot { - return nil, types.ZeroHash, types.ZeroHash, fmt.Errorf("checkpoint header state root mismatch") - } - - preparedState.LatestBlockHeader.StateRoot = stateRoot - blockRoot, err := preparedState.LatestBlockHeader.HashTreeRoot() - if err != nil { - return nil, types.ZeroHash, types.ZeroHash, fmt.Errorf("hash checkpoint block header: %w", err) - } - if err := verifyCheckpointHistory(preparedState, blockRoot); err != nil { - return nil, 
types.ZeroHash, types.ZeroHash, err - } - - return preparedState, stateRoot, blockRoot, nil -} - -func verifyCheckpointHistory(state *types.State, anchorRoot [32]byte) error { - if state.LatestBlockHeader.Slot > state.Slot { - return fmt.Errorf("checkpoint latest block header slot %d exceeds state slot %d", state.LatestBlockHeader.Slot, state.Slot) - } - if state.LatestBlockHeader.Slot > 0 { - parentSlot, ok := checkpointRootSlot(state, anchorRoot, state.LatestBlockHeader.ParentRoot) - if !ok { - return fmt.Errorf("checkpoint parent root not found in canonical history") - } - if parentSlot >= state.LatestBlockHeader.Slot { - return fmt.Errorf("checkpoint parent slot %d is invalid for header slot %d", parentSlot, state.LatestBlockHeader.Slot) - } - } - if err := verifyCheckpointRootAtSlot(state, anchorRoot, state.LatestJustified, "justified"); err != nil { - return err - } - if err := verifyCheckpointRootAtSlot(state, anchorRoot, state.LatestFinalized, "finalized"); err != nil { - return err - } - return nil -} - -func verifyCheckpointRootAtSlot(state *types.State, anchorRoot [32]byte, checkpoint *types.Checkpoint, label string) error { - if checkpoint == nil || checkpoint.Root == types.ZeroHash { - return nil - } - slot, ok := checkpointRootSlot(state, anchorRoot, checkpoint.Root) - if !ok { - return fmt.Errorf("checkpoint %s root not found in canonical history", label) - } - if slot != checkpoint.Slot { - return fmt.Errorf("checkpoint %s slot mismatch: expected %d, got %d", label, checkpoint.Slot, slot) - } - return nil -} - -func checkpointRootSlot(state *types.State, anchorRoot, root [32]byte) (uint64, bool) { - if root == types.ZeroHash || state == nil || state.LatestBlockHeader == nil { - return 0, false - } - if root == anchorRoot { - return state.LatestBlockHeader.Slot, true - } - for slot, historicalRoot := range state.HistoricalBlockHashes { - if historicalRoot == root { - return uint64(slot), true - } - } - return 0, false -} diff --git 
a/node/checkpoint_sync_test.go b/node/checkpoint_sync_test.go deleted file mode 100644 index d0114f8..0000000 --- a/node/checkpoint_sync_test.go +++ /dev/null @@ -1,109 +0,0 @@ -package node - -import ( - "net/http" - "net/http/httptest" - "testing" - - "github.com/geanlabs/gean/types" -) - -func TestVerifyCheckpointState(t *testing.T) { - genesisValidators := makeCheckpointValidators(3) - state := makeCheckpointState(1234, genesisValidators) - - preparedState, stateRoot, blockRoot, err := verifyCheckpointState(state, 1234, genesisValidators) - if err != nil { - t.Fatalf("verifyCheckpointState returned error: %v", err) - } - if preparedState.LatestBlockHeader.StateRoot != stateRoot { - t.Fatalf("prepared state root mismatch: got %x want %x", preparedState.LatestBlockHeader.StateRoot, stateRoot) - } - if blockRoot == types.ZeroHash { - t.Fatal("expected non-zero checkpoint block root") - } -} - -func TestVerifyCheckpointStateRejectsValidatorMismatch(t *testing.T) { - genesisValidators := makeCheckpointValidators(2) - state := makeCheckpointState(1234, genesisValidators) - state.Validators[1].Pubkey[0] = 0xFF - - _, _, _, err := verifyCheckpointState(state, 1234, genesisValidators) - if err == nil { - t.Fatal("expected validator mismatch error") - } -} - -func TestVerifyCheckpointStateRejectsMissingCanonicalHistory(t *testing.T) { - genesisValidators := makeCheckpointValidators(2) - state := makeCheckpointState(1234, genesisValidators) - state.HistoricalBlockHashes = nil - - _, _, _, err := verifyCheckpointState(state, 1234, genesisValidators) - if err == nil { - t.Fatal("expected missing canonical history error") - } -} - -func TestDownloadCheckpointState(t *testing.T) { - state := makeCheckpointState(1234, makeCheckpointValidators(2)) - payload, err := state.MarshalSSZ() - if err != nil { - t.Fatalf("MarshalSSZ returned error: %v", err) - } - - server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { - 
w.Header().Set("Content-Type", "application/octet-stream") - _, _ = w.Write(payload) - })) - defer server.Close() - - downloadedState, err := downloadCheckpointState(server.URL) - if err != nil { - t.Fatalf("downloadCheckpointState returned error: %v", err) - } - if downloadedState.Config.GenesisTime != 1234 { - t.Fatalf("downloaded genesis time = %d, want 1234", downloadedState.Config.GenesisTime) - } -} - -func makeCheckpointState(genesisTime uint64, validators []*types.Validator) *types.State { - emptyBody := &types.BlockBody{Attestations: []*types.AggregatedAttestation{}} - bodyRoot, _ := emptyBody.HashTreeRoot() - stateValidators := make([]*types.Validator, len(validators)) - for i, validator := range validators { - copyValidator := *validator - stateValidators[i] = &copyValidator - } - - return &types.State{ - Config: &types.Config{GenesisTime: genesisTime}, - Slot: 3, - LatestBlockHeader: &types.BlockHeader{ - Slot: 3, - ProposerIndex: 0, - ParentRoot: [32]byte{0x11}, - StateRoot: types.ZeroHash, - BodyRoot: bodyRoot, - }, - LatestJustified: &types.Checkpoint{Root: [32]byte{0x11}, Slot: 2}, - LatestFinalized: &types.Checkpoint{Root: [32]byte{0x22}, Slot: 1}, - HistoricalBlockHashes: [][32]byte{{0x33}, {0x22}, {0x11}}, - JustifiedSlots: []byte{0x01}, - Validators: stateValidators, - JustificationsRoots: [][32]byte{}, - JustificationsValidators: []byte{0x01}, - } -} - -func makeCheckpointValidators(n int) []*types.Validator { - validators := make([]*types.Validator, n) - for i := range validators { - validators[i] = &types.Validator{ - Index: uint64(i), - Pubkey: [52]byte{byte(i + 1)}, - } - } - return validators -} diff --git a/node/clock.go b/node/clock.go deleted file mode 100644 index ba8bf65..0000000 --- a/node/clock.go +++ /dev/null @@ -1,79 +0,0 @@ -package node - -import ( - "time" - - "github.com/geanlabs/gean/types" -) - -// Clock tracks slot and interval timing relative to genesis.
-type Clock struct { - GenesisTime uint64 -} - -// NewClock creates a clock from genesis time (unix seconds). -func NewClock(genesisTime uint64) *Clock { - return &Clock{GenesisTime: genesisTime} -} - -func (c *Clock) genesisTimeMillis() uint64 { - return c.GenesisTime * 1000 -} - -func (c *Clock) nowMillis() uint64 { - return uint64(time.Now().UnixMilli()) -} - -// IsBeforeGenesis returns true if the current time is before genesis. -func (c *Clock) IsBeforeGenesis() bool { - return c.nowMillis() < c.genesisTimeMillis() -} - -// CurrentSlot returns the current slot number, or 0 if before genesis. -func (c *Clock) CurrentSlot() uint64 { - now := c.nowMillis() - genesisTimeMillis := c.genesisTimeMillis() - if now < genesisTimeMillis { - return 0 - } - elapsed := now - genesisTimeMillis - return elapsed / types.MillisecondsPerSlot -} - -// CurrentInterval returns the current interval within the slot (0-4), or 0 if before genesis. -func (c *Clock) CurrentInterval() uint64 { - now := c.nowMillis() - genesisTimeMillis := c.genesisTimeMillis() - if now < genesisTimeMillis { - return 0 - } - elapsed := now - genesisTimeMillis - return (elapsed % types.MillisecondsPerSlot) / types.MillisecondsPerInterval -} - -// CurrentTime returns the current unix time in milliseconds. -func (c *Clock) CurrentTime() uint64 { - return c.nowMillis() -} - -// DurationUntilNextInterval returns the time until the next genesis-aligned interval boundary. -func (c *Clock) DurationUntilNextInterval() time.Duration { - now := c.nowMillis() - genesisTimeMillis := c.genesisTimeMillis() - if now < genesisTimeMillis { - return time.Duration(genesisTimeMillis-now) * time.Millisecond - } - - elapsed := now - genesisTimeMillis - timeIntoInterval := elapsed % types.MillisecondsPerInterval - if timeIntoInterval == 0 { - return 0 - } - - return time.Duration(types.MillisecondsPerInterval-timeIntoInterval) * time.Millisecond -} - -// SlotTicker returns a channel that fires at the start of each interval. 
-func (c *Clock) SlotTicker() *time.Ticker { - return time.NewTicker(time.Duration(types.MillisecondsPerInterval) * time.Millisecond) -} diff --git a/node/consensus_store.go b/node/consensus_store.go new file mode 100644 index 0000000..f8a4d50 --- /dev/null +++ b/node/consensus_store.go @@ -0,0 +1,351 @@ +package node + +import ( + "github.com/geanlabs/gean/storage" + "github.com/geanlabs/gean/types" + "github.com/geanlabs/gean/xmss" +) + +const ( + // Buffer capacities rs L87-91. + aggregatedPayloadCap = 512 + newPayloadCap = 64 +) + +// ConsensusStore holds all state required for fork choice and block processing. +// +// Note: ForkChoice does NOT live here — it lives in Engine (Phase 7); +// Engine calls ForkChoice with store data as parameters. +type ConsensusStore struct { + Backend storage.Backend + NewPayloads *PayloadBuffer + KnownPayloads *PayloadBuffer + GossipSignatures GossipSignatureMap + PubKeyCache *xmss.PubKeyCache // cached parsed pubkey handles for aggregation +} + +// NewConsensusStore creates a store backed by the given storage backend.
+func NewConsensusStore(backend storage.Backend) *ConsensusStore { + return &ConsensusStore{ + Backend: backend, + NewPayloads: NewPayloadBuffer(newPayloadCap), + KnownPayloads: NewPayloadBuffer(aggregatedPayloadCap), + GossipSignatures: make(GossipSignatureMap), + PubKeyCache: xmss.NewPubKeyCache(), + } +} + +// --- Metadata accessors --- + +func (s *ConsensusStore) Time() uint64 { + return s.getMetadataUint64(storage.KeyTime) +} + +func (s *ConsensusStore) SetTime(t uint64) { + s.putMetadataUint64(storage.KeyTime, t) +} + +func (s *ConsensusStore) Head() [32]byte { + return s.getMetadataRoot(storage.KeyHead) +} + +func (s *ConsensusStore) SetHead(root [32]byte) { + s.putMetadataRoot(storage.KeyHead, root) +} + +func (s *ConsensusStore) SafeTarget() [32]byte { + return s.getMetadataRoot(storage.KeySafeTarget) +} + +func (s *ConsensusStore) SetSafeTarget(root [32]byte) { + s.putMetadataRoot(storage.KeySafeTarget, root) +} + +func (s *ConsensusStore) LatestJustified() *types.Checkpoint { + return s.getMetadataCheckpoint(storage.KeyLatestJustified) +} + +func (s *ConsensusStore) SetLatestJustified(cp *types.Checkpoint) { + s.putMetadataCheckpoint(storage.KeyLatestJustified, cp) +} + +func (s *ConsensusStore) LatestFinalized() *types.Checkpoint { + return s.getMetadataCheckpoint(storage.KeyLatestFinalized) +} + +func (s *ConsensusStore) SetLatestFinalized(cp *types.Checkpoint) { + s.putMetadataCheckpoint(storage.KeyLatestFinalized, cp) +} + +func (s *ConsensusStore) Config() *types.ChainConfig { + rv, err := s.Backend.BeginRead() + if err != nil { + return &types.ChainConfig{} + } + val, err := rv.Get(storage.TableMetadata, storage.KeyConfig) + if err != nil || val == nil { + return &types.ChainConfig{} + } + cfg := &types.ChainConfig{} + if err := cfg.UnmarshalSSZ(val); err != nil { + return &types.ChainConfig{} + } + return cfg +} + +func (s *ConsensusStore) SetConfig(cfg *types.ChainConfig) { + data, _ := cfg.MarshalSSZ() + s.putMetadata(storage.KeyConfig, data) +} 
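The metadata accessors above persist scalars as fixed 8-byte little-endian values (the hand-rolled shift loops appear later in the file, in `getMetadataUint64`/`putMetadataUint64`). A standalone sketch of that encoding, with the function names chosen here for illustration:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// putUint64 mirrors the store's hand-rolled little-endian encoding:
// byte i holds bits i*8 .. i*8+7.
func putUint64(v uint64) []byte {
	buf := make([]byte, 8)
	for i := 0; i < 8; i++ {
		buf[i] = byte(v >> (i * 8))
	}
	return buf
}

// getUint64 mirrors the decode loop; short input yields 0,
// matching the accessor's len(val) < 8 guard.
func getUint64(val []byte) uint64 {
	if len(val) < 8 {
		return 0
	}
	var result uint64
	for i := 0; i < 8; i++ {
		result |= uint64(val[i]) << (i * 8)
	}
	return result
}

func main() {
	v := uint64(0xdeadbeef12345678)
	buf := putUint64(v)
	// The hand-rolled loops agree with the stdlib little-endian codec.
	fmt.Println(getUint64(buf) == v)                   // true
	fmt.Println(binary.LittleEndian.Uint64(buf) == v)  // true
}
```

The shift loops are equivalent to `binary.LittleEndian.PutUint64`/`Uint64`; the stdlib forms could replace them without changing the stored bytes.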
+ +// --- Block accessors --- + +func (s *ConsensusStore) GetBlockHeader(root [32]byte) *types.BlockHeader { + rv, err := s.Backend.BeginRead() + if err != nil { + return nil + } + val, err := rv.Get(storage.TableBlockHeaders, root[:]) + if err != nil || val == nil { + return nil + } + h := &types.BlockHeader{} + if err := h.UnmarshalSSZ(val); err != nil { + return nil + } + return h +} + +// GetSignedBlock retrieves a full signed block by root from the BlockSignatures table, +// which stores the full SignedBlockWithAttestation SSZ. +func (s *ConsensusStore) GetSignedBlock(root [32]byte) *types.SignedBlockWithAttestation { + rv, err := s.Backend.BeginRead() + if err != nil { + return nil + } + + sigBytes, _ := rv.Get(storage.TableBlockSignatures, root[:]) + if sigBytes == nil { + return nil + } + + full := &types.SignedBlockWithAttestation{} + if err := full.UnmarshalSSZ(sigBytes); err != nil { + return nil + } + if full.Block == nil || full.Block.Block == nil { + return nil + } + return full +} + +// writeBlockData stores body and full signed block across split tables. +// Body in BlockBodies, full SignedBlockWithAttestation in BlockSignatures. +func writeBlockData(s *ConsensusStore, root [32]byte, signedBlock *types.SignedBlockWithAttestation) { + wb, _ := s.Backend.BeginWrite() + + // Store body separately. + if signedBlock.Block != nil && signedBlock.Block.Block != nil && signedBlock.Block.Block.Body != nil { + bodyData, _ := signedBlock.Block.Block.Body.MarshalSSZ() + if len(bodyData) > 0 { + wb.PutBatch(storage.TableBlockBodies, []storage.KV{{Key: root[:], Value: bodyData}}) + } + } + + // Store full SignedBlockWithAttestation (includes proposer attestation + signatures).
+ fullData, _ := signedBlock.MarshalSSZ() + wb.PutBatch(storage.TableBlockSignatures, []storage.KV{{Key: root[:], Value: fullData}}) + + wb.Commit() +} + +func (s *ConsensusStore) GetState(root [32]byte) *types.State { + rv, err := s.Backend.BeginRead() + if err != nil { + return nil + } + val, err := rv.Get(storage.TableStates, root[:]) + if err != nil || val == nil { + return nil + } + st := &types.State{} + if err := st.UnmarshalSSZ(val); err != nil { + return nil + } + return st +} + +func (s *ConsensusStore) HasState(root [32]byte) bool { + rv, err := s.Backend.BeginRead() + if err != nil { + return false + } + val, err := rv.Get(storage.TableStates, root[:]) + return err == nil && val != nil +} + +func (s *ConsensusStore) InsertState(root [32]byte, state *types.State) { + data, _ := state.MarshalSSZ() + wb, _ := s.Backend.BeginWrite() + wb.PutBatch(storage.TableStates, []storage.KV{{Key: root[:], Value: data}}) + wb.Commit() +} + +// StatesCount returns the number of states currently stored. +func (s *ConsensusStore) StatesCount() int { + rv, err := s.Backend.BeginRead() + if err != nil { + return 0 + } + it, err := rv.PrefixIterator(storage.TableStates, nil) + if err != nil { + return 0 + } + defer it.Close() + count := 0 + for it.Next() { + count++ + } + return count +} + +func (s *ConsensusStore) InsertBlockHeader(root [32]byte, header *types.BlockHeader) { + data, _ := header.MarshalSSZ() + wb, _ := s.Backend.BeginWrite() + wb.PutBatch(storage.TableBlockHeaders, []storage.KV{{Key: root[:], Value: data}}) + wb.Commit() +} + +// HeadSlot returns the slot of the current head block. +func (s *ConsensusStore) HeadSlot() uint64 { + h := s.GetBlockHeader(s.Head()) + if h == nil { + return 0 + } + return h.Slot +} + +// StorePendingBlock stores block in DB without LiveChain entry (invisible to fork choice). +// Split across 3 tables: headers (for chain walk), bodies, signatures (includes proposer att). 
+func (s *ConsensusStore) StorePendingBlock(root [32]byte, signedBlock *types.SignedBlockWithAttestation) { + block := signedBlock.Block.Block + header := &types.BlockHeader{ + Slot: block.Slot, + ProposerIndex: block.ProposerIndex, + ParentRoot: block.ParentRoot, + StateRoot: block.StateRoot, + } + if block.Body != nil { + bodyRoot, _ := block.Body.HashTreeRoot() + header.BodyRoot = bodyRoot + } + s.InsertBlockHeader(root, header) + writeBlockData(s, root, signedBlock) +} + +// InsertLiveChainEntry adds a (slot, root) -> parent_root entry for fork choice traversal. +func (s *ConsensusStore) InsertLiveChainEntry(slot uint64, root, parentRoot [32]byte) { + key := storage.EncodeLiveChainKey(slot, root) + wb, _ := s.Backend.BeginWrite() + wb.PutBatch(storage.TableLiveChain, []storage.KV{{Key: key, Value: parentRoot[:]}}) + wb.Commit() +} + +// PromoteNewToKnown moves all new payloads to known. +func (s *ConsensusStore) PromoteNewToKnown() { + entries := s.NewPayloads.Drain() + s.KnownPayloads.PushBatch(entries) +} + +// ExtractLatestKnownAttestations returns per-validator latest from known pool only. +// Used by updateHead. rs extract_latest_known_attestations (L43). +func (s *ConsensusStore) ExtractLatestKnownAttestations() map[uint64]*types.AttestationData { + return s.KnownPayloads.ExtractLatestAttestations() +} + +// ExtractLatestAllAttestations returns per-validator latest from known+new merged. +// Used by updateSafeTarget. rs extract_latest_all_attestations (L104). +func (s *ConsensusStore) ExtractLatestAllAttestations() map[uint64]*types.AttestationData { + known := s.KnownPayloads.ExtractLatestAttestations() + newAtts := s.NewPayloads.ExtractLatestAttestations() + // Merge: new overwrites known if newer. 
+ for vid, data := range newAtts { + existing, ok := known[vid] + if !ok || existing.Slot < data.Slot { + known[vid] = data + } + } + return known +} + +// --- Internal metadata helpers --- + +func (s *ConsensusStore) getMetadataUint64(key []byte) uint64 { + rv, err := s.Backend.BeginRead() + if err != nil { + return 0 + } + val, err := rv.Get(storage.TableMetadata, key) + if err != nil || val == nil || len(val) < 8 { + return 0 + } + var result uint64 + for i := 0; i < 8; i++ { + result |= uint64(val[i]) << (i * 8) + } + return result +} + +func (s *ConsensusStore) putMetadataUint64(key []byte, val uint64) { + buf := make([]byte, 8) + for i := 0; i < 8; i++ { + buf[i] = byte(val >> (i * 8)) + } + s.putMetadata(key, buf) +} + +func (s *ConsensusStore) getMetadataRoot(key []byte) [32]byte { + rv, err := s.Backend.BeginRead() + if err != nil { + return [32]byte{} + } + val, err := rv.Get(storage.TableMetadata, key) + if err != nil || val == nil || len(val) < 32 { + return [32]byte{} + } + var root [32]byte + copy(root[:], val) + return root +} + +func (s *ConsensusStore) putMetadataRoot(key []byte, root [32]byte) { + s.putMetadata(key, root[:]) +} + +func (s *ConsensusStore) getMetadataCheckpoint(key []byte) *types.Checkpoint { + rv, err := s.Backend.BeginRead() + if err != nil { + return &types.Checkpoint{} + } + val, err := rv.Get(storage.TableMetadata, key) + if err != nil || val == nil { + return &types.Checkpoint{} + } + cp := &types.Checkpoint{} + if err := cp.UnmarshalSSZ(val); err != nil { + return &types.Checkpoint{} + } + return cp +} + +func (s *ConsensusStore) putMetadataCheckpoint(key []byte, cp *types.Checkpoint) { + data, _ := cp.MarshalSSZ() + s.putMetadata(key, data) +} + +func (s *ConsensusStore) putMetadata(key, value []byte) { + wb, _ := s.Backend.BeginWrite() + wb.PutBatch(storage.TableMetadata, []storage.KV{{Key: key, Value: value}}) + wb.Commit() +} diff --git a/node/consensus_store_test.go b/node/consensus_store_test.go new file mode 100644 
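The merge rule in `ExtractLatestAllAttestations` above lets a new-pool entry replace a known-pool entry only when it is strictly newer; on a tie the known entry wins. A minimal sketch of that rule in isolation, with `attData` standing in for `types.AttestationData` (only the Slot field matters for the comparison):

```go
package main

import "fmt"

// attData stands in for types.AttestationData; only Slot matters here.
type attData struct{ Slot uint64 }

// mergeLatest mirrors ExtractLatestAllAttestations: start from the known
// pool, then let an entry from the new pool win only if strictly newer.
func mergeLatest(known, fresh map[uint64]*attData) map[uint64]*attData {
	for vid, data := range fresh {
		existing, ok := known[vid]
		if !ok || existing.Slot < data.Slot {
			known[vid] = data
		}
	}
	return known
}

func main() {
	known := map[uint64]*attData{0: {Slot: 5}, 1: {Slot: 9}}
	fresh := map[uint64]*attData{0: {Slot: 8}, 1: {Slot: 7}, 2: {Slot: 3}}
	merged := mergeLatest(known, fresh)
	fmt.Println(merged[0].Slot, merged[1].Slot, merged[2].Slot) // 8 9 3
}
```

Like the original, this mutates and returns the known map rather than allocating a third one, which is safe because `ExtractLatestAttestations` already hands back a fresh map per call.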
index 0000000..29d4b28 --- /dev/null +++ b/node/consensus_store_test.go @@ -0,0 +1,302 @@ +package node + +import ( + "testing" + + "github.com/geanlabs/gean/storage" + "github.com/geanlabs/gean/types" +) + +func makeTestStore() *ConsensusStore { + backend := storage.NewInMemoryBackend() + s := NewConsensusStore(backend) + s.SetConfig(&types.ChainConfig{GenesisTime: 1000}) + return s +} + +func makeCheckpoint(rootByte byte, slot uint64) *types.Checkpoint { + var root [32]byte + root[0] = rootByte + return &types.Checkpoint{Root: root, Slot: slot} +} + +func makeHeader(slot, proposer uint64, parentRootByte byte) *types.BlockHeader { + var parent [32]byte + parent[0] = parentRootByte + return &types.BlockHeader{ + Slot: slot, + ProposerIndex: proposer, + ParentRoot: parent, + } +} + +func TestMetadataRoundtrip(t *testing.T) { + s := makeTestStore() + + s.SetTime(42) + if s.Time() != 42 { + t.Fatalf("time: expected 42, got %d", s.Time()) + } + + var root [32]byte + root[0] = 0xab + s.SetHead(root) + if s.Head() != root { + t.Fatal("head mismatch") + } + + cp := makeCheckpoint(0xcd, 10) + s.SetLatestJustified(cp) + got := s.LatestJustified() + if got.Root != cp.Root || got.Slot != cp.Slot { + t.Fatal("justified mismatch") + } + + cp2 := makeCheckpoint(0xef, 5) + s.SetLatestFinalized(cp2) + got2 := s.LatestFinalized() + if got2.Root != cp2.Root || got2.Slot != cp2.Slot { + t.Fatal("finalized mismatch") + } +} + +func TestBlockHeaderStorage(t *testing.T) { + s := makeTestStore() + var root [32]byte + root[0] = 0x01 + h := makeHeader(5, 2, 0x00) + + s.InsertBlockHeader(root, h) + got := s.GetBlockHeader(root) + if got == nil { + t.Fatal("header not found") + } + if got.Slot != 5 || got.ProposerIndex != 2 { + t.Fatalf("header mismatch: slot=%d proposer=%d", got.Slot, got.ProposerIndex) + } +} + +func TestStateStorage(t *testing.T) { + s := makeTestStore() + var root [32]byte + root[0] = 0x01 + + state := &types.State{ + Config: &types.ChainConfig{GenesisTime: 1000}, + 
Slot: 10, + LatestBlockHeader: &types.BlockHeader{}, + LatestJustified: &types.Checkpoint{}, + LatestFinalized: &types.Checkpoint{}, + JustifiedSlots: types.NewBitlistSSZ(0), + JustificationsValidators: types.NewBitlistSSZ(0), + } + s.InsertState(root, state) + if !s.HasState(root) { + t.Fatal("state should exist") + } + got := s.GetState(root) + if got == nil { + t.Fatal("state not found") + } + if got.Slot != 10 { + t.Fatalf("state slot mismatch: expected 10, got %d", got.Slot) + } +} + +func TestPayloadBufferPushAndExtract(t *testing.T) { + pb := NewPayloadBuffer(100) + var dr [32]byte + dr[0] = 1 + data := &types.AttestationData{Slot: 5} + participants := types.NewBitlistSSZ(3) + types.BitlistSet(participants, 0) + types.BitlistSet(participants, 2) + proof := &types.AggregatedSignatureProof{Participants: participants} + + pb.Push(dr, data, proof) + if pb.Len() != 1 { + t.Fatalf("expected 1 entry, got %d", pb.Len()) + } + + atts := pb.ExtractLatestAttestations() + if len(atts) != 2 { + t.Fatalf("expected 2 validators, got %d", len(atts)) + } + if atts[0].Slot != 5 || atts[2].Slot != 5 { + t.Fatal("attestation data mismatch") + } +} + +func TestPayloadBufferFIFOEviction(t *testing.T) { + pb := NewPayloadBuffer(2) // capacity 2 proofs + + for i := byte(0); i < 5; i++ { + var dr [32]byte + dr[0] = i + data := &types.AttestationData{Slot: uint64(i)} + bits := types.NewBitlistSSZ(1) + types.BitlistSet(bits, 0) + proof := &types.AggregatedSignatureProof{Participants: bits} + pb.Push(dr, data, proof) + } + + // Should have evicted old entries to stay under capacity. 
+ if pb.TotalProofs() > 2 { + t.Fatalf("expected <= 2 proofs, got %d", pb.TotalProofs()) + } +} + +func TestPromoteNewToKnown(t *testing.T) { + s := makeTestStore() + + var dr [32]byte + dr[0] = 1 + data := &types.AttestationData{Slot: 5} + bits := types.NewBitlistSSZ(1) + types.BitlistSet(bits, 0) + proof := &types.AggregatedSignatureProof{Participants: bits} + + s.NewPayloads.Push(dr, data, proof) + if s.NewPayloads.Len() != 1 { + t.Fatal("expected 1 new payload") + } + if s.KnownPayloads.Len() != 0 { + t.Fatal("known should be empty") + } + + s.PromoteNewToKnown() + + if s.NewPayloads.Len() != 0 { + t.Fatal("new should be empty after promote") + } + if s.KnownPayloads.Len() != 1 { + t.Fatal("known should have 1 entry") + } +} + +func TestExtractLatestAllAttestations(t *testing.T) { + s := makeTestStore() + + // Validator 0 in known at slot 5. + var dr1 [32]byte + dr1[0] = 1 + bits1 := types.NewBitlistSSZ(1) + types.BitlistSet(bits1, 0) + s.KnownPayloads.Push(dr1, &types.AttestationData{Slot: 5}, &types.AggregatedSignatureProof{Participants: bits1}) + + // Validator 0 in new at slot 8 (newer). 
+ var dr2 [32]byte + dr2[0] = 2 + bits2 := types.NewBitlistSSZ(1) + types.BitlistSet(bits2, 0) + s.NewPayloads.Push(dr2, &types.AttestationData{Slot: 8}, &types.AggregatedSignatureProof{Participants: bits2}) + + all := s.ExtractLatestAllAttestations() + if all[0].Slot != 8 { + t.Fatalf("expected slot 8 (newer), got %d", all[0].Slot) + } +} + +func TestGossipSignatureInsertAndDelete(t *testing.T) { + gsm := make(GossipSignatureMap) + var dr [32]byte + dr[0] = 1 + data := &types.AttestationData{Slot: 5} + var sig [types.SignatureSize]byte + + gsm.Insert(dr, data, 0, sig) + gsm.Insert(dr, data, 1, sig) + + if gsm.Len() != 1 { + t.Fatalf("expected 1 entry, got %d", gsm.Len()) + } + if len(gsm[dr].Signatures) != 2 { + t.Fatal("expected 2 signatures") + } + + gsm.Delete([]GossipDeleteKey{{ValidatorID: 0, DataRoot: dr}}) + if len(gsm[dr].Signatures) != 1 { + t.Fatal("expected 1 signature after delete") + } +} + +func TestGossipSignaturePruneBelow(t *testing.T) { + gsm := make(GossipSignatureMap) + var sig [types.SignatureSize]byte + for i := uint64(0); i < 5; i++ { + var dr [32]byte + dr[0] = byte(i) + gsm.Insert(dr, &types.AttestationData{Slot: i}, 0, sig) + } + + pruned := gsm.PruneBelow(2) // remove slots 0, 1, 2 + if pruned != 3 { + t.Fatalf("expected 3 pruned, got %d", pruned) + } + if gsm.Len() != 2 { + t.Fatalf("expected 2 remaining, got %d", gsm.Len()) + } +} + +func TestValidateAttestationDataAvailability(t *testing.T) { + s := makeTestStore() + data := &types.AttestationData{ + Slot: 5, + Source: &types.Checkpoint{Root: [32]byte{1}, Slot: 3}, + Target: &types.Checkpoint{Root: [32]byte{2}, Slot: 4}, + Head: &types.Checkpoint{Root: [32]byte{3}, Slot: 5}, + } + + // All blocks missing — should fail. 
+ err := ValidateAttestationData(s, data) + if err == nil { + t.Fatal("should fail with unknown blocks") + } + se, ok := err.(*StoreError) + if !ok || se.Kind != ErrUnknownSourceBlock { + t.Fatalf("expected UnknownSourceBlock, got %v", err) + } +} + +func TestValidateAttestationDataTopology(t *testing.T) { + s := makeTestStore() + s.SetTime(30) // slot ~6 + + // Insert blocks for source, target, head. + s.InsertBlockHeader([32]byte{1}, &types.BlockHeader{Slot: 3}) + s.InsertBlockHeader([32]byte{2}, &types.BlockHeader{Slot: 4}) + s.InsertBlockHeader([32]byte{3}, &types.BlockHeader{Slot: 5}) + + // Valid attestation. + data := &types.AttestationData{ + Slot: 5, + Source: &types.Checkpoint{Root: [32]byte{1}, Slot: 3}, + Target: &types.Checkpoint{Root: [32]byte{2}, Slot: 4}, + Head: &types.Checkpoint{Root: [32]byte{3}, Slot: 5}, + } + if err := ValidateAttestationData(s, data); err != nil { + t.Fatalf("should pass: %v", err) + } + + // Source exceeds target. + bad := *data + bad.Source = &types.Checkpoint{Root: [32]byte{3}, Slot: 5} + bad.Target = &types.Checkpoint{Root: [32]byte{1}, Slot: 3} + err := ValidateAttestationData(s, &bad) + if err == nil { + t.Fatal("should fail: source exceeds target") + } +} + +func TestAggregationBitsFromValidatorIndices(t *testing.T) { + bits := aggregationBitsFromValidatorIndices([]uint64{0, 3, 7}) + if !types.BitlistGet(bits, 0) || !types.BitlistGet(bits, 3) || !types.BitlistGet(bits, 7) { + t.Fatal("expected bits 0, 3, 7 set") + } + if types.BitlistGet(bits, 1) || types.BitlistGet(bits, 5) { + t.Fatal("bits 1, 5 should not be set") + } + if types.BitlistLen(bits) != 8 { + t.Fatalf("expected length 8, got %d", types.BitlistLen(bits)) + } +} diff --git a/node/handler.go b/node/handler.go deleted file mode 100644 index 190f568..0000000 --- a/node/handler.go +++ /dev/null @@ -1,165 +0,0 @@ -package node - -import ( - "fmt" - "log/slog" - - "github.com/geanlabs/gean/chain/forkchoice" - "github.com/geanlabs/gean/network/gossipsub" - 
"github.com/geanlabs/gean/network/reqresp" - "github.com/geanlabs/gean/observability/logging" - "github.com/geanlabs/gean/types" -) - -// registerReqRespHandlers wires up request/response protocol handlers. -// This is called during node initialization so sync can work. -func registerReqRespHandlers(n *Node, fc *forkchoice.Store) { - reqresp.RegisterReqResp(n.Host.P2P, &reqresp.ReqRespHandler{ - OnStatus: func(req reqresp.Status) reqresp.Status { - status := fc.GetStatus() - return reqresp.Status{ - Finalized: &types.Checkpoint{Root: status.FinalizedRoot, Slot: status.FinalizedSlot}, - Head: &types.Checkpoint{Root: status.Head, Slot: status.HeadSlot}, - } - }, - OnBlocksByRoot: func(roots [][32]byte) []*types.SignedBlockWithAttestation { - var blocks []*types.SignedBlockWithAttestation - for _, root := range roots { - if sb, ok := fc.GetSignedBlock(root); ok { - blocks = append(blocks, sb) - } - } - return blocks - }, - }) -} - -// registerGossipHandlers subscribes to gossip topics for blocks and attestations. -// This is called AFTER initial sync to prevent processing gossip blocks before -// the chain is connected to the network's canonical chain. -func (n *Node) registerGossipHandlers() error { - gossipLog := logging.NewComponentLogger(logging.CompGossip) - - // Subscribe to gossip. 
- if err := gossipsub.SubscribeTopics(n.Host.Ctx, n.Topics, &gossipsub.GossipHandler{ - OnBlock: func(sb *types.SignedBlockWithAttestation) { - block := sb.Message.Block - blockRoot, _ := block.HashTreeRoot() - gossipLog.Info("received block via gossip", - "slot", block.Slot, - "proposer", block.ProposerIndex, - "block_root", logging.LongHash(blockRoot), - "parent_root", logging.LongHash(block.ParentRoot), - "state_root", logging.LongHash(block.StateRoot), - "attestations", len(block.Body.Attestations), - ) - if err := n.FC.ProcessBlock(sb); err != nil { - status := n.FC.GetStatus() - if isMissingParentStateErr(err) { - gossipLog.Warn("parent state missing for gossip block, attempting recovery", - "slot", block.Slot, - "block_root", logging.LongHash(blockRoot), - "parent_root", logging.LongHash(block.ParentRoot), - "head_slot", status.HeadSlot, - "finalized_slot", status.FinalizedSlot, - ) - if n.recoverMissingParentSync(n.Host.Ctx, block.ParentRoot) { - if retryErr := n.FC.ProcessBlock(sb); retryErr == nil { - gossipLog.Info("accepted gossip block after parent recovery", - "slot", block.Slot, - "block_root", logging.LongHash(blockRoot), - ) - // Process any pending children now that this block is available. - n.processPendingChildren(blockRoot, gossipLog) - return - } else { - err = retryErr - } - } - // Cache the block for later processing when parent becomes available. - n.PendingBlocks.Add(sb) - gossipLog.Info("cached pending block awaiting parent", - "slot", block.Slot, - "block_root", logging.LongHash(blockRoot), - "parent_root", logging.LongHash(block.ParentRoot), - "pending_count", n.PendingBlocks.Len(), - ) - return - } - gossipLog.Warn("rejected gossip block", - "slot", block.Slot, - "block_root", logging.LongHash(blockRoot), - "err", err, - "head_slot", status.HeadSlot, - "finalized_slot", status.FinalizedSlot, - ) - return - } - // Block accepted. 
- gossipLog.Info("block accepted", - "slot", block.Slot, - "proposer", block.ProposerIndex, - "block_root", logging.LongHash(blockRoot), - "parent_root", logging.LongHash(block.ParentRoot), - "state_root", logging.LongHash(block.StateRoot), - "attestations", len(block.Body.Attestations), - ) - n.processPendingChildren(blockRoot, gossipLog) - }, - OnAttestation: func(sa *types.SignedAttestation) { - if sa.Message != nil { - gossipLog.Debug("received attestation from gossip", - "slot", sa.Message.Slot, - "validator", sa.ValidatorID, - "head_root", logging.LongHash(sa.Message.Head.Root), - "target_slot", sa.Message.Target.Slot, - "target_root", logging.LongHash(sa.Message.Target.Root), - "source_slot", sa.Message.Source.Slot, - "source_root", logging.LongHash(sa.Message.Source.Root), - ) - } - n.FC.ProcessSubnetAttestation(sa) - }, - OnAggregatedAttestation: func(saa *types.SignedAggregatedAttestation) { - gossipLog.Debug("received aggregated attestation via gossip", - "slot", saa.Data.Slot, - ) - n.FC.ProcessAggregatedAttestation(saa) - }, - }); err != nil { - return fmt.Errorf("subscribe topics: %w", err) - } - - return nil -} - -// processPendingChildren processes any cached blocks that were waiting for this parent. -// This implements the leanSpec requirement to process cached blocks when their parent arrives. -func (n *Node) processPendingChildren(parentRoot [32]byte, log *slog.Logger) { - children := n.PendingBlocks.GetChildrenOf(parentRoot) - for _, sb := range children { - block := sb.Message.Block - blockRoot, _ := block.HashTreeRoot() - - if err := n.FC.ProcessBlock(sb); err != nil { - // Still can't process - may be missing a deeper ancestor. - log.Debug("pending child still not processable", - "slot", block.Slot, - "block_root", logging.LongHash(blockRoot), - "err", err, - ) - continue - } - - // Successfully processed - remove from pending and recurse. 
- n.PendingBlocks.Remove(blockRoot) - log.Info("processed pending child block", - "slot", block.Slot, - "block_root", logging.LongHash(blockRoot), - "parent_root", logging.LongHash(parentRoot), - ) - - // Recursively process any children of this block. - n.processPendingChildren(blockRoot, log) - } -} diff --git a/node/lifecycle.go b/node/lifecycle.go deleted file mode 100644 index 37f26f1..0000000 --- a/node/lifecycle.go +++ /dev/null @@ -1,365 +0,0 @@ -package node - -import ( - "fmt" - "log/slog" - "net" - "os" - "path/filepath" - "strconv" - "time" - - "github.com/multiformats/go-multiaddr" - - apiserver "github.com/geanlabs/gean/api/server" - "github.com/geanlabs/gean/chain/forkchoice" - "github.com/geanlabs/gean/chain/statetransition" - "github.com/geanlabs/gean/network" - "github.com/geanlabs/gean/network/gossipsub" - "github.com/geanlabs/gean/network/p2p" - "github.com/geanlabs/gean/observability/logging" - "github.com/geanlabs/gean/observability/metrics" - boltstore "github.com/geanlabs/gean/storage/bolt" - "github.com/geanlabs/gean/types" - "github.com/geanlabs/gean/xmss/leansig" -) - -// New creates and wires up a new Node. 
-func New(cfg Config) (*Node, error) { - log := logging.NewComponentLogger(logging.CompNode) - - fc, db, err := initChain(log, cfg) - if err != nil { - return nil, err - } - - host, topics, err := initP2P(cfg) - if err != nil { - db.Close() - return nil, err - } - - p2pManager, p2pDiscovery, err2 := initDiscovery(log, cfg) - if err2 != nil { - host.Close() - db.Close() - return nil, err2 - } - - validatorKeys, err := loadValidatorKeys(log, cfg) - if err != nil { - if p2pDiscovery != nil { - p2pDiscovery.Close() - } - if p2pManager != nil { - p2pManager.Close() - } - host.Close() - db.Close() - return nil, err - } - - validator := &ValidatorDuties{ - Indices: cfg.ValidatorIDs, - Keys: validatorKeys, - FC: fc, - Topics: topics, - PublishBlock: gossipsub.PublishBlock, - PublishAttestation: gossipsub.PublishAttestation, - PublishAggregatedAttestation: gossipsub.PublishAggregatedAttestation, - IsAggregator: cfg.IsAggregator, - Log: logging.NewComponentLogger(logging.CompValidator), - } - - n := &Node{ - FC: fc, - Host: host, - Topics: topics, - Clock: NewClock(cfg.GenesisTime), - Validator: validator, - P2PManager: p2pManager, - P2PDiscovery: p2pDiscovery, - PendingBlocks: NewPendingBlockCache(), - dbCloser: db, - log: log, - } - - // Register req/resp handlers for sync. Gossip handlers are registered - // before initial sync in ticker.go so blocks arriving during sync are - // not silently dropped. 
- registerReqRespHandlers(n, fc) - - if len(cfg.Bootnodes) > 0 { - network.ConnectBootnodes(host.Ctx, host.P2P, cfg.Bootnodes) - } - - startMetrics(log, cfg) - apiServer, err := startAPI(cfg, fc) - if err != nil { - if p2pDiscovery != nil { - p2pDiscovery.Close() - } - if p2pManager != nil { - p2pManager.Close() - } - host.Close() - db.Close() - return nil, err - } - n.API = apiServer - - return n, nil -} - -func initChain(log *slog.Logger, cfg Config) (*forkchoice.Store, *boltstore.Store, error) { - if err := os.MkdirAll(cfg.DataDir, 0700); err != nil { - return nil, nil, fmt.Errorf("create data dir: %w", err) - } - dbPath := filepath.Join(cfg.DataDir, "gean.db") - db, err := boltstore.New(dbPath) - if err != nil { - return nil, nil, fmt.Errorf("open database: %w", err) - } - - var ( - fc *forkchoice.Store - checkpointSyncSucceeded bool - ) - - if cfg.CheckpointSyncURL != "" { - log.Info("checkpoint sync enabled", "checkpoint_sync_url", cfg.CheckpointSyncURL) - - state, err := downloadCheckpointState(cfg.CheckpointSyncURL) - if err != nil { - log.Warn("checkpoint sync failed, falling back to database/genesis", "err", err) - } else { - preparedState, stateRoot, blockRoot, err := verifyCheckpointState(state, cfg.GenesisTime, cfg.Validators) - if err != nil { - log.Warn("checkpoint state verification failed, falling back to database/genesis", "err", err) - } else { - log.Info("checkpoint state verified", - "slot", preparedState.Slot, - "state_root", logging.LongHash(stateRoot), - "block_root", logging.LongHash(blockRoot), - "justified_slot", preparedState.LatestJustified.Slot, - "justified_root", logging.LongHash(preparedState.LatestJustified.Root), - "finalized_slot", preparedState.LatestFinalized.Slot, - "finalized_root", logging.LongHash(preparedState.LatestFinalized.Root), - ) - fc = forkchoice.NewStoreFromCheckpointState(preparedState, blockRoot, db) - checkpointSyncSucceeded = true - log.Info("checkpoint sync completed successfully, using state as anchor", - 
"slot", preparedState.Slot, - "anchor_block_root", logging.LongHash(blockRoot), - ) - } - } - } - - if fc == nil { - fc = forkchoice.RestoreFromDB(db) - } - - if fc != nil && !checkpointSyncSucceeded { - status := fc.GetStatus() - log.Info("chain restored from database", - "head_root", logging.LongHash(status.Head), - "head_slot", status.HeadSlot, - "justified_root", logging.LongHash(status.JustifiedRoot), - "justified_slot", status.JustifiedSlot, - "finalized_root", logging.LongHash(status.FinalizedRoot), - "finalized_slot", status.FinalizedSlot, - ) - } else if fc != nil { - status := fc.GetStatus() - log.Info("chain initialized from checkpoint state", - "head_root", logging.LongHash(status.Head), - "head_slot", status.HeadSlot, - "justified_root", logging.LongHash(status.JustifiedRoot), - "justified_slot", status.JustifiedSlot, - "finalized_root", logging.LongHash(status.FinalizedRoot), - "finalized_slot", status.FinalizedSlot, - ) - } else { - genesisState := statetransition.GenerateGenesis(cfg.GenesisTime, cfg.Validators) - - genesisBlock := &types.Block{ - Slot: 0, - ProposerIndex: 0, - ParentRoot: types.ZeroHash, - StateRoot: types.ZeroHash, - Body: &types.BlockBody{Attestations: []*types.AggregatedAttestation{}}, - } - - stateRoot, _ := genesisState.HashTreeRoot() - genesisBlock.StateRoot = stateRoot - - genesisRoot, _ := genesisBlock.HashTreeRoot() - log.Info("genesis state initialized", - "state_root", logging.LongHash(stateRoot), - "block_root", logging.LongHash(genesisRoot), - ) - - fc = forkchoice.NewStore(genesisState, genesisBlock, db) - } - - fc.NowFn = func() uint64 { return uint64(time.Now().UnixMilli()) } - fc.SetIsAggregator(cfg.IsAggregator) - - return fc, db, nil -} - -func initP2P(cfg Config) (*network.Host, *gossipsub.Topics, error) { - host, err := network.NewHost(cfg.ListenAddr, cfg.NodeKeyPath, cfg.Bootnodes) - if err != nil { - return nil, nil, fmt.Errorf("create host: %w", err) - } - - netLog := 
logging.NewComponentLogger(logging.CompNetwork) - netLog.Info("libp2p host started", - "peer_id", host.P2P.ID().String(), - "addr", cfg.ListenAddr, - ) - - devnetID := cfg.DevnetID - if devnetID == "" { - devnetID = "devnet0" - } - topics, err := gossipsub.JoinTopics(host.PubSub, devnetID, 0) - if err != nil { - host.Close() - return nil, nil, fmt.Errorf("join topics: %w", err) - } - - gossipLog := logging.NewComponentLogger(logging.CompGossip) - gossipLog.Info("gossipsub topics joined", "devnet", devnetID) - - return host, topics, nil -} - -func initDiscovery(log *slog.Logger, cfg Config) (*p2p.LocalNodeManager, *p2p.DiscoveryService, error) { - discPort := cfg.DiscoveryPort - if discPort == 0 { - discPort = 9000 - } - - // Parse QUIC port from listen address for ENR advertisement - quicPort := parseQUICPort(cfg.ListenAddr) - - p2pDBPath := filepath.Join(cfg.DataDir, "p2p") - if err := os.MkdirAll(p2pDBPath, 0700); err != nil { - return nil, nil, fmt.Errorf("failed to create p2p db dir: %w", err) - } - - p2pManager, err := p2p.NewLocalNodeManager(p2pDBPath, cfg.NodeKeyPath, net.IPv4(0, 0, 0, 0), discPort, 0, quicPort) - if err != nil { - return nil, nil, fmt.Errorf("failed to init p2p manager: %w", err) - } - - if local := p2pManager.LocalNode(); local != nil { - local.Set(p2p.AggregatorEntry(cfg.IsAggregator)) - } - - p2pDiscovery, err := p2p.NewDiscoveryService(p2pManager, discPort, cfg.Bootnodes) - if err != nil { - log.Warn("p2p discovery unavailable", "err", err) - } - - return p2pManager, p2pDiscovery, nil -} - -func startAPI(cfg Config, fc *forkchoice.Store) (*apiserver.Server, error) { - apiCfg := apiserver.Config{ - Host: cfg.APIHost, - Port: cfg.APIPort, - Enabled: cfg.APIEnabled, - } - apiServer := apiserver.New(apiCfg, func() *forkchoice.Store { return fc }) - if err := apiServer.Start(); err != nil { - return nil, err - } - if !cfg.APIEnabled { - return nil, nil - } - return apiServer, nil -} - -func loadValidatorKeys(log *slog.Logger, cfg Config) 
(map[uint64]forkchoice.Signer, error) { - keys := make(map[uint64]forkchoice.Signer) - if cfg.ValidatorKeysDir == "" { - if len(cfg.ValidatorIDs) > 0 { - log.Warn("no validator keys directory specified; validator duties will fail signing") - } - return keys, nil - } - - for _, idx := range cfg.ValidatorIDs { - pkPath := filepath.Join(cfg.ValidatorKeysDir, fmt.Sprintf("validator_%d_pk.ssz", idx)) - skPath := filepath.Join(cfg.ValidatorKeysDir, fmt.Sprintf("validator_%d_sk.ssz", idx)) - - kp, err := leansig.LoadKeypair(pkPath, skPath) - if err != nil { - // Clean up previously loaded keypairs to prevent Rust memory leaks. - // Modeled after zeam's errdefer keypair.deinit() pattern - // (cli/src/node.zig:433-469). - freeLoadedKeys(keys) - return nil, fmt.Errorf("failed to load keypair for validator %d: %w", idx, err) - } - keys[idx] = kp - log.Info("loaded validator keypair", "validator_index", idx) - } - return keys, nil -} - -// freeLoadedKeys releases Rust-allocated XMSS keypairs from a partially -// loaded key map. Called on error during loadValidatorKeys to prevent leaks. -func freeLoadedKeys(keys map[uint64]forkchoice.Signer) { - for _, key := range keys { - if f, ok := key.(interface{ Free() }); ok { - f.Free() - } - } -} - -func startMetrics(log *slog.Logger, cfg Config) { - if cfg.MetricsPort <= 0 { - return - } - metrics.NodeInfo.WithLabelValues("gean", Version).Set(1) - metrics.NodeStartTime.Set(float64(time.Now().Unix())) - metrics.ValidatorsCount.Set(float64(len(cfg.ValidatorIDs))) - metrics.ConnectedPeers.WithLabelValues("gean").Set(0) - - // Devnet-3 aggregator metrics. - if cfg.IsAggregator { - metrics.IsAggregator.Set(1) - } else { - metrics.IsAggregator.Set(0) - } - metrics.AttestationCommitteeCount.Set(1) // Always 1 for devnet-3. - metrics.AttestationCommitteeSubnet.Set(0) // Always subnet 0 for devnet-3. 
- - metrics.Serve(cfg.MetricsPort) - log.Info("metrics server started", "port", cfg.MetricsPort) -} - -// parseQUICPort extracts the UDP port from a QUIC multiaddr like /ip4/0.0.0.0/udp/9008/quic-v1. -func parseQUICPort(listenAddr string) int { - if listenAddr == "" { - return 0 - } - ma, err := multiaddr.NewMultiaddr(listenAddr) - if err != nil { - return 0 - } - // Extract the UDP port component (QUIC runs over UDP) - val, err := ma.ValueForProtocol(multiaddr.P_UDP) - if err != nil { - return 0 - } - port, err := strconv.Atoi(val) - if err != nil { - return 0 - } - return port -} diff --git a/node/metrics.go b/node/metrics.go new file mode 100644 index 0000000..93b41c7 --- /dev/null +++ b/node/metrics.go @@ -0,0 +1,245 @@ +package node + +import ( + "github.com/prometheus/client_golang/prometheus" + "github.com/prometheus/client_golang/prometheus/promauto" +) + +// All metrics use the lean_ prefix. + +// --- Gauges --- + +var ( + metricHeadSlot = promauto.NewGauge(prometheus.GaugeOpts{ + Name: "lean_head_slot", Help: "Latest head slot", + }) + metricCurrentSlot = promauto.NewGauge(prometheus.GaugeOpts{ + Name: "lean_current_slot", Help: "Current slot from wall clock", + }) + metricSafeTargetSlot = promauto.NewGauge(prometheus.GaugeOpts{ + Name: "lean_safe_target_slot", Help: "Safe target slot for attestation", + }) + metricLatestJustifiedSlot = promauto.NewGauge(prometheus.GaugeOpts{ + Name: "lean_latest_justified_slot", Help: "Latest justified checkpoint slot", + }) + metricLatestFinalizedSlot = promauto.NewGauge(prometheus.GaugeOpts{ + Name: "lean_latest_finalized_slot", Help: "Latest finalized checkpoint slot", + }) + metricValidatorsCount = promauto.NewGauge(prometheus.GaugeOpts{ + Name: "lean_validators_count", Help: "Number of validators managed by this node", + }) + metricIsAggregator = promauto.NewGauge(prometheus.GaugeOpts{ + Name: "lean_is_aggregator", Help: "Whether this node is an aggregator (0 or 1)", + }) + metricAttestationCommitteeCount = 
promauto.NewGauge(prometheus.GaugeOpts{ + Name: "lean_attestation_committee_count", Help: "Number of attestation committees/subnets", + }) + metricGossipSignatures = promauto.NewGauge(prometheus.GaugeOpts{ + Name: "lean_gossip_signatures", Help: "Number of gossip signature entries", + }) + metricLatestNewAggregatedPayloads = promauto.NewGauge(prometheus.GaugeOpts{ + Name: "lean_latest_new_aggregated_payloads", Help: "Number of new (pending) aggregated payloads", + }) + metricLatestKnownAggregatedPayloads = promauto.NewGauge(prometheus.GaugeOpts{ + Name: "lean_latest_known_aggregated_payloads", Help: "Number of known (active) aggregated payloads", + }) + metricNodeInfo = promauto.NewGaugeVec(prometheus.GaugeOpts{ + Name: "lean_node_info", Help: "Node information", + }, []string{"name", "version"}) + metricNodeStartTime = promauto.NewGauge(prometheus.GaugeOpts{ + Name: "lean_node_start_time_seconds", Help: "Node start time as Unix timestamp", + }) + metricTableBytes = promauto.NewGaugeVec(prometheus.GaugeOpts{ + Name: "lean_table_bytes", Help: "Estimated table size in bytes", + }, []string{"table"}) + metricConnectedPeers = promauto.NewGaugeVec(prometheus.GaugeOpts{ + Name: "lean_connected_peers", Help: "Number of connected peers", + }, []string{"client"}) + metricAttestationCommitteeSubnet = promauto.NewGauge(prometheus.GaugeOpts{ + Name: "lean_attestation_committee_subnet", Help: "Node's attestation committee subnet", + }) +) + +// --- Counters --- + +var ( + metricAttestationsValid = promauto.NewCounter(prometheus.CounterOpts{ + Name: "lean_attestations_valid_total", Help: "Total valid attestations processed", + }) + metricAttestationsInvalid = promauto.NewCounter(prometheus.CounterOpts{ + Name: "lean_attestations_invalid_total", Help: "Total invalid attestations rejected", + }) + metricForkChoiceReorgs = promauto.NewCounter(prometheus.CounterOpts{ + Name: "lean_fork_choice_reorgs_total", Help: "Total fork choice reorgs", + }) + 
metricPqSigAggregatedSignaturesTotal = promauto.NewCounter(prometheus.CounterOpts{ + Name: "lean_pq_sig_aggregated_signatures_total", Help: "Total aggregated signature proofs produced", + }) + metricPqSigAttestationsInAggregated = promauto.NewCounter(prometheus.CounterOpts{ + Name: "lean_pq_sig_attestations_in_aggregated_signatures_total", Help: "Total attestations included in aggregated proofs", + }) + metricPqSigAggregatedValid = promauto.NewCounter(prometheus.CounterOpts{ + Name: "lean_pq_sig_aggregated_signatures_valid_total", Help: "Total valid aggregated signature verifications", + }) + metricPqSigAggregatedInvalid = promauto.NewCounter(prometheus.CounterOpts{ + Name: "lean_pq_sig_aggregated_signatures_invalid_total", Help: "Total invalid aggregated signature verifications", + }) + metricPqSigAttestationSigsTotal = promauto.NewCounter(prometheus.CounterOpts{ + Name: "lean_pq_sig_attestation_signatures_total", Help: "Total individual attestation signatures processed", + }) + metricPqSigAttestationSigsValid = promauto.NewCounter(prometheus.CounterOpts{ + Name: "lean_pq_sig_attestation_signatures_valid_total", Help: "Total valid individual attestation signatures", + }) + metricPqSigAttestationSigsInvalid = promauto.NewCounter(prometheus.CounterOpts{ + Name: "lean_pq_sig_attestation_signatures_invalid_total", Help: "Total invalid individual attestation signatures", + }) + metricFinalizationsTotal = promauto.NewCounterVec(prometheus.CounterOpts{ + Name: "lean_finalizations_total", Help: "Total number of finalization attempts", + }, []string{"result"}) + metricSTFSlotsProcessed = promauto.NewCounter(prometheus.CounterOpts{ + Name: "lean_state_transition_slots_processed_total", Help: "Total number of processed slots", + }) + metricSTFAttestationsProcessed = promauto.NewCounter(prometheus.CounterOpts{ + Name: "lean_state_transition_attestations_processed_total", Help: "Total number of processed attestations", + }) + metricPeerConnectionEvents = 
promauto.NewCounterVec(prometheus.CounterOpts{ + Name: "lean_peer_connection_events_total", Help: "Total peer connection events", + }, []string{"direction", "result"}) + metricPeerDisconnectionEvents = promauto.NewCounterVec(prometheus.CounterOpts{ + Name: "lean_peer_disconnection_events_total", Help: "Total peer disconnection events", + }, []string{"direction", "reason"}) +) + +// --- Histograms --- + +var ( + metricBlockProcessingTime = promauto.NewHistogram(prometheus.HistogramOpts{ + Name: "lean_fork_choice_block_processing_time_seconds", + Help: "Time to process a block", + Buckets: []float64{0.005, 0.01, 0.025, 0.05, 0.1, 1, 1.25, 1.5, 2, 4}, + }) + metricAttestationValidationTime = promauto.NewHistogram(prometheus.HistogramOpts{ + Name: "lean_attestation_validation_time_seconds", + Help: "Time to validate attestation data", + Buckets: []float64{0.005, 0.01, 0.025, 0.05, 0.1, 1}, + }) + metricCommitteeAggregationTime = promauto.NewHistogram(prometheus.HistogramOpts{ + Name: "lean_committee_signatures_aggregation_time_seconds", + Help: "Time to aggregate committee signatures", + Buckets: []float64{0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 0.75, 1}, + }) + metricPqSigSigningTime = promauto.NewHistogram(prometheus.HistogramOpts{ + Name: "lean_pq_sig_attestation_signing_time_seconds", + Help: "Time to sign an attestation", + Buckets: []float64{0.005, 0.01, 0.025, 0.05, 0.1, 1}, + }) + metricPqSigVerificationTime = promauto.NewHistogram(prometheus.HistogramOpts{ + Name: "lean_pq_sig_attestation_verification_time_seconds", + Help: "Time to verify an individual attestation signature", + Buckets: []float64{0.005, 0.01, 0.025, 0.05, 0.1, 1}, + }) + metricPqSigAggBuildingTime = promauto.NewHistogram(prometheus.HistogramOpts{ + Name: "lean_pq_sig_aggregated_signatures_building_time_seconds", + Help: "Time to build an aggregated signature proof", + Buckets: []float64{0.1, 0.25, 0.5, 0.75, 1, 1.25, 1.5, 2, 4}, + }) + metricPqSigAggVerificationTime = 
promauto.NewHistogram(prometheus.HistogramOpts{ + Name: "lean_pq_sig_aggregated_signatures_verification_time_seconds", + Help: "Time to verify an aggregated signature proof", + Buckets: []float64{0.1, 0.25, 0.5, 0.75, 1, 1.25, 1.5, 2, 4}, + }) + metricForkChoiceReorgDepth = promauto.NewHistogram(prometheus.HistogramOpts{ + Name: "lean_fork_choice_reorg_depth", + Help: "Depth of fork choice reorgs", + Buckets: []float64{1, 2, 3, 5, 7, 10, 20, 30, 50, 100}, + }) + metricSTFTime = promauto.NewHistogram(prometheus.HistogramOpts{ + Name: "lean_state_transition_time_seconds", + Help: "Time to process full state transition", + Buckets: []float64{0.25, 0.5, 0.75, 1, 1.25, 1.5, 2, 2.5, 3, 4}, + }) + metricSTFSlotsTime = promauto.NewHistogram(prometheus.HistogramOpts{ + Name: "lean_state_transition_slots_processing_time_seconds", + Help: "Time to process slots", + Buckets: []float64{0.005, 0.01, 0.025, 0.05, 0.1, 1}, + }) + metricSTFBlockTime = promauto.NewHistogram(prometheus.HistogramOpts{ + Name: "lean_state_transition_block_processing_time_seconds", + Help: "Time to process block in state transition", + Buckets: []float64{0.005, 0.01, 0.025, 0.05, 0.1, 1}, + }) + metricSTFAttestationsTime = promauto.NewHistogram(prometheus.HistogramOpts{ + Name: "lean_state_transition_attestations_processing_time_seconds", + Help: "Time to process attestations", + Buckets: []float64{0.005, 0.01, 0.025, 0.05, 0.1, 1}, + }) +) + +// --- Public update functions --- + +func SetNodeInfo(name, version string) { metricNodeInfo.WithLabelValues(name, version).Set(1) } +func SetNodeStartTime(t float64) { metricNodeStartTime.Set(t) } +func SetHeadSlot(s uint64) { metricHeadSlot.Set(float64(s)) } +func SetCurrentSlot(s uint64) { metricCurrentSlot.Set(float64(s)) } +func SetSafeTargetSlot(s uint64) { metricSafeTargetSlot.Set(float64(s)) } +func SetLatestJustifiedSlot(s uint64) { metricLatestJustifiedSlot.Set(float64(s)) } +func SetLatestFinalizedSlot(s uint64) { 
metricLatestFinalizedSlot.Set(float64(s)) } +func SetValidatorsCount(n int) { metricValidatorsCount.Set(float64(n)) } +func SetIsAggregator(b bool) { + if b { + metricIsAggregator.Set(1) + } else { + metricIsAggregator.Set(0) + } +} +func SetAttestationCommitteeCount(n uint64) { metricAttestationCommitteeCount.Set(float64(n)) } +func SetGossipSignatures(n int) { metricGossipSignatures.Set(float64(n)) } +func SetNewAggregatedPayloads(n int) { metricLatestNewAggregatedPayloads.Set(float64(n)) } +func SetKnownAggregatedPayloads(n int) { metricLatestKnownAggregatedPayloads.Set(float64(n)) } +func SetTableBytes(table string, bytes uint64) { + metricTableBytes.WithLabelValues(table).Set(float64(bytes)) +} +func SetConnectedPeers(client string, n int) { + metricConnectedPeers.WithLabelValues(client).Set(float64(n)) +} +func SetAttestationCommitteeSubnet(n uint64) { metricAttestationCommitteeSubnet.Set(float64(n)) } + +func IncAttestationsValid(n uint64) { metricAttestationsValid.Add(float64(n)) } +func IncAttestationsInvalid() { metricAttestationsInvalid.Inc() } +func IncForkChoiceReorgs() { metricForkChoiceReorgs.Inc() } +func IncPqSigAggregatedTotal() { metricPqSigAggregatedSignaturesTotal.Inc() } +func IncPqSigAttestationsInAggregated(n int) { metricPqSigAttestationsInAggregated.Add(float64(n)) } +func IncPqSigAggregatedValid() { metricPqSigAggregatedValid.Inc() } +func IncPqSigAggregatedInvalid() { metricPqSigAggregatedInvalid.Inc() } +func IncPqSigAttestationSigsTotal() { metricPqSigAttestationSigsTotal.Inc() } +func IncPqSigAttestationSigsValid() { metricPqSigAttestationSigsValid.Inc() } +func IncPqSigAttestationSigsInvalid() { metricPqSigAttestationSigsInvalid.Inc() } + +func ObserveBlockProcessingTime(seconds float64) { metricBlockProcessingTime.Observe(seconds) } +func ObserveAttestationValidationTime(seconds float64) { + metricAttestationValidationTime.Observe(seconds) +} +func ObserveCommitteeAggregationTime(seconds float64) { + 
metricCommitteeAggregationTime.Observe(seconds) +} +func ObservePqSigSigningTime(seconds float64) { metricPqSigSigningTime.Observe(seconds) } +func ObservePqSigVerificationTime(seconds float64) { metricPqSigVerificationTime.Observe(seconds) } +func ObservePqSigAggBuildingTime(seconds float64) { metricPqSigAggBuildingTime.Observe(seconds) } +func ObservePqSigAggVerificationTime(seconds float64) { + metricPqSigAggVerificationTime.Observe(seconds) +} +func ObserveForkChoiceReorgDepth(depth float64) { metricForkChoiceReorgDepth.Observe(depth) } + +func IncFinalization(result string) { metricFinalizationsTotal.WithLabelValues(result).Inc() } +func IncSTFSlotsProcessed(n uint64) { metricSTFSlotsProcessed.Add(float64(n)) } +func IncSTFAttestationsProcessed(n uint64) { metricSTFAttestationsProcessed.Add(float64(n)) } +func IncPeerConnection(direction, result string) { + metricPeerConnectionEvents.WithLabelValues(direction, result).Inc() +} +func IncPeerDisconnection(direction, reason string) { + metricPeerDisconnectionEvents.WithLabelValues(direction, reason).Inc() +} +func ObserveSTFTime(seconds float64) { metricSTFTime.Observe(seconds) } +func ObserveSTFSlotsTime(seconds float64) { metricSTFSlotsTime.Observe(seconds) } +func ObserveSTFBlockTime(seconds float64) { metricSTFBlockTime.Observe(seconds) } +func ObserveSTFAttestationsTime(seconds float64) { metricSTFAttestationsTime.Observe(seconds) } diff --git a/node/node.go b/node/node.go index 3935a0b..b7d3782 100644 --- a/node/node.go +++ b/node/node.go @@ -2,146 +2,148 @@ package node import ( "context" - "io" - "log/slog" - "sync" "time" + "unsafe" - apiserver "github.com/geanlabs/gean/api/server" - "github.com/geanlabs/gean/chain/forkchoice" - "github.com/geanlabs/gean/network" - "github.com/geanlabs/gean/network/gossipsub" - "github.com/geanlabs/gean/network/p2p" + "github.com/geanlabs/gean/forkchoice" + "github.com/geanlabs/gean/logger" + "github.com/geanlabs/gean/p2p" "github.com/geanlabs/gean/types" - 
"github.com/libp2p/go-libp2p/core/peer" + "github.com/geanlabs/gean/xmss" ) -var Version = "v0.1.0" - -// Node is the main gean node orchestrator. -type Node struct { - FC *forkchoice.Store - Host *network.Host - Topics *gossipsub.Topics - API *apiserver.Server - Validator *ValidatorDuties - - // P2P Services - P2PManager *p2p.LocalNodeManager - P2PDiscovery *p2p.DiscoveryService - - // PendingBlocks caches blocks awaiting parent availability. - PendingBlocks *PendingBlockCache - - // Sync deduplication: tracks roots currently being fetched to avoid - // duplicate requests across peers and recovery attempts. - // Matches leanSpec BackfillSync._pending pattern. - pendingRoots map[[32]byte]struct{} - pendingRootsMu sync.Mutex - - // Per-peer backoff tracking for sync requests. - // Tracks consecutive failures and last attempt time per peer. - peerBackoff map[peer.ID]*peerSyncState - peerBackoffMu sync.Mutex - - // Per-peer concurrency tracking. Limits in-flight sync requests to - // maxConcurrentRequestsPerPeer (2) per peer, matching leanSpec - // MAX_CONCURRENT_REQUESTS. - peerInFlight map[peer.ID]int - peerInFlightMu sync.Mutex - - // Recovery cooldown prevents recoverMissingParentSync from flooding - // peers when multiple gossip blocks arrive with missing parents. - recoveryMu sync.Mutex - lastRecoveryTime time.Time - - Clock *Clock - dbCloser io.Closer - log *slog.Logger - - ctx context.Context - cancel context.CancelFunc +// Engine is the consensus coordination loop. +// It owns Store, ForkChoice, and KeyManager as siblings. + +// Pending block limits to prevent stuck-forever scenarios. 
+const ( + MaxBlockFetchDepth = 512 // Max ancestor chain depth before discarding + MaxPendingBlocks = 1024 // Max pending blocks before rejecting new ones +) + +type Engine struct { + Store *ConsensusStore + FC *forkchoice.ForkChoice + P2P *p2p.Host + Keys *xmss.KeyManager + IsAggregator bool + CommitteeCount uint64 + PendingBlocks map[[32]byte]map[[32]byte]bool // parent_root -> {child_roots} + PendingBlockParents map[[32]byte][32]byte // block_root -> missing_ancestor + PendingBlockDepths map[[32]byte]int // block_root -> fetch depth + + // Channels for receiving messages from P2P goroutine. + BlockCh chan *types.SignedBlockWithAttestation + AttestationCh chan *types.SignedAttestation + AggregationCh chan *types.SignedAggregatedAttestation + FailedRootCh chan [32]byte // roots that exhausted all fetch retries — triggers subtree cleanup + FetchRootCh chan [32]byte // roots to fetch — coalesced into batches by the fetch batcher } -// peerSyncState tracks backoff state for a single peer during sync. -// Modeled after ethlambda's exponential backoff pattern. -type peerSyncState struct { - failures int - lastTried time.Time +// New creates a new Engine. +func New( + s *ConsensusStore, + fc *forkchoice.ForkChoice, + p2pHost *p2p.Host, + keys *xmss.KeyManager, + isAggregator bool, + committeeCount uint64, +) *Engine { + return &Engine{ + Store: s, + FC: fc, + P2P: p2pHost, + Keys: keys, + IsAggregator: isAggregator, + CommitteeCount: committeeCount, + PendingBlocks: make(map[[32]byte]map[[32]byte]bool), + PendingBlockParents: make(map[[32]byte][32]byte), + PendingBlockDepths: make(map[[32]byte]int), + BlockCh: make(chan *types.SignedBlockWithAttestation, 64), + AttestationCh: make(chan *types.SignedAttestation, 256), + AggregationCh: make(chan *types.SignedAggregatedAttestation, 64), + FailedRootCh: make(chan [32]byte, 64), + FetchRootCh: make(chan [32]byte, 256), + } } -// Sync constants aligned with leanSpec (subspecs/sync/config.py). 
-const ( - // maxBlocksPerRequest is the maximum number of block roots to request - // in a single BlocksByRoot RPC call. Matches leanSpec MAX_BLOCKS_PER_REQUEST. - maxBlocksPerRequest = 10 +// Run starts the engine's main loop. +// This is the single-writer goroutine — all state mutations happen here. +func (e *Engine) Run(ctx context.Context) { + // Set up callbacks for gossip store (avoids circular deps). + FreeSignatureFunc = func(ptr unsafe.Pointer) { + xmss.FreeSignature(ptr) + } + AggregateMetricsFunc = func(durationSeconds float64, numAttestations int) { + ObservePqSigAggBuildingTime(durationSeconds) + IncPqSigAggregatedTotal() + IncPqSigAttestationsInAggregated(numAttestations) + } - // maxBackfillDepth is the maximum depth for backward chain walks. - // Matches leanSpec MAX_BACKFILL_DEPTH and zeam MAX_BLOCK_FETCH_DEPTH. - maxBackfillDepth = 512 + // Initialize static metrics. + SetNodeInfo("gean", "dev") + SetNodeStartTime(float64(time.Now().Unix())) + SetIsAggregator(e.IsAggregator) + SetAttestationCommitteeCount(e.CommitteeCount) + if e.Keys != nil { + SetValidatorsCount(len(e.Keys.ValidatorIDs())) + } - // recoveryCooldown prevents rapid-fire recovery attempts when multiple - // gossip blocks arrive with missing parents in quick succession. - recoveryCooldown = 2 * time.Second + ticker := time.NewTicker(types.MillisecondsPerInterval * time.Millisecond) + defer ticker.Stop() - // maxSyncRetries is the maximum number of retry attempts per peer - // before giving up. Matches ethlambda MAX_FETCH_RETRIES. - maxSyncRetries = 10 + // Start the fetch batcher: coalesces individual fetch requests into + // batches of up to MaxBlocksPerRequest roots per peer request. + go e.runFetchBatcher(ctx) - // initialBackoff is the starting backoff duration for failed sync requests. - initialBackoff = 5 * time.Millisecond + logger.Info(logger.Node, "started") - // backoffMultiplier doubles the backoff on each consecutive failure. 
- backoffMultiplier = 2 + for { + select { + case <-ctx.Done(): + logger.Info(logger.Node, "shutting down") + return - // maxConcurrentRequestsPerPeer limits in-flight sync requests to a - // single peer. Matches leanSpec MAX_CONCURRENT_REQUESTS. - maxConcurrentRequestsPerPeer = 2 -) + case <-ticker.C: + e.onTick() -func (n *Node) Close() { - n.cancel() - if n.API != nil { - n.API.Stop() - } - // Free Rust-allocated XMSS keypairs. - if n.Validator != nil { - for _, key := range n.Validator.Keys { - if f, ok := key.(interface{ Free() }); ok { - f.Free() - } + case block := <-e.BlockCh: + e.onBlock(block) + + case att := <-e.AttestationCh: + e.onGossipAttestation(att) + + case agg := <-e.AggregationCh: + e.onGossipAggregatedAttestation(agg) + + case root := <-e.FailedRootCh: + e.onFailedRoot(root) } } - if n.dbCloser != nil { - n.dbCloser.Close() - } - if n.P2PDiscovery != nil { - n.P2PDiscovery.Close() - } - if n.P2PManager != nil { - n.P2PManager.Close() +} + +// --- MessageHandler interface for P2P --- + +func (e *Engine) OnBlock(block *types.SignedBlockWithAttestation) { + select { + case e.BlockCh <- block: + default: + logger.Warn(logger.Chain, "block channel full, dropping") } - if n.Host != nil { - n.Host.Close() +} + +func (e *Engine) OnGossipAttestation(att *types.SignedAttestation) { + select { + case e.AttestationCh <- att: + default: + logger.Warn(logger.Gossip, "attestation channel full, dropping") } } -// Config holds node configuration. 
-type Config struct { - GenesisTime uint64 - Validators []*types.Validator - ListenAddr string - NodeKeyPath string - Bootnodes []string - DiscoveryPort int - DataDir string - CheckpointSyncURL string - ValidatorIDs []uint64 - ValidatorKeysDir string - MetricsPort int - DevnetID string - IsAggregator bool - APIHost string - APIPort int - APIEnabled bool +func (e *Engine) OnGossipAggregatedAttestation(agg *types.SignedAggregatedAttestation) { + select { + case e.AggregationCh <- agg: + default: + logger.Warn(logger.Signature, "aggregation channel full, dropping") + } } diff --git a/node/node_test.go b/node/node_test.go new file mode 100644 index 0000000..785ef58 --- /dev/null +++ b/node/node_test.go @@ -0,0 +1,294 @@ +package node + +import ( + "os" + "testing" + + "github.com/geanlabs/gean/forkchoice" + "github.com/geanlabs/gean/logger" + "github.com/geanlabs/gean/storage" + "github.com/geanlabs/gean/types" +) + +func TestMain(m *testing.M) { + logger.Quiet = true + os.Exit(m.Run()) +} + +func makeTestEngine() *Engine { + backend := storage.NewInMemoryBackend() + s := NewConsensusStore(backend) + + // Set up genesis state. 
+ s.SetConfig(&types.ChainConfig{GenesisTime: 1000}) + var genesisRoot [32]byte + genesisRoot[0] = 0x01 + s.SetHead(genesisRoot) + s.SetSafeTarget(genesisRoot) + s.SetLatestJustified(&types.Checkpoint{Root: genesisRoot, Slot: 0}) + s.SetLatestFinalized(&types.Checkpoint{Root: genesisRoot, Slot: 0}) + s.InsertBlockHeader(genesisRoot, &types.BlockHeader{Slot: 0}) + + genesisState := &types.State{ + Config: &types.ChainConfig{GenesisTime: 1000}, + Slot: 0, + LatestBlockHeader: &types.BlockHeader{}, + LatestJustified: &types.Checkpoint{Root: genesisRoot, Slot: 0}, + LatestFinalized: &types.Checkpoint{Root: genesisRoot, Slot: 0}, + JustifiedSlots: types.NewBitlistSSZ(0), + JustificationsValidators: types.NewBitlistSSZ(0), + } + s.InsertState(genesisRoot, genesisState) + + fc := forkchoice.New(0, genesisRoot) + + return New(s, fc, nil, nil, false, 1) +} + +func TestEngineCreation(t *testing.T) { + e := makeTestEngine() + if e.Store == nil { + t.Fatal("store should not be nil") + } + if e.FC == nil { + t.Fatal("fork choice should not be nil") + } +} + +func TestEngineUpdateHead(t *testing.T) { + e := makeTestEngine() + e.updateHead(false) + + head := e.Store.Head() + if types.IsZeroRoot(head) { + t.Fatal("head should not be zero after updateHead") + } +} + +func TestEngineUpdateSafeTarget(t *testing.T) { + e := makeTestEngine() + e.updateSafeTarget() + + safeTarget := e.Store.SafeTarget() + if types.IsZeroRoot(safeTarget) { + t.Fatal("safe target should not be zero") + } +} + +func TestEnginePendingBlocks(t *testing.T) { + e := makeTestEngine() + + var blockRoot, parentRoot [32]byte + blockRoot[0] = 0x10 + parentRoot[0] = 0x20 + + // Manually add pending entries (simulates addPendingBlock logic). 
+ e.PendingBlockParents[blockRoot] = parentRoot + children := make(map[[32]byte]bool) + children[blockRoot] = true + e.PendingBlocks[parentRoot] = children + + if len(e.PendingBlocks) != 1 { + t.Fatalf("expected 1 pending parent, got %d", len(e.PendingBlocks)) + } + if len(e.PendingBlockParents) != 1 { + t.Fatalf("expected 1 pending block, got %d", len(e.PendingBlockParents)) + } +} + +func TestEngineCascadePending(t *testing.T) { + e := makeTestEngine() + + var parentRoot, child1, child2 [32]byte + parentRoot[0] = 0x01 + child1[0] = 0x10 + child2[0] = 0x20 + + e.PendingBlockParents[child1] = parentRoot + e.PendingBlockParents[child2] = parentRoot + children := make(map[[32]byte]bool) + children[child1] = true + children[child2] = true + e.PendingBlocks[parentRoot] = children + + if len(e.PendingBlocks[parentRoot]) != 2 { + t.Fatalf("expected 2 children pending, got %d", len(e.PendingBlocks[parentRoot])) + } + + // collectPendingChildren removes entries and returns blocks to process. + var queue []*types.SignedBlockWithAttestation + e.collectPendingChildren(parentRoot, &queue) + + if len(e.PendingBlocks) != 0 { + t.Fatalf("expected 0 pending after cascade, got %d", len(e.PendingBlocks)) + } + if len(e.PendingBlockParents) != 0 { + t.Fatalf("expected 0 pending parents after cascade, got %d", len(e.PendingBlockParents)) + } +} + +func TestEngineMessageHandler(t *testing.T) { + e := makeTestEngine() + + // Verify Engine implements the MessageHandler interface. + block := &types.SignedBlockWithAttestation{ + Block: &types.BlockWithAttestation{ + Block: &types.Block{Slot: 1}, + ProposerAttestation: &types.Attestation{}, + }, + Signature: &types.BlockSignatures{}, + } + + // Should not panic — just push to channel. + e.OnBlock(block) + + // Check channel received it. 
+ select { + case received := <-e.BlockCh: + if received.Block.Block.Slot != 1 { + t.Fatal("wrong block slot") + } + default: + t.Fatal("block should be in channel") + } +} + +func TestEngineGetOurProposer(t *testing.T) { + e := makeTestEngine() + // No keys — should return false. + _, ok := e.getOurProposer(1) + if ok { + t.Fatal("should not be proposer without keys") + } +} + +func TestEngineCurrentSlot(t *testing.T) { + e := makeTestEngine() + // Genesis at 1000s, slot 1 starts at 1004s. + slot := e.currentSlot(1004 * 1000) // 1004000ms + if slot != 1 { + t.Fatalf("expected slot 1, got %d", slot) + } +} + +func TestEngineCurrentInterval(t *testing.T) { + e := makeTestEngine() + // Genesis at 1000s. Interval 0 of slot 1 starts at 1004000ms. + // Interval 1 starts at 1004800ms. + interval := e.currentInterval(1004800) + if interval != 1 { + t.Fatalf("expected interval 1, got %d", interval) + } +} + +func TestPendingBlockCount(t *testing.T) { + e := makeTestEngine() + + if e.pendingBlockCount() != 0 { + t.Fatal("expected 0 pending blocks initially") + } + + // Add 3 children under 2 parents. + parent1 := [32]byte{0x01} + parent2 := [32]byte{0x02} + child1 := [32]byte{0x10} + child2 := [32]byte{0x20} + child3 := [32]byte{0x30} + + e.PendingBlocks[parent1] = map[[32]byte]bool{child1: true, child2: true} + e.PendingBlocks[parent2] = map[[32]byte]bool{child3: true} + + if e.pendingBlockCount() != 3 { + t.Fatalf("expected 3 pending blocks, got %d", e.pendingBlockCount()) + } +} + +func TestPendingBlockDepthTracking(t *testing.T) { + e := makeTestEngine() + + // Simulate a chain of pending blocks with increasing depth. + root1 := [32]byte{0x01} + root2 := [32]byte{0x02} + root3 := [32]byte{0x03} + + e.PendingBlockDepths[root1] = 1 + e.PendingBlockDepths[root2] = 2 + e.PendingBlockDepths[root3] = 3 + + if e.PendingBlockDepths[root3] != 3 { + t.Fatalf("expected depth 3, got %d", e.PendingBlockDepths[root3]) + } + + // Verify depth is inherited from parent. 
+ parentDepth := e.PendingBlockDepths[root2] + childDepth := parentDepth + 1 + if childDepth != 3 { + t.Fatalf("expected inherited depth 3, got %d", childDepth) + } +} + +func TestDiscardPendingSubtree(t *testing.T) { + e := makeTestEngine() + + // Build a tree: root -> child1, child1 -> grandchild1, grandchild2 + root := [32]byte{0x01} + child1 := [32]byte{0x10} + grandchild1 := [32]byte{0xA0} + grandchild2 := [32]byte{0xB0} + + e.PendingBlocks[root] = map[[32]byte]bool{child1: true} + e.PendingBlocks[child1] = map[[32]byte]bool{grandchild1: true, grandchild2: true} + e.PendingBlockParents[child1] = root + e.PendingBlockParents[grandchild1] = child1 + e.PendingBlockParents[grandchild2] = child1 + e.PendingBlockDepths[child1] = 1 + e.PendingBlockDepths[grandchild1] = 2 + e.PendingBlockDepths[grandchild2] = 2 + + // Discard subtree from child1. + e.discardPendingSubtree(child1) + + // child1 and its descendants should be gone. + if _, ok := e.PendingBlockParents[child1]; ok { + t.Fatal("child1 should be removed from PendingBlockParents") + } + if _, ok := e.PendingBlockParents[grandchild1]; ok { + t.Fatal("grandchild1 should be removed from PendingBlockParents") + } + if _, ok := e.PendingBlockParents[grandchild2]; ok { + t.Fatal("grandchild2 should be removed from PendingBlockParents") + } + if _, ok := e.PendingBlockDepths[child1]; ok { + t.Fatal("child1 depth should be removed") + } + if _, ok := e.PendingBlockDepths[grandchild1]; ok { + t.Fatal("grandchild1 depth should be removed") + } + + // Root's children entry should still exist (discardPendingSubtree doesn't clean parent). 
+ if _, ok := e.PendingBlocks[root]; !ok { + t.Fatal("root's PendingBlocks entry should still exist") + } +} + +func TestCascadeClearsDepth(t *testing.T) { + e := makeTestEngine() + + var parentRoot, child1 [32]byte + parentRoot[0] = 0x01 + child1[0] = 0x10 + + e.PendingBlockParents[child1] = parentRoot + e.PendingBlockDepths[child1] = 5 + children := make(map[[32]byte]bool) + children[child1] = true + e.PendingBlocks[parentRoot] = children + + var queue []*types.SignedBlockWithAttestation + e.collectPendingChildren(parentRoot, &queue) + + // Depth should be cleared after cascade. + if _, ok := e.PendingBlockDepths[child1]; ok { + t.Fatal("depth should be cleared after collectPendingChildren") + } +} diff --git a/node/pending_blocks.go b/node/pending_blocks.go deleted file mode 100644 index 74ec177..0000000 --- a/node/pending_blocks.go +++ /dev/null @@ -1,126 +0,0 @@ -package node - -import ( - "slices" - "sync" - - "github.com/geanlabs/gean/types" -) - -const maxPendingBlocks = 1024 - -// PendingBlockCache stores blocks awaiting parent availability. -// Per leanSpec sync requirements, blocks with missing parents should be cached, -// not discarded, allowing the node to process them once the parent arrives. -type PendingBlockCache struct { - mu sync.Mutex - blocks map[[32]byte]*types.SignedBlockWithAttestation - byParent map[[32]byte][][32]byte // parent root -> child block roots - order [][32]byte // insertion order for eviction -} - -// NewPendingBlockCache creates an empty pending block cache. -func NewPendingBlockCache() *PendingBlockCache { - return &PendingBlockCache{ - blocks: make(map[[32]byte]*types.SignedBlockWithAttestation), - byParent: make(map[[32]byte][][32]byte), - } -} - -// Add stores a block that is awaiting its parent. -// If the cache is full, the oldest entry is evicted. 
-func (c *PendingBlockCache) Add(sb *types.SignedBlockWithAttestation) { - if sb == nil || sb.Message == nil || sb.Message.Block == nil { - return - } - - block := sb.Message.Block - blockRoot, _ := block.HashTreeRoot() - parentRoot := block.ParentRoot - - c.mu.Lock() - defer c.mu.Unlock() - - // Already cached. - if _, ok := c.blocks[blockRoot]; ok { - return - } - - // Evict oldest if at capacity. - for len(c.order) >= maxPendingBlocks { - oldest := c.order[0] - c.order = c.order[1:] - if oldBlock, ok := c.blocks[oldest]; ok { - delete(c.blocks, oldest) - oldParent := oldBlock.Message.Block.ParentRoot - c.removeFromParentIndex(oldParent, oldest) - } - } - - c.blocks[blockRoot] = sb - c.byParent[parentRoot] = append(c.byParent[parentRoot], blockRoot) - c.order = append(c.order, blockRoot) -} - -// GetChildrenOf returns all pending blocks that have the given root as their parent. -func (c *PendingBlockCache) GetChildrenOf(parentRoot [32]byte) []*types.SignedBlockWithAttestation { - c.mu.Lock() - defer c.mu.Unlock() - - childRoots := c.byParent[parentRoot] - if len(childRoots) == 0 { - return nil - } - - children := make([]*types.SignedBlockWithAttestation, 0, len(childRoots)) - for _, root := range childRoots { - if sb, ok := c.blocks[root]; ok { - children = append(children, sb) - } - } - return children -} - -// Remove deletes a block from the cache (called after successful processing). -func (c *PendingBlockCache) Remove(blockRoot [32]byte) { - c.mu.Lock() - defer c.mu.Unlock() - - sb, ok := c.blocks[blockRoot] - if !ok { - return - } - - delete(c.blocks, blockRoot) - parentRoot := sb.Message.Block.ParentRoot - c.removeFromParentIndex(parentRoot, blockRoot) - - // Remove from order slice. - for i, r := range c.order { - if r == blockRoot { - c.order = slices.Delete(c.order, i, i+1) - break - } - } -} - -// removeFromParentIndex removes a block root from the byParent index. -// Must be called with lock held. 
-func (c *PendingBlockCache) removeFromParentIndex(parentRoot, blockRoot [32]byte) { - children := c.byParent[parentRoot] - for i, r := range children { - if r == blockRoot { - c.byParent[parentRoot] = slices.Delete(children, i, i+1) - break - } - } - if len(c.byParent[parentRoot]) == 0 { - delete(c.byParent, parentRoot) - } -} - -func (c *PendingBlockCache) Len() int { - c.mu.Lock() - defer c.mu.Unlock() - return len(c.blocks) -} diff --git a/node/store_aggregate.go b/node/store_aggregate.go new file mode 100644 index 0000000..c539b6e --- /dev/null +++ b/node/store_aggregate.go @@ -0,0 +1,165 @@ +package node + +import ( + "sort" + "time" + + "github.com/geanlabs/gean/logger" + "github.com/geanlabs/gean/types" + "github.com/geanlabs/gean/xmss" +) + +// AggregateCommitteeSignatures collects gossip signatures and aggregates them +// using real XMSS ZK aggregation via xmss.AggregateSignatures. +func AggregateCommitteeSignatures(s *ConsensusStore) []*types.SignedAggregatedAttestation { + if s.GossipSignatures.Len() == 0 { + return nil + } + + var newAggregates []*types.SignedAggregatedAttestation + var keysToDelete []GossipDeleteKey + var payloadEntries []PayloadKV + + for dataRoot, entry := range s.GossipSignatures { + if len(entry.Signatures) == 0 { + continue + } + + // Get target state for pubkey lookup. + targetState := s.GetState(entry.Data.Target.Root) + if targetState == nil { + logger.Warn(logger.Signature, "aggregate: missing target state for %x", entry.Data.Target.Root) + continue + } + + // Sort signatures by validator ID for deterministic aggregation ordering. + // Verification side uses BitlistIndices which returns ascending order, + // so aggregation must match. + sortedSigs := make([]GossipSignatureEntry, len(entry.Signatures)) + copy(sortedSigs, entry.Signatures) + sort.Slice(sortedSigs, func(i, j int) bool { + return sortedSigs[i].ValidatorID < sortedSigs[j].ValidatorID + }) + + // Collect pubkeys and signatures as opaque C handles. 
+ var pubkeys []xmss.CPubKey + var sigs []xmss.CSig + var ids []uint64 + var cleanupSigs []xmss.CSig // for fallback-parsed sigs only + + valid := true + for _, sigEntry := range sortedSigs { + if sigEntry.ValidatorID >= uint64(len(targetState.Validators)) { + logger.Error(logger.Signature, "aggregate: validator %d out of range", sigEntry.ValidatorID) + valid = false + break + } + + // Use stored C handle if available. + // If no handle, parse from SSZ bytes (fallback for P2P proposer attestations). + sigHandle := sigEntry.SigHandle + if sigHandle == nil { + parsed, err := xmss.ParseSignature(sigEntry.Signature[:]) + if err != nil { + logger.Warn(logger.Signature, "aggregate: parse sig fallback for validator %d: %v", sigEntry.ValidatorID, err) + valid = false + break + } + cleanupSigs = append(cleanupSigs, parsed) + sigHandle = parsed + } + + // Get cached pubkey handle (parsed once, reused across aggregation cycles). + pk, err := s.PubKeyCache.Get(targetState.Validators[sigEntry.ValidatorID].Pubkey) + if err != nil { + logger.Error(logger.Signature, "aggregate: parse pubkey %d: %v", sigEntry.ValidatorID, err) + valid = false + break + } + + pubkeys = append(pubkeys, pk) + sigs = append(sigs, sigHandle) + ids = append(ids, sigEntry.ValidatorID) + } + + // Free only fallback-parsed sig handles. Pubkey handles are owned by the cache. + defer func() { + for _, sig := range cleanupSigs { + xmss.FreeSignature(sig) + } + }() + + if !valid || len(ids) == 0 { + continue + } + + // Aggregate via real XMSS ZK proof. 
+ slot := uint32(entry.Data.Slot) + aggStart := time.Now() + proofBytes, err := xmss.AggregateSignatures(pubkeys, sigs, dataRoot, slot) + aggDuration := time.Since(aggStart) + if err != nil { + logger.Error(logger.Signature, "aggregate: failed slot=%d sigs=%d validators=%v duration=%v: %v", + slot, len(sigs), ids, aggDuration, err) + continue + } + logger.Info(logger.Signature, "aggregate: slot=%d sigs=%d validators=%v proof=%d bytes duration=%v", + slot, len(sigs), ids, len(proofBytes), aggDuration) + + // Metrics — imported from engine package via function references to avoid circular deps. + if AggregateMetricsFunc != nil { + AggregateMetricsFunc(aggDuration.Seconds(), len(ids)) + } + + participants := aggregationBitsFromValidatorIndices(ids) + proof := &types.AggregatedSignatureProof{ + Participants: participants, + ProofData: proofBytes, + } + + newAggregates = append(newAggregates, &types.SignedAggregatedAttestation{ + Data: entry.Data, + Proof: proof, + }) + + payloadEntries = append(payloadEntries, PayloadKV{ + DataRoot: dataRoot, + Data: entry.Data, + Proof: proof, + }) + + for _, id := range ids { + keysToDelete = append(keysToDelete, GossipDeleteKey{ + ValidatorID: id, + DataRoot: dataRoot, + }) + } + } + + // Insert into known (immediately usable for block building and fork choice). + + s.KnownPayloads.PushBatch(payloadEntries) + + // Delete aggregated signatures from gossip store. + s.GossipSignatures.Delete(keysToDelete) + + return newAggregates +} + +// aggregationBitsFromValidatorIndices builds a bitlist from validator IDs. 
+func aggregationBitsFromValidatorIndices(ids []uint64) []byte { + if len(ids) == 0 { + return types.NewBitlistSSZ(0) + } + maxID := uint64(0) + for _, id := range ids { + if id > maxID { + maxID = id + } + } + bits := types.NewBitlistSSZ(maxID + 1) + for _, id := range ids { + types.BitlistSet(bits, id) + } + return bits +} diff --git a/node/store_block.go b/node/store_block.go new file mode 100644 index 0000000..4023715 --- /dev/null +++ b/node/store_block.go @@ -0,0 +1,285 @@ +package node + +import ( + "fmt" + "time" + + "github.com/geanlabs/gean/logger" + "github.com/geanlabs/gean/statetransition" + "github.com/geanlabs/gean/types" + "github.com/geanlabs/gean/xmss" +) + +// OnBlock processes a new signed block with signature verification. +func OnBlock( + s *ConsensusStore, + signedBlock *types.SignedBlockWithAttestation, + localValidatorIDs []uint64, +) error { + return onBlockCore(s, signedBlock, true, localValidatorIDs) +} + +// OnBlockWithoutVerification processes a block without signature checks. +// Used for fork choice spec tests where signatures are absent. +// Caller must call ProcessProposerAttestation(s, signedBlock, false) AFTER updateHead. +func OnBlockWithoutVerification( + s *ConsensusStore, + signedBlock *types.SignedBlockWithAttestation, +) error { + return onBlockCore(s, signedBlock, false, nil) +} + +// onBlockCore is the core block processing logic. +func onBlockCore( + s *ConsensusStore, + signedBlock *types.SignedBlockWithAttestation, + verify bool, + localValidatorIDs []uint64, +) error { + start := time.Now() + block := signedBlock.Block.Block + slot := block.Slot + + // Compute block root. + blockRoot, err := block.HashTreeRoot() + if err != nil { + return fmt.Errorf("compute block root: %w", err) + } + + // Skip duplicate blocks. + if s.HasState(blockRoot) { + return nil // already known + } + + // Get parent state. 
+ parentState := s.GetState(block.ParentRoot) + if parentState == nil { + return &StoreError{ErrMissingParentState, + fmt.Sprintf("parent state not found for slot %d, missing block %x", slot, block.ParentRoot)} + } + + // Verify signatures BEFORE state transition. + // Uses parent_state for validator lookup. + if verify { + if err := verifyBlockSignatures(s, signedBlock, parentState); err != nil { + return err + } + } + + // Clone state for transition. + stateBytes, _ := parentState.MarshalSSZ() + postState := &types.State{} + postState.UnmarshalSSZ(stateBytes) + + // Execute state transition. + if err := statetransition.StateTransition(postState, block); err != nil { + return &StoreError{ErrStateTransitionFailed, fmt.Sprintf("state transition: %v", err)} + } + + // Cache state root in latest block header. + postState.LatestBlockHeader.StateRoot = block.StateRoot + + // Check if justified/finalized advanced. + var newJustified, newFinalized *types.Checkpoint + currentJustified := s.LatestJustified() + currentFinalized := s.LatestFinalized() + + if postState.LatestJustified.Slot > currentJustified.Slot { + newJustified = postState.LatestJustified + } + if postState.LatestFinalized.Slot > currentFinalized.Slot { + newFinalized = postState.LatestFinalized + } + + // Update checkpoints. + if newJustified != nil { + s.SetLatestJustified(newJustified) + } + if newFinalized != nil { + s.SetLatestFinalized(newFinalized) + } + + // Store block header, state, and live chain entry. + header := &types.BlockHeader{ + Slot: block.Slot, + ProposerIndex: block.ProposerIndex, + ParentRoot: block.ParentRoot, + StateRoot: block.StateRoot, + } + bodyRoot, _ := block.Body.HashTreeRoot() + header.BodyRoot = bodyRoot + + s.InsertBlockHeader(blockRoot, header) + s.InsertState(blockRoot, postState) + s.InsertLiveChainEntry(slot, blockRoot, block.ParentRoot) + + // Store block body and signatures. 
+ storeBlockParts(s, blockRoot, signedBlock) + + // Process block body attestations into known payloads. + processBlockAttestations(s, signedBlock, blockRoot) + + // NOTE: Proposer attestation is NOT processed here. + // Engine must call updateHead BEFORE ProcessProposerAttestation + // to prevent circular weight advantage. + + attCount := 0 + if block.Body != nil { + attCount = len(block.Body.Attestations) + } + logger.Info(logger.Chain, "block slot=%d block_root=0x%x parent_root=0x%x proposer=%d attestations=%d justified_slot=%d finalized_slot=%d proc_time=%s", + slot, blockRoot, block.ParentRoot, block.ProposerIndex, attCount, + s.LatestJustified().Slot, s.LatestFinalized().Slot, + time.Since(start).Round(time.Millisecond)) + + return nil +} + +// ProcessProposerAttestation processes the proposer's self-attestation. +// Must be called AFTER updateHead to prevent circular weight advantage. +func ProcessProposerAttestation(s *ConsensusStore, signedBlock *types.SignedBlockWithAttestation, verify bool) { + if signedBlock.Block.ProposerAttestation == nil { + return + } + blockRoot, _ := signedBlock.Block.Block.HashTreeRoot() + processProposerAttestation(s, signedBlock, blockRoot, verify) +} + +// verifyBlockSignatures verifies proposer and attestation signatures. +func verifyBlockSignatures( + s *ConsensusStore, + signedBlock *types.SignedBlockWithAttestation, + state *types.State, +) error { + block := signedBlock.Block.Block + sigs := signedBlock.Signature + + // Verify proposer attestation signature. + // ProposerSignature signs the proposer's AttestationData, NOT the block root. 
+ + if block.ProposerIndex >= uint64(len(state.Validators)) { + return &StoreError{ErrInvalidValidatorIndex, "proposer index out of range"} + } + proposerPubkey := state.Validators[block.ProposerIndex].Pubkey + + proposerAtt := signedBlock.Block.ProposerAttestation + if proposerAtt == nil || proposerAtt.Data == nil { + return &StoreError{ErrProposerSignatureVerificationFailed, "missing proposer attestation data"} + } + attDataRoot, _ := proposerAtt.Data.HashTreeRoot() + slot := uint32(proposerAtt.Data.Slot) + + valid, err := xmss.VerifySignatureSSZ(proposerPubkey, slot, attDataRoot, sigs.ProposerSignature) + if err != nil { + return &StoreError{ErrProposerSignatureDecodingFailed, fmt.Sprintf("proposer sig decode: %v", err)} + } + if !valid { + return &StoreError{ErrProposerSignatureVerificationFailed, "proposer signature invalid"} + } + + // Verify attestation aggregate signatures. + if block.Body == nil { + return nil + } + for i, att := range block.Body.Attestations { + if i >= len(sigs.AttestationSignatures) { + return &StoreError{ErrAttestationSignatureMismatch, + fmt.Sprintf("attestation %d has no matching signature", i)} + } + proof := sigs.AttestationSignatures[i] + + // Get participant pubkeys. + // During checkpoint sync backfill, target states may not exist for + // attestations referencing blocks before the checkpoint. Skip verification + // for these — the block was already validated by the originating node. + targetState := s.GetState(att.Data.Target.Root) + if targetState == nil { + continue // skip attestation verification when target state unavailable + } + + participantIDs := types.BitlistIndices(proof.Participants) + var pubkeys []([types.PubkeySize]byte) + for _, vid := range participantIDs { + if vid >= uint64(len(targetState.Validators)) { + return &StoreError{ErrInvalidValidatorIndex, fmt.Sprintf("validator %d out of range", vid)} + } + pubkeys = append(pubkeys, targetState.Validators[vid].Pubkey) + } + + // Verify aggregated proof. 
+ dataRoot, _ := att.Data.HashTreeRoot() + attSlot := uint32(att.Data.Slot) + + parsedPubkeys := make([]xmss.CPubKey, len(pubkeys)) + for j, pk := range pubkeys { + parsed, err := xmss.ParsePublicKey(pk) + if err != nil { + // Free already parsed keys before returning. + for k := 0; k < j; k++ { + xmss.FreePublicKey(parsedPubkeys[k]) + } + return &StoreError{ErrPubkeyDecodingFailed, fmt.Sprintf("pubkey %d: %v", participantIDs[j], err)} + } + parsedPubkeys[j] = parsed + } + // Free all parsed pubkeys after verification. + defer func() { + for _, pk := range parsedPubkeys { + xmss.FreePublicKey(pk) + } + }() + + if err := xmss.VerifyAggregatedSignature(proof.ProofData, parsedPubkeys, dataRoot, attSlot); err != nil { + return &StoreError{ErrAggregateVerificationFailed, fmt.Sprintf("attestation %d proof: %v", i, err)} + } + } + + return nil +} + +// storeBlockParts stores block body and full signed block across split tables. +func storeBlockParts(s *ConsensusStore, blockRoot [32]byte, signedBlock *types.SignedBlockWithAttestation) { + writeBlockData(s, blockRoot, signedBlock) +} + +// processBlockAttestations extracts attestations from block body into known payloads. +func processBlockAttestations(s *ConsensusStore, signedBlock *types.SignedBlockWithAttestation, blockRoot [32]byte) { + if signedBlock.Block.Block.Body == nil || signedBlock.Signature == nil { + return + } + for i, att := range signedBlock.Block.Block.Body.Attestations { + if i >= len(signedBlock.Signature.AttestationSignatures) { + continue + } + proof := signedBlock.Signature.AttestationSignatures[i] + dataRoot, _ := att.Data.HashTreeRoot() + s.KnownPayloads.Push(dataRoot, att.Data, proof) + } +} + +// processProposerAttestation handles the proposer's self-attestation. +// Production (verify=true): store proposer's real XMSS signature in gossip for aggregation at interval 2. 
+// Spec tests only (verify=false via OnBlockWithoutVerification): insert participants-only proof +// into new payloads since no real signatures exist in test fixtures. +func processProposerAttestation(s *ConsensusStore, signedBlock *types.SignedBlockWithAttestation, blockRoot [32]byte, verify bool) { + att := signedBlock.Block.ProposerAttestation + if att == nil || att.Data == nil { + return + } + dataRoot, _ := att.Data.HashTreeRoot() + + if verify && signedBlock.Signature != nil { + // Store proposer's gossip signature for aggregation with C handle. + // ParseSignature creates a native leansig handle from SSZ bytes. + sigHandle, parseErr := xmss.ParseSignature(signedBlock.Signature.ProposerSignature[:]) + s.GossipSignatures.InsertWithHandle(dataRoot, att.Data, att.ValidatorID, signedBlock.Signature.ProposerSignature, sigHandle, parseErr) + } else { + // Without sig verification, insert directly with a dummy proof. + participants := aggregationBitsFromValidatorIndices([]uint64{att.ValidatorID}) + proof := &types.AggregatedSignatureProof{ + Participants: participants, + ProofData: nil, + } + s.NewPayloads.Push(dataRoot, att.Data, proof) + } +} diff --git a/node/store_build.go b/node/store_build.go new file mode 100644 index 0000000..3cad8d5 --- /dev/null +++ b/node/store_build.go @@ -0,0 +1,288 @@ +package node + +import ( + "fmt" + "sort" + + "github.com/geanlabs/gean/statetransition" + "github.com/geanlabs/gean/storage" + "github.com/geanlabs/gean/types" +) + +// ProduceBlockWithSignatures builds a block using per-validator latest-vote selection. +// Returns the block and per-attestation signature proofs. 
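`ProduceBlockWithSignatures` below gates on `types.IsProposer`. Per the round-robin rule documented for this devnet (a validator proposes when `slot % validator_count == validator_id`), the check reduces to a one-liner; `isProposer` here is a hypothetical stand-in for `types.IsProposer`, not its actual implementation.

```go
package main

import "fmt"

// isProposer mirrors the round-robin assignment: with N validators,
// validator v proposes exactly the slots where slot % N == v.
func isProposer(slot, validatorIndex, numValidators uint64) bool {
	return slot%numValidators == validatorIndex
}

func main() {
	// With 4 validators, validator 2 proposes slots 2, 6, 10, ...
	for slot := uint64(0); slot < 8; slot++ {
		if isProposer(slot, 2, 4) {
			fmt.Println("validator 2 proposes slot", slot)
		}
	}
}
```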
+func ProduceBlockWithSignatures( + s *ConsensusStore, + slot, validatorIndex uint64, +) (*types.Block, []*types.AggregatedSignatureProof, error) { + headRoot := s.Head() + headState := s.GetState(headRoot) + if headState == nil { + return nil, nil, &StoreError{ErrMissingParentState, + fmt.Sprintf("head state missing for slot %d", slot)} + } + + numValidators := headState.NumValidators() + if !types.IsProposer(slot, validatorIndex, numValidators) { + return nil, nil, errNotProposer(validatorIndex, slot) + } + + knownEntries := s.KnownPayloads.Entries() + knownBlockRoots := s.getBlockRoots() + + return buildBlock(headState, slot, validatorIndex, headRoot, knownBlockRoots, knownEntries) +} + +// buildBlock builds a valid block using per-validator latest-vote selection. +// +// For each validator, we pick their latest vote whose source matches the +// current justified checkpoint, then group validators by their vote's data +// root and emit one attestation per (data root, validator subset) pair. +// +// This bounds block size by the validator count: at most numValidators +// distinct attestations per fixed-point iteration. Multiple validators +// voting for the same target share a single AggregatedAttestation. +func buildBlock( + headState *types.State, + slot, proposerIndex uint64, + parentRoot [32]byte, + knownBlockRoots map[[32]byte]bool, + payloads map[[32]byte]*PayloadEntry, +) (*types.Block, []*types.AggregatedSignatureProof, error) { + var attestations []*types.AggregatedAttestation + var signatures []*types.AggregatedSignatureProof + + if len(payloads) > 0 { + // Genesis edge case: derive justified checkpoint matching process_block_header. 
+ var currentJustified *types.Checkpoint + if headState.LatestBlockHeader.Slot == 0 { + currentJustified = &types.Checkpoint{ + Root: parentRoot, + Slot: headState.LatestJustified.Slot, + } + } else { + currentJustified = headState.LatestJustified + } + + // Track validators already included to avoid duplication across iterations. + processedValidators := make(map[uint64]bool) + + for { + // For the current justified source, find each validator's latest vote. + // The result maps each validator to the payload entry containing their + // latest matching vote (highest data.Slot). + perValidator := selectLatestPerValidator(payloads, knownBlockRoots, currentJustified, processedValidators) + if len(perValidator) == 0 { + break + } + + // Group validators by their selected payload entry. Multiple validators + // pointing at the same entry will share AggregatedAttestations. + groups := groupValidatorsByEntry(perValidator) + + added := 0 + for _, group := range groups { + added += emitAttestationsForGroup(group.entry, group.validators, &attestations, &signatures, processedValidators) + } + if added == 0 { + break + } + + // Check if justification advanced via trial state transition. + candidate := &types.Block{ + Slot: slot, + ProposerIndex: proposerIndex, + ParentRoot: parentRoot, + Body: &types.BlockBody{Attestations: attestations}, + } + trialBytes, _ := headState.MarshalSSZ() + trialState := &types.State{} + trialState.UnmarshalSSZ(trialBytes) + + statetransition.ProcessSlots(trialState, slot) + statetransition.ProcessBlock(trialState, candidate) + + if trialState.LatestJustified.Slot != currentJustified.Slot || + trialState.LatestJustified.Root != currentJustified.Root { + currentJustified = trialState.LatestJustified + // Continue: new checkpoint may unlock more attestation data. + } else { + break + } + } + } + + // Build final block with correct state root. 
+ finalBlock := &types.Block{ + Slot: slot, + ProposerIndex: proposerIndex, + ParentRoot: parentRoot, + Body: &types.BlockBody{Attestations: attestations}, + } + + finalBytes, _ := headState.MarshalSSZ() + postState := &types.State{} + postState.UnmarshalSSZ(finalBytes) + + if err := statetransition.ProcessSlots(postState, slot); err != nil { + return nil, nil, fmt.Errorf("process slots: %w", err) + } + if err := statetransition.ProcessBlock(postState, finalBlock); err != nil { + return nil, nil, fmt.Errorf("process block: %w", err) + } + + stateRoot, _ := postState.HashTreeRoot() + finalBlock.StateRoot = stateRoot + + return finalBlock, signatures, nil +} + +// selectLatestPerValidator finds, for each validator, the payload entry that +// contains their latest vote whose source matches `currentJustified`. +// +// Validators in `excluded` are skipped (used to avoid re-selecting validators +// already included in earlier fixed-point iterations). +func selectLatestPerValidator( + payloads map[[32]byte]*PayloadEntry, + knownBlockRoots map[[32]byte]bool, + currentJustified *types.Checkpoint, + excluded map[uint64]bool, +) map[uint64]*PayloadEntry { + perValidator := make(map[uint64]*PayloadEntry) + for _, entry := range payloads { + if !knownBlockRoots[entry.Data.Head.Root] { + continue + } + if entry.Data.Source.Root != currentJustified.Root || + entry.Data.Source.Slot != currentJustified.Slot { + continue + } + for _, proof := range entry.Proofs { + for _, vid := range types.BitlistIndices(proof.Participants) { + if excluded[vid] { + continue + } + existing, ok := perValidator[vid] + if !ok || entry.Data.Slot > existing.Data.Slot { + perValidator[vid] = entry + } + } + } + } + return perValidator +} + +// validatorGroup holds a payload entry and the validators selected from it. 
+type validatorGroup struct { + entry *PayloadEntry + validators []uint64 +} + +// groupValidatorsByEntry inverts perValidator into groups keyed by entry, +// returning a deterministically-sorted slice. Multiple validators pointing +// at the same entry are batched so we can pick proofs that cover them all. +func groupValidatorsByEntry(perValidator map[uint64]*PayloadEntry) []validatorGroup { + byEntry := make(map[*PayloadEntry][]uint64) + for vid, entry := range perValidator { + byEntry[entry] = append(byEntry[entry], vid) + } + groups := make([]validatorGroup, 0, len(byEntry)) + for entry, vids := range byEntry { + sort.Slice(vids, func(i, j int) bool { return vids[i] < vids[j] }) + groups = append(groups, validatorGroup{entry: entry, validators: vids}) + } + // Deterministic order: by target slot then by data root. + sort.Slice(groups, func(i, j int) bool { + ei, ej := groups[i].entry, groups[j].entry + if ei.Data.Target.Slot != ej.Data.Target.Slot { + return ei.Data.Target.Slot < ej.Data.Target.Slot + } + ri, _ := ei.Data.HashTreeRoot() + rj, _ := ej.Data.HashTreeRoot() + return compareRoots(ri, rj) < 0 + }) + return groups +} + +// emitAttestationsForGroup picks the smallest set of proofs from `entry` that +// covers all validators in `wanted`, appending one AggregatedAttestation per +// chosen proof. Returns the number of attestations emitted. 
+func emitAttestationsForGroup( + entry *PayloadEntry, + wanted []uint64, + attestations *[]*types.AggregatedAttestation, + signatures *[]*types.AggregatedSignatureProof, + processedValidators map[uint64]bool, +) int { + needed := make(map[uint64]bool, len(wanted)) + for _, vid := range wanted { + needed[vid] = true + } + + emitted := 0 + for len(needed) > 0 { + bestIdx := -1 + bestCount := 0 + for i, proof := range entry.Proofs { + count := 0 + for _, vid := range types.BitlistIndices(proof.Participants) { + if needed[vid] { + count++ + } + } + if count > bestCount { + bestCount = count + bestIdx = i + } + } + if bestIdx < 0 || bestCount == 0 { + break + } + + proof := entry.Proofs[bestIdx] + *attestations = append(*attestations, &types.AggregatedAttestation{ + AggregationBits: proof.Participants, + Data: entry.Data, + }) + *signatures = append(*signatures, proof) + emitted++ + + for _, vid := range types.BitlistIndices(proof.Participants) { + delete(needed, vid) + processedValidators[vid] = true + } + } + return emitted +} + +// getBlockRoots returns all known block roots from the store. +func (s *ConsensusStore) getBlockRoots() map[[32]byte]bool { + roots := make(map[[32]byte]bool) + rv, err := s.Backend.BeginRead() + if err != nil { + return roots + } + it, err := rv.PrefixIterator(storage.TableBlockHeaders, nil) + if err != nil { + return roots + } + defer it.Close() + for it.Next() { + var root [32]byte + copy(root[:], it.Key()) + roots[root] = true + } + return roots +} + +func compareRoots(a, b [32]byte) int { + for i := 0; i < 32; i++ { + if a[i] != b[i] { + if a[i] < b[i] { + return -1 + } + return 1 + } + } + return 0 +} diff --git a/node/store_errors.go b/node/store_errors.go new file mode 100644 index 0000000..ee39af2 --- /dev/null +++ b/node/store_errors.go @@ -0,0 +1,88 @@ +package node + +import "fmt" + +// StoreError represents errors during consensus store operations. 
+type StoreError struct {
+	Kind    StoreErrorKind
+	Message string
+}
+
+func (e *StoreError) Error() string { return e.Message }
+
+type StoreErrorKind int
+
+const (
+	ErrMissingParentState StoreErrorKind = iota
+	ErrInvalidValidatorIndex
+	ErrPubkeyDecodingFailed
+	ErrSignatureDecodingFailed
+	ErrSignatureVerificationFailed
+	ErrProposerSignatureDecodingFailed
+	ErrProposerSignatureVerificationFailed
+	ErrStateTransitionFailed
+	ErrUnknownSourceBlock
+	ErrUnknownTargetBlock
+	ErrUnknownHeadBlock
+	ErrSourceExceedsTarget
+	ErrHeadOlderThanTarget
+	ErrSourceSlotMismatch
+	ErrTargetSlotMismatch
+	ErrHeadSlotMismatch
+	ErrAttestationTooFarInFuture
+	ErrAttestationSignatureMismatch
+	ErrParticipantsMismatch
+	ErrAggregateVerificationFailed
+	ErrSignatureAggregationFailed
+	ErrMissingTargetState
+	ErrNotProposer
+	ErrProposerAttestationMismatch
+)
+
+func errMissingParentState(parentRoot [32]byte, slot uint64) error {
+	return &StoreError{ErrMissingParentState, fmt.Sprintf("parent state not found for slot %d, missing block %x", slot, parentRoot[:4])}
+}
+
+func errUnknownSourceBlock(root [32]byte) error {
+	return &StoreError{ErrUnknownSourceBlock, fmt.Sprintf("unknown source block: %x", root[:4])}
+}
+
+func errUnknownTargetBlock(root [32]byte) error {
+	return &StoreError{ErrUnknownTargetBlock, fmt.Sprintf("unknown target block: %x", root[:4])}
+}
+
+func errUnknownHeadBlock(root [32]byte) error {
+	return &StoreError{ErrUnknownHeadBlock, fmt.Sprintf("unknown head block: %x", root[:4])}
+}
+
+func errSourceExceedsTarget() error {
+	return &StoreError{ErrSourceExceedsTarget, "source checkpoint slot exceeds target"}
+}
+
+func errHeadOlderThanTarget(headSlot, targetSlot uint64) error {
+	return &StoreError{ErrHeadOlderThanTarget, fmt.Sprintf("head slot %d older than target slot %d", headSlot, targetSlot)}
+}
+
+func errSourceSlotMismatch(cpSlot, blockSlot uint64) error {
+	return &StoreError{ErrSourceSlotMismatch, fmt.Sprintf("source checkpoint slot %d != block slot %d", cpSlot, blockSlot)}
+}
+
+func errTargetSlotMismatch(cpSlot, blockSlot uint64) error {
+	return &StoreError{ErrTargetSlotMismatch, fmt.Sprintf("target checkpoint slot %d != block slot %d", cpSlot, blockSlot)}
+}
+
+func errHeadSlotMismatch(cpSlot, blockSlot uint64) error {
+	return &StoreError{ErrHeadSlotMismatch, fmt.Sprintf("head checkpoint slot %d != block slot %d", cpSlot, blockSlot)}
+}
+
+func errAttestationTooFarInFuture(attSlot, currentSlot uint64) error {
+	return &StoreError{ErrAttestationTooFarInFuture, fmt.Sprintf("attestation slot %d too far in future (current %d)", attSlot, currentSlot)}
+}
+
+func errNotProposer(vid, slot uint64) error {
+	return &StoreError{ErrNotProposer, fmt.Sprintf("validator %d not proposer for slot %d", vid, slot)}
+}
+
+func errMissingTargetState(root [32]byte) error {
+	return &StoreError{ErrMissingTargetState, fmt.Sprintf("missing target state: %x", root[:4])}
+}
diff --git a/node/store_gossip.go b/node/store_gossip.go
new file mode 100644
index 0000000..ab98cc4
--- /dev/null
+++ b/node/store_gossip.go
@@ -0,0 +1,107 @@
+package node
+
+import (
+	"unsafe"
+
+	"github.com/geanlabs/gean/types"
+)
+
+// FreeSignatureFunc is set by the engine to provide C handle cleanup.
+// AggregateMetricsFunc is set by the engine to record aggregation metrics.
+// These avoid importing engine/crypto from store (no circular deps).
+var FreeSignatureFunc func(unsafe.Pointer)
+var AggregateMetricsFunc func(durationSeconds float64, numAttestations int)
+
+// GossipSignatureEntry holds one validator's signature for aggregation.
+type GossipSignatureEntry struct {
+	ValidatorID uint64
+	Signature   [types.SignatureSize]byte
+	// SigHandle is an opaque C pointer to the parsed leansig Signature.
+	// Kept alive to avoid SSZ round-trip corruption during aggregation.
+	SigHandle unsafe.Pointer
+}
+
+// GossipDataEntry groups signatures by attestation data.
+type GossipDataEntry struct { + Data *types.AttestationData + Signatures []GossipSignatureEntry +} + +// GossipSignatureMap maps data_root -> signatures. +type GossipSignatureMap map[[32]byte]*GossipDataEntry + +// Insert adds a gossip signature for a validator (without C handle). +func (m GossipSignatureMap) Insert(dataRoot [32]byte, data *types.AttestationData, validatorID uint64, sig [types.SignatureSize]byte) { + m.InsertWithHandle(dataRoot, data, validatorID, sig, nil, nil) +} + +// InsertWithHandle adds a gossip signature with an optional opaque C handle. +func (m GossipSignatureMap) InsertWithHandle(dataRoot [32]byte, data *types.AttestationData, validatorID uint64, sig [types.SignatureSize]byte, handle unsafe.Pointer, parseErr error) { + entry, ok := m[dataRoot] + if !ok { + entry = &GossipDataEntry{Data: data} + m[dataRoot] = entry + } + var h unsafe.Pointer + if parseErr == nil { + h = handle + } + entry.Signatures = append(entry.Signatures, GossipSignatureEntry{ + ValidatorID: validatorID, + Signature: sig, + SigHandle: h, + }) +} + +// Delete removes specific (validatorID, dataRoot) entries, freeing C handles. +func (m GossipSignatureMap) Delete(keys []GossipDeleteKey) { + for _, key := range keys { + entry, ok := m[key.DataRoot] + if !ok { + continue + } + filtered := entry.Signatures[:0] + for _, sig := range entry.Signatures { + if sig.ValidatorID == key.ValidatorID { + // Free C handle if present. + if sig.SigHandle != nil && FreeSignatureFunc != nil { + FreeSignatureFunc(sig.SigHandle) + } + } else { + filtered = append(filtered, sig) + } + } + entry.Signatures = filtered + if len(entry.Signatures) == 0 { + delete(m, key.DataRoot) + } + } +} + +// PruneBelow removes entries with slot <= finalizedSlot, freeing C handles. 
+func (m GossipSignatureMap) PruneBelow(finalizedSlot uint64) int { + pruned := 0 + for root, entry := range m { + if entry.Data.Slot <= finalizedSlot { + for _, sig := range entry.Signatures { + if sig.SigHandle != nil && FreeSignatureFunc != nil { + FreeSignatureFunc(sig.SigHandle) + } + } + delete(m, root) + pruned++ + } + } + return pruned +} + +// Len returns the number of data entries. +func (m GossipSignatureMap) Len() int { + return len(m) +} + +// GossipDeleteKey identifies a specific signature to delete. +type GossipDeleteKey struct { + ValidatorID uint64 + DataRoot [32]byte +} diff --git a/node/store_payloads.go b/node/store_payloads.go new file mode 100644 index 0000000..76c8f81 --- /dev/null +++ b/node/store_payloads.go @@ -0,0 +1,158 @@ +package node + +import ( + "github.com/geanlabs/gean/types" +) + +// PayloadEntry stores attestation data + proofs for a single data_root. +type PayloadEntry struct { + Data *types.AttestationData + Proofs []*types.AggregatedSignatureProof +} + +// PayloadBuffer is a capped FIFO buffer for aggregated payloads. +type PayloadBuffer struct { + data map[[32]byte]*PayloadEntry // data_root -> entry + order [][32]byte // insertion order for FIFO eviction + capacity int + totalProofs int +} + +// NewPayloadBuffer creates a new buffer with the given capacity. +func NewPayloadBuffer(capacity int) *PayloadBuffer { + return &PayloadBuffer{ + data: make(map[[32]byte]*PayloadEntry), + capacity: capacity, + } +} + +// Push inserts a proof for an attestation, FIFO-evicting when over capacity. 
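`PruneBelow` deletes map entries while ranging over the map. That is well-defined in Go — an entry deleted during iteration is simply not produced later — which is what makes a single-pass prune safe. A minimal sketch with plain types:

```go
package main

import "fmt"

// pruneBelow deletes entries at or below a slot threshold while ranging
// over the map; deleting during range is safe in Go, which is what lets
// GossipSignatureMap.PruneBelow run in a single pass.
func pruneBelow(slots map[string]uint64, finalized uint64) int {
	pruned := 0
	for key, slot := range slots {
		if slot <= finalized {
			delete(slots, key)
			pruned++
		}
	}
	return pruned
}

func main() {
	m := map[string]uint64{"a": 3, "b": 8, "c": 5}
	fmt.Println(pruneBelow(m, 5), len(m)) // prunes "a" and "c", keeps "b"
}
```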
+func (pb *PayloadBuffer) Push(dataRoot [32]byte, attData *types.AttestationData, proof *types.AggregatedSignatureProof) { + if entry, ok := pb.data[dataRoot]; ok { + // Skip duplicate proofs (same participants) + for _, existing := range entry.Proofs { + if bitlistEqual(existing.Participants, proof.Participants) { + return + } + } + entry.Proofs = append(entry.Proofs, proof) + pb.totalProofs++ + } else { + pb.data[dataRoot] = &PayloadEntry{ + Data: attData, + Proofs: []*types.AggregatedSignatureProof{proof}, + } + pb.order = append(pb.order, dataRoot) + pb.totalProofs++ + } + + // Evict oldest until under capacity. + for pb.totalProofs > pb.capacity && len(pb.order) > 0 { + evicted := pb.order[0] + pb.order = pb.order[1:] + if entry, ok := pb.data[evicted]; ok { + pb.totalProofs -= len(entry.Proofs) + delete(pb.data, evicted) + } + } +} + +// PushBatch inserts multiple entries. +func (pb *PayloadBuffer) PushBatch(entries []PayloadKV) { + for _, e := range entries { + pb.Push(e.DataRoot, e.Data, e.Proof) + } +} + +// Drain takes all entries, leaving the buffer empty. +func (pb *PayloadBuffer) Drain() []PayloadKV { + result := make([]PayloadKV, 0, pb.totalProofs) + for _, dataRoot := range pb.order { + entry := pb.data[dataRoot] + for _, proof := range entry.Proofs { + result = append(result, PayloadKV{ + DataRoot: dataRoot, + Data: entry.Data, + Proof: proof, + }) + } + } + pb.data = make(map[[32]byte]*PayloadEntry) + pb.order = nil + pb.totalProofs = 0 + return result +} + +// Len returns the number of distinct attestation data entries. +func (pb *PayloadBuffer) Len() int { + return len(pb.data) +} + +// TotalProofs returns the total number of proofs across all entries. +func (pb *PayloadBuffer) TotalProofs() int { + return pb.totalProofs +} + +// ExtractLatestAttestations returns per-validator latest attestations from participation bits. 
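`Push` caps the buffer by *total proof count*, not by entry count, and evicts whole oldest entries until back under capacity. A runnable sketch of that eviction policy, with strings standing in for proofs and roots:

```go
package main

import "fmt"

// buffer is a simplified PayloadBuffer: FIFO order over roots, capacity
// counted in total items rather than distinct roots.
type buffer struct {
	data     map[string][]string
	order    []string
	capacity int
	total    int
}

func (b *buffer) push(root, proof string) {
	if _, ok := b.data[root]; !ok {
		b.order = append(b.order, root) // first proof for this root
	}
	b.data[root] = append(b.data[root], proof)
	b.total++
	// Evict oldest roots (all their proofs) until back under capacity,
	// mirroring PayloadBuffer.Push.
	for b.total > b.capacity && len(b.order) > 0 {
		oldest := b.order[0]
		b.order = b.order[1:]
		b.total -= len(b.data[oldest])
		delete(b.data, oldest)
	}
}

func main() {
	b := &buffer{data: map[string][]string{}, capacity: 3}
	b.push("r1", "p1")
	b.push("r1", "p2")
	b.push("r2", "p3")
	b.push("r3", "p4") // total would be 4 > 3: evicts r1 and its 2 proofs
	fmt.Println(b.total, len(b.data))
}
```

Because eviction removes an entire entry at once, a single push can drop the total well below capacity — the trade-off for keeping each root's proofs together.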
+func (pb *PayloadBuffer) ExtractLatestAttestations() map[uint64]*types.AttestationData { + result := make(map[uint64]*types.AttestationData) + for _, entry := range pb.data { + for _, proof := range entry.Proofs { + participantLen := types.BitlistLen(proof.Participants) + for vid := uint64(0); vid < participantLen; vid++ { + if types.BitlistGet(proof.Participants, vid) { + existing, ok := result[vid] + if !ok || existing.Slot < entry.Data.Slot { + result[vid] = entry.Data + } + } + } + } + } + return result +} + +// PruneBelow removes entries with target slot <= finalizedSlot. +func (pb *PayloadBuffer) PruneBelow(finalizedSlot uint64) int { + pruned := 0 + var newOrder [][32]byte + for _, dataRoot := range pb.order { + entry, ok := pb.data[dataRoot] + if !ok { + continue + } + if entry.Data != nil && entry.Data.Target.Slot <= finalizedSlot { + pb.totalProofs -= len(entry.Proofs) + delete(pb.data, dataRoot) + pruned++ + } else { + newOrder = append(newOrder, dataRoot) + } + } + pb.order = newOrder + return pruned +} + +// Entries returns all (data_root, data, proofs) for block building. +func (pb *PayloadBuffer) Entries() map[[32]byte]*PayloadEntry { + return pb.data +} + +// PayloadKV is a flattened (data_root, data, proof) tuple. +type PayloadKV struct { + DataRoot [32]byte + Data *types.AttestationData + Proof *types.AggregatedSignatureProof +} + +func bitlistEqual(a, b []byte) bool { + if len(a) != len(b) { + return false + } + for i := range a { + if a[i] != b[i] { + return false + } + } + return true +} diff --git a/node/store_produce.go b/node/store_produce.go new file mode 100644 index 0000000..2eea1a0 --- /dev/null +++ b/node/store_produce.go @@ -0,0 +1,102 @@ +package node + +import ( + "github.com/geanlabs/gean/statetransition" + "github.com/geanlabs/gean/types" +) + +// ProduceAttestationData creates attestation data for the given slot. 
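`ExtractLatestAttestations` above walks participation bitlists and keeps, per validator, the attestation with the highest slot. The sketch below shows that merge rule with a minimal bitfield reader; the little-endian bit order within each byte is an assumption (gean's `BitlistGet` may also handle an SSZ length sentinel, omitted here), and `agg`/`latestSlots` are names invented for illustration.

```go
package main

import "fmt"

// agg is a simplified stand-in for an AggregatedSignatureProof: a
// participation bitfield plus the attestation slot.
type agg struct {
	bits []byte
	slot uint64
}

// bitGet reads bit i, little-endian within each byte (assumed layout).
func bitGet(bits []byte, i uint64) bool {
	return bits[i/8]&(1<<(i%8)) != 0
}

// latestSlots keeps the highest slot seen per participating validator,
// the same merge rule ExtractLatestAttestations applies.
func latestSlots(aggs []agg, numValidators uint64) map[uint64]uint64 {
	out := make(map[uint64]uint64)
	for _, a := range aggs {
		for vid := uint64(0); vid < numValidators; vid++ {
			if !bitGet(a.bits, vid) {
				continue
			}
			if existing, ok := out[vid]; !ok || existing < a.slot {
				out[vid] = a.slot
			}
		}
	}
	return out
}

func main() {
	m := latestSlots([]agg{
		{[]byte{0b00000011}, 5}, // validators 0 and 1 at slot 5
		{[]byte{0b00000110}, 7}, // validators 1 and 2 at slot 7
	}, 3)
	fmt.Println(m[0], m[1], m[2])
}
```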
+func ProduceAttestationData(s *ConsensusStore, slot uint64) *types.AttestationData { + headRoot := s.Head() + headState := s.GetState(headRoot) + if headState == nil { + return nil + } + + // Derive source from head state's justified checkpoint. + // At genesis the checkpoint root is zero; substitute the real genesis block root. + + var source *types.Checkpoint + if headState.LatestBlockHeader.Slot == 0 { + source = &types.Checkpoint{ + Root: headRoot, + Slot: headState.LatestJustified.Slot, + } + } else { + source = headState.LatestJustified + } + + headHeader := s.GetBlockHeader(headRoot) + if headHeader == nil { + return nil + } + headCheckpoint := &types.Checkpoint{ + Root: headRoot, + Slot: headHeader.Slot, + } + + target := GetAttestationTarget(s) + + return &types.AttestationData{ + Slot: slot, + Head: headCheckpoint, + Target: target, + Source: source, + } +} + +// GetAttestationTarget computes the target checkpoint for attestations. +func GetAttestationTarget(s *ConsensusStore) *types.Checkpoint { + targetRoot := s.Head() + targetHeader := s.GetBlockHeader(targetRoot) + if targetHeader == nil { + return &types.Checkpoint{} + } + + safeTargetHeader := s.GetBlockHeader(s.SafeTarget()) + safeTargetSlot := uint64(0) + if safeTargetHeader != nil { + safeTargetSlot = safeTargetHeader.Slot + } + + // Walk back toward safe target (up to JUSTIFICATION_LOOKBACK_SLOTS steps). + + for i := uint64(0); i < types.JustificationLookbackSlots; i++ { + if targetHeader.Slot > safeTargetSlot { + targetRoot = targetHeader.ParentRoot + parent := s.GetBlockHeader(targetRoot) + if parent == nil { + break + } + targetHeader = parent + } else { + break + } + } + + finalizedSlot := s.LatestFinalized().Slot + + // Walk back until justifiable slot. 
+ + for targetHeader.Slot > finalizedSlot && + !statetransition.SlotIsJustifiableAfter(targetHeader.Slot, finalizedSlot) { + targetRoot = targetHeader.ParentRoot + parent := s.GetBlockHeader(targetRoot) + if parent == nil { + break + } + targetHeader = parent + } + + // Clamp to latest_justified if walked behind. + + latestJustified := s.LatestJustified() + if targetHeader.Slot < latestJustified.Slot { + return latestJustified + } + + return &types.Checkpoint{ + Root: targetRoot, + Slot: targetHeader.Slot, + } +} diff --git a/node/store_prune.go b/node/store_prune.go new file mode 100644 index 0000000..eff0a86 --- /dev/null +++ b/node/store_prune.go @@ -0,0 +1,165 @@ +package node + +import ( + "github.com/geanlabs/gean/forkchoice" + "github.com/geanlabs/gean/logger" + "github.com/geanlabs/gean/storage" +) + +// Pruning constants. +const ( + PruningIntervalSlots = 7200 // Periodic pruning every ~8 hours at 4s/slot +) + +// PruneOnFinalization performs canonicality-based pruning when finalization advances. +// Identifies canonical chain from oldFinalized to newFinalized, prunes non-canonical +// states/blocks, and cleans up stale attestation data. +// Uses canonicality analysis to prune dead forks. +func PruneOnFinalization(s *ConsensusStore, fc *forkchoice.ForkChoice, oldFinalizedSlot, newFinalizedSlot uint64, newFinalizedRoot [32]byte) { + if newFinalizedSlot <= oldFinalizedSlot { + return + } + + // 1. Identify canonical and non-canonical roots in the fork choice tree. + canonical, nonCanonical := fc.GetCanonicalAnalysis(newFinalizedRoot) + + // 2. Prune non-canonical states and blocks from DB. + prunedStates := pruneStatesByRoots(s, nonCanonical) + prunedBlocks := pruneBlocksByRoots(s, nonCanonical) + + // 3. Prune old canonical states (keep only the finalized root's state). + // canonical[0] is the finalized root — keep it, prune earlier ancestors. + if len(canonical) > 1 { + prunedStates += pruneStatesByRoots(s, canonical[1:]) + } + + // 4. 
Prune live chain entries below finalized.
+ prunedChain := pruneLiveChain(s, newFinalizedSlot)
+
+ // 5. Prune stale attestation data (gossip sigs + payloads with target <= finalized).
+ prunedSigs := s.GossipSignatures.PruneBelow(newFinalizedSlot)
+ prunedKnown := s.KnownPayloads.PruneBelow(newFinalizedSlot)
+ prunedNew := s.NewPayloads.PruneBelow(newFinalizedSlot)
+
+ logger.Info(logger.Store, "pruning: finalized_slot=%d states=%d blocks=%d live_chain=%d gossip_sigs=%d payloads=%d non_canonical=%d",
+ newFinalizedSlot, prunedStates, prunedBlocks, prunedChain, prunedSigs,
+ prunedKnown+prunedNew, len(nonCanonical))
+}
+
+// PeriodicPrune runs canonicality-based pruning as a fallback safety mechanism when
+// finalization stalls. It only triggers when finalization is more than
+// 2*PruningIntervalSlots behind the current slot.
+func PeriodicPrune(s *ConsensusStore, fc *forkchoice.ForkChoice, currentSlot, finalizedSlot uint64) {
+ if currentSlot == 0 || currentSlot%PruningIntervalSlots != 0 {
+ return
+ }
+
+ // Only prune if finalization is stalled (more than 2x the interval behind).
+ if finalizedSlot+2*PruningIntervalSlots >= currentSlot {
+ return
+ }
+
+ logger.Warn(logger.Store, "finalization stalled: finalized_slot=%d current_slot=%d, running periodic pruning", finalizedSlot, currentSlot)
+
+ // Get canonical ancestor at PruningIntervalSlots depth.
+ ancestorRoot, ancestorSlot, ok := fc.GetCanonicalAncestorAtDepth(PruningIntervalSlots)
+ if !ok || ancestorSlot <= finalizedSlot {
+ return
+ }
+
+ // Prune non-canonical states below the ancestor.
+ _, nonCanonical := fc.GetCanonicalAnalysis(ancestorRoot) + prunedStates := pruneStatesByRoots(s, nonCanonical) + prunedBlocks := pruneBlocksByRoots(s, nonCanonical) + + if prunedStates > 0 || prunedBlocks > 0 { + logger.Info(logger.Store, "periodic pruning: ancestor_slot=%d states=%d blocks=%d non_canonical=%d", + ancestorSlot, prunedStates, prunedBlocks, len(nonCanonical)) + } +} + +// pruneStatesByRoots removes states for the given roots from DB. +func pruneStatesByRoots(s *ConsensusStore, roots [][32]byte) int { + if len(roots) == 0 { + return 0 + } + + keys := make([][]byte, len(roots)) + for i, root := range roots { + k := make([]byte, 32) + copy(k, root[:]) + keys[i] = k + } + + wb, err := s.Backend.BeginWrite() + if err != nil { + return 0 + } + wb.DeleteBatch(storage.TableStates, keys) + wb.Commit() + return len(roots) +} + +// pruneBlocksByRoots removes block headers, bodies, and signatures for the given roots. +func pruneBlocksByRoots(s *ConsensusStore, roots [][32]byte) int { + if len(roots) == 0 { + return 0 + } + + keys := make([][]byte, len(roots)) + for i, root := range roots { + k := make([]byte, 32) + copy(k, root[:]) + keys[i] = k + } + + wb, err := s.Backend.BeginWrite() + if err != nil { + return 0 + } + wb.DeleteBatch(storage.TableBlockHeaders, keys) + wb.DeleteBatch(storage.TableBlockBodies, keys) + wb.DeleteBatch(storage.TableBlockSignatures, keys) + wb.Commit() + return len(roots) +} + +// pruneLiveChain removes LiveChain entries with slot < finalizedSlot. 
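The gating arithmetic in `PeriodicPrune` is easy to get backwards, so here is the trigger condition isolated as a pure function (the name `shouldPeriodicPrune` is invented for the sketch): fire only on an interval boundary, and only when finalization is strictly more than two intervals behind. With `PruningIntervalSlots = 7200`, the earliest possible trigger is slot 21600.

```go
package main

import "fmt"

const pruningIntervalSlots = 7200 // ~8 hours at 4s/slot, as in store_prune.go

// shouldPeriodicPrune reproduces PeriodicPrune's gating: only on an
// interval boundary, and only when finalization has stalled more than
// 2*pruningIntervalSlots behind the current slot.
func shouldPeriodicPrune(currentSlot, finalizedSlot uint64) bool {
	if currentSlot == 0 || currentSlot%pruningIntervalSlots != 0 {
		return false
	}
	return finalizedSlot+2*pruningIntervalSlots < currentSlot
}

func main() {
	fmt.Println(shouldPeriodicPrune(21600, 100))  // boundary, badly stalled
	fmt.Println(shouldPeriodicPrune(21600, 7200)) // exactly 2 intervals behind
	fmt.Println(shouldPeriodicPrune(21601, 100))  // not on a boundary
}
```

Writing the stall check as `finalizedSlot+2*interval < currentSlot` (addition on the small side) also sidesteps the uint64 underflow a naive `currentSlot-finalizedSlot > 2*interval` would risk if the operands were ever swapped.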
+func pruneLiveChain(s *ConsensusStore, finalizedSlot uint64) int {
+ rv, err := s.Backend.BeginRead()
+ if err != nil {
+ return 0
+ }
+
+ iter, err := rv.PrefixIterator(storage.TableLiveChain, nil)
+ if err != nil {
+ return 0
+ }
+ defer iter.Close()
+
+ var keysToDelete [][]byte
+ for iter.Next() {
+ key := iter.Key()
+ if len(key) < 8 {
+ continue
+ }
+ slot, _ := storage.DecodeLiveChainKey(key)
+ if slot < finalizedSlot {
+ k := make([]byte, len(key))
+ copy(k, key)
+ keysToDelete = append(keysToDelete, k)
+ }
+ }
+
+ if len(keysToDelete) == 0 {
+ return 0
+ }
+
+ wb, err := s.Backend.BeginWrite()
+ if err != nil {
+ return 0
+ }
+ wb.DeleteBatch(storage.TableLiveChain, keysToDelete)
+ wb.Commit()
+ return len(keysToDelete)
+}
diff --git a/node/store_tick.go b/node/store_tick.go
new file mode 100644
index 0000000..76fed06
--- /dev/null
+++ b/node/store_tick.go
@@ -0,0 +1,68 @@
+package node
+
+import (
+ "github.com/geanlabs/gean/types"
+)
+
+// OnTick processes a tick event, dispatching interval-specific actions.
+//
+// Returns any new aggregated attestations produced at interval 2.
+// Note: head/safe-target updates are NOT done here — they happen in Engine
+// which owns ForkChoice. This only handles payload promotion and aggregation.
+func OnTick(
+ s *ConsensusStore,
+ timestampMs uint64,
+ hasProposal bool,
+ isAggregator bool,
+) []*types.SignedAggregatedAttestation {
+ var newAggregates []*types.SignedAggregatedAttestation
+
+ // Convert UNIX timestamp (ms) to interval count since genesis.
+ // Guard before subtracting so the uint64 delta never underflows for
+ // timestamps earlier than genesis.
+ genesisTimeMs := s.Config().GenesisTime * 1000
+ var timeDeltaMs uint64
+ if timestampMs > genesisTimeMs {
+ timeDeltaMs = timestampMs - genesisTimeMs
+ }
+ time := timeDeltaMs / types.MillisecondsPerInterval
+
+ // Fast-forward if more than a slot behind.
+ // Use guard to prevent uint64 underflow.
+ if time > s.Time() && time-s.Time() > types.IntervalsPerSlot { + s.SetTime(time - types.IntervalsPerSlot) + } + + for s.Time() < time { + s.SetTime(s.Time() + 1) + + interval := s.Time() % types.IntervalsPerSlot + + // has_proposal only signaled for the final tick. + isFinalTick := s.Time() == time + shouldSignalProposal := hasProposal && isFinalTick + + switch interval { + case 0: + // Start of slot — promote attestations if proposal exists. + if shouldSignalProposal { + s.PromoteNewToKnown() + // Head update happens in Engine. + } + case 1: + // Vote propagation — no store action. + case 2: + // Aggregation interval. + if isAggregator { + aggs := AggregateCommitteeSignatures(s) + newAggregates = append(newAggregates, aggs...) + } + case 3: + // Safe target update happens in Engine (it owns ForkChoice). + case 4: + // End of slot — promote accumulated attestations. + s.PromoteNewToKnown() + // Head update happens in Engine. + } + } + + return newAggregates +} diff --git a/node/store_validate.go b/node/store_validate.go new file mode 100644 index 0000000..9800a6c --- /dev/null +++ b/node/store_validate.go @@ -0,0 +1,51 @@ +package node + +import ( + "github.com/geanlabs/gean/types" +) + +// ValidateAttestationData checks 9 validation branches for incoming attestations. +func ValidateAttestationData(s *ConsensusStore, data *types.AttestationData) error { + // 1-3. Availability: source, target, head blocks must exist. + sourceHeader := s.GetBlockHeader(data.Source.Root) + if sourceHeader == nil { + return errUnknownSourceBlock(data.Source.Root) + } + targetHeader := s.GetBlockHeader(data.Target.Root) + if targetHeader == nil { + return errUnknownTargetBlock(data.Target.Root) + } + headHeader := s.GetBlockHeader(data.Head.Root) + if headHeader == nil { + return errUnknownHeadBlock(data.Head.Root) + } + + // 4. Topology: source.slot <= target.slot. + if data.Source.Slot > data.Target.Slot { + return errSourceExceedsTarget() + } + + // 5. 
Topology: head.slot >= target.slot. + if data.Head.Slot < data.Target.Slot { + return errHeadOlderThanTarget(data.Head.Slot, data.Target.Slot) + } + + // 6-8. Consistency: checkpoint slots match actual block slots. + if sourceHeader.Slot != data.Source.Slot { + return errSourceSlotMismatch(data.Source.Slot, sourceHeader.Slot) + } + if targetHeader.Slot != data.Target.Slot { + return errTargetSlotMismatch(data.Target.Slot, targetHeader.Slot) + } + if headHeader.Slot != data.Head.Slot { + return errHeadSlotMismatch(data.Head.Slot, headHeader.Slot) + } + + // 9. Time: attestation not > 1 slot in future. + currentSlot := s.Time() / types.IntervalsPerSlot + if data.Slot > currentSlot+1 { + return errAttestationTooFarInFuture(data.Slot, currentSlot) + } + + return nil +} diff --git a/node/sync.go b/node/sync.go deleted file mode 100644 index aa29765..0000000 --- a/node/sync.go +++ /dev/null @@ -1,338 +0,0 @@ -package node - -import ( - "context" - "fmt" - "strings" - "time" - - "github.com/geanlabs/gean/chain/forkchoice" - "github.com/geanlabs/gean/network/reqresp" - "github.com/geanlabs/gean/observability/logging" - "github.com/geanlabs/gean/types" - "github.com/libp2p/go-libp2p/core/peer" -) - -func isMissingParentStateErr(err error) bool { - return err != nil && strings.Contains(err.Error(), "parent state not found") -} - -// syncWithPeer exchanges status and fetches missing blocks from a single peer. -// It walks backwards from the peer's head, keeping fetched blocks in memory, -// then processes them in forward order. Each root is marked pending immediately -// to prevent duplicate fetches by concurrent sync paths. -// -// The backward walk is capped at maxBackfillDepth (512) to prevent resource -// exhaustion from deep chains, matching leanSpec MAX_BACKFILL_DEPTH. 
-func (n *Node) syncWithPeer(ctx context.Context, pid peer.ID) bool { - if !n.canSyncWithPeer(pid) { - return false - } - if !n.acquirePeerSlot(pid) { - n.log.Debug("peer at max concurrent requests, skipping", "peer_id", pid.String()) - return false - } - defer n.releasePeerSlot(pid) - - status := n.FC.GetStatus() - ourStatus := reqresp.Status{ - Finalized: &types.Checkpoint{Root: status.FinalizedRoot, Slot: status.FinalizedSlot}, - Head: &types.Checkpoint{Root: status.Head, Slot: status.HeadSlot}, - } - - peerStatus, err := reqresp.RequestStatus(ctx, n.Host.P2P, pid, ourStatus) - if err != nil { - n.log.Debug("status exchange failed", "peer_id", pid.String(), "err", err) - n.recordSyncFailure(pid) - return false - } - n.log.Info("status exchanged", - "peer_id", pid.String(), - "local_head_slot", status.HeadSlot, - "local_head_root", logging.LongHash(status.Head), - "local_finalized_slot", status.FinalizedSlot, - "local_finalized_root", logging.LongHash(status.FinalizedRoot), - "peer_head_slot", peerStatus.Head.Slot, - "peer_head_root", logging.LongHash(peerStatus.Head.Root), - "peer_finalized_slot", peerStatus.Finalized.Slot, - "peer_finalized_root", logging.LongHash(peerStatus.Finalized.Root), - ) - - // Skip sync only if peer is strictly behind us, or at the exact same position. - if peerStatus.Head.Slot < status.HeadSlot { - return false - } - if peerStatus.Head.Slot == status.HeadSlot && peerStatus.Head.Root == status.Head { - return false - } - - // Walk backwards from peer's head, fetching blocks and keeping them. - // Each root is marked pending immediately (before fetch) to prevent - // concurrent sync paths from requesting the same root. - var pending []*types.SignedBlockWithAttestation - var pendingMarked [][32]byte - nextRoot := peerStatus.Head.Root - - for i := 0; i < maxBackfillDepth; i++ { - if n.FC.HasState(nextRoot) { - break - } - - // Skip roots already being fetched by another sync path. 
- if n.isRootPending(nextRoot) { - n.log.Debug("skipping already-pending root", "root", logging.LongHash(nextRoot)) - break - } - - // Mark this root as pending BEFORE requesting it, matching - // leanSpec BackfillSync._pending pattern (backfill_sync.py:164). - n.markRootPending(nextRoot) - pendingMarked = append(pendingMarked, nextRoot) - - blocks, err := reqresp.RequestBlocksByRoot(ctx, n.Host.P2P, pid, [][32]byte{nextRoot}) - if err != nil || len(blocks) == 0 { - n.log.Debug("blocks_by_root failed during sync walk", - "peer_id", pid.String(), - "requested_root", logging.LongHash(nextRoot), - "err", err, - ) - n.recordSyncFailure(pid) - break - } - - sb := blocks[0] - pending = append(pending, sb) - nextRoot = sb.Message.Block.ParentRoot - } - - // Always clear pending roots when done, even on failure. - defer n.clearPendingRoots(pendingMarked) - - if len(pending) == 0 { - return false - } - - // Check if we reached a known ancestor with state. - if !n.FC.HasState(nextRoot) { - n.log.Debug("sync walk did not reach known ancestor with state", - "peer_id", pid.String(), - "ancestor_root", logging.LongHash(nextRoot), - "fetched", len(pending), - "max_depth", maxBackfillDepth, - ) - return false - } - - // Process in forward order (oldest first). Blocks were already fetched - // during the walk — no re-fetch needed. 
- synced := 0 - total := len(pending) - for i := len(pending) - 1; i >= 0; i-- { - sb := pending[i] - blockRoot, _ := sb.Message.Block.HashTreeRoot() - if err := n.FC.ProcessBlock(sb); err != nil { - n.log.Debug("sync block rejected", - "slot", sb.Message.Block.Slot, - "block_root", logging.LongHash(blockRoot), - "err", err, - ) - } else { - synced++ - n.log.Info("synced block", - "slot", sb.Message.Block.Slot, - "block_root", logging.LongHash(blockRoot), - "peer_id", pid.String(), - "progress", fmt.Sprintf("%d/%d", synced, total), - ) - } - } - - if synced > 0 { - n.recordSyncSuccess(pid) - } - return synced > 0 -} - -// recoverMissingParentSync attempts to fill a missing parent chain by syncing -// with connected peers. Rate-limited to prevent excessive request flooding -// when multiple gossip blocks arrive with missing parents in quick succession. -func (n *Node) recoverMissingParentSync(ctx context.Context, parentRoot [32]byte) bool { - if n.FC.HasState(parentRoot) { - return true - } - - // Rate-limit: skip if a recovery attempt happened recently. - n.recoveryMu.Lock() - if time.Since(n.lastRecoveryTime) < recoveryCooldown { - n.recoveryMu.Unlock() - return false - } - n.lastRecoveryTime = time.Now() - n.recoveryMu.Unlock() - - // Try peers until one succeeds. - for _, pid := range n.Host.P2P.Network().Peers() { - n.syncWithPeer(ctx, pid) - if n.FC.HasState(parentRoot) { - return true - } - } - return false -} - -// initialSync exchanges status with connected peers and requests any blocks -// we're missing. This allows a node that restarts mid-devnet to catch up. 
-func (n *Node) initialSync(ctx context.Context) { - peers := n.Host.P2P.Network().Peers() - n.log.Info("initial sync starting", "peer_count", len(peers)) - for _, pid := range peers { - n.syncWithPeer(ctx, pid) - } - status := n.FC.GetStatus() - n.log.Info("initial sync completed", - "head_slot", status.HeadSlot, - "head_root", logging.LongHash(status.Head), - "justified_slot", status.JustifiedSlot, - "justified_root", logging.LongHash(status.JustifiedRoot), - "finalized_slot", status.FinalizedSlot, - "finalized_root", logging.LongHash(status.FinalizedRoot), - ) -} - -// isBehindPeers reports whether our head is behind the highest head slot -// advertised by connected peers. -func (n *Node) isBehindPeers(ctx context.Context, status forkchoice.ChainStatus) (bool, uint64) { - maxPeerHeadSlot := status.HeadSlot - peers := n.Host.P2P.Network().Peers() - if len(peers) == 0 { - return false, maxPeerHeadSlot - } - - ourStatus := reqresp.Status{ - Finalized: &types.Checkpoint{Root: status.FinalizedRoot, Slot: status.FinalizedSlot}, - Head: &types.Checkpoint{Root: status.Head, Slot: status.HeadSlot}, - } - - for _, pid := range peers { - peerCtx, cancel := context.WithTimeout(ctx, 1200*time.Millisecond) - peerStatus, err := reqresp.RequestStatus(peerCtx, n.Host.P2P, pid, ourStatus) - cancel() - if err != nil || peerStatus.Head == nil { - continue - } - if peerStatus.Head.Slot > maxPeerHeadSlot { - maxPeerHeadSlot = peerStatus.Head.Slot - } - } - - behind := status.HeadSlot < maxPeerHeadSlot - return behind, maxPeerHeadSlot -} - -// --- Pending roots deduplication --- - -func (n *Node) isRootPending(root [32]byte) bool { - n.pendingRootsMu.Lock() - defer n.pendingRootsMu.Unlock() - if n.pendingRoots == nil { - return false - } - _, ok := n.pendingRoots[root] - return ok -} - -func (n *Node) markRootPending(root [32]byte) { - n.pendingRootsMu.Lock() - defer n.pendingRootsMu.Unlock() - if n.pendingRoots == nil { - n.pendingRoots = make(map[[32]byte]struct{}) - } - 
n.pendingRoots[root] = struct{}{} -} - -func (n *Node) clearPendingRoots(roots [][32]byte) { - n.pendingRootsMu.Lock() - defer n.pendingRootsMu.Unlock() - for _, root := range roots { - delete(n.pendingRoots, root) - } -} - -// --- Per-peer concurrency limiting --- - -// acquirePeerSlot checks if the peer has capacity for another in-flight -// request. Returns true if a slot was acquired, false if the peer is at -// maxConcurrentRequestsPerPeer. Matches leanSpec MAX_CONCURRENT_REQUESTS. -func (n *Node) acquirePeerSlot(pid peer.ID) bool { - n.peerInFlightMu.Lock() - defer n.peerInFlightMu.Unlock() - if n.peerInFlight == nil { - n.peerInFlight = make(map[peer.ID]int) - } - if n.peerInFlight[pid] >= maxConcurrentRequestsPerPeer { - return false - } - n.peerInFlight[pid]++ - return true -} - -func (n *Node) releasePeerSlot(pid peer.ID) { - n.peerInFlightMu.Lock() - defer n.peerInFlightMu.Unlock() - if n.peerInFlight == nil { - return - } - n.peerInFlight[pid]-- - if n.peerInFlight[pid] <= 0 { - delete(n.peerInFlight, pid) - } -} - -// --- Per-peer exponential backoff --- - -// canSyncWithPeer checks if enough time has passed since the last failure -// for this peer, using exponential backoff. -func (n *Node) canSyncWithPeer(pid peer.ID) bool { - n.peerBackoffMu.Lock() - defer n.peerBackoffMu.Unlock() - if n.peerBackoff == nil { - return true - } - state, ok := n.peerBackoff[pid] - if !ok { - return true - } - if state.failures >= maxSyncRetries { - // Reset after max retries so peer gets another chance eventually. 
- delete(n.peerBackoff, pid) - return true - } - backoff := initialBackoff - for i := 1; i < state.failures; i++ { - backoff *= backoffMultiplier - } - return time.Since(state.lastTried) >= backoff -} - -func (n *Node) recordSyncFailure(pid peer.ID) { - n.peerBackoffMu.Lock() - defer n.peerBackoffMu.Unlock() - if n.peerBackoff == nil { - n.peerBackoff = make(map[peer.ID]*peerSyncState) - } - state, ok := n.peerBackoff[pid] - if !ok { - state = &peerSyncState{} - n.peerBackoff[pid] = state - } - state.failures++ - state.lastTried = time.Now() -} - -func (n *Node) recordSyncSuccess(pid peer.ID) { - n.peerBackoffMu.Lock() - defer n.peerBackoffMu.Unlock() - if n.peerBackoff != nil { - delete(n.peerBackoff, pid) - } -} diff --git a/node/tick.go b/node/tick.go new file mode 100644 index 0000000..23989af --- /dev/null +++ b/node/tick.go @@ -0,0 +1,232 @@ +package node + +import ( + "context" + "fmt" + "time" + + "github.com/geanlabs/gean/logger" + "github.com/geanlabs/gean/types" +) + +// onTick processes an 800ms tick event. +func (e *Engine) onTick() { + timestampMs := uint64(time.Now().UnixMilli()) + + currentSlot := e.currentSlot(timestampMs) + currentInterval := e.currentInterval(timestampMs) + + SetCurrentSlot(currentSlot) + + // Check if we're the proposer for this slot. + hasProposal := false + var proposerValidatorID uint64 + if currentInterval == 0 && currentSlot > 0 { + proposerValidatorID, hasProposal = e.getOurProposer(currentSlot) + } + + // Tick the store — handles interval dispatch (promote attestations, aggregate). + newAggregates := OnTick(e.Store, timestampMs, hasProposal, e.IsAggregator) + + // Publish new aggregates from interval 2. + for _, agg := range newAggregates { + if e.P2P != nil { + e.P2P.PublishAggregatedAttestation(context.Background(), agg) + } + } + + // Interval 0: propose block if we're the proposer. + if hasProposal { + e.maybePropose(currentSlot, proposerValidatorID) + } + + // Interval 0/4: update head after attestation promotion. 
+ if currentInterval == 0 || currentInterval == 4 { + e.updateHead(false) + } + + // Interval 1: produce attestations + chain status log. + if currentInterval == 1 { + e.produceAttestations(currentSlot) + e.logChainStatus(currentSlot) + } + + // Interval 3: update safe target + periodic pruning fallback. + if currentInterval == 3 { + e.updateSafeTarget() + PeriodicPrune(e.Store, e.FC, currentSlot, e.Store.LatestFinalized().Slot) + } +} + +// updateHead runs LMD GHOST using known attestations. +func (e *Engine) updateHead(logTree bool) { + attestations := e.Store.ExtractLatestKnownAttestations() + justifiedRoot := e.Store.LatestJustified().Root + + // Feed attestations to fork choice vote store. + for vid, data := range attestations { + idx := e.FC.NodeIndex(data.Head.Root) + if idx >= 0 { + e.FC.Votes.SetKnown(vid, idx, data.Slot, data) + } + } + + oldHead := e.Store.Head() + newHead := e.FC.UpdateHead(justifiedRoot) + + if newHead != oldHead { + e.Store.SetHead(newHead) + if !types.IsZeroRoot(oldHead) { + newHeader := e.Store.GetBlockHeader(newHead) + if newHeader == nil { + return + } + justified := e.Store.LatestJustified() + finalized := e.Store.LatestFinalized() + + // Check if this is a real reorg (new head's parent != old head) + // or normal chain extension (new head is child of old head). 
+ isReorg := newHeader.ParentRoot != oldHead + + SetHeadSlot(newHeader.Slot) + SetLatestJustifiedSlot(justified.Slot) + SetLatestFinalizedSlot(finalized.Slot) + SetGossipSignatures(e.Store.GossipSignatures.Len()) + SetNewAggregatedPayloads(e.Store.NewPayloads.Len()) + SetKnownAggregatedPayloads(e.Store.KnownPayloads.Len()) + + if isReorg { + IncForkChoiceReorgs() + logger.Warn(logger.Forkchoice, "REORG slot=%d head_root=0x%x parent_root=0x%x (was 0x%x) justified_slot=%d justified_root=0x%x finalized_slot=%d finalized_root=0x%x", + newHeader.Slot, newHead, newHeader.ParentRoot, oldHead, + justified.Slot, justified.Root, + finalized.Slot, finalized.Root) + } else { + logger.Info(logger.Forkchoice, "head slot=%d head_root=0x%x parent_root=0x%x justified_slot=%d justified_root=0x%x finalized_slot=%d finalized_root=0x%x", + newHeader.Slot, newHead, newHeader.ParentRoot, + justified.Slot, justified.Root, + finalized.Slot, finalized.Root) + } + } + } +} + +// updateSafeTarget runs LMD GHOST with 2/3 threshold using all attestations. +func (e *Engine) updateSafeTarget() { + attestations := e.Store.ExtractLatestAllAttestations() + justifiedRoot := e.Store.LatestJustified().Root + + // Feed merged attestations to vote store as "new" for safe target. + for vid, data := range attestations { + idx := e.FC.NodeIndex(data.Head.Root) + if idx >= 0 { + e.FC.Votes.SetNew(vid, idx, data.Slot, data) + } + } + + headState := e.Store.GetState(e.Store.Head()) + if headState == nil { + return + } + numValidators := uint64(len(headState.Validators)) + + safeTarget := e.FC.UpdateSafeTarget(justifiedRoot, numValidators) + e.Store.SetSafeTarget(safeTarget) + + safeHeader := e.Store.GetBlockHeader(safeTarget) + if safeHeader != nil { + SetSafeTargetSlot(safeHeader.Slot) + } +} + +// logChainStatus prints a chain status summary every slot at interval 1. 
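The reorg test in `updateHead` compares only the new head's immediate parent against the old head. A small standalone sketch of that classification (`classifyHeadChange` is a name invented for illustration):

```go
package main

import "fmt"

// classifyHeadChange mirrors updateHead's reorg test: a new head whose
// parent is the old head is a plain chain extension; any other change of
// head means fork choice switched branches.
func classifyHeadChange(oldHead, newHead, newHeadParent [32]byte) string {
	if newHead == oldHead {
		return "unchanged"
	}
	if newHeadParent == oldHead {
		return "extension"
	}
	return "reorg"
}

func main() {
	a := [32]byte{1}
	b := [32]byte{2}
	c := [32]byte{3}
	fmt.Println(classifyHeadChange(a, b, a)) // b built directly on a
	fmt.Println(classifyHeadChange(a, c, b)) // c's parent is b, not a
}
```

One consequence of the single-parent test (in the sketch and in the diff alike): advancing more than one block along the same branch in a single head update is also flagged as a reorg, since only the immediate parent is compared.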
+ +func (e *Engine) logChainStatus(currentSlot uint64) { + headRoot := e.Store.Head() + headHeader := e.Store.GetBlockHeader(headRoot) + justified := e.Store.LatestJustified() + finalized := e.Store.LatestFinalized() + + headSlot := uint64(0) + parentRoot := types.ZeroRoot + stateRoot := types.ZeroRoot + if headHeader != nil { + headSlot = headHeader.Slot + parentRoot = headHeader.ParentRoot + stateRoot = headHeader.StateRoot + } + + behind := uint64(0) + if currentSlot > headSlot { + behind = currentSlot - headSlot + } + + peerCount := 0 + if e.P2P != nil { + peerCount = e.P2P.ConnectedPeers() + } + + gossipSigs := e.Store.GossipSignatures.Len() + knownPayloads := e.Store.KnownPayloads.Len() + statesCount := e.Store.StatesCount() + fcNodesCount := 0 + if e.FC != nil { + fcNodesCount = e.FC.Array.Len() + } + + // Build mesh info string with full topic paths. + meshInfo := "" + if e.P2P != nil { + meshSizes := e.P2P.TopicMeshSizes() + for topic, size := range meshSizes { + meshInfo += fmt.Sprintf("\n %-60s mesh_peers=%d", topic, size) + } + } + + logger.Info(logger.Chain, "\n\n+===============================================================+\n CHAIN STATUS: Current Slot: %d | Head Slot: %d | Behind: %d\n+---------------------------------------------------------------+\n Connected Peers: %d\n+---------------------------------------------------------------+\n Head Block Root: 0x%x\n Parent Block Root: 0x%x\n State Root: 0x%x\n+---------------------------------------------------------------+\n Latest Justified: Slot %6d | Root: 0x%x\n Latest Finalized: Slot %6d | Root: 0x%x\n+---------------------------------------------------------------+\n Gossip Sigs: %d | Known Payloads: %d | States: %d | FC Nodes: %d\n+---------------------------------------------------------------+\n Topics:%s\n+===============================================================+\n", + currentSlot, headSlot, behind, + peerCount, + headRoot, parentRoot, stateRoot, + justified.Slot, justified.Root, + 
finalized.Slot, finalized.Root, + gossipSigs, knownPayloads, statesCount, fcNodesCount, + meshInfo) +} + +// currentSlot derives the current slot from a timestamp. +func (e *Engine) currentSlot(timestampMs uint64) uint64 { + genesisMs := e.Store.Config().GenesisTime * 1000 + if timestampMs < genesisMs { + return 0 + } + return (timestampMs - genesisMs) / types.MillisecondsPerSlot +} + +// currentInterval derives the current interval within a slot. +func (e *Engine) currentInterval(timestampMs uint64) uint64 { + genesisMs := e.Store.Config().GenesisTime * 1000 + if timestampMs < genesisMs { + return 0 + } + totalIntervals := (timestampMs - genesisMs) / types.MillisecondsPerInterval + return totalIntervals % types.IntervalsPerSlot +} + +// getOurProposer checks if any of our validators is the proposer for this slot. +func (e *Engine) getOurProposer(slot uint64) (uint64, bool) { + if e.Keys == nil { + return 0, false + } + headState := e.Store.GetState(e.Store.Head()) + if headState == nil { + return 0, false + } + numValidators := headState.NumValidators() + + for _, vid := range e.Keys.ValidatorIDs() { + if types.IsProposer(slot, vid, numValidators) { + return vid, true + } + } + return 0, false +} diff --git a/node/ticker.go b/node/ticker.go deleted file mode 100644 index 955c1be..0000000 --- a/node/ticker.go +++ /dev/null @@ -1,132 +0,0 @@ -package node - -import ( - "context" - "fmt" - "time" - - "github.com/geanlabs/gean/observability/logging" - "github.com/geanlabs/gean/observability/metrics" -) - -// Run starts the main event loop. -func (n *Node) Run(ctx context.Context) error { - n.log.Info("node started", - "validators", fmt.Sprintf("%v", n.Validator.Indices), - "peers", len(n.Host.P2P.Network().Peers()), - ) - - // Register gossip handlers before syncing so blocks produced by peers - // during initial sync are not silently dropped. leanSpec requires nodes - // to subscribe to topics before connecting to peers. 
- if err := n.registerGossipHandlers(); err != nil { - n.log.Error("failed to register gossip handlers", "err", err) - return err - } - n.log.Info("gossip handlers registered, starting initial sync") - n.initialSync(ctx) - n.log.Info("initial sync completed") - - var lastSyncCheckSlot uint64 = ^uint64(0) - var lastLogSlot uint64 = ^uint64(0) - behindPeers := false - maxPeerHeadSlot := uint64(0) - - for { - wait := n.Clock.DurationUntilNextInterval() - timer := time.NewTimer(wait) - - select { - case <-ctx.Done(): - if !timer.Stop() { - select { - case <-timer.C: - default: - } - } - n.log.Info("node shutting down") - if n.API != nil { - n.API.Stop() - } - if err := n.Host.Close(); err != nil { - n.log.Warn("host close error", "err", err) - } - return nil - case <-timer.C: - } - - if n.Clock.IsBeforeGenesis() { - continue - } - slot := n.Clock.CurrentSlot() - interval := n.Clock.CurrentInterval() - hasProposal := interval == 0 && n.Validator.HasProposal(slot) - - // Advance fork choice time. - n.FC.AdvanceTimeMillis(n.Clock.CurrentTime(), hasProposal) - - status := n.FC.GetStatus() - - // Re-evaluate sync gating once per slot using peer head status. - if slot != lastSyncCheckSlot { - behindPeers, maxPeerHeadSlot = n.isBehindPeers(ctx, status) - if behindPeers { - // Try peers until one succeeds — avoid repeating the - // backward walk across every peer for the same chain. - for _, pid := range n.Host.P2P.Network().Peers() { - if n.syncWithPeer(ctx, pid) { - status = n.FC.GetStatus() - break - } - } - // Check locally if we caught up instead of re-querying - // all peers for status (saves P network requests). 
- behindPeers = status.HeadSlot < maxPeerHeadSlot - if behindPeers { - n.log.Warn( - "skipping validator duties while behind peers", - "slot", slot, - "head_slot", status.HeadSlot, - "finalized_slot", status.FinalizedSlot, - "max_peer_head_slot", maxPeerHeadSlot, - ) - } - } - lastSyncCheckSlot = slot - } - - // Execute validator duties unless we are behind peers' head. - if !behindPeers { - n.Validator.OnInterval(ctx, slot, interval) - } - - // Update metrics and log on slot boundary. - if slot != lastLogSlot { - start := time.Now() - // Refresh status for metrics if not already current. - status = n.FC.GetStatus() - - metrics.CurrentSlot.Set(float64(slot)) - metrics.HeadSlot.Set(float64(status.HeadSlot)) - metrics.LatestFinalizedSlot.Set(float64(status.FinalizedSlot)) - metrics.LatestJustifiedSlot.Set(float64(status.JustifiedSlot)) - peerCount := len(n.Host.P2P.Network().Peers()) - metrics.ConnectedPeers.WithLabelValues("gean").Set(float64(peerCount)) - - n.log.Info("slot", - "slot", slot, - "head_slot", status.HeadSlot, - "head_root", logging.LongHash(status.Head), - "finalized_slot", status.FinalizedSlot, - "finalized_root", logging.LongHash(status.FinalizedRoot), - "justified_slot", status.JustifiedSlot, - "justified_root", logging.LongHash(status.JustifiedRoot), - "behind_peers", behindPeers, - "max_peer_head", maxPeerHeadSlot, - "peers", peerCount, - "elapsed", logging.TimeSince(start), - ) - lastLogSlot = slot - } - } -} diff --git a/node/validator.go b/node/validator.go index d98b5c2..de6e330 100644 --- a/node/validator.go +++ b/node/validator.go @@ -2,254 +2,139 @@ package node import ( "context" - "encoding/hex" - "fmt" - "log/slog" "time" - pubsub "github.com/libp2p/go-libp2p-pubsub" - - "github.com/geanlabs/gean/chain/forkchoice" - "github.com/geanlabs/gean/chain/statetransition" - "github.com/geanlabs/gean/network/gossipsub" - "github.com/geanlabs/gean/observability/logging" - "github.com/geanlabs/gean/observability/metrics" + 
"github.com/geanlabs/gean/logger" "github.com/geanlabs/gean/types" + "github.com/geanlabs/gean/xmss" ) -// ValidatorDuties handles proposer, attester, and aggregator duties. -type ValidatorDuties struct { - Indices []uint64 - Keys map[uint64]forkchoice.Signer - FC *forkchoice.Store - Topics *gossipsub.Topics - PublishBlock func(context.Context, *pubsub.Topic, *types.SignedBlockWithAttestation) error - PublishAttestation func(context.Context, *pubsub.Topic, *types.SignedAttestation) error - PublishAggregatedAttestation func(context.Context, *pubsub.Topic, *types.SignedAggregatedAttestation) error - IsAggregator bool - Log *slog.Logger - lastProposedSlot map[uint64]uint64 -} - -// HasProposal reports whether this node has a proposer for the slot. -func (v *ValidatorDuties) HasProposal(slot uint64) bool { - for _, idx := range v.Indices { - if statetransition.IsProposer(idx, slot, v.FC.NumValidators()) { - return true - } +// maybePropose builds and publishes a block if we're the proposer. +// Uses store.ProduceBlockWithSignatures for greedy attestation selection. +func (e *Engine) maybePropose(slot, validatorID uint64) { + if e.Keys == nil { + return } - return false -} -// OnInterval executes validator duties for the current interval. -func (v *ValidatorDuties) OnInterval(ctx context.Context, slot, interval uint64) { - switch interval { - case 0: - v.TryPropose(ctx, slot) - case 1: - v.TryAttest(ctx, slot) - case 2: - if v.IsAggregator { - v.TryAggregate(ctx, slot) - } + // Skip if head is already at this slot (another proposer's block was imported). + if e.Store.HeadSlot() >= slot { + return } -} -// TryAggregate aggregates collected subnet attestation signatures and publishes -// SignedAggregatedAttestation messages on the aggregation gossip topic. -// Called at interval 2 only by aggregator nodes. 
-func (v *ValidatorDuties) TryAggregate(ctx context.Context, slot uint64) { - start := time.Now() - aggregated, err := v.FC.AggregateCommitteeSignatures() + logger.Info(logger.Validator, "proposing block slot=%d validator=%d", slot, validatorID) + + // Build block with greedy attestation selection. + block, attSigProofs, err := ProduceBlockWithSignatures(e.Store, slot, validatorID) if err != nil { - v.Log.Error("aggregation failed", "slot", slot, "err", err) + logger.Error(logger.Validator, "produce block failed: %v", err) return } - if len(aggregated) == 0 { - v.Log.Debug("no attestations to aggregate", "slot", slot) + // Produce proposer's own attestation. + attData := ProduceAttestationData(e.Store, slot) + if attData == nil { + logger.Error(logger.Validator, "failed to produce attestation data for proposal") return } - for _, saa := range aggregated { - if err := v.PublishAggregatedAttestation(ctx, v.Topics.Aggregation, saa); err != nil { - v.Log.Error("failed to publish aggregated attestation", - "slot", slot, - "err", err, - ) - } else { - v.Log.Info("published aggregated attestation", - "slot", slot, - "att_slot", saa.Data.Slot, - "participants", countBitlistParticipants(saa.Proof.Participants), - "participants_bitlist_bytes", len(saa.Proof.Participants), - "proof_size", len(saa.Proof.ProofData), - ) - } + proposerAtt := &types.Attestation{ + ValidatorID: validatorID, + Data: attData, } - duration := time.Since(start) - metrics.CommitteeSignaturesAggregationTime.Observe(duration.Seconds()) - v.Log.Info("aggregation complete", - "slot", slot, - "count", len(aggregated), - "duration", duration, - ) -} -func (v *ValidatorDuties) TryPropose(ctx context.Context, slot uint64) { - // Slot 0 is the anchor/genesis slot and should not produce a new block. - if slot == 0 { + // Sign proposer's attestation (this becomes the ProposerSignature in the block). 
+ signStart := time.Now() + attSig, err := e.Keys.SignAttestation(validatorID, attData) + ObservePqSigSigningTime(time.Since(signStart).Seconds()) + if err != nil { + logger.Error(logger.Validator, "sign proposer attestation failed: %v", err) return } - if v.lastProposedSlot == nil { - v.lastProposedSlot = make(map[uint64]uint64) + + signedBlock := &types.SignedBlockWithAttestation{ + Block: &types.BlockWithAttestation{ + Block: block, + ProposerAttestation: proposerAtt, + }, + Signature: &types.BlockSignatures{ + ProposerSignature: attSig, // attestation signature, NOT block signature + AttestationSignatures: attSigProofs, + }, } - for _, idx := range v.Indices { - if !statetransition.IsProposer(idx, slot, v.FC.NumValidators()) { - continue - } - if lastSlot, ok := v.lastProposedSlot[idx]; ok && lastSlot == slot { - continue - } - v.lastProposedSlot[idx] = slot + // Process locally first. + if err := OnBlock(e.Store, signedBlock, e.Keys.ValidatorIDs()); err != nil { + logger.Error(logger.Chain, "local block processing failed: %v", err) + return + } - kp, ok := v.Keys[idx] - if !ok { - v.Log.Error("proposer key not found", "validator", idx) - continue - } + // Register in fork choice. + bRoot, _ := block.HashTreeRoot() + e.FC.OnBlock(slot, bRoot, block.ParentRoot) + e.updateHead(false) - envelope, err := v.FC.ProduceBlock(slot, idx, kp) - if err != nil { - status := v.FC.GetStatus() - v.Log.Error("block proposal failed", - "slot", slot, - "proposer", idx, - "err", err, - "head_slot", status.HeadSlot, - "finalized_slot", status.FinalizedSlot, - ) - continue - } + // Store proposer's attestation signature in gossip for aggregation with C handle. + dataRoot, _ := attData.HashTreeRoot() + sigHandle, parseErr := xmss.ParseSignature(attSig[:]) + e.Store.GossipSignatures.InsertWithHandle(dataRoot, attData, validatorID, attSig, sigHandle, parseErr) - blockRoot, _ := envelope.Message.Block.HashTreeRoot() - - // Log signing confirmation. 
- proposerSig := envelope.Signature.ProposerSignature - v.Log.Info("block signed (XMSS)", - "slot", slot, - "proposer", idx, - "sig_size", fmt.Sprintf("%d bytes", len(proposerSig)), - "sig_prefix", hex.EncodeToString(proposerSig[:8]), - ) - - if err := v.PublishBlock(ctx, v.Topics.Block, envelope); err != nil { - v.Log.Error("failed to publish block", - "slot", slot, - "proposer", idx, - "block_root", logging.LongHash(blockRoot), - "err", err, - ) - } else { - v.Log.Info("proposed block", - "slot", slot, - "proposer", idx, - "block_root", logging.LongHash(blockRoot), - "parent_root", logging.LongHash(envelope.Message.Block.ParentRoot), - "state_root", logging.LongHash(envelope.Message.Block.StateRoot), - "attestations", len(envelope.Message.Block.Body.Attestations), - ) + // Publish to network. + if e.P2P != nil { + if err := e.P2P.PublishBlock(context.Background(), signedBlock); err != nil { + logger.Error(logger.Network, "publish block failed: %v", err) } } + + logger.Info(logger.Validator, "proposed block slot=%d block_root=0x%x attestations=%d", + slot, bRoot, len(block.Body.Attestations)) } -func (v *ValidatorDuties) TryAttest(ctx context.Context, slot uint64) { - for _, idx := range v.Indices { - // Skip if this validator is the proposer for this slot. - // The proposer already attests via ProposerAttestation in its block. - if statetransition.IsProposer(idx, slot, v.FC.NumValidators()) { - continue - } +// produceAttestations creates and publishes attestations for non-proposing validators. 
+func (e *Engine) produceAttestations(slot uint64) { + if e.Keys == nil { + return + } + + headState := e.Store.GetState(e.Store.Head()) + if headState == nil { + return + } + numValidators := headState.NumValidators() - kp, ok := v.Keys[idx] - if !ok { - v.Log.Error("validator key not found", "validator", idx) + attData := ProduceAttestationData(e.Store, slot) + if attData == nil { + return + } + + for _, vid := range e.Keys.ValidatorIDs() { + // Skip proposer — they already attested via block. + if types.IsProposer(slot, vid, numValidators) { continue } - signStart := time.Now() - sa, err := v.FC.ProduceAttestation(slot, idx, kp) - signDuration := time.Since(signStart) - metrics.PQSigAttestationSigningTime.Observe(signDuration.Seconds()) - + sStart := time.Now() + sig, err := e.Keys.SignAttestation(vid, attData) + ObservePqSigSigningTime(time.Since(sStart).Seconds()) if err != nil { - status := v.FC.GetStatus() - v.Log.Error("attestation failed", - "slot", slot, - "validator", idx, - "err", err, - "head_slot", status.HeadSlot, - "finalized_slot", status.FinalizedSlot, - ) + logger.Error(logger.Validator, "sign attestation failed validator=%d: %v", vid, err) continue } - // Log signing confirmation. - metrics.PQSigAttestationSignaturesTotal.Inc() - v.Log.Info("attestation signed (XMSS)", - "slot", slot, - "validator", idx, - "sig_size", fmt.Sprintf("%d bytes", len(sa.Signature)), - "sig_prefix", hex.EncodeToString(sa.Signature[:8]), - "signing_time", signDuration, - ) - - // Warn if no peers are subscribed — publish will be silently dropped with no error. 
- if topicPeerCount(v.Topics.SubnetAttestation) == 0 { - v.Log.Warn("attestation topic has 0 peers — published attestation will not be delivered", - "slot", slot, - "validator", idx, - ) + signedAtt := &types.SignedAttestation{ + ValidatorID: vid, + Data: attData, + Signature: sig, } - if err := v.PublishAttestation(ctx, v.Topics.SubnetAttestation, sa); err != nil { - v.Log.Error("failed to publish attestation", - "slot", slot, - "validator", idx, - "err", err, - ) - } else { - v.Log.Debug("published attestation", - "slot", slot, - "validator", idx, - "head_root", logging.LongHash(sa.Message.Head.Root), - "target_slot", sa.Message.Target.Slot, - "target_root", logging.LongHash(sa.Message.Target.Root), - "source_slot", sa.Message.Source.Slot, - "source_root", logging.LongHash(sa.Message.Source.Root), - ) - } - } -} + logger.Info(logger.Validator, "produced attestation slot=%d validator=%d", slot, vid) -func countBitlistParticipants(bits []byte) int { - numBits := uint64(statetransition.BitlistLen(bits)) - count := 0 - for i := uint64(0); i < numBits; i++ { - if statetransition.GetBit(bits, i) { - count++ + // Publish to subnet. + if e.P2P != nil { + if err := e.P2P.PublishAttestation(context.Background(), signedAtt, e.CommitteeCount); err != nil { + logger.Error(logger.Network, "publish attestation failed validator=%d: %v", vid, err) + } else { + logger.Info(logger.Network, "published attestation to network slot=%d validator=%d", slot, vid) + } } } - return count -} - -// topicPeerCount safely returns the number of peers subscribed to a pubsub topic. -// Returns 0 if the topic is nil or has no backing PubSub instance (e.g. in tests). 
-func topicPeerCount(topic *pubsub.Topic) (n int) { - if topic == nil { - return 0 - } - defer func() { recover() }() //nolint:errcheck - return len(topic.ListPeers()) } diff --git a/node/validator_test.go b/node/validator_test.go deleted file mode 100644 index 12f4dd6..0000000 --- a/node/validator_test.go +++ /dev/null @@ -1,206 +0,0 @@ -package node_test - -import ( - "context" - "testing" - - "github.com/geanlabs/gean/chain/forkchoice" - "github.com/geanlabs/gean/chain/statetransition" - "github.com/geanlabs/gean/network/gossipsub" - "github.com/geanlabs/gean/node" - "github.com/geanlabs/gean/observability/logging" - "github.com/geanlabs/gean/storage/memory" - "github.com/geanlabs/gean/types" - pubsub "github.com/libp2p/go-libp2p-pubsub" -) - -type testSigner struct { - sig []byte -} - -func (s *testSigner) Sign(epoch uint32, message [32]byte) ([]byte, error) { - if s.sig != nil { - return s.sig, nil - } - out := make([]byte, 3112) - out[0] = 0xAA - return out, nil -} - -func TestValidatorDuties_TryAttest_SignsAndPublishes(t *testing.T) { - // Setup - numValidators := uint64(3) - state := statetransition.GenerateGenesis(1000, makeTestValidators(numValidators)) - emptyBody := &types.BlockBody{Attestations: []*types.AggregatedAttestation{}} - genesisBlock := &types.Block{ - Slot: 0, - ProposerIndex: 0, - ParentRoot: types.ZeroHash, - StateRoot: types.ZeroHash, - Body: emptyBody, - } - stateRoot, _ := state.HashTreeRoot() - genesisBlock.StateRoot = stateRoot - - store := memory.New() - fc := forkchoice.NewStore(state, genesisBlock, store) - - // Mock keys - keys := make(map[uint64]forkchoice.Signer) - expectedSig := make([]byte, 3112) - expectedSig[0] = 0xAA // Marker - keys[1] = &testSigner{sig: expectedSig} - - // Capture published attestation - var publishedAtt *types.SignedAttestation - publishFunc := func(ctx context.Context, topic *pubsub.Topic, sa *types.SignedAttestation) error { - publishedAtt = sa - return nil - } - - duties := &node.ValidatorDuties{ - 
Indices: []uint64{1}, - Keys: keys, - FC: fc, - Topics: &gossipsub.Topics{SubnetAttestation: &pubsub.Topic{}}, // Dummy topic - PublishAttestation: publishFunc, - Log: logging.NewComponentLogger(logging.CompValidator), - } - - // Action: validator 1 attests at slot 0 - duties.TryAttest(context.Background(), 0) - - // Verify - if publishedAtt == nil { - t.Fatal("expected PublishAttestation to be called") - } - if publishedAtt.ValidatorID != 1 { - t.Errorf("attester = %d, want 1", publishedAtt.ValidatorID) - } - // Verify signature - if publishedAtt.Signature[0] != 0xAA { - t.Errorf("signature not matching mock signer output") - } -} - -func TestValidatorDuties_TryPropose_SignsAndPublishes(t *testing.T) { - // Setup - numValidators := uint64(3) - state := statetransition.GenerateGenesis(1000, makeTestValidators(numValidators)) - emptyBody := &types.BlockBody{Attestations: []*types.AggregatedAttestation{}} - genesisBlock := &types.Block{ - Slot: 0, - ProposerIndex: 0, - ParentRoot: types.ZeroHash, - StateRoot: types.ZeroHash, - Body: emptyBody, - } - stateRoot, _ := state.HashTreeRoot() - genesisBlock.StateRoot = stateRoot - - store := memory.New() - fc := forkchoice.NewStore(state, genesisBlock, store) - - // Mock keys - keys := make(map[uint64]forkchoice.Signer) - expectedSig := make([]byte, 3112) - expectedSig[0] = 0xBB // Marker - keys[1] = &testSigner{sig: expectedSig} - - // Capture published block - var publishedBlock *types.SignedBlockWithAttestation - publishFunc := func(ctx context.Context, topic *pubsub.Topic, sb *types.SignedBlockWithAttestation) error { - publishedBlock = sb - return nil - } - - duties := &node.ValidatorDuties{ - Indices: []uint64{1}, - Keys: keys, - FC: fc, - Topics: &gossipsub.Topics{Block: &pubsub.Topic{}}, // Dummy topic - PublishBlock: publishFunc, - Log: logging.NewComponentLogger(logging.CompValidator), - } - - // Action: validator 1 proposes at slot 1 - // 3 validators. Proposer = slot % 3. 1 % 3 = 1. Yes. 
- duties.TryPropose(context.Background(), 1) - - // Verify - if publishedBlock == nil { - t.Fatal("expected PublishBlock to be called") - } - if publishedBlock.Message.Block.ProposerIndex != 1 { - t.Errorf("proposer = %d, want 1", publishedBlock.Message.Block.ProposerIndex) - } - - if publishedBlock.Signature.ProposerSignature[0] != 0xBB { - t.Errorf("signature not matching mock signer output") - } - - // Proposed blocks must not be inserted into forkchoice storage until they are - // processed through the normal block import path (gossip/reqresp/sync). - blockRoot, _ := publishedBlock.Message.Block.HashTreeRoot() - if _, ok := fc.GetSignedBlock(blockRoot); ok { - t.Fatalf("unexpected pre-inserted proposed block %x", blockRoot) - } -} - -func TestValidatorDuties_TryPropose_DuplicateIndexProposesOncePerSlot(t *testing.T) { - // Setup - numValidators := uint64(3) - state := statetransition.GenerateGenesis(1000, makeTestValidators(numValidators)) - emptyBody := &types.BlockBody{Attestations: []*types.AggregatedAttestation{}} - genesisBlock := &types.Block{ - Slot: 0, - ProposerIndex: 0, - ParentRoot: types.ZeroHash, - StateRoot: types.ZeroHash, - Body: emptyBody, - } - stateRoot, _ := state.HashTreeRoot() - genesisBlock.StateRoot = stateRoot - - store := memory.New() - fc := forkchoice.NewStore(state, genesisBlock, store) - - keys := make(map[uint64]forkchoice.Signer) - expectedSig := make([]byte, 3112) - expectedSig[0] = 0xCC - keys[1] = &testSigner{sig: expectedSig} - - publishCount := 0 - publishFunc := func(ctx context.Context, topic *pubsub.Topic, sb *types.SignedBlockWithAttestation) error { - publishCount++ - return nil - } - - duties := &node.ValidatorDuties{ - Indices: []uint64{1, 1}, - Keys: keys, - FC: fc, - Topics: &gossipsub.Topics{Block: &pubsub.Topic{}}, - PublishBlock: publishFunc, - Log: logging.NewComponentLogger(logging.CompValidator), - } - - // Action: slot 1 proposer should run once even if index appears twice. 
- duties.TryPropose(context.Background(), 1) - - if publishCount != 1 { - t.Fatalf("publish count = %d, want 1", publishCount) - } -} - -// Helpers -func makeTestValidators(n uint64) []*types.Validator { - vals := make([]*types.Validator, n) - for i := uint64(0); i < n; i++ { - vals[i] = &types.Validator{ - Pubkey: [52]byte{}, - Index: i, - } - } - return vals -} diff --git a/node/verify.go b/node/verify.go new file mode 100644 index 0000000..6cf559d --- /dev/null +++ b/node/verify.go @@ -0,0 +1,53 @@ +package node + +import ( + "fmt" + + "github.com/geanlabs/gean/types" + "github.com/geanlabs/gean/xmss" +) + +// verifyAttestation verifies a single XMSS signature. +func verifyAttestation(pubkey [types.PubkeySize]byte, slot uint32, message [32]byte, sig [types.SignatureSize]byte) (bool, error) { + return xmss.VerifySignatureSSZ(pubkey, slot, message, sig) +} + +// verifyAggregatedProof verifies an aggregated XMSS proof against participant pubkeys. +func verifyAggregatedProof( + state *types.State, + participantIDs []uint64, + data *types.AttestationData, + proofData []byte, +) error { + numValidators := uint64(len(state.Validators)) + + // Parse pubkeys for participants. + parsedPubkeys := make([]xmss.CPubKey, len(participantIDs)) + for i, vid := range participantIDs { + if vid >= numValidators { + return fmt.Errorf("validator %d out of range (%d)", vid, numValidators) + } + pk, err := xmss.ParsePublicKey(state.Validators[vid].Pubkey) + if err != nil { + // Free already parsed keys. 
+ for j := 0; j < i; j++ { + xmss.FreePublicKey(parsedPubkeys[j]) + } + return fmt.Errorf("parse pubkey for validator %d: %w", vid, err) + } + parsedPubkeys[i] = pk + } + defer func() { + for _, pk := range parsedPubkeys { + xmss.FreePublicKey(pk) + } + }() + + dataRoot, err := data.HashTreeRoot() + if err != nil { + return fmt.Errorf("hash tree root: %w", err) + } + + slot := uint32(data.Slot) + return xmss.VerifyAggregatedSignature(proofData, parsedPubkeys, dataRoot, slot) +} diff --git a/observability/grafana/client-dashboard.json b/observability/grafana/client-dashboard.json deleted file mode 100644 index 071fda6..0000000 --- a/observability/grafana/client-dashboard.json +++ /dev/null @@ -1,914 +0,0 @@ -{ - "annotations": { - "list": [ - { - "builtIn": 1, - "datasource": { - "type": "grafana", - "uid": "-- Grafana --" - }, - "enable": true, - "hide": true, - "iconColor": "rgba(0, 211, 255, 1)", - "name": "Annotations & Alerts", - "type": "dashboard" - } - ] - }, - "editable": true, - "fiscalYearStartMonth": 0, - "graphTooltip": 0, - "id": null, - "links": [], - "panels": [ - { - "collapsed": false, - "gridPos": { - "h": 1, - "w": 24, - "x": 0, - "y": 0 - }, - "id": 1, - "panels": [], - "title": "Overview", - "type": "row" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "fieldConfig": { - "defaults": { - "color": { - "mode": "thresholds" - }, - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - } - }, - "overrides": [] - }, - "gridPos": { - "h": 3, - "w": 6, - "x": 0, - "y": 1 - }, - "id": 2, - "options": { - "colorMode": "none", - "graphMode": "none", - "justifyMode": "auto", - "orientation": "auto", - "reduceOptions": { - "calcs": [ - "lastNotNull" - ], - "fields": "", - "values": false - }, - "textMode": "value" - }, - "targets": [ - { - "editorMode": "code", - "expr": "lean_head_slot{job=~\"$gean_job\"}", - "range": true, - "refId": 
"A" - } - ], - "title": "Head Slot", - "type": "stat" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "fieldConfig": { - "defaults": { - "color": { - "mode": "thresholds" - }, - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - } - }, - "overrides": [] - }, - "gridPos": { - "h": 3, - "w": 6, - "x": 6, - "y": 1 - }, - "id": 3, - "options": { - "colorMode": "none", - "graphMode": "none", - "justifyMode": "auto", - "orientation": "auto", - "reduceOptions": { - "calcs": [ - "lastNotNull" - ], - "fields": "", - "values": false - }, - "textMode": "value" - }, - "targets": [ - { - "editorMode": "code", - "expr": "lean_latest_justified_slot{job=~\"$gean_job\"}", - "range": true, - "refId": "A" - } - ], - "title": "Latest Justified Slot", - "type": "stat" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "fieldConfig": { - "defaults": { - "color": { - "mode": "thresholds" - }, - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - } - }, - "overrides": [] - }, - "gridPos": { - "h": 3, - "w": 6, - "x": 12, - "y": 1 - }, - "id": 4, - "options": { - "colorMode": "none", - "graphMode": "none", - "justifyMode": "auto", - "orientation": "auto", - "reduceOptions": { - "calcs": [ - "lastNotNull" - ], - "fields": "", - "values": false - }, - "textMode": "value" - }, - "targets": [ - { - "editorMode": "code", - "expr": "lean_latest_finalized_slot{job=~\"$gean_job\"}", - "range": true, - "refId": "A" - } - ], - "title": "Latest Finalized Slot", - "type": "stat" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "fieldConfig": { - "defaults": { - "color": { - "mode": "thresholds" - }, - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - 
"value": 80 - } - ] - }, - "unit": "none" - }, - "overrides": [] - }, - "gridPos": { - "h": 3, - "w": 6, - "x": 18, - "y": 1 - }, - "id": 5, - "options": { - "colorMode": "none", - "graphMode": "none", - "justifyMode": "auto", - "orientation": "auto", - "reduceOptions": { - "calcs": [ - "lastNotNull" - ], - "fields": "", - "values": false - }, - "textMode": "value" - }, - "targets": [ - { - "editorMode": "code", - "expr": "lean_head_slot{job=~\"$gean_job\"} - lean_latest_finalized_slot{job=~\"$gean_job\"}", - "range": true, - "refId": "A" - } - ], - "title": "Head - Finalized (slots)", - "type": "stat" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "unit": "none" - }, - "overrides": [] - }, - "gridPos": { - "h": 8, - "w": 12, - "x": 0, - "y": 4 - }, - "id": 6, - "options": { - "legend": { - "displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "mode": "single", - "sort": "none" - } - }, - "targets": [ - { - "editorMode": "code", - "expr": "lean_current_slot{job=~\"$gean_job\"}", - "legendFormat": "{{job}} current", - "range": true, - "refId": "A" - }, - { - "editorMode": "code", - "expr": "lean_head_slot{job=~\"$gean_job\"}", - "legendFormat": "{{job}} head", - "range": true, - "refId": "B" - } - ], - "title": "Current vs Head Slot", - "type": "timeseries" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "fieldConfig": { - "defaults": { - "color": { - "mode": "thresholds" - }, - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 1 - } - ] - } - }, - "overrides": [] - }, - "gridPos": { - "h": 4, - "w": 6, - "x": 12, - "y": 4 - }, - "id": 7, - "options": { - "colorMode": "none", - "graphMode": "none", - "justifyMode": "auto", - "orientation": "auto", - "reduceOptions": { - "calcs": [ - "lastNotNull" - 
], - "fields": "", - "values": false - }, - "textMode": "value" - }, - "targets": [ - { - "editorMode": "code", - "expr": "lean_validators_count{job=~\"$gean_job\"}", - "range": true, - "refId": "A" - } - ], - "title": "Validators Count", - "type": "stat" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "fieldConfig": { - "defaults": { - "color": { - "mode": "thresholds" - }, - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 1 - } - ] - } - }, - "overrides": [] - }, - "gridPos": { - "h": 4, - "w": 6, - "x": 18, - "y": 4 - }, - "id": 8, - "options": { - "colorMode": "none", - "graphMode": "none", - "justifyMode": "auto", - "orientation": "auto", - "reduceOptions": { - "calcs": [ - "lastNotNull" - ], - "fields": "", - "values": false - }, - "textMode": "value" - }, - "targets": [ - { - "editorMode": "code", - "expr": "lean_connected_peers{job=~\"$gean_job\"}", - "range": true, - "refId": "A" - } - ], - "title": "Connected Peers", - "type": "stat" - }, - { - "collapsed": false, - "gridPos": { - "h": 1, - "w": 24, - "x": 0, - "y": 12 - }, - "id": 9, - "panels": [], - "title": "Attestation Validation", - "type": "row" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "unit": "ops" - }, - "overrides": [] - }, - "gridPos": { - "h": 8, - "w": 12, - "x": 0, - "y": 13 - }, - "id": 10, - "options": { - "legend": { - "displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "mode": "single", - "sort": "none" - } - }, - "targets": [ - { - "editorMode": "code", - "expr": "rate(lean_attestations_valid_total{job=~\"$gean_job\"}[$__rate_interval])", - "legendFormat": "{{job}} valid/s", - "range": true, - "refId": "A" - }, - { - "editorMode": "code", - "expr": 
"rate(lean_attestations_invalid_total{job=~\"$gean_job\"}[$__rate_interval])", - "legendFormat": "{{job}} invalid/s", - "range": true, - "refId": "B" - } - ], - "title": "Valid / Invalid Attestation Rate", - "type": "timeseries" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "fieldConfig": { - "defaults": { - "color": { - "mode": "thresholds" - }, - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 5 - } - ] - }, - "unit": "percent" - }, - "overrides": [] - }, - "gridPos": { - "h": 4, - "w": 6, - "x": 12, - "y": 13 - }, - "id": 11, - "options": { - "colorMode": "none", - "graphMode": "none", - "justifyMode": "auto", - "orientation": "auto", - "reduceOptions": { - "calcs": [ - "lastNotNull" - ], - "fields": "", - "values": false - }, - "textMode": "value" - }, - "targets": [ - { - "editorMode": "code", - "expr": "100 * rate(lean_attestations_invalid_total{job=~\"$gean_job\"}[5m]) / clamp_min(rate(lean_attestations_valid_total{job=~\"$gean_job\"}[5m]) + rate(lean_attestations_invalid_total{job=~\"$gean_job\"}[5m]), 1e-9)", - "range": true, - "refId": "A" - } - ], - "title": "Invalid Attestation Ratio", - "type": "stat" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "unit": "s" - }, - "overrides": [] - }, - "gridPos": { - "h": 8, - "w": 6, - "x": 18, - "y": 13 - }, - "id": 12, - "options": { - "legend": { - "displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "mode": "single", - "sort": "none" - } - }, - "targets": [ - { - "editorMode": "code", - "expr": "histogram_quantile(0.50, sum by (le, job) (rate(lean_attestation_validation_time_seconds_bucket{job=~\"$gean_job\"}[$__rate_interval])))", - "legendFormat": "{{job}} p50", - "range": true, - "refId": "A" - }, - { - "editorMode": "code", - "expr": 
"histogram_quantile(0.95, sum by (le, job) (rate(lean_attestation_validation_time_seconds_bucket{job=~\"$gean_job\"}[$__rate_interval])))", - "legendFormat": "{{job}} p95", - "range": true, - "refId": "B" - }, - { - "editorMode": "code", - "expr": "histogram_quantile(0.99, sum by (le, job) (rate(lean_attestation_validation_time_seconds_bucket{job=~\"$gean_job\"}[$__rate_interval])))", - "legendFormat": "{{job}} p99", - "range": true, - "refId": "C" - } - ], - "title": "Attestation Validation Latency", - "type": "timeseries" - }, - { - "collapsed": false, - "gridPos": { - "h": 1, - "w": 24, - "x": 0, - "y": 21 - }, - "id": 13, - "panels": [], - "title": "Forkchoice + State Transition", - "type": "row" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "unit": "s" - }, - "overrides": [] - }, - "gridPos": { - "h": 8, - "w": 12, - "x": 0, - "y": 22 - }, - "id": 14, - "options": { - "legend": { - "displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "mode": "single", - "sort": "none" - } - }, - "targets": [ - { - "editorMode": "code", - "expr": "histogram_quantile(0.95, sum by (le, job) (rate(lean_fork_choice_block_processing_time_seconds_bucket{job=~\"$gean_job\"}[$__rate_interval])))", - "legendFormat": "{{job}} forkchoice p95", - "range": true, - "refId": "A" - }, - { - "editorMode": "code", - "expr": "histogram_quantile(0.95, sum by (le, job) (rate(lean_state_transition_time_seconds_bucket{job=~\"$gean_job\"}[$__rate_interval])))", - "legendFormat": "{{job}} state transition p95", - "range": true, - "refId": "B" - } - ], - "title": "Forkchoice vs State Transition p95", - "type": "timeseries" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "unit": "s" - }, - "overrides": [] - }, - "gridPos": { - "h": 
8, - "w": 12, - "x": 12, - "y": 22 - }, - "id": 15, - "options": { - "legend": { - "displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "mode": "single", - "sort": "none" - } - }, - "targets": [ - { - "editorMode": "code", - "expr": "histogram_quantile(0.95, sum by (le, job) (rate(lean_state_transition_slots_processing_time_seconds_bucket{job=~\"$gean_job\"}[$__rate_interval])))", - "legendFormat": "{{job}} slots p95", - "range": true, - "refId": "A" - }, - { - "editorMode": "code", - "expr": "histogram_quantile(0.95, sum by (le, job) (rate(lean_state_transition_block_processing_time_seconds_bucket{job=~\"$gean_job\"}[$__rate_interval])))", - "legendFormat": "{{job}} block p95", - "range": true, - "refId": "B" - }, - { - "editorMode": "code", - "expr": "histogram_quantile(0.95, sum by (le, job) (rate(lean_state_transition_attestations_processing_time_seconds_bucket{job=~\"$gean_job\"}[$__rate_interval])))", - "legendFormat": "{{job}} attestations p95", - "range": true, - "refId": "C" - } - ], - "title": "State Transition Sub-phase p95", - "type": "timeseries" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "unit": "ops" - }, - "overrides": [] - }, - "gridPos": { - "h": 8, - "w": 12, - "x": 0, - "y": 30 - }, - "id": 16, - "options": { - "legend": { - "displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "mode": "single", - "sort": "none" - } - }, - "targets": [ - { - "editorMode": "code", - "expr": "rate(lean_state_transition_slots_processed_total{job=~\"$gean_job\"}[$__rate_interval])", - "legendFormat": "{{job}} slots/s", - "range": true, - "refId": "A" - }, - { - "editorMode": "code", - "expr": "rate(lean_state_transition_attestations_processed_total{job=~\"$gean_job\"}[$__rate_interval])", - "legendFormat": "{{job}} attestations/s", - "range": true, - "refId": "B" - } - 
], - "title": "State Transition Throughput", - "type": "timeseries" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "unit": "none" - }, - "overrides": [] - }, - "gridPos": { - "h": 8, - "w": 12, - "x": 12, - "y": 30 - }, - "id": 17, - "options": { - "legend": { - "displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "mode": "single", - "sort": "none" - } - }, - "targets": [ - { - "editorMode": "code", - "expr": "lean_head_slot{job=~\"$gean_job\"}", - "legendFormat": "{{job}} head", - "range": true, - "refId": "A" - }, - { - "editorMode": "code", - "expr": "lean_latest_justified_slot{job=~\"$gean_job\"}", - "legendFormat": "{{job}} justified", - "range": true, - "refId": "B" - }, - { - "editorMode": "code", - "expr": "lean_latest_finalized_slot{job=~\"$gean_job\"}", - "legendFormat": "{{job}} finalized", - "range": true, - "refId": "C" - } - ], - "title": "Checkpoint Progress", - "type": "timeseries" - } - ], - "preload": false, - "schemaVersion": 41, - "tags": [ - "gean", - "leanmetrics", - "devnet1" - ], - "templating": { - "list": [ - { - "current": { - "selected": false, - "text": "Prometheus", - "value": "prometheus" - }, - "hide": 0, - "includeAll": false, - "label": "Datasource", - "multi": false, - "name": "DS_PROMETHEUS", - "options": [], - "query": "prometheus", - "refresh": 1, - "regex": "", - "skipUrlSync": false, - "type": "datasource" - }, - { - "allValue": ".*", - "current": { - "selected": true, - "text": "All", - "value": "$__all" - }, - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "definition": "label_values(lean_head_slot, job)", - "includeAll": true, - "label": "Gean Job", - "multi": true, - "name": "gean_job", - "options": [], - "query": { - "qryType": 1, - "query": "label_values(lean_head_slot, job)", - "refId": "PrometheusVariableQueryEditor-VariableQuery" - }, 
- "refresh": 1, - "regex": ".*gean.*", - "type": "query" - } - ] - }, - "time": { - "from": "now-30m", - "to": "now" - }, - "timepicker": {}, - "timezone": "browser", - "title": "Gean Devnet-1 Metrics", - "uid": "gean-devnet1-metrics", - "version": 1 -} \ No newline at end of file diff --git a/observability/grafana/devnet3-lean-ethereum-clients-dashboard.json b/observability/grafana/devnet3-lean-ethereum-clients-dashboard.json deleted file mode 100644 index 919cabc..0000000 --- a/observability/grafana/devnet3-lean-ethereum-clients-dashboard.json +++ /dev/null @@ -1,5518 +0,0 @@ -{ - "__inputs": [ - { - "name": "DS_PROMETHEUS", - "label": "prometheus", - "description": "", - "type": "datasource", - "pluginId": "prometheus", - "pluginName": "Prometheus" - } - ], - "__elements": {}, - "__requires": [ - { - "type": "grafana", - "id": "grafana", - "name": "Grafana", - "version": "12.3.2" - }, - { - "type": "datasource", - "id": "prometheus", - "name": "Prometheus", - "version": "1.0.0" - }, - { - "type": "panel", - "id": "stat", - "name": "Stat", - "version": "" - }, - { - "type": "panel", - "id": "text", - "name": "Text", - "version": "" - }, - { - "type": "panel", - "id": "timeseries", - "name": "Time series", - "version": "" - } - ], - "annotations": { - "list": [ - { - "builtIn": 1, - "datasource": { - "type": "grafana", - "uid": "-- Grafana --" - }, - "enable": true, - "hide": true, - "iconColor": "rgba(0, 211, 255, 1)", - "name": "Annotations & Alerts", - "type": "dashboard" - } - ] - }, - "editable": true, - "fiscalYearStartMonth": 0, - "graphTooltip": 0, - "id": 0, - "links": [], - "panels": [ - { - "collapsed": false, - "gridPos": { - "h": 1, - "w": 24, - "x": 0, - "y": 0 - }, - "id": 16, - "panels": [], - "title": "Overview", - "type": "row" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "fieldConfig": { - "defaults": { - "color": { - "fixedColor": "super-light-green", - "mode": "fixed" - }, - "mappings": [], - 
"thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - }, - "unit": "dateTimeAsIso" - }, - "overrides": [] - }, - "gridPos": { - "h": 4, - "w": 4, - "x": 0, - "y": 1 - }, - "id": 76, - "options": { - "colorMode": "value", - "graphMode": "none", - "justifyMode": "center", - "orientation": "auto", - "percentChangeColorMode": "standard", - "reduceOptions": { - "calcs": [ - "lastNotNull" - ], - "fields": "", - "values": false - }, - "showPercentChange": false, - "text": { - "valueSize": 20 - }, - "textMode": "value", - "wideLayout": true - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "editorMode": "code", - "exemplar": false, - "expr": "max (lean_node_start_time_seconds{job=~\"$job\"} * 1000)", - "instant": false, - "legendFormat": "__auto", - "range": true, - "refId": "A" - } - ], - "title": "Latest start time", - "type": "stat" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "fieldConfig": { - "defaults": { - "color": { - "mode": "thresholds" - }, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - } - }, - "overrides": [] - }, - "gridPos": { - "h": 4, - "w": 4, - "x": 4, - "y": 1 - }, - "id": 40, - "options": { - "colorMode": "none", - "graphMode": "none", - "justifyMode": "auto", - "orientation": "auto", - "percentChangeColorMode": "standard", - "reduceOptions": { - "calcs": [], - "fields": "", - "values": false - }, - "showPercentChange": false, - "textMode": "auto", - "wideLayout": true - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "exemplar": false, - "expr": "max(lean_attestation_committee_count{job=~\"$job\"})", - "instant": true, - "legendFormat": "__auto", - "range": false, - "refId": "A" - } - ], - "title": "Number of 
attestation committees", - "type": "stat" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "fieldConfig": { - "defaults": { - "color": { - "fixedColor": "super-light-green", - "mode": "fixed" - }, - "mappings": [ - { - "options": { - "0": { "text": "No", "color": "red" }, - "1": { "text": "Yes", "color": "green" } - }, - "type": "value" - } - ], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "red", - "value": 0 - }, - { - "color": "green", - "value": 1 - } - ] - } - }, - "overrides": [] - }, - "gridPos": { - "h": 4, - "w": 4, - "x": 8, - "y": 1 - }, - "id": 89, - "options": { - "colorMode": "none", - "graphMode": "none", - "justifyMode": "center", - "orientation": "auto", - "percentChangeColorMode": "standard", - "reduceOptions": { - "calcs": [ - "lastNotNull" - ], - "fields": "", - "values": false - }, - "showPercentChange": false, - "text": {}, - "textMode": "value", - "wideLayout": true - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "editorMode": "code", - "exemplar": false, - "expr": "max(lean_is_aggregator{job=~\"$job\"})", - "instant": false, - "legendFormat": "__auto", - "range": true, - "refId": "A" - } - ], - "title": "Aggregator", - "type": "stat" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "description": "Number of validators attached to each node", - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - } - }, - "fieldMinMax": false, - "mappings": [] - }, - "overrides": [ - { - "matcher": { - "id": "byName", - "options": "Total" - }, - "properties": [ - { - "id": "custom.hideFrom", - "value": { - "legend": false, - "tooltip": true, - "viz": true - } - } - ] - } - ] - }, - "gridPos": { - "h": 8, - "w": 6, - "x": 12, - "y": 1 - }, - "id": 51, - "options": { - "legend": { - "displayMode": "table", - "placement": "right", - "showLegend": 
true, - "values": [ - "value" - ] - }, - "pieType": "donut", - "reduceOptions": { - "calcs": [ - "lastNotNull" - ], - "fields": "", - "values": false - }, - "sort": "desc", - "tooltip": { - "hideZeros": false, - "mode": "single", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "exemplar": false, - "expr": "sum (lean_validators_count)", - "hide": false, - "instant": true, - "legendFormat": "Total", - "range": false, - "refId": "A" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "disableTextWrap": false, - "editorMode": "code", - "exemplar": false, - "expr": "sum by (job) (lean_validators_count{job=~\"$job\"})", - "fullMetaSearch": false, - "hide": false, - "includeNullMetadata": true, - "instant": true, - "interval": "", - "legendFormat": "{{job}}", - "range": false, - "refId": "B", - "useBackend": false - } - ], - "title": "Validators per node", - "type": "piechart" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "fieldConfig": { - "defaults": { - "color": { - "mode": "thresholds" - }, - "custom": { - "align": "auto", - "cellOptions": { - "type": "auto" - }, - "footer": { - "reducers": [] - }, - "inspect": false - }, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - } - }, - "overrides": [ - { - "matcher": { - "id": "byName", - "options": "name" - }, - "properties": [ - { - "id": "custom.width", - "value": 112 - } - ] - }, - { - "matcher": { - "id": "byName", - "options": "job" - }, - "properties": [ - { - "id": "custom.width", - "value": 116 - } - ] - } - ] - }, - "gridPos": { - "h": 8, - "w": 6, - "x": 18, - "y": 1 - }, - "id": 75, - "options": { - "cellHeight": "sm", - "showHeader": true - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "editorMode": 
"code", - "exemplar": false, - "expr": "lean_node_info{job=~\"$job\"}", - "instant": true, - "legendFormat": "__auto", - "range": false, - "refId": "A" - } - ], - "title": "Node info", - "transformations": [ - { - "id": "labelsToFields", - "options": { - "keepLabels": [ - "name", - "version", - "job" - ] - } - }, - { - "id": "merge", - "options": {} - }, - { - "id": "filterFieldsByName", - "options": { - "include": { - "names": [ - "name", - "version", - "job" - ] - } - } - } - ], - "type": "table" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "fieldConfig": { - "defaults": { - "color": { - "mode": "thresholds" - }, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - } - }, - "overrides": [] - }, - "gridPos": { - "h": 4, - "w": 4, - "x": 0, - "y": 5 - }, - "id": 88, - "options": { - "colorMode": "none", - "graphMode": "none", - "justifyMode": "auto", - "orientation": "auto", - "percentChangeColorMode": "standard", - "reduceOptions": { - "calcs": [], - "fields": "", - "values": false - }, - "showPercentChange": false, - "textMode": "auto", - "wideLayout": true - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "exemplar": false, - "expr": "max(lean_latest_finalized_slot{job=~\"$job\"})", - "instant": true, - "legendFormat": "__auto", - "range": false, - "refId": "A" - } - ], - "title": "Latest finalized slot", - "type": "stat" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "fieldConfig": { - "defaults": { - "color": { - "mode": "thresholds" - }, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - } - }, - "overrides": [] - }, - "gridPos": { - "h": 4, - "w": 4, - "x": 4, - "y": 
5 - }, - "id": 86, - "options": { - "colorMode": "none", - "graphMode": "none", - "justifyMode": "auto", - "orientation": "auto", - "percentChangeColorMode": "standard", - "reduceOptions": { - "calcs": [], - "fields": "", - "values": false - }, - "showPercentChange": false, - "textMode": "auto", - "wideLayout": true - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "exemplar": false, - "expr": "max(lean_latest_justified_slot{job=~\"$job\"})", - "instant": true, - "legendFormat": "__auto", - "range": false, - "refId": "A" - } - ], - "title": "Latest justified slot", - "type": "stat" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "fieldConfig": { - "defaults": { - "color": { - "mode": "thresholds" - }, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - } - }, - "overrides": [] - }, - "gridPos": { - "h": 4, - "w": 4, - "x": 8, - "y": 5 - }, - "id": 87, - "options": { - "colorMode": "none", - "graphMode": "none", - "justifyMode": "auto", - "orientation": "auto", - "percentChangeColorMode": "standard", - "reduceOptions": { - "calcs": [], - "fields": "", - "values": false - }, - "showPercentChange": false, - "textMode": "auto", - "wideLayout": true - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "exemplar": false, - "expr": "max(lean_head_slot{job=~\"$job\"})", - "instant": true, - "legendFormat": "__auto", - "range": false, - "refId": "A" - } - ], - "title": "Head slot", - "type": "stat" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - "axisCenteredZero": false, - 
"axisColorMode": "text", - "axisLabel": "", - "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 10, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": "auto", - "showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - } - }, - "overrides": [] - }, - "gridPos": { - "h": 14, - "w": 6, - "x": 0, - "y": 9 - }, - "id": 33, - "options": { - "legend": { - "calcs": [ - "max", - "lastNotNull" - ], - "displayMode": "table", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "hideZeros": false, - "mode": "multi", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "expr": "sum by (job)(lean_latest_finalized_slot{job=~\"$job\"})", - "interval": "", - "legendFormat": "{{job}}", - "range": true, - "refId": "A" - } - ], - "title": " Latest finalized slot", - "type": "timeseries" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - "axisCenteredZero": false, - "axisColorMode": "text", - "axisLabel": "", - "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 10, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - 
"lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": "auto", - "showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - } - }, - "overrides": [] - }, - "gridPos": { - "h": 14, - "w": 6, - "x": 6, - "y": 9 - }, - "id": 34, - "options": { - "legend": { - "calcs": [ - "max", - "lastNotNull" - ], - "displayMode": "table", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "hideZeros": false, - "mode": "multi", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "expr": " sum by (job) (lean_latest_justified_slot{job=~\"$job\"})", - "legendFormat": "{{job}}", - "range": true, - "refId": "A" - } - ], - "title": "Latest justified slot", - "type": "timeseries" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - "axisCenteredZero": false, - "axisColorMode": "text", - "axisLabel": "", - "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 10, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": "auto", - "showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "fieldMinMax": false, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ 
- { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - } - }, - "overrides": [] - }, - "gridPos": { - "h": 14, - "w": 6, - "x": 12, - "y": 9 - }, - "id": 35, - "options": { - "legend": { - "calcs": [ - "lastNotNull" - ], - "displayMode": "table", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "hideZeros": false, - "mode": "multi", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "expr": " sum by (job) (lean_head_slot{job=~\"$job\"})", - "legendFormat": "{{job}}", - "range": true, - "refId": "A" - } - ], - "title": "Head slot", - "type": "timeseries" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - "axisCenteredZero": false, - "axisColorMode": "text", - "axisLabel": "", - "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 10, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": "auto", - "showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "fieldMinMax": false, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - } - }, - "overrides": [] - }, - "gridPos": { - "h": 14, - "w": 6, - "x": 18, - "y": 9 - }, - "id": 66, - "options": { - "legend": { - "calcs": [ - "lastNotNull" - ], - "displayMode": "table", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - 
"hideZeros": false, - "mode": "multi", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "expr": " sum by (job) (lean_current_slot{job=~\"$job\"})", - "legendFormat": "{{job}}", - "range": true, - "refId": "A" - } - ], - "title": "Current slot", - "type": "timeseries" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "description": "Total number of processed slots in state transition function", - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - "axisCenteredZero": false, - "axisColorMode": "text", - "axisLabel": "", - "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 0, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": "auto", - "showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "decimals": 1, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - } - }, - "overrides": [] - }, - "gridPos": { - "h": 8, - "w": 12, - "x": 0, - "y": 23 - }, - "id": 72, - "options": { - "legend": { - "calcs": [], - "displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "hideZeros": false, - "mode": "single", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "exemplar": false, - "expr": 
"changes(lean_node_start_time_seconds{job=~\"$job\"}[1m])", - "instant": false, - "interval": "", - "legendFormat": "{{job}}", - "range": true, - "refId": "A" - } - ], - "title": "Start time", - "type": "timeseries" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "description": "", - "fieldConfig": { - "defaults": { - "color": { - "fixedColor": "text", - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - "axisCenteredZero": false, - "axisColorMode": "text", - "axisLabel": "", - "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 0, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": "auto", - "showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "decimals": 0, - "fieldMinMax": false, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - } - }, - "overrides": [ - { - "matcher": { - "id": "byName", - "options": "Total" - }, - "properties": [] - } - ] - }, - "gridPos": { - "h": 8, - "w": 6, - "x": 12, - "y": 23 - }, - "id": 44, - "options": { - "legend": { - "calcs": [], - "displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "hideZeros": false, - "mode": "multi", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "disableTextWrap": false, - "editorMode": "code", - "expr": " sum by (job) (lean_connected_peers{job=~\"$job\"})", - "fullMetaSearch": false, - "hide": false, - "includeNullMetadata": true, - "instant": 
false, - "interval": "", - "legendFormat": "{{job}}", - "range": true, - "refId": "B", - "useBackend": false - } - ], - "title": "Connected peers per node", - "type": "timeseries" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "description": "", - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - "axisCenteredZero": false, - "axisColorMode": "text", - "axisLabel": "", - "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 0, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": "auto", - "showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "fieldMinMax": false, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - } - }, - "overrides": [ - { - "matcher": { - "id": "byName", - "options": "Total" - }, - "properties": [] - } - ] - }, - "gridPos": { - "h": 8, - "w": 6, - "x": 18, - "y": 23 - }, - "id": 90, - "options": { - "legend": { - "calcs": [], - "displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "hideZeros": false, - "mode": "single", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "disableTextWrap": false, - "editorMode": "code", - "exemplar": false, - "expr": "sum by (job) (lean_attestation_committee_subnet{job=~\"$job\"})", - "fullMetaSearch": false, - "hide": false, - "includeNullMetadata": true, - "instant": true, - "interval": "", - "legendFormat": 
"{{job}}", - "range": false, - "refId": "B", - "useBackend": false - } - ], - "title": "Attestation committee subnet", - "type": "timeseries" - }, - { - "collapsed": false, - "gridPos": { - "h": 1, - "w": 24, - "x": 0, - "y": 31 - }, - "id": 57, - "panels": [], - "title": "Finalization/Justification Delay", - "type": "row" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - "axisCenteredZero": false, - "axisColorMode": "text", - "axisLabel": "", - "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 0, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": "auto", - "showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - } - }, - "overrides": [] - }, - "gridPos": { - "h": 10, - "w": 8, - "x": 0, - "y": 32 - }, - "id": 30, - "options": { - "legend": { - "calcs": [ - "min", - "max", - "mean", - "last" - ], - "displayMode": "table", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "hideZeros": false, - "mode": "multi", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "expr": "avg_over_time(lean_head_slot{job=~\"$job\"}[5m]) - avg_over_time(lean_latest_finalized_slot{job=~\"$job\"}[5m])", - "interval": "", - "legendFormat": "{{job}}", - "range": true, - "refId": "A" - } - 
], - "title": "Head - Finalized delay (slots)", - "type": "timeseries" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - "axisCenteredZero": false, - "axisColorMode": "text", - "axisLabel": "", - "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 0, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": "auto", - "showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - } - }, - "overrides": [] - }, - "gridPos": { - "h": 10, - "w": 8, - "x": 8, - "y": 32 - }, - "id": 29, - "options": { - "legend": { - "calcs": [ - "min", - "max", - "mean", - "last" - ], - "displayMode": "table", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "hideZeros": false, - "mode": "multi", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "expr": "avg_over_time(lean_head_slot{job=~\"$job\"}[5m]) - avg_over_time(lean_latest_justified_slot{job=~\"$job\"}[5m])", - "interval": "", - "legendFormat": "{{job}}", - "range": true, - "refId": "A" - } - ], - "title": "Head - Justified delay (slots)", - "type": "timeseries" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - 
"axisBorderShow": false, - "axisCenteredZero": false, - "axisColorMode": "text", - "axisLabel": "", - "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 0, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": "auto", - "showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - } - }, - "overrides": [] - }, - "gridPos": { - "h": 10, - "w": 8, - "x": 16, - "y": 32 - }, - "id": 31, - "options": { - "legend": { - "calcs": [ - "min", - "max", - "mean", - "last" - ], - "displayMode": "table", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "hideZeros": false, - "mode": "multi", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "expr": "avg_over_time(lean_latest_justified_slot{job=~\"$job\"}[5m]) - avg_over_time(lean_latest_finalized_slot{job=~\"$job\"}[5m])", - "interval": "", - "legendFormat": "{{job}}", - "range": true, - "refId": "A" - } - ], - "title": "Justified - Finalized delay (slots)", - "type": "timeseries" - }, - { - "collapsed": false, - "gridPos": { - "h": 1, - "w": 24, - "x": 0, - "y": 42 - }, - "id": 53, - "panels": [], - "title": "Peers", - "type": "row" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - "axisCenteredZero": false, - "axisColorMode": "text", 
- "axisLabel": "", - "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 0, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": "auto", - "showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - } - }, - "overrides": [] - }, - "gridPos": { - "h": 9, - "w": 12, - "x": 0, - "y": 43 - }, - "id": 54, - "options": { - "legend": { - "calcs": [], - "displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "hideZeros": false, - "mode": "multi", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "expr": " sum by (job) (lean_connected_peers{job=~\"$job\"})", - "interval": "", - "legendFormat": "{{job}}", - "range": true, - "refId": "A" - } - ], - "title": "Connected peers per node", - "type": "timeseries" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - "axisCenteredZero": false, - "axisColorMode": "text", - "axisLabel": "", - "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 0, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - 
"type": "linear" - }, - "showPoints": "auto", - "showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - } - }, - "overrides": [] - }, - "gridPos": { - "h": 9, - "w": 12, - "x": 12, - "y": 43 - }, - "id": 73, - "options": { - "legend": { - "calcs": [], - "displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "hideZeros": false, - "mode": "multi", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "exemplar": false, - "expr": " sum by (job, client) (lean_connected_peers{job=~\"$job\", client!=\"\"})", - "instant": false, - "interval": "", - "legendFormat": "{{job}} - {{client}}", - "range": true, - "refId": "A" - } - ], - "title": "Connected peers per node (detailed)", - "type": "timeseries" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - "axisCenteredZero": false, - "axisColorMode": "text", - "axisLabel": "", - "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 0, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": "auto", - "showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "decimals": 1, - "mappings": [], - "thresholds": { - "mode": "absolute", - 
"steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - } - }, - "overrides": [] - }, - "gridPos": { - "h": 9, - "w": 12, - "x": 0, - "y": 52 - }, - "id": 55, - "options": { - "legend": { - "calcs": [], - "displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "hideZeros": false, - "mode": "multi", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "expr": "sum by (job) (\n increase(lean_peer_connection_events_total{job=~\"$job\"}[$__rate_interval])\n )", - "interval": "", - "legendFormat": "{{job}}", - "range": true, - "refId": "A" - } - ], - "title": "Peer connection events", - "type": "timeseries" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - "axisCenteredZero": false, - "axisColorMode": "text", - "axisLabel": "", - "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 0, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": "auto", - "showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "decimals": 1, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - } - }, - "overrides": [] - }, - "gridPos": { - "h": 9, - "w": 12, - "x": 12, - "y": 52 - }, - "id": 56, - "options": { - "legend": { - "calcs": [], - "displayMode": "list", - "placement": "bottom", - 
"showLegend": true - }, - "tooltip": { - "hideZeros": false, - "mode": "multi", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "expr": "sum by (job) (\n increase(lean_peer_disconnection_events_total{job=~\"$job\"}[$__rate_interval])\n )", - "interval": "", - "legendFormat": "{{job}}", - "range": true, - "refId": "A" - } - ], - "title": "Peer disconnection events", - "type": "timeseries" - }, - { - "collapsed": false, - "gridPos": { - "h": 1, - "w": 24, - "x": 0, - "y": 61 - }, - "id": 45, - "panels": [], - "title": "PQ Signatures", - "type": "row" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "description": "Total number of individual attestation signatures", - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - "axisCenteredZero": false, - "axisColorMode": "text", - "axisLabel": "", - "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 0, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": "auto", - "showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "decimals": 1, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - } - }, - "overrides": [] - }, - "gridPos": { - "h": 8, - "w": 8, - "x": 0, - "y": 62 - }, - "id": 60, - "options": { - "legend": { - "calcs": [], - "displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "hideZeros": false, - 
"mode": "multi", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "exemplar": false, - "expr": "avg_over_time(\n sum by (job) (\n increase(lean_pq_sig_attestation_signatures_total{job=~\"$job\"}[$__rate_interval])\n )[5m:]\n)", - "instant": false, - "legendFormat": "{{job}}", - "range": true, - "refId": "A" - } - ], - "title": "Total number of attestation signatures", - "type": "timeseries" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "description": "Total number of valid individual attestation signatures", - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - "axisCenteredZero": false, - "axisColorMode": "text", - "axisLabel": "", - "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 0, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": "auto", - "showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "decimals": 1, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - } - }, - "overrides": [] - }, - "gridPos": { - "h": 8, - "w": 8, - "x": 8, - "y": 62 - }, - "id": 64, - "options": { - "legend": { - "calcs": [], - "displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "hideZeros": false, - "mode": "multi", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": 
"${DS_PROMETHEUS}" - }, - "editorMode": "code", - "exemplar": false, - "expr": "avg_over_time(\n sum by (job) (\n increase(lean_pq_sig_attestation_signatures_valid_total{job=~\"$job\"}[$__rate_interval])\n )[5m:]\n)", - "instant": false, - "legendFormat": "{{job}}", - "range": true, - "refId": "A" - } - ], - "title": "Total number of valid attestation signatures", - "type": "timeseries" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "description": "Total number of invalid individual attestation signatures", - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - "axisCenteredZero": false, - "axisColorMode": "text", - "axisLabel": "", - "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 0, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": "auto", - "showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "decimals": 1, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - } - }, - "overrides": [] - }, - "gridPos": { - "h": 8, - "w": 8, - "x": 16, - "y": 62 - }, - "id": 65, - "options": { - "legend": { - "calcs": [], - "displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "hideZeros": false, - "mode": "multi", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "exemplar": false, - "expr": "avg_over_time(\n sum by (job) (\n 
increase(lean_pq_sig_attestation_signatures_invalid_total{job=~\"$job\"}[$__rate_interval])\n )[5m:]\n)", - "instant": false, - "legendFormat": "{{job}}", - "range": true, - "refId": "A" - } - ], - "title": "Total number of invalid attestation signatures", - "type": "timeseries" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "description": "", - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - "axisCenteredZero": false, - "axisColorMode": "text", - "axisLabel": "", - "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 0, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": "auto", - "showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "decimals": 1, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - }, - "unit": "s" - }, - "overrides": [] - }, - "gridPos": { - "h": 8, - "w": 12, - "x": 0, - "y": 70 - }, - "id": 46, - "options": { - "legend": { - "calcs": [], - "displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "hideZeros": false, - "mode": "multi", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "exemplar": false, - "expr": "histogram_quantile(0.99, (rate(lean_pq_sig_attestation_signing_time_seconds_bucket{job=~\"$job\"}[$__rate_interval])))", - "instant": false, - "legendFormat": "{{job}}", - "range": true, - "refId": "A" - } - ], - 
"title": "Time taken to sign an attestation", - "type": "timeseries" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "description": "", - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - "axisCenteredZero": false, - "axisColorMode": "text", - "axisLabel": "", - "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 0, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": "auto", - "showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "decimals": 1, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - }, - "unit": "s" - }, - "overrides": [] - }, - "gridPos": { - "h": 8, - "w": 12, - "x": 12, - "y": 70 - }, - "id": 47, - "options": { - "legend": { - "calcs": [], - "displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "hideZeros": false, - "mode": "multi", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "exemplar": false, - "expr": "histogram_quantile(0.99, (rate(lean_pq_sig_attestation_verification_time_seconds_bucket{job=~\"$job\"}[$__rate_interval])))", - "instant": false, - "legendFormat": "{{job}}", - "range": true, - "refId": "A" - } - ], - "title": "Time taken to verify an attestation signature", - "type": "timeseries" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "description": "", - "fieldConfig": { 
- "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - "axisCenteredZero": false, - "axisColorMode": "text", - "axisLabel": "", - "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 0, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": "auto", - "showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "decimals": 1, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - } - }, - "overrides": [] - }, - "gridPos": { - "h": 8, - "w": 12, - "x": 0, - "y": 78 - }, - "id": 79, - "options": { - "legend": { - "calcs": [], - "displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "hideZeros": false, - "mode": "multi", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "exemplar": false, - "expr": "avg_over_time(\n sum by (job) (\n increase(lean_pq_sig_aggregated_signatures_total{job=~\"$job\"}[$__rate_interval])\n )[5m:]\n)", - "instant": false, - "legendFormat": "{{job}}", - "range": true, - "refId": "A" - } - ], - "title": "Total number of aggregated signatures", - "type": "timeseries" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "description": "", - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - "axisCenteredZero": false, - "axisColorMode": "text", - "axisLabel": "", - "axisPlacement": "auto", - 
"barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 0, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": "auto", - "showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "decimals": 1, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - } - }, - "overrides": [] - }, - "gridPos": { - "h": 8, - "w": 12, - "x": 12, - "y": 78 - }, - "id": 61, - "options": { - "legend": { - "calcs": [], - "displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "hideZeros": false, - "mode": "multi", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "exemplar": false, - "expr": "avg_over_time(\n sum by (job) (\n increase(lean_pq_sig_attestations_in_aggregated_signatures_total{job=~\"$job\"}[$__rate_interval])\n )[5m:]\n)", - "instant": false, - "legendFormat": "{{job}}", - "range": true, - "refId": "A" - } - ], - "title": "Total number of attestations included into aggregated signatures", - "type": "timeseries" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "description": "", - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - "axisCenteredZero": false, - "axisColorMode": "text", - "axisLabel": "", - "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 0, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, 
- "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": "auto", - "showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "decimals": 1, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - } - }, - "overrides": [] - }, - "gridPos": { - "h": 8, - "w": 12, - "x": 0, - "y": 86 - }, - "id": 77, - "options": { - "legend": { - "calcs": [], - "displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "hideZeros": false, - "mode": "multi", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "exemplar": false, - "expr": "avg_over_time(\n sum by (job) (\n increase(lean_pq_sig_aggregated_signatures_valid_total{job=~\"$job\"}[$__rate_interval])\n )[5m:]\n)", - "instant": false, - "legendFormat": "{{job}}", - "range": true, - "refId": "A" - } - ], - "title": "Total number of valid aggregated signatures", - "type": "timeseries" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "description": "", - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - "axisCenteredZero": false, - "axisColorMode": "text", - "axisLabel": "", - "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 0, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": "auto", - 
"showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "decimals": 1, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - } - }, - "overrides": [] - }, - "gridPos": { - "h": 8, - "w": 12, - "x": 12, - "y": 86 - }, - "id": 78, - "options": { - "legend": { - "calcs": [], - "displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "hideZeros": false, - "mode": "multi", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "exemplar": false, - "expr": "avg_over_time(\n sum by (job) (\n increase(lean_pq_sig_aggregated_signatures_invalid_total{job=~\"$job\"}[$__rate_interval])\n )[5m:]\n)", - "instant": false, - "legendFormat": "{{job}}", - "range": true, - "refId": "A" - } - ], - "title": "Total number of invalid aggregated signatures", - "type": "timeseries" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "description": "", - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - "axisCenteredZero": false, - "axisColorMode": "text", - "axisLabel": "", - "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 0, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": "auto", - "showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "decimals": 1, - "mappings": [], - 
"thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - }, - "unit": "s" - }, - "overrides": [] - }, - "gridPos": { - "h": 8, - "w": 12, - "x": 0, - "y": 94 - }, - "id": 62, - "options": { - "legend": { - "calcs": [], - "displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "hideZeros": false, - "mode": "multi", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "exemplar": false, - "expr": "histogram_quantile(0.99, (rate(lean_pq_sig_aggregated_signatures_building_time_seconds_bucket{job=~\"$job\"}[$__rate_interval])))", - "instant": false, - "legendFormat": "{{job}}", - "range": true, - "refId": "A" - } - ], - "title": "Time taken to build an aggregated signature", - "type": "timeseries" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "description": "", - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - "axisCenteredZero": false, - "axisColorMode": "text", - "axisLabel": "", - "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 0, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": "auto", - "showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "decimals": 1, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - }, - "unit": "s" - }, - "overrides": [] - }, - 
"gridPos": { - "h": 8, - "w": 12, - "x": 12, - "y": 94 - }, - "id": 63, - "options": { - "legend": { - "calcs": [], - "displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "hideZeros": false, - "mode": "multi", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "exemplar": false, - "expr": "histogram_quantile(0.99, (rate(lean_pq_sig_aggregated_signatures_verification_time_seconds_bucket{job=~\"$job\"}[$__rate_interval])))", - "instant": false, - "legendFormat": "{{job}}", - "range": true, - "refId": "A" - } - ], - "title": "Time taken to verify an aggregated signature", - "type": "timeseries" - }, - { - "collapsed": false, - "gridPos": { - "h": 1, - "w": 24, - "x": 0, - "y": 102 - }, - "id": 17, - "panels": [], - "title": "Fork-Choice", - "type": "row" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "description": "Time taken to process block in fork-choice", - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - "axisCenteredZero": false, - "axisColorMode": "text", - "axisLabel": "", - "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 0, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": "auto", - "showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - }, - "unit": "s" - }, - "overrides": [ - { 
- "__systemRef": "hideSeriesFrom", - "matcher": { - "id": "byNames", - "options": { - "mode": "exclude", - "names": [ - "zeam" - ], - "prefix": "All except:", - "readOnly": true - } - }, - "properties": [ - { - "id": "custom.hideFrom", - "value": { - "legend": false, - "tooltip": true, - "viz": true - } - } - ] - } - ] - }, - "gridPos": { - "h": 8, - "w": 12, - "x": 0, - "y": 103 - }, - "id": 19, - "options": { - "legend": { - "calcs": [], - "displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "hideZeros": false, - "mode": "multi", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "expr": "histogram_quantile(0.99, \n rate(lean_fork_choice_block_processing_time_seconds_bucket{job=~\"$job\"}[$__rate_interval])\n)", - "interval": "", - "legendFormat": "{{job}}", - "range": true, - "refId": "A" - } - ], - "title": "Block processing time", - "type": "timeseries" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "description": "Time taken to process block in fork-choice", - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - "axisCenteredZero": false, - "axisColorMode": "text", - "axisLabel": "", - "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 0, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": "auto", - "showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": 
"green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - }, - "unit": "s" - }, - "overrides": [] - }, - "gridPos": { - "h": 8, - "w": 12, - "x": 12, - "y": 103 - }, - "id": 85, - "options": { - "legend": { - "calcs": [], - "displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "hideZeros": false, - "mode": "multi", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "expr": "histogram_quantile(0.99, \n rate(lean_committee_signatures_aggregation_time_seconds_bucket{job=~\"$job\"}[$__rate_interval])\n)", - "interval": "", - "legendFormat": "{{job}}", - "range": true, - "refId": "A" - } - ], - "title": "Time taken to aggregate committee signatures", - "type": "timeseries" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - "axisCenteredZero": false, - "axisColorMode": "text", - "axisLabel": "", - "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 0, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": "auto", - "showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "decimals": 1, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - } - }, - "overrides": [] - }, - "gridPos": { - "h": 8, - "w": 12, - "x": 0, - "y": 111 - }, - "id": 68, - "options": { - "legend": { - "calcs": [], - 
"displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "hideZeros": false, - "mode": "multi", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "expr": "sum by (job) (\n increase(lean_fork_choice_reorgs_total{job=~\"$job\"}[$__rate_interval])\n)", - "interval": "", - "legendFormat": "{{job}}", - "range": true, - "refId": "A" - } - ], - "title": "Total number of fork choice reorgs", - "type": "timeseries" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - "axisCenteredZero": false, - "axisColorMode": "text", - "axisLabel": "", - "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 0, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": "auto", - "showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "decimals": 1, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - } - }, - "overrides": [] - }, - "gridPos": { - "h": 8, - "w": 12, - "x": 12, - "y": 111 - }, - "id": 69, - "options": { - "legend": { - "calcs": [], - "displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "hideZeros": false, - "mode": "multi", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - 
"editorMode": "code", - "expr": "sum by (job) (\n increase(lean_fork_choice_reorg_depth_bucket{job=~\"$job\"}[$__rate_interval])\n)", - "interval": "", - "legendFormat": "{{job}}", - "range": true, - "refId": "A" - } - ], - "title": "Depth of fork choice reorgs (in blocks)", - "type": "timeseries" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - "axisCenteredZero": false, - "axisColorMode": "text", - "axisLabel": "", - "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 0, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": "auto", - "showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "decimals": 1, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - } - }, - "overrides": [] - }, - "gridPos": { - "h": 8, - "w": 8, - "x": 0, - "y": 119 - }, - "id": 80, - "options": { - "legend": { - "calcs": [], - "displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "hideZeros": false, - "mode": "multi", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "exemplar": false, - "expr": " sum by (job) (lean_gossip_signatures{job=~\"$job\"})", - "instant": true, - "interval": "", - "legendFormat": "{{job}}", - "range": false, - "refId": "A" - } - ], - "title": "Number of gossip signatures", - "type": "timeseries" 
- }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - "axisCenteredZero": false, - "axisColorMode": "text", - "axisLabel": "", - "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 0, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": "auto", - "showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "decimals": 1, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - } - }, - "overrides": [] - }, - "gridPos": { - "h": 8, - "w": 8, - "x": 8, - "y": 119 - }, - "id": 81, - "options": { - "legend": { - "calcs": [], - "displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "hideZeros": false, - "mode": "multi", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "exemplar": false, - "expr": " sum by (job) (lean_latest_new_aggregated_payloads{job=~\"$job\"})", - "instant": true, - "interval": "", - "legendFormat": "{{job}}", - "range": false, - "refId": "A" - } - ], - "title": "Number of new aggregated payloads", - "type": "timeseries" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - "axisCenteredZero": false, - "axisColorMode": "text", - "axisLabel": "", 
- "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 0, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": "auto", - "showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "decimals": 1, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - } - }, - "overrides": [] - }, - "gridPos": { - "h": 8, - "w": 8, - "x": 16, - "y": 119 - }, - "id": 82, - "options": { - "legend": { - "calcs": [], - "displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "hideZeros": false, - "mode": "multi", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "exemplar": false, - "expr": " sum by (job) (lean_latest_known_aggregated_payloads{job=~\"$job\"})", - "instant": true, - "interval": "", - "legendFormat": "{{job}}", - "range": false, - "refId": "A" - } - ], - "title": "Number of known aggregated payloads", - "type": "timeseries" - }, - { - "collapsed": false, - "gridPos": { - "h": 1, - "w": 24, - "x": 0, - "y": 127 - }, - "id": 8, - "panels": [], - "title": "Attestations", - "type": "row" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - "axisCenteredZero": false, - "axisColorMode": "text", - "axisLabel": "", - "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 
0, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": "auto", - "showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "decimals": 1, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - } - }, - "overrides": [] - }, - "gridPos": { - "h": 8, - "w": 8, - "x": 0, - "y": 128 - }, - "id": 9, - "options": { - "legend": { - "calcs": [], - "displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "hideZeros": false, - "mode": "multi", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "expr": "avg_over_time(\n sum by (job) (\n increase(lean_attestations_valid_total{job=~\"$job\"}[$__rate_interval])\n )[5m:]\n)", - "interval": "", - "legendFormat": "{{job}}", - "range": true, - "refId": "A" - } - ], - "title": "FC Valid attestations", - "type": "timeseries" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - "axisCenteredZero": false, - "axisColorMode": "text", - "axisLabel": "", - "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 0, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": "auto", - 
"showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - } - }, - "overrides": [] - }, - "gridPos": { - "h": 8, - "w": 8, - "x": 8, - "y": 128 - }, - "id": 10, - "options": { - "legend": { - "calcs": [], - "displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "hideZeros": false, - "mode": "multi", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "expr": "avg_over_time(\n sum by (job) (increase(lean_attestations_invalid_total{job=~\"$job\"}[$__rate_interval]))[5m:]\n)", - "legendFormat": "{{job}}", - "range": true, - "refId": "A" - } - ], - "title": "FC Invalid attestations", - "type": "timeseries" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - "axisCenteredZero": false, - "axisColorMode": "text", - "axisLabel": "", - "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 0, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": "auto", - "showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - }, - "unit": 
"s" - }, - "overrides": [] - }, - "gridPos": { - "h": 8, - "w": 8, - "x": 16, - "y": 128 - }, - "id": 11, - "options": { - "legend": { - "calcs": [], - "displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "hideZeros": false, - "mode": "multi", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "expr": "histogram_quantile(0.99, rate(lean_attestation_validation_time_seconds_bucket{job=~\"$job\"}[$__rate_interval]))", - "legendFormat": "{{job}}", - "range": true, - "refId": "A" - } - ], - "title": "FC Attestations validation time", - "type": "timeseries" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - "axisCenteredZero": false, - "axisColorMode": "text", - "axisLabel": "", - "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 10, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": "auto", - "showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "fieldMinMax": false, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - } - }, - "overrides": [] - }, - "gridPos": { - "h": 8, - "w": 8, - "x": 0, - "y": 136 - }, - "id": 67, - "options": { - "legend": { - "calcs": [], - "displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "hideZeros": false, - "mode": "multi", - 
"sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "expr": " sum by (job) (lean_safe_target_slot{job=~\"$job\"})", - "legendFormat": "{{job}}", - "range": true, - "refId": "A" - } - ], - "title": "Safe target slot", - "type": "timeseries" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "description": "Total number of attestations processed in state transition function", - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - "axisCenteredZero": false, - "axisColorMode": "text", - "axisLabel": "", - "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 0, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": "auto", - "showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "decimals": 1, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - } - }, - "overrides": [] - }, - "gridPos": { - "h": 8, - "w": 8, - "x": 8, - "y": 136 - }, - "id": 27, - "options": { - "legend": { - "calcs": [], - "displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "hideZeros": false, - "mode": "multi", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "exemplar": false, - "expr": "avg_over_time(\n sum by (job) 
(increase(lean_state_transition_attestations_processed_total{job=~\"$job\"}[$__rate_interval]))[5m:]\n)", - "instant": false, - "legendFormat": "{{job}}", - "range": true, - "refId": "A" - } - ], - "title": "STF Processed attestations", - "type": "timeseries" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "description": "Time taken to process attestations in state transition function", - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - "axisCenteredZero": false, - "axisColorMode": "text", - "axisLabel": "", - "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 0, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": "auto", - "showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - }, - "unit": "s" - }, - "overrides": [] - }, - "gridPos": { - "h": 8, - "w": 8, - "x": 16, - "y": 136 - }, - "id": 28, - "options": { - "legend": { - "calcs": [], - "displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "hideZeros": false, - "mode": "multi", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "expr": "histogram_quantile(0.99,\n rate(lean_state_transition_attestations_processing_time_seconds_bucket{job=~\"$job\"}[$__rate_interval])\n)", - "legendFormat": "{{job}}", - "range": true, - "refId": "A" - } - ], 
- "title": "STF Attestations processing time", - "type": "timeseries" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - "axisCenteredZero": false, - "axisColorMode": "text", - "axisLabel": "", - "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 0, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": "auto", - "showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "decimals": 1, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - } - }, - "overrides": [] - }, - "gridPos": { - "h": 8, - "w": 12, - "x": 0, - "y": 144 - }, - "id": 83, - "options": { - "legend": { - "calcs": [], - "displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "hideZeros": false, - "mode": "multi", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "expr": "avg_over_time(\n sum by (job) (\n increase(lean_attestations_valid_total{job=~\"$job\"}[$__rate_interval])\n )[5m:]\n)", - "interval": "", - "legendFormat": "{{job}}", - "range": true, - "refId": "A" - } - ], - "title": "FC Valid attestations", - "type": "timeseries" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - 
"axisCenteredZero": false, - "axisColorMode": "text", - "axisLabel": "", - "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 0, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": "auto", - "showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - }, - "unit": "s" - }, - "overrides": [] - }, - "gridPos": { - "h": 8, - "w": 12, - "x": 12, - "y": 144 - }, - "id": 84, - "options": { - "legend": { - "calcs": [], - "displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "hideZeros": false, - "mode": "multi", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "expr": "histogram_quantile(0.99, rate(lean_attestation_validation_time_seconds_bucket{job=~\"$job\"}[$__rate_interval]))", - "legendFormat": "{{job}}", - "range": true, - "refId": "A" - } - ], - "title": "FC Attestations validation time", - "type": "timeseries" - }, - { - "collapsed": false, - "gridPos": { - "h": 1, - "w": 24, - "x": 0, - "y": 152 - }, - "id": 21, - "panels": [], - "title": "State Transition", - "type": "row" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "description": "Time taken to process state transition function", - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - "axisCenteredZero": false, - "axisColorMode": "text", - 
"axisLabel": "", - "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 0, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": "auto", - "showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - }, - "unit": "s" - }, - "overrides": [] - }, - "gridPos": { - "h": 8, - "w": 12, - "x": 0, - "y": 153 - }, - "id": 23, - "options": { - "legend": { - "calcs": [], - "displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "hideZeros": false, - "mode": "multi", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "expr": "histogram_quantile(0.99, \n rate(lean_state_transition_time_seconds_bucket{job=~\"$job\"}[$__rate_interval])\n)", - "interval": "", - "legendFormat": "{{job}}", - "range": true, - "refId": "A" - } - ], - "title": "State transition time", - "type": "timeseries" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "description": "Time taken to process block in state transition function", - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - "axisCenteredZero": false, - "axisColorMode": "text", - "axisLabel": "", - "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 0, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": 
false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": "auto", - "showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - }, - "unit": "s" - }, - "overrides": [] - }, - "gridPos": { - "h": 8, - "w": 12, - "x": 12, - "y": 153 - }, - "id": 24, - "options": { - "legend": { - "calcs": [], - "displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "hideZeros": false, - "mode": "multi", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "expr": "histogram_quantile(0.99, \n rate(lean_state_transition_block_processing_time_seconds_bucket{job=~\"$job\"}[$__rate_interval])\n)", - "interval": "", - "legendFormat": "{{job}}", - "range": true, - "refId": "A" - } - ], - "title": "Block processing time", - "type": "timeseries" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "description": "Total number of processed slots in state transition function", - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - "axisCenteredZero": false, - "axisColorMode": "text", - "axisLabel": "", - "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 0, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": 
"auto", - "showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "decimals": 1, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - } - }, - "overrides": [] - }, - "gridPos": { - "h": 8, - "w": 12, - "x": 0, - "y": 161 - }, - "id": 25, - "options": { - "legend": { - "calcs": [], - "displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "hideZeros": false, - "mode": "multi", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "expr": "avg_over_time(\n sum by (job) (increase(lean_state_transition_slots_processed_total{job=~\"$job\"}[$__rate_interval]))[5m:]\n)", - "legendFormat": "{{job}}", - "range": true, - "refId": "A" - } - ], - "title": "Processed slots", - "type": "timeseries" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "description": "Time taken to process slots in state transition function", - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - "axisCenteredZero": false, - "axisColorMode": "text", - "axisLabel": "", - "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 0, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": "auto", - "showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "mappings": [], - "thresholds": { - "mode": "absolute", - 
"steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - }, - "unit": "s" - }, - "overrides": [] - }, - "gridPos": { - "h": 8, - "w": 12, - "x": 12, - "y": 161 - }, - "id": 26, - "options": { - "legend": { - "calcs": [], - "displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "hideZeros": false, - "mode": "multi", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "expr": "histogram_quantile(0.99,\n rate(lean_state_transition_slots_processing_time_seconds_bucket{job=~\"$job\"}[$__rate_interval])\n)", - "legendFormat": "{{job}}", - "range": true, - "refId": "A" - } - ], - "title": "Slots processing time", - "type": "timeseries" - }, - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "description": "Total number of processed slots in state transition function", - "fieldConfig": { - "defaults": { - "color": { - "mode": "palette-classic" - }, - "custom": { - "axisBorderShow": false, - "axisCenteredZero": false, - "axisColorMode": "text", - "axisLabel": "", - "axisPlacement": "auto", - "barAlignment": 0, - "barWidthFactor": 0.6, - "drawStyle": "line", - "fillOpacity": 0, - "gradientMode": "none", - "hideFrom": { - "legend": false, - "tooltip": false, - "viz": false - }, - "insertNulls": false, - "lineInterpolation": "linear", - "lineWidth": 1, - "pointSize": 5, - "scaleDistribution": { - "type": "linear" - }, - "showPoints": "auto", - "showValues": false, - "spanNulls": false, - "stacking": { - "group": "A", - "mode": "none" - }, - "thresholdsStyle": { - "mode": "off" - } - }, - "decimals": 1, - "mappings": [], - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "green", - "value": 0 - }, - { - "color": "red", - "value": 80 - } - ] - } - }, - "overrides": [] - }, - "gridPos": { - "h": 8, - "w": 12, - "x": 0, - "y": 169 - }, - 
"id": 70, - "options": { - "legend": { - "calcs": [], - "displayMode": "list", - "placement": "bottom", - "showLegend": true - }, - "tooltip": { - "hideZeros": false, - "mode": "multi", - "sort": "none" - } - }, - "pluginVersion": "12.3.2", - "targets": [ - { - "datasource": { - "type": "prometheus", - "uid": "${DS_PROMETHEUS}" - }, - "editorMode": "code", - "expr": "avg_over_time(\n sum by (job) (increase(lean_finalizations_total{job=~\"$job\"}[$__rate_interval]))[5m:]\n)", - "legendFormat": "{{job}}", - "range": true, - "refId": "A" - } - ], - "title": "Finalizations total", - "type": "timeseries" - }, - { - "fieldConfig": { - "defaults": {}, - "overrides": [] - }, - "gridPos": { - "h": 8, - "w": 12, - "x": 12, - "y": 169 - }, - "id": 52, - "options": { - "code": { - "language": "plaintext", - "showLineNumbers": false, - "showMiniMap": false - }, - "content": "", - "mode": "markdown" - }, - "pluginVersion": "12.3.2", - "title": "", - "type": "text" - } - ], - "preload": false, - "schemaVersion": 42, - "tags": [ - "interop", - "Client" - ], - "templating": { - "list": [ - { - "allowCustomValue": false, - "current": { - "text": [ - "grandine" - ], - "value": [ - "grandine" - ] - }, - "includeAll": true, - "label": "Job", - "multi": true, - "name": "job", - "options": [ - { - "selected": false, - "text": "ethlambda", - "value": "ethlambda" - }, - { - "selected": false, - "text": "gean", - "value": "gean" - }, - { - "selected": true, - "text": "grandine", - "value": "grandine" - }, - { - "selected": false, - "text": "lantern", - "value": "lantern" - }, - { - "selected": false, - "text": "lighthouse", - "value": "lighthouse" - }, - { - "selected": false, - "text": "qlean", - "value": "qlean" - }, - { - "selected": false, - "text": "ream", - "value": "ream" - }, - { - "selected": false, - "text": "zeam", - "value": "zeam" - } - ], - "query": "ethlambda,gean,grandine,lantern,lighthouse,qlean,ream,zeam", - "type": "custom" - } - ] - }, - "time": { - "from": "now-15m", - 
"to": "now" - }, - "timepicker": {}, - "timezone": "utc", - "title": "Lean Ethereum Clients Dashboard", - "uid": "lean-ethereum-clients-dashboard", - "version": 13 -} \ No newline at end of file diff --git a/observability/grafana/prometheus-scrape.example.yml b/observability/grafana/prometheus-scrape.example.yml deleted file mode 100644 index 1d5bff3..0000000 --- a/observability/grafana/prometheus-scrape.example.yml +++ /dev/null @@ -1,14 +0,0 @@ -# Example Prometheus scrape config for gean metrics. -# Gean exposes metrics at: http://:/metrics -# Dashboard queries in observability/grafana/client-dashboard.json filter on: job=~"$gean_job" - -scrape_configs: - - job_name: gean - scrape_interval: 5s - metrics_path: /metrics - static_configs: - - targets: - - 127.0.0.1:8080 - labels: - client: gean - devnet: devnet1 diff --git a/observability/logging/logger.go b/observability/logging/logger.go deleted file mode 100644 index caeee9b..0000000 --- a/observability/logging/logger.go +++ /dev/null @@ -1,185 +0,0 @@ -package logging - -import ( - "context" - "fmt" - "io" - "log/slog" - "os" - "sync" - "time" -) - -// Component names used as log source tags. -const ( - CompNode = "node" - CompValidator = "validator" - CompConsensus = "consensus" - CompForkChoice = "forkchoice" - CompNetwork = "network" - CompGossip = "gossip" - CompReqResp = "reqresp" - CompMetrics = "metrics" - CompAPI = "api" -) - -// ANSI color codes. -const ( - reset = "\033[0m" - dim = "\033[2m" - red = "\033[31m" - yellow = "\033[33m" - cyan = "\033[36m" - green = "\033[32m" - magenta = "\033[35m" -) - -var defaultLogger *slog.Logger -var once sync.Once - -// Init sets up the global logger with the given level. -func Init(level slog.Level) { - once.Do(func() { - handler := &prettyHandler{ - out: os.Stdout, - level: level, - } - defaultLogger = slog.New(handler) - slog.SetDefault(defaultLogger) - }) -} - -// NewComponentLogger returns a logger tagged with a component name. 
-func NewComponentLogger(component string) *slog.Logger { - if defaultLogger == nil { - Init(slog.LevelInfo) - } - return defaultLogger.With(slog.String("comp", component)) -} - -// ToDo: remove if not needed -// ShortHash returns the first 8 hex chars of a [32]byte hash. -func ShortHash(h [32]byte) string { - return fmt.Sprintf("%x", h[:4]) -} - -func LongHash(h [32]byte) string { - return fmt.Sprintf("0x%x", h[:]) -} - -// prettyHandler is a custom slog.Handler that produces colored, aligned output. -// -// Format: -// -// 2026-02-13 14:23:45.123 INF [node] message key=value key=value -type prettyHandler struct { - out io.Writer - level slog.Level - attrs []slog.Attr - group string -} - -func (h *prettyHandler) Enabled(_ context.Context, level slog.Level) bool { - return level >= h.level -} - -func (h *prettyHandler) Handle(_ context.Context, r slog.Record) error { - // Suppress messages from libraries (no comp attr) unless they're errors. - hasComp := false - for _, a := range h.attrs { - if a.Key == "comp" { - hasComp = true - break - } - } - if !hasComp && r.Level < slog.LevelError { - return nil - } - - timestamp := r.Time.Format("2006-01-02 15:04:05.000") - - var levelStr string - var levelColor string - switch { - case r.Level >= slog.LevelError: - levelStr = "ERR" - levelColor = red - case r.Level >= slog.LevelWarn: - levelStr = "WRN" - levelColor = yellow - case r.Level >= slog.LevelInfo: - levelStr = "INF" - levelColor = green - default: - levelStr = "DBG" - levelColor = dim - } - - // Extract component from pre-set attrs. - comp := "" - var filteredAttrs []slog.Attr - for _, a := range h.attrs { - if a.Key == "comp" { - comp = a.Value.String() - } else { - filteredAttrs = append(filteredAttrs, a) - } - } - - compTag := "" - if comp != "" { - compTag = fmt.Sprintf(" %s[%s]%s", cyan, comp, reset) - } - - // Build attribute string. 
- attrStr := "" - for _, a := range filteredAttrs { - attrStr += fmt.Sprintf(" %s%s=%s%s", dim, a.Key, a.Value.String(), reset) - } - r.Attrs(func(a slog.Attr) bool { - attrStr += fmt.Sprintf(" %s%s=%s%s", dim, a.Key, a.Value.String(), reset) - return true - }) - - line := fmt.Sprintf("%s%s%s %s%-3s%s%s %s%s\n", - dim, timestamp, reset, - levelColor, levelStr, reset, - compTag, - r.Message, - attrStr, - ) - - _, err := fmt.Fprint(h.out, line) - return err -} - -func (h *prettyHandler) WithAttrs(attrs []slog.Attr) slog.Handler { - newAttrs := make([]slog.Attr, len(h.attrs), len(h.attrs)+len(attrs)) - copy(newAttrs, h.attrs) - newAttrs = append(newAttrs, attrs...) - return &prettyHandler{out: h.out, level: h.level, attrs: newAttrs, group: h.group} -} - -func (h *prettyHandler) WithGroup(name string) slog.Handler { - return &prettyHandler{out: h.out, level: h.level, attrs: h.attrs, group: name} -} - -// Banner prints the startup banner. -func Banner(version string) { - if defaultLogger == nil { - Init(slog.LevelInfo) - } - fmt.Println() - fmt.Printf(" %sgean%s %s%s%s\n", magenta, reset, dim, version, reset) - fmt.Printf(" %sLean Ethereum Go Client%s\n", dim, reset) - fmt.Println() -} - -// TimeSince returns a duration string since the given start time. -func TimeSince(start time.Time) string { - d := time.Since(start) - if d < time.Millisecond { - return fmt.Sprintf("%dµs", d.Microseconds()) - } - return fmt.Sprintf("%dms", d.Milliseconds()) -} diff --git a/observability/metrics/metrics.go b/observability/metrics/metrics.go deleted file mode 100644 index 44dca5c..0000000 --- a/observability/metrics/metrics.go +++ /dev/null @@ -1,343 +0,0 @@ -package metrics - -import ( - "fmt" - "log" - "net/http" - - "github.com/prometheus/client_golang/prometheus" - "github.com/prometheus/client_golang/prometheus/promhttp" -) - -// Histogram bucket presets from leanMetrics spec. 
-var ( - fastBuckets = []float64{0.005, 0.01, 0.025, 0.05, 0.1, 1} - stfBuckets = []float64{0.25, 0.5, 0.75, 1, 1.25, 1.5, 2, 2.5, 3, 4} - reorgBuckets = []float64{1, 2, 3, 5, 7, 10, 20, 30, 50, 100} -) - -// --- Node Info --- - -var NodeInfo = prometheus.NewGaugeVec(prometheus.GaugeOpts{ - Name: "lean_node_info", - Help: "Node information (always 1)", -}, []string{"name", "version"}) - -var NodeStartTime = prometheus.NewGauge(prometheus.GaugeOpts{ - Name: "lean_node_start_time_seconds", - Help: "Start timestamp", -}) - -// --- PQ Signature Metrics --- - -var PQSigAttestationSigningTime = prometheus.NewHistogram(prometheus.HistogramOpts{ - Name: "lean_pq_sig_attestation_signing_time_seconds", - Help: "Time taken to sign an attestation", - Buckets: fastBuckets, -}) - -var PQSigAttestationVerificationTime = prometheus.NewHistogram(prometheus.HistogramOpts{ - Name: "lean_pq_sig_attestation_verification_time_seconds", - Help: "Time taken to verify an attestation signature", - Buckets: fastBuckets, -}) - -var PQSigAggregatedSignaturesTotal = prometheus.NewCounter(prometheus.CounterOpts{ - Name: "lean_pq_sig_aggregated_signatures_total", - Help: "Total number of aggregated signatures", -}) - -var PQSigAttestationsInAggregatedTotal = prometheus.NewCounter(prometheus.CounterOpts{ - Name: "lean_pq_sig_attestations_in_aggregated_signatures_total", - Help: "Total number of attestations included into aggregated signatures", -}) - -var PQSigSignaturesBuildingTime = prometheus.NewHistogram(prometheus.HistogramOpts{ - Name: "lean_pq_sig_aggregated_signatures_building_time_seconds", - Help: "Time taken to build aggregated attestation signatures", - Buckets: []float64{0.1, 0.25, 0.5, 0.75, 1, 1.25, 1.5, 2, 4}, -}) - -var PQSigAggregatedVerificationTime = prometheus.NewHistogram(prometheus.HistogramOpts{ - Name: "lean_pq_sig_aggregated_signatures_verification_time_seconds", - Help: "Time taken to verify an aggregated attestation signature", - Buckets: []float64{0.1, 0.25, 0.5, 0.75, 
1, 1.25, 1.5, 2, 4}, -}) - -var PQSigAggregatedValidTotal = prometheus.NewCounter(prometheus.CounterOpts{ - Name: "lean_pq_sig_aggregated_signatures_valid_total", - Help: "Total number of valid aggregated signatures", -}) - -var PQSigAggregatedInvalidTotal = prometheus.NewCounter(prometheus.CounterOpts{ - Name: "lean_pq_sig_aggregated_signatures_invalid_total", - Help: "Total number of invalid aggregated signatures", -}) - -var PQSigAttestationSignaturesTotal = prometheus.NewCounter(prometheus.CounterOpts{ - Name: "lean_pq_sig_attestation_signatures_total", - Help: "Total number of individual attestation signatures", -}) - -var PQSigAttestationSignaturesValidTotal = prometheus.NewCounter(prometheus.CounterOpts{ - Name: "lean_pq_sig_attestation_signatures_valid_total", - Help: "Total number of valid individual attestation signatures", -}) - -var PQSigAttestationSignaturesInvalidTotal = prometheus.NewCounter(prometheus.CounterOpts{ - Name: "lean_pq_sig_attestation_signatures_invalid_total", - Help: "Total number of invalid individual attestation signatures", -}) - -// --- Fork-Choice --- - -var HeadSlot = prometheus.NewGauge(prometheus.GaugeOpts{ - Name: "lean_head_slot", - Help: "Latest slot of the lean chain", -}) - -var CurrentSlot = prometheus.NewGauge(prometheus.GaugeOpts{ - Name: "lean_current_slot", - Help: "Current slot of the lean chain", -}) - -var SafeTargetSlot = prometheus.NewGauge(prometheus.GaugeOpts{ - Name: "lean_safe_target_slot", - Help: "Safe target slot", -}) - -var ForkChoiceBlockProcessingTime = prometheus.NewHistogram(prometheus.HistogramOpts{ - Name: "lean_fork_choice_block_processing_time_seconds", - Help: "Time taken to process block in fork choice", - Buckets: []float64{0.005, 0.01, 0.025, 0.05, 0.1, 1, 1.25, 1.5, 2, 4}, -}) - -var AttestationsValid = prometheus.NewCounterVec(prometheus.CounterOpts{ - Name: "lean_attestations_valid_total", - Help: "Total number of valid attestations", -}, []string{"source"}) - -var AttestationsInvalid = 
prometheus.NewCounterVec(prometheus.CounterOpts{ - Name: "lean_attestations_invalid_total", - Help: "Total number of invalid attestations", -}, []string{"source"}) - -var AttestationValidationTime = prometheus.NewHistogram(prometheus.HistogramOpts{ - Name: "lean_attestation_validation_time_seconds", - Help: "Time taken to validate attestation", - Buckets: fastBuckets, -}) - -var ForkChoiceReorgsTotal = prometheus.NewCounter(prometheus.CounterOpts{ - Name: "lean_fork_choice_reorgs_total", - Help: "Total number of fork choice reorgs", -}) - -var ForkChoiceReorgDepth = prometheus.NewHistogram(prometheus.HistogramOpts{ - Name: "lean_fork_choice_reorg_depth", - Help: "Depth of fork choice reorgs (in blocks)", - Buckets: reorgBuckets, -}) - -// --- State Transition --- - -var LatestJustifiedSlot = prometheus.NewGauge(prometheus.GaugeOpts{ - Name: "lean_latest_justified_slot", - Help: "Latest justified slot", -}) - -var LatestFinalizedSlot = prometheus.NewGauge(prometheus.GaugeOpts{ - Name: "lean_latest_finalized_slot", - Help: "Latest finalized slot", -}) - -var FinalizationsTotal = prometheus.NewCounterVec(prometheus.CounterOpts{ - Name: "lean_finalizations_total", - Help: "Total number of finalization attempts", -}, []string{"result"}) - -var StateTransitionTime = prometheus.NewHistogram(prometheus.HistogramOpts{ - Name: "lean_state_transition_time_seconds", - Help: "Time to process state transition", - Buckets: stfBuckets, -}) - -var STFSlotsProcessed = prometheus.NewCounter(prometheus.CounterOpts{ - Name: "lean_state_transition_slots_processed_total", - Help: "Total number of processed slots", -}) - -var STFSlotsProcessingTime = prometheus.NewHistogram(prometheus.HistogramOpts{ - Name: "lean_state_transition_slots_processing_time_seconds", - Help: "Time taken to process slots", - Buckets: fastBuckets, -}) - -var STFBlockProcessingTime = prometheus.NewHistogram(prometheus.HistogramOpts{ - Name: "lean_state_transition_block_processing_time_seconds", - Help: "Time taken 
to process block", - Buckets: fastBuckets, -}) - -var STFAttestationsProcessed = prometheus.NewCounter(prometheus.CounterOpts{ - Name: "lean_state_transition_attestations_processed_total", - Help: "Total number of processed attestations", -}) - -var STFAttestationsProcessingTime = prometheus.NewHistogram(prometheus.HistogramOpts{ - Name: "lean_state_transition_attestations_processing_time_seconds", - Help: "Time taken to process attestations", - Buckets: fastBuckets, -}) - -// --- Validator --- - -var ValidatorsCount = prometheus.NewGauge(prometheus.GaugeOpts{ - Name: "lean_validators_count", - Help: "Number of validators managed by a node", -}) - -// --- Devnet-3 Aggregator --- - -var IsAggregator = prometheus.NewGauge(prometheus.GaugeOpts{ - Name: "lean_is_aggregator", - Help: "Whether the node is acting as an aggregator (1 = yes, 0 = no)", -}) - -var AttestationCommitteeCount = prometheus.NewGauge(prometheus.GaugeOpts{ - Name: "lean_attestation_committee_count", - Help: "Number of attestation committees", -}) - -var AttestationCommitteeSubnet = prometheus.NewGauge(prometheus.GaugeOpts{ - Name: "lean_attestation_committee_subnet", - Help: "Subnet ID assigned to this node's validators", -}) - -var CommitteeSignaturesAggregationTime = prometheus.NewHistogram(prometheus.HistogramOpts{ - Name: "lean_committee_signatures_aggregation_time_seconds", - Help: "Time taken to aggregate committee signatures", - Buckets: []float64{0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 0.75, 1}, -}) - -var GossipSignaturesCount = prometheus.NewGauge(prometheus.GaugeOpts{ - Name: "lean_gossip_signatures", - Help: "Number of gossip signatures in fork-choice store", -}) - -var LatestNewAggregatedPayloads = prometheus.NewGauge(prometheus.GaugeOpts{ - Name: "lean_latest_new_aggregated_payloads", - Help: "Number of new aggregated payload items", -}) - -var LatestKnownAggregatedPayloads = prometheus.NewGauge(prometheus.GaugeOpts{ - Name: "lean_latest_known_aggregated_payloads", - Help: "Number of 
known aggregated payload items", -}) - -// --- Network --- - -var ConnectedPeers = prometheus.NewGaugeVec(prometheus.GaugeOpts{ - Name: "lean_connected_peers", - Help: "Number of connected peers", -}, []string{"client"}) - -var PeerConnectionEventsTotal = prometheus.NewCounterVec(prometheus.CounterOpts{ - Name: "lean_peer_connection_events_total", - Help: "Total number of peer connection events", -}, []string{"direction", "result"}) - -var PeerDisconnectionEventsTotal = prometheus.NewCounterVec(prometheus.CounterOpts{ - Name: "lean_peer_disconnection_events_total", - Help: "Total number of peer disconnection events", -}, []string{"direction", "reason"}) - -func init() { - prometheus.MustRegister( - // Node info - NodeInfo, - NodeStartTime, - // PQ signatures - PQSigAttestationSigningTime, - PQSigAttestationVerificationTime, - PQSigAggregatedSignaturesTotal, - PQSigAttestationsInAggregatedTotal, - PQSigSignaturesBuildingTime, - PQSigAggregatedVerificationTime, - PQSigAggregatedValidTotal, - PQSigAggregatedInvalidTotal, - // Fork choice - HeadSlot, - CurrentSlot, - SafeTargetSlot, - ForkChoiceBlockProcessingTime, - AttestationsValid, - AttestationsInvalid, - AttestationValidationTime, - ForkChoiceReorgsTotal, - ForkChoiceReorgDepth, - // State transition - LatestJustifiedSlot, - LatestFinalizedSlot, - FinalizationsTotal, - StateTransitionTime, - STFSlotsProcessed, - STFSlotsProcessingTime, - STFBlockProcessingTime, - STFAttestationsProcessed, - STFAttestationsProcessingTime, - // Validator - ValidatorsCount, - // Devnet-3 aggregator - IsAggregator, - AttestationCommitteeCount, - AttestationCommitteeSubnet, - CommitteeSignaturesAggregationTime, - GossipSignaturesCount, - LatestNewAggregatedPayloads, - LatestKnownAggregatedPayloads, - // PQ attestation signatures - PQSigAttestationSignaturesTotal, - PQSigAttestationSignaturesValidTotal, - PQSigAttestationSignaturesInvalidTotal, - // Network - ConnectedPeers, - PeerConnectionEventsTotal, - PeerDisconnectionEventsTotal, 
- ) - - // Pre-initialize vector counters to 0 to ensure they appear in metrics output - // before any events occur, preventing "No data" in Grafana panels. - for _, source := range []string{"gossip", "block", "subnet", "aggregation"} { - AttestationsValid.WithLabelValues(source).Add(0) - AttestationsInvalid.WithLabelValues(source).Add(0) - } - - for _, dir := range []string{"inbound", "outbound"} { - for _, result := range []string{"success", "timeout", "error"} { - PeerConnectionEventsTotal.WithLabelValues(dir, result).Add(0) - } - for _, reason := range []string{"timeout", "remote_close", "local_close", "error"} { - PeerDisconnectionEventsTotal.WithLabelValues(dir, reason).Add(0) - } - } - - ConnectedPeers.WithLabelValues("gean").Set(0) - - PQSigAttestationSignaturesTotal.Add(0) - PQSigAttestationSignaturesValidTotal.Add(0) - PQSigAttestationSignaturesInvalidTotal.Add(0) - - FinalizationsTotal.WithLabelValues("success").Add(0) - FinalizationsTotal.WithLabelValues("error").Add(0) -} - -// Serve starts the Prometheus metrics HTTP server on the given port. -func Serve(port int) { - http.Handle("/metrics", promhttp.Handler()) - go func() { - if err := http.ListenAndServe(fmt.Sprintf(":%d", port), nil); err != nil { - log.Printf("metrics server error: %v", err) - } - }() -} diff --git a/p2p/bootnode.go b/p2p/bootnode.go new file mode 100644 index 0000000..559106e --- /dev/null +++ b/p2p/bootnode.go @@ -0,0 +1,47 @@ +package p2p + +import ( + "bufio" + "fmt" + "os" + "strings" + + "github.com/multiformats/go-multiaddr" +) + +// LoadBootnodes reads bootnode multiaddrs from a YAML/text file. +// Each line is a multiaddr string (e.g., /ip4/1.2.3.4/udp/9000/quic-v1/p2p/QmPeer...). 
+func LoadBootnodes(path string) ([]multiaddr.Multiaddr, error) { + f, err := os.Open(path) + if err != nil { + return nil, fmt.Errorf("open bootnodes file: %w", err) + } + defer f.Close() + + var addrs []multiaddr.Multiaddr + scanner := bufio.NewScanner(f) + for scanner.Scan() { + line := strings.TrimSpace(scanner.Text()) + if line == "" || strings.HasPrefix(line, "#") || strings.HasPrefix(line, "-") { + // Skip empty lines, comments, and YAML list markers. + if strings.HasPrefix(line, "- ") { + line = strings.TrimPrefix(line, "- ") + line = strings.Trim(line, "\"' ") + } else { + continue + } + } + + if !strings.HasPrefix(line, "/") { + continue // not a multiaddr + } + + ma, err := multiaddr.NewMultiaddr(line) + if err != nil { + return nil, fmt.Errorf("parse bootnode multiaddr %q: %w", line, err) + } + addrs = append(addrs, ma) + } + + return addrs, scanner.Err() +} diff --git a/p2p/encoding.go b/p2p/encoding.go new file mode 100644 index 0000000..6d41e50 --- /dev/null +++ b/p2p/encoding.go @@ -0,0 +1,143 @@ +package p2p + +import ( + "bytes" + "encoding/binary" + "fmt" + "io" + + "github.com/golang/snappy" +) + +// Max payload sizes rs L6-9. +const ( + MaxPayloadSize = 10 * 1024 * 1024 // 10 MiB uncompressed + MaxCompressedPayloadSize = 32 + MaxPayloadSize + MaxPayloadSize/6 + 1024 // ~12 MiB +) + +// --- Gossipsub encoding: raw snappy --- + +// SnappyRawEncode compresses data using raw snappy (no framing). +func SnappyRawEncode(data []byte) []byte { + return snappy.Encode(nil, data) +} + +// SnappyRawDecode decompresses raw snappy data. 
+func SnappyRawDecode(data []byte) ([]byte, error) { + decodedLen, err := snappy.DecodedLen(data) + if err != nil { + return nil, fmt.Errorf("snappy decoded len: %w", err) + } + if decodedLen > MaxPayloadSize { + return nil, fmt.Errorf("snappy decoded len %d exceeds max %d", decodedLen, MaxPayloadSize) + } + return snappy.Decode(nil, data) +} + +// --- Req/Resp encoding: snappy framed + varint --- + +// EncodeVarint encodes a uint32 as LEB128 varint. +func EncodeVarint(value uint32) []byte { + buf := make([]byte, binary.MaxVarintLen32) + n := binary.PutUvarint(buf, uint64(value)) + return buf[:n] +} + +// DecodeVarint reads a LEB128 varint from a byte slice. +// Returns the value and remaining bytes. +func DecodeVarint(buf []byte) (uint32, []byte, error) { + val, n := binary.Uvarint(buf) + if n <= 0 { + return 0, nil, fmt.Errorf("invalid varint") + } + if val > uint64(MaxPayloadSize) { + return 0, nil, fmt.Errorf("varint value %d exceeds max payload", val) + } + return uint32(val), buf[n:], nil +} + +// EncodeReqRespPayload encodes a request payload: varint(uncompressed_len) + snappy_framed(data). +// Uses snappy FRAMED format (not block) for req-resp cross-client compatibility. +func EncodeReqRespPayload(data []byte) []byte { + var buf bytes.Buffer + w := snappy.NewBufferedWriter(&buf) + w.Write(data) + w.Close() + framed := buf.Bytes() + + varint := EncodeVarint(uint32(len(data))) + result := make([]byte, len(varint)+len(framed)) + copy(result, varint) + copy(result[len(varint):], framed) + return result +} + +// DecodeReqRespPayload decodes a payload: varint(uncompressed_len) + snappy_framed(data). +// Uses snappy FRAMED format (not block) for req-resp cross-client compatibility. +func DecodeReqRespPayload(buf []byte) ([]byte, error) { + declaredLen, rest, err := DecodeVarint(buf) + if err != nil { + return nil, fmt.Errorf("decode varint: %w", err) + } + + // Try framed format first (cross-client), fall back to block format (self-to-self). 
+	r := snappy.NewReader(bytes.NewReader(rest))
+	decoded, err := io.ReadAll(r)
+	if err != nil {
+		// Fallback: try block format (for self-to-self communication).
+		decoded, err = snappy.Decode(nil, rest)
+		if err != nil {
+			return nil, fmt.Errorf("snappy decode: %w", err)
+		}
+	}
+
+	if uint32(len(decoded)) != declaredLen { // strict check: a declared length of 0 must match an empty payload
+		return nil, fmt.Errorf("length mismatch: declared %d, got %d", declaredLen, len(decoded))
+	}
+	return decoded, nil
+}
+
+// Response codes rs.
+const (
+	RespSuccess             byte = 0x00
+	RespInvalidRequest      byte = 0x01
+	RespServerError         byte = 0x02
+	RespResourceUnavailable byte = 0x03
+)
+
+// EncodeResponse encodes a response chunk: code + varint(len) + snappy(data).
+func EncodeResponse(code byte, data []byte) []byte {
+	payload := EncodeReqRespPayload(data)
+	result := make([]byte, 1+len(payload))
+	result[0] = code
+	copy(result[1:], payload)
+	return result
+}
+
+// DecodeResponse reads a response chunk from a reader.
+// Returns (code, decoded_payload, error).
+func DecodeResponse(r io.Reader) (byte, []byte, error) {
+	// Read response code.
+	codeBuf := make([]byte, 1)
+	if _, err := io.ReadFull(r, codeBuf); err != nil {
+		return 0, nil, fmt.Errorf("read response code: %w", err)
+	}
+	code := codeBuf[0]
+
+	// Read remaining bytes.
+ rest, err := io.ReadAll(io.LimitReader(r, int64(MaxCompressedPayloadSize))) + if err != nil { + return code, nil, fmt.Errorf("read response payload: %w", err) + } + + if len(rest) == 0 { + return code, nil, nil + } + + decoded, err := DecodeReqRespPayload(rest) + if err != nil { + return code, nil, fmt.Errorf("decode response payload: %w", err) + } + + return code, decoded, nil +} diff --git a/p2p/gossip.go b/p2p/gossip.go new file mode 100644 index 0000000..8e22e8a --- /dev/null +++ b/p2p/gossip.go @@ -0,0 +1,92 @@ +package p2p + +import ( + "context" + "fmt" + "strings" + + pubsub "github.com/libp2p/go-libp2p-pubsub" + + "github.com/geanlabs/gean/logger" + "github.com/geanlabs/gean/types" +) + +// MessageHandler defines callbacks for gossipsub messages. +// Engine implements this interface and processes messages on its own goroutine. +type MessageHandler interface { + OnBlock(block *types.SignedBlockWithAttestation) + OnGossipAttestation(att *types.SignedAttestation) + OnGossipAggregatedAttestation(agg *types.SignedAggregatedAttestation) +} + +// StartGossipListeners starts goroutines that read from each subscribed topic +// and dispatch decoded messages to the handler. +func (h *Host) StartGossipListeners(handler MessageHandler) { + for topic, sub := range h.subs { + go h.listenTopic(h.ctx, topic, sub, handler) + } +} + +func (h *Host) listenTopic(ctx context.Context, topic string, sub *pubsub.Subscription, handler MessageHandler) { + for { + msg, err := sub.Next(ctx) + if err != nil { + if ctx.Err() != nil { + return // context cancelled, clean shutdown + } + logger.Error(logger.Gossip, "recv error on %s: %v", topic, err) + return + } + + // Skip messages from ourselves. + if msg.ReceivedFrom == h.host.ID() { + continue + } + + // Decompress raw snappy. + data, err := SnappyRawDecode(msg.Data) + if err != nil { + logger.Error(logger.Gossip, "snappy decode failed on %s: %v", topic, err) + continue + } + + // Dispatch based on topic kind. 
+ if err := h.dispatchMessage(topic, data, handler); err != nil { + logger.Error(logger.Gossip, "dispatch failed on %s: %v", topic, err) + } + } +} + +func (h *Host) dispatchMessage(topic string, data []byte, handler MessageHandler) error { + switch { + case topic == BlockTopic(): + block := &types.SignedBlockWithAttestation{} + if err := block.UnmarshalSSZ(data); err != nil { + return fmt.Errorf("unmarshal block (%d bytes): %w", len(data), err) + } + blockRoot, _ := block.Block.Block.HashTreeRoot() + logger.Info(logger.Gossip, "received block slot=%d proposer=%d block_root=0x%x parent_root=0x%x", + block.Block.Block.Slot, block.Block.Block.ProposerIndex, + blockRoot, block.Block.Block.ParentRoot) + handler.OnBlock(block) + + case strings.Contains(topic, AttestationTopicKind+"_"): + att := &types.SignedAttestation{} + if err := att.UnmarshalSSZ(data); err != nil { + return fmt.Errorf("unmarshal attestation (%d bytes): %w", len(data), err) + } + handler.OnGossipAttestation(att) + + case topic == AggregationTopic(): + agg := &types.SignedAggregatedAttestation{} + if err := agg.UnmarshalSSZ(data); err != nil { + return fmt.Errorf("unmarshal aggregation (%d bytes): %w", len(data), err) + } + handler.OnGossipAggregatedAttestation(agg) + + default: + return fmt.Errorf("unknown topic: %s", topic) + } + + return nil +} diff --git a/p2p/host.go b/p2p/host.go new file mode 100644 index 0000000..1b4b997 --- /dev/null +++ b/p2p/host.go @@ -0,0 +1,251 @@ +package p2p + +import ( + "context" + "encoding/hex" + "fmt" + "os" + "strings" + "time" + + "github.com/libp2p/go-libp2p" + pubsub "github.com/libp2p/go-libp2p-pubsub" + pb "github.com/libp2p/go-libp2p-pubsub/pb" + libp2pcrypto "github.com/libp2p/go-libp2p/core/crypto" + "github.com/libp2p/go-libp2p/core/host" + "github.com/libp2p/go-libp2p/core/network" + "github.com/libp2p/go-libp2p/core/peer" + libp2pquic "github.com/libp2p/go-libp2p/p2p/transport/quic" + "github.com/multiformats/go-multiaddr" + + 
"github.com/geanlabs/gean/logger"
+)
+
+// GossipSub parameters rs L96-119.
+const (
+	GossipMeshN             = 8
+	GossipMeshNLow          = 6
+	GossipMeshNHigh         = 12
+	GossipLazy              = 6
+	GossipHeartbeatInterval = 700 * time.Millisecond
+	GossipFanoutTTL         = 60 * time.Second
+	GossipHistoryLength     = 6
+	GossipHistoryGossip     = 3
+	GossipDuplicateCache    = 24 * time.Second // 4s slot * 3 lookback * 2
+	GossipMaxTransmitSize   = MaxCompressedPayloadSize
+	GossipMaxMsgPerRPC      = 500
+)
+
+// Host wraps a libp2p host with gossipsub and topic handles.
+type Host struct {
+	host       host.Host
+	pubsub     *pubsub.PubSub
+	topics     map[string]*pubsub.Topic
+	subs       map[string]*pubsub.Subscription
+	ctx        context.Context
+	cancel     context.CancelFunc
+	peerStore  *PeerStore
+	listenPort int
+}
+
+// NewHost creates a libp2p host with QUIC transport and gossipsub.
+func NewHost(ctx context.Context, nodeKeyPath string, listenPort int, committeeCount uint64) (*Host, error) {
+	ctx, cancel := context.WithCancel(ctx)
+
+	// Load secp256k1 identity from hex-encoded node key file.
+	privKey, err := loadNodeKey(nodeKeyPath)
+	if err != nil {
+		cancel()
+		return nil, fmt.Errorf("load node key: %w", err)
+	}
+
+	// Create libp2p host with QUIC transport.
+	listenAddr, _ := multiaddr.NewMultiaddr(fmt.Sprintf("/ip4/0.0.0.0/udp/%d/quic-v1", listenPort))
+
+	h, err := libp2p.New(
+		libp2p.Identity(privKey),
+		libp2p.ListenAddrs(listenAddr),
+		libp2p.Transport(libp2pquic.NewTransport),
+		libp2p.DisableRelay(),
+	)
+	if err != nil {
+		cancel()
+		return nil, fmt.Errorf("create libp2p host: %w", err)
+	}
+
+	// Create gossipsub with custom parameters.
+	// Anonymous message signing (no author, no sequence number).
+	//
+	// MessageAuthenticity::Anonymous — lean consensus messages have no signature.
+ ps, err := pubsub.NewGossipSub(ctx, h, + pubsub.WithMessageSignaturePolicy(pubsub.StrictNoSign), + pubsub.WithNoAuthor(), + pubsub.WithMessageIdFn(func(msg *pb.Message) string { + topic := "" + if msg.Topic != nil { + topic = *msg.Topic + } + return string(ComputeMessageID(topic, msg.Data)) + }), + pubsub.WithGossipSubParams(func() pubsub.GossipSubParams { + params := pubsub.DefaultGossipSubParams() + params.D = GossipMeshN + params.Dlo = GossipMeshNLow + params.Dhi = GossipMeshNHigh + params.Dlazy = GossipLazy + params.HeartbeatInterval = GossipHeartbeatInterval + params.FanoutTTL = GossipFanoutTTL + params.HistoryLength = GossipHistoryLength + params.HistoryGossip = GossipHistoryGossip + params.MaxIHaveMessages = GossipMaxMsgPerRPC + return params + }()), + pubsub.WithSeenMessagesTTL(GossipDuplicateCache), + pubsub.WithMaxMessageSize(GossipMaxTransmitSize), + ) + if err != nil { + h.Close() + cancel() + return nil, fmt.Errorf("create gossipsub: %w", err) + } + + p2pHost := &Host{ + host: h, + pubsub: ps, + topics: make(map[string]*pubsub.Topic), + subs: make(map[string]*pubsub.Subscription), + ctx: ctx, + cancel: cancel, + peerStore: NewPeerStore(), + listenPort: listenPort, + } + + // Register connection notifier to track peers. + h.Network().Notify(&network.NotifyBundle{ + ConnectedF: func(n network.Network, conn network.Conn) { + peerID := conn.RemotePeer() + p2pHost.peerStore.Add(peerID) + logger.Info(logger.Network, "peer connected peer_id=%s direction=%s peers=%d", + peerID, conn.Stat().Direction, p2pHost.peerStore.Count()) + }, + DisconnectedF: func(n network.Network, conn network.Conn) { + peerID := conn.RemotePeer() + // Only remove if fully disconnected (no remaining connections). + if n.Connectedness(peerID) != network.Connected { + p2pHost.peerStore.Remove(peerID) + logger.Info(logger.Network, "peer disconnected peer_id=%s peers=%d", + peerID, p2pHost.peerStore.Count()) + } + }, + }) + + // Join default topics. 
+ logger.Info(logger.Network, "joining gossipsub topics") + if err := p2pHost.JoinTopic(BlockTopic()); err != nil { + p2pHost.Close() + return nil, fmt.Errorf("join block topic: %w", err) + } + if err := p2pHost.JoinTopic(AggregationTopic()); err != nil { + p2pHost.Close() + return nil, fmt.Errorf("join aggregation topic: %w", err) + } + + // Join attestation subnet topics. + for i := uint64(0); i < committeeCount; i++ { + if err := p2pHost.JoinTopic(AttestationSubnetTopic(i)); err != nil { + p2pHost.Close() + return nil, fmt.Errorf("join attestation subnet %d: %w", i, err) + } + } + + // Log all subscribed topics. + for topic := range p2pHost.topics { + logger.Info(logger.Network, "subscribed topic=%s", topic) + } + + return p2pHost, nil +} + +// JoinTopic joins a gossipsub topic and subscribes to it. +func (h *Host) JoinTopic(topic string) error { + t, err := h.pubsub.Join(topic) + if err != nil { + return fmt.Errorf("join topic %s: %w", topic, err) + } + sub, err := t.Subscribe() + if err != nil { + return fmt.Errorf("subscribe to %s: %w", topic, err) + } + h.topics[topic] = t + h.subs[topic] = sub + return nil +} + +// PeerID returns this host's peer ID. +func (h *Host) PeerID() peer.ID { + return h.host.ID() +} + +// Addrs returns the host's listen addresses. +func (h *Host) Addrs() []multiaddr.Multiaddr { + return h.host.Addrs() +} + +// ConnectedPeers returns the number of connected peers. +func (h *Host) ConnectedPeers() int { + return h.peerStore.Count() +} + +// TopicMeshSizes returns a map of topic name to mesh peer count. +func (h *Host) TopicMeshSizes() map[string]int { + sizes := make(map[string]int) + for name, topic := range h.topics { + sizes[name] = len(topic.ListPeers()) + } + return sizes +} + +// LibP2PHost returns the underlying libp2p host for req-resp stream handlers. +func (h *Host) LibP2PHost() host.Host { + return h.host +} + +// Close shuts down the host. 
+func (h *Host) Close() {
+	h.cancel()
+	for _, sub := range h.subs {
+		sub.Cancel()
+	}
+	h.host.Close()
+}
+
+// loadNodeKey reads a hex-encoded secp256k1 private key from a file.
+func loadNodeKey(path string) (libp2pcrypto.PrivKey, error) {
+	content, err := os.ReadFile(path)
+	if err != nil {
+		return nil, fmt.Errorf("read key file: %w", err)
+	}
+
+	hexStr := strings.TrimSpace(string(content))
+	hexStr = strings.TrimPrefix(hexStr, "0x")
+
+	keyBytes, err := hex.DecodeString(hexStr)
+	if err != nil {
+		return nil, fmt.Errorf("decode hex key: %w", err)
+	}
+
+	privKey, err := libp2pcrypto.UnmarshalSecp256k1PrivateKey(keyBytes)
+	if err != nil {
+		return nil, fmt.Errorf("parse secp256k1 key: %w", err)
+	}
+
+	return privKey, nil
+}
+
+// ConnectPeer connects to a peer at the given multiaddr.
+func (h *Host) ConnectPeer(ctx context.Context, addr multiaddr.Multiaddr) error {
+	peerInfo, err := peer.AddrInfoFromP2pAddr(addr)
+	if err != nil {
+		return fmt.Errorf("parse peer addr: %w", err)
+	}
+	return h.host.Connect(ctx, *peerInfo)
+}
diff --git a/p2p/msgid.go b/p2p/msgid.go
new file mode 100644
index 0000000..25dd098
--- /dev/null
+++ b/p2p/msgid.go
@@ -0,0 +1,47 @@
+package p2p
+
+import (
+	"crypto/sha256"
+	"encoding/binary"
+)
+
+// Message ID domains per leanSpec (L619-638).
+var (
+	domainValidSnappy   = [4]byte{0x01, 0x00, 0x00, 0x00}
+	domainInvalidSnappy = [4]byte{0x00, 0x00, 0x00, 0x00}
+)
+
+// ComputeMessageID computes a gossipsub message ID.
+// Format: SHA256(domain || uint64_le(topic_len) || topic || data)[:20]
+//
+// domain = 0x01000000 if snappy decompression succeeds (valid)
+// domain = 0x00000000 if snappy decompression fails (invalid)
+// data = decompressed bytes (if valid) or raw compressed bytes (if invalid)
+func ComputeMessageID(topic string, rawData []byte) []byte {
+	h := sha256.New()
+
+	// Try to decompress — determines domain and data used for hashing.
+	decompressed, err := SnappyRawDecode(rawData)
+
+	var domain [4]byte
+	var data []byte
+	if err == nil {
+		domain = domainValidSnappy
+		data = decompressed
+	} else {
+		domain = domainInvalidSnappy
+		data = rawData
+	}
+
+	topicBytes := []byte(topic)
+	var topicLen [8]byte
+	binary.LittleEndian.PutUint64(topicLen[:], uint64(len(topicBytes)))
+
+	h.Write(domain[:])
+	h.Write(topicLen[:])
+	h.Write(topicBytes)
+	h.Write(data)
+
+	hash := h.Sum(nil)
+	return hash[:20] // truncate to 20 bytes
+}
diff --git a/p2p/p2p_test.go b/p2p/p2p_test.go
new file mode 100644
index 0000000..1c6e552
--- /dev/null
+++ b/p2p/p2p_test.go
@@ -0,0 +1,281 @@
+package p2p
+
+import (
+	"bytes"
+	"os"
+	"testing"
+
+	"github.com/libp2p/go-libp2p/core/peer"
+)
+
+// --- Encoding tests ---
+
+func TestSnappyRawRoundtrip(t *testing.T) {
+	data := []byte("hello lean consensus world")
+	compressed := SnappyRawEncode(data)
+	decompressed, err := SnappyRawDecode(compressed)
+	if err != nil {
+		t.Fatalf("decode: %v", err)
+	}
+	if !bytes.Equal(data, decompressed) {
+		t.Fatal("roundtrip mismatch")
+	}
+}
+
+func TestVarintRoundtrip(t *testing.T) {
+	tests := []uint32{0, 1, 127, 128, 255, 256, 16383, 16384, 1<<20 - 1}
+	for _, v := range tests {
+		encoded := EncodeVarint(v)
+		decoded, rest, err := DecodeVarint(encoded)
+		if err != nil {
+			t.Fatalf("varint %d: %v", v, err)
+		}
+		if decoded != v {
+			t.Fatalf("varint %d: got %d", v, decoded)
+		}
+		if len(rest) != 0 {
+			t.Fatalf("varint %d: %d trailing bytes", v, len(rest))
+		}
+	}
+}
+
+func TestReqRespPayloadRoundtrip(t *testing.T) {
+	data := []byte("state transition function data")
+	encoded := EncodeReqRespPayload(data)
+	decoded, err := DecodeReqRespPayload(encoded)
+	if err != nil {
+		t.Fatalf("decode: %v", err)
+	}
+	if !bytes.Equal(data, decoded) {
+		t.Fatal("payload roundtrip mismatch")
+	}
+}
+
+func TestResponseEncoding(t *testing.T) {
+	data := []byte("response payload")
+	encoded := EncodeResponse(RespSuccess, data)
+	if encoded[0] != RespSuccess {
+		t.Fatalf("expected code 0x00, got 0x%02x", encoded[0])
+	}
+
+	reader := bytes.NewReader(encoded)
+	code, decoded, err := DecodeResponse(reader)
+	if err != nil {
+		t.Fatalf("decode: %v", err)
+	}
+	if code != RespSuccess {
+		t.Fatalf("code: expected 0, got %d", code)
+	}
+	if !bytes.Equal(data, decoded) {
+		t.Fatal("response roundtrip mismatch")
+	}
+}
+
+// --- Topic tests ---
+
+func TestTopicStrings(t *testing.T) {
+	if BlockTopic() != "/leanconsensus/devnet0/block/ssz_snappy" {
+		t.Fatalf("block topic: %s", BlockTopic())
+	}
+	if AggregationTopic() != "/leanconsensus/devnet0/aggregation/ssz_snappy" {
+		t.Fatalf("aggregation topic: %s", AggregationTopic())
+	}
+	if AttestationSubnetTopic(0) != "/leanconsensus/devnet0/attestation_0/ssz_snappy" {
+		t.Fatalf("attestation subnet 0: %s", AttestationSubnetTopic(0))
+	}
+	if AttestationSubnetTopic(3) != "/leanconsensus/devnet0/attestation_3/ssz_snappy" {
+		t.Fatalf("attestation subnet 3: %s", AttestationSubnetTopic(3))
+	}
+}
+
+func TestSubnetID(t *testing.T) {
+	if SubnetID(0, 1) != 0 {
+		t.Fatal("subnet 0%1 != 0")
+	}
+	if SubnetID(5, 3) != 2 {
+		t.Fatal("subnet 5%3 != 2")
+	}
+	if SubnetID(7, 4) != 3 {
+		t.Fatal("subnet 7%4 != 3")
+	}
+}
+
+// --- Message ID tests ---
+
+func TestComputeMessageIDDeterministic(t *testing.T) {
+	topic := BlockTopic()
+	data := SnappyRawEncode([]byte("block data"))
+
+	id1 := ComputeMessageID(topic, data)
+	id2 := ComputeMessageID(topic, data)
+
+	if !bytes.Equal(id1, id2) {
+		t.Fatal("message IDs should be deterministic")
+	}
+	if len(id1) != 20 {
+		t.Fatalf("message ID should be 20 bytes, got %d", len(id1))
+	}
+}
+
+func TestComputeMessageIDDifferentTopics(t *testing.T) {
+	data := SnappyRawEncode([]byte("same data"))
+	id1 := ComputeMessageID(BlockTopic(), data)
+	id2 := ComputeMessageID(AggregationTopic(), data)
+
+	if bytes.Equal(id1, id2) {
+		t.Fatal("different topics should produce different IDs")
+	}
+}
+
+func TestComputeMessageIDInvalidSnappy(t *testing.T) {
+	id := ComputeMessageID(BlockTopic(), []byte{0xff, 0xfe, 0xfd})
+	if len(id) != 20 {
+		t.Fatalf("should still produce 20-byte ID, got %d", len(id))
+	}
+}
+
+// --- Status message tests ---
+
+func TestStatusMessageSSZRoundtrip(t *testing.T) {
+	status := &StatusMessage{
+		FinalizedRoot: [32]byte{0xab},
+		FinalizedSlot: 42,
+		HeadRoot:      [32]byte{0xcd},
+		HeadSlot:      100,
+	}
+	data := status.MarshalSSZ()
+	if len(data) != 80 {
+		t.Fatalf("status SSZ should be 80 bytes, got %d", len(data))
+	}
+
+	decoded := &StatusMessage{}
+	if err := decoded.UnmarshalSSZ(data); err != nil {
+		t.Fatalf("unmarshal: %v", err)
+	}
+	if decoded.FinalizedSlot != 42 || decoded.HeadSlot != 100 {
+		t.Fatal("status roundtrip mismatch")
+	}
+	if decoded.FinalizedRoot != status.FinalizedRoot {
+		t.Fatal("finalized root mismatch")
+	}
+	if decoded.HeadRoot != status.HeadRoot {
+		t.Fatal("head root mismatch")
+	}
+}
+
+func TestBlocksByRootRequestSSZRoundtrip(t *testing.T) {
+	roots := [][32]byte{
+		{0x01, 0x02, 0x03},
+		{0xaa, 0xbb, 0xcc},
+	}
+	encoded := EncodeBlocksByRootRequest(roots)
+
+	// SSZ container: 4-byte offset + 2 * 32 bytes = 68 bytes.
+	if len(encoded) != 68 {
+		t.Fatalf("expected 68 bytes, got %d", len(encoded))
+	}
+	// First 4 bytes should be offset = 4 (little-endian).
+	if encoded[0] != 4 || encoded[1] != 0 || encoded[2] != 0 || encoded[3] != 0 {
+		t.Fatalf("unexpected offset bytes: %v", encoded[:4])
+	}
+
+	decoded, err := DecodeBlocksByRootRequest(encoded)
+	if err != nil {
+		t.Fatalf("decode: %v", err)
+	}
+	if len(decoded) != 2 {
+		t.Fatalf("expected 2 roots, got %d", len(decoded))
+	}
+	if decoded[0] != roots[0] || decoded[1] != roots[1] {
+		t.Fatal("root mismatch")
+	}
+}
+
+func TestBlocksByRootRequestSingleRoot36Bytes(t *testing.T) {
+	// Simulate what other clients send: 1 root = 36 bytes (4-byte offset + 32 bytes).
+	roots := [][32]byte{{0xde, 0xad, 0xbe, 0xef}}
+	encoded := EncodeBlocksByRootRequest(roots)
+	if len(encoded) != 36 {
+		t.Fatalf("expected 36 bytes for single root, got %d", len(encoded))
+	}
+
+	decoded, err := DecodeBlocksByRootRequest(encoded)
+	if err != nil {
+		t.Fatalf("decode: %v", err)
+	}
+	if len(decoded) != 1 || decoded[0] != roots[0] {
+		t.Fatal("single root roundtrip mismatch")
+	}
+}
+
+// --- Peer store tests ---
+
+func TestPeerStoreAddRemove(t *testing.T) {
+	ps := NewPeerStore()
+	if ps.Count() != 0 {
+		t.Fatal("should start empty")
+	}
+
+	ps.Add("peer1")
+	ps.Add("peer2")
+	if ps.Count() != 2 {
+		t.Fatalf("expected 2, got %d", ps.Count())
+	}
+
+	ps.Remove("peer1")
+	if ps.Count() != 1 {
+		t.Fatalf("expected 1, got %d", ps.Count())
+	}
+}
+
+func TestPeerStoreRandomPeer(t *testing.T) {
+	ps := NewPeerStore()
+	ps.Add("peer1")
+	ps.Add("peer2")
+	ps.Add("peer3")
+
+	exclude := map[peer.ID]bool{"peer1": true, "peer2": true}
+	p := ps.RandomPeer(exclude)
+	if p != "peer3" {
+		t.Fatalf("expected peer3, got %s", p)
+	}
+}
+
+func TestPeerStoreRandomPeerNoneAvailable(t *testing.T) {
+	ps := NewPeerStore()
+	p := ps.RandomPeer(nil)
+	if p != "" {
+		t.Fatal("should return empty when no peers")
+	}
+}
+
+// --- Bootnode loading ---
+
+func TestLoadBootnodesEmpty(t *testing.T) {
+	tmpFile := t.TempDir() + "/nodes.yaml"
+	os.WriteFile(tmpFile, []byte("# empty\n"), 0644)
+
+	addrs, err := LoadBootnodes(tmpFile)
+	if err != nil {
+		t.Fatalf("should not error on empty: %v", err)
+	}
+	if len(addrs) != 0 {
+		t.Fatalf("expected 0 addrs, got %d", len(addrs))
+	}
+}
+
+func TestLoadBootnodesWithYAMLList(t *testing.T) {
+	content := `- "/ip4/127.0.0.1/udp/9000/quic-v1/p2p/12D3KooWDpJ7As7BWAwRMfu1VU2WCqNjvq387JEYKDBj4kx6nXTN"
+- "/ip4/127.0.0.1/udp/9001/quic-v1/p2p/12D3KooWLc4yBi3vYo4udihGu2HFxCWMWCdJoXYMFNp2CX9otY5A"
+`
+	tmpFile := t.TempDir() + "/nodes.yaml"
+	os.WriteFile(tmpFile, []byte(content), 0644)
+
+	addrs, err := LoadBootnodes(tmpFile)
+	if err != nil {
+		t.Fatalf("load: %v", err)
+	}
+	if len(addrs) != 2 {
+		t.Fatalf("expected 2 addrs, got %d", len(addrs))
+	}
+}
diff --git a/p2p/peers.go b/p2p/peers.go
new file mode 100644
index 0000000..fc21998
--- /dev/null
+++ b/p2p/peers.go
@@ -0,0 +1,257 @@
+package p2p
+
+import (
+	"context"
+	"fmt"
+	"math/rand"
+	"sync"
+	"time"
+
+	"github.com/libp2p/go-libp2p/core/peer"
+	"github.com/multiformats/go-multiaddr"
+
+	"github.com/geanlabs/gean/logger"
+	"github.com/geanlabs/gean/types"
+)
+
+// Retry parameters per leanSpec (L56-59).
+const (
+	MaxFetchRetries    = 10
+	InitialBackoffMs   = 5
+	BackoffMultiplier  = 2
+	BootnodeRedialSecs = 12
+	// MaxBlocksPerRequest matches leanSpec MAX_BLOCKS_PER_REQUEST.
+	MaxBlocksPerRequest = 10
+)
+
+// PeerStore tracks connected peers.
+type PeerStore struct {
+	mu    sync.RWMutex
+	peers map[peer.ID]bool
+}
+
+// NewPeerStore creates an empty peer store.
+func NewPeerStore() *PeerStore {
+	return &PeerStore{peers: make(map[peer.ID]bool)}
+}
+
+// Add registers a connected peer.
+func (ps *PeerStore) Add(id peer.ID) {
+	ps.mu.Lock()
+	defer ps.mu.Unlock()
+	ps.peers[id] = true
+}
+
+// Remove unregisters a peer.
+func (ps *PeerStore) Remove(id peer.ID) {
+	ps.mu.Lock()
+	defer ps.mu.Unlock()
+	delete(ps.peers, id)
+}
+
+// Count returns the number of connected peers.
+func (ps *PeerStore) Count() int {
+	ps.mu.RLock()
+	defer ps.mu.RUnlock()
+	return len(ps.peers)
+}
+
+// RandomPeer returns a random connected peer, excluding the given set.
+// Returns an empty peer.ID if none are available.
+func (ps *PeerStore) RandomPeer(exclude map[peer.ID]bool) peer.ID {
+	ps.mu.RLock()
+	defer ps.mu.RUnlock()
+
+	var candidates []peer.ID
+	for id := range ps.peers {
+		if !exclude[id] {
+			candidates = append(candidates, id)
+		}
+	}
+	if len(candidates) == 0 {
+		return ""
+	}
+	return candidates[rand.Intn(len(candidates))]
+}
+
+// AllPeers returns all connected peer IDs.
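The `RandomPeer` exclusion logic above is the backbone of the retry loops later in this file: build the candidate list minus the excluded set, then pick uniformly at random. A minimal stdlib sketch of the same selection, with plain strings standing in for `peer.ID` (names here are illustrative, not from the codebase):

```go
package main

import (
	"fmt"
	"math/rand"
)

// randomPeer mirrors PeerStore.RandomPeer: collect every peer not in the
// exclude set, then pick one uniformly at random ("" when none remain).
func randomPeer(peers map[string]bool, exclude map[string]bool) string {
	var candidates []string
	for id := range peers {
		if !exclude[id] {
			candidates = append(candidates, id)
		}
	}
	if len(candidates) == 0 {
		return ""
	}
	return candidates[rand.Intn(len(candidates))]
}

func main() {
	peers := map[string]bool{"peer1": true, "peer2": true, "peer3": true}
	exclude := map[string]bool{"peer1": true, "peer2": true}
	fmt.Println(randomPeer(peers, exclude)) // only peer3 is eligible
}
```

Marking a peer failed is just adding it to the exclude map, so each retry naturally rotates to an untried peer until the pool is exhausted.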
+func (ps *PeerStore) AllPeers() []peer.ID {
+	ps.mu.RLock()
+	defer ps.mu.RUnlock()
+	ids := make([]peer.ID, 0, len(ps.peers))
+	for id := range ps.peers {
+		ids = append(ids, id)
+	}
+	return ids
+}
+
+// ConnectBootnodes connects to a list of bootnode multiaddrs.
+func (h *Host) ConnectBootnodes(ctx context.Context, addrs []multiaddr.Multiaddr) {
+	for _, addr := range addrs {
+		peerInfo, err := peer.AddrInfoFromP2pAddr(addr)
+		if err != nil {
+			logger.Warn(logger.Network, "invalid bootnode addr %s: %v", addr, err)
+			continue
+		}
+		if err := h.host.Connect(ctx, *peerInfo); err != nil {
+			logger.Warn(logger.Network, "bootnode connect failed %s: %v", addr, err)
+		} else {
+			h.peerStore.Add(peerInfo.ID)
+			logger.Info(logger.Network, "connected to bootnode %s", peerInfo.ID.ShortString())
+		}
+	}
+}
+
+// StartBootnodeRedial periodically reconnects to bootnodes if disconnected.
+func (h *Host) StartBootnodeRedial(ctx context.Context, addrs []multiaddr.Multiaddr) {
+	go func() {
+		ticker := time.NewTicker(BootnodeRedialSecs * time.Second)
+		defer ticker.Stop()
+		for {
+			select {
+			case <-ctx.Done():
+				return
+			case <-ticker.C:
+				for _, addr := range addrs {
+					peerInfo, err := peer.AddrInfoFromP2pAddr(addr)
+					if err != nil {
+						continue
+					}
+					if h.host.Network().Connectedness(peerInfo.ID) != 1 { // not connected (1 == network.Connected)
+						if err := h.host.Connect(ctx, *peerInfo); err == nil {
+							h.peerStore.Add(peerInfo.ID)
+							logger.Info(logger.Network, "reconnected to bootnode %s", peerInfo.ID.ShortString())
+						}
+					}
+				}
+			}
+		}
+	}()
+}
+
+// FetchBlocksByRootWithRetry fetches blocks with exponential backoff retry.
+// Backoff: 5, 10, 20, 40, 80, 160, 320, 640, 1280, 2560 ms.
+// Random peer selection, excluding previously-failed peers per root.
+func (h *Host) FetchBlocksByRootWithRetry(ctx context.Context, roots [][32]byte) ([]*SignedBlockWithAttestationResult, error) {
+	var results []*SignedBlockWithAttestationResult
+
+	for _, root := range roots {
+		block, err := h.fetchSingleBlockWithRetry(ctx, root)
+		results = append(results, &SignedBlockWithAttestationResult{
+			Root:  root,
+			Block: block,
+			Err:   err,
+		})
+	}
+
+	return results, nil
+}
+
+// FetchBlocksByRootBatchWithRetry fetches up to MaxBlocksPerRequest roots
+// from a single peer in one request. Retries up to MaxFetchRetries times,
+// rotating peers and excluding previously-failed ones.
+//
+// Returns the blocks the peer delivered (may be fewer than requested if the
+// peer doesn't have all of them) and the set of roots that were not delivered
+// after exhausting retries.
+func (h *Host) FetchBlocksByRootBatchWithRetry(ctx context.Context, roots [][32]byte) ([]*types.SignedBlockWithAttestation, [][32]byte, error) {
+	if len(roots) == 0 {
+		return nil, nil, nil
+	}
+	if len(roots) > MaxBlocksPerRequest {
+		roots = roots[:MaxBlocksPerRequest]
+	}
+
+	excluded := make(map[peer.ID]bool)
+	backoff := time.Duration(InitialBackoffMs) * time.Millisecond
+
+	for attempt := 0; attempt < MaxFetchRetries; attempt++ {
+		peerID := h.peerStore.RandomPeer(excluded)
+		if peerID == "" {
+			return nil, roots, fmt.Errorf("no peers available for batch block fetch")
+		}
+
+		blocks, err := h.FetchBlocksByRoot(ctx, peerID, roots)
+		if err == nil && len(blocks) > 0 {
+			missing := computeMissingRoots(roots, blocks)
+			return blocks, missing, nil
+		}
+
+		excluded[peerID] = true
+		reason := "peer returned no blocks"
+		if err != nil {
+			reason = err.Error()
+		}
+		logger.Warn(logger.Network, "batch block fetch attempt %d/%d failed for %d root(s) peer=%s reason=%s",
+			attempt+1, MaxFetchRetries, len(roots), peerID, reason)
+
+		select {
+		case <-ctx.Done():
+			return nil, roots, ctx.Err()
+		case <-time.After(backoff):
+			backoff *= BackoffMultiplier
+		}
+	}
+
+	return nil, roots, fmt.Errorf("batch block fetch failed after %d retries for %d roots", MaxFetchRetries, len(roots))
+}
+
+// computeMissingRoots returns the roots that the peer did not deliver.
+func computeMissingRoots(requested [][32]byte, delivered []*types.SignedBlockWithAttestation) [][32]byte {
+	deliveredRoots := make(map[[32]byte]bool, len(delivered))
+	for _, b := range delivered {
+		root, err := b.Block.Block.HashTreeRoot()
+		if err != nil {
+			continue
+		}
+		deliveredRoots[root] = true
+	}
+	var missing [][32]byte
+	for _, r := range requested {
+		if !deliveredRoots[r] {
+			missing = append(missing, r)
+		}
+	}
+	return missing
+}
+
+// SignedBlockWithAttestationResult holds the result of fetching a single block.
+type SignedBlockWithAttestationResult struct {
+	Root  [32]byte
+	Block []*types.SignedBlockWithAttestation
+	Err   error
+}
+
+func (h *Host) fetchSingleBlockWithRetry(ctx context.Context, root [32]byte) ([]*types.SignedBlockWithAttestation, error) {
+	excluded := make(map[peer.ID]bool)
+	backoff := time.Duration(InitialBackoffMs) * time.Millisecond
+
+	for attempt := 0; attempt < MaxFetchRetries; attempt++ {
+		peerID := h.peerStore.RandomPeer(excluded)
+		if peerID == "" {
+			return nil, fmt.Errorf("no peers available for block fetch")
+		}
+
+		blocks, err := h.FetchBlocksByRoot(ctx, peerID, [][32]byte{root})
+		if err == nil && len(blocks) > 0 {
+			return blocks, nil
+		}
+
+		excluded[peerID] = true
+		reason := "peer returned no blocks"
+		if err != nil {
+			reason = err.Error()
+		}
+		logger.Warn(logger.Network, "block fetch attempt %d/%d failed for block_root=0x%x peer=%s reason=%s",
+			attempt+1, MaxFetchRetries, root, peerID, reason)
+
+		select {
+		case <-ctx.Done():
+			return nil, ctx.Err()
+		case <-time.After(backoff):
+			backoff *= BackoffMultiplier
+		}
+	}
+
+	return nil, fmt.Errorf("block fetch failed after %d retries for %x", MaxFetchRetries, root)
+}
diff --git a/p2p/publish.go b/p2p/publish.go
new file mode 100644
index 0000000..434fe41
--- /dev/null
+++ b/p2p/publish.go
@@ -0,0 +1,49 @@
+package p2p
+
+import (
+	"context"
+	"fmt"
+
+	"github.com/geanlabs/gean/types"
+)
+
+// PublishBlock publishes a signed block to the block gossipsub topic.
+// SSZ encode -> snappy raw compress -> publish.
+func (h *Host) PublishBlock(ctx context.Context, block *types.SignedBlockWithAttestation) error {
+	data, err := block.MarshalSSZ()
+	if err != nil {
+		return fmt.Errorf("marshal block: %w", err)
+	}
+	return h.publishToTopic(ctx, BlockTopic(), data)
+}
+
+// PublishAttestation publishes a signed attestation to the appropriate subnet topic.
+// Subnet = validator_id % committee_count.
+func (h *Host) PublishAttestation(ctx context.Context, att *types.SignedAttestation, committeeCount uint64) error {
+	data, err := att.MarshalSSZ()
+	if err != nil {
+		return fmt.Errorf("marshal attestation: %w", err)
+	}
+	subnet := SubnetID(att.ValidatorID, committeeCount)
+	topic := AttestationSubnetTopic(subnet)
+	return h.publishToTopic(ctx, topic, data)
+}
+
+// PublishAggregatedAttestation publishes an aggregated attestation to the aggregation topic.
+func (h *Host) PublishAggregatedAttestation(ctx context.Context, agg *types.SignedAggregatedAttestation) error {
+	data, err := agg.MarshalSSZ()
+	if err != nil {
+		return fmt.Errorf("marshal aggregation: %w", err)
+	}
+	return h.publishToTopic(ctx, AggregationTopic(), data)
+}
+
+// publishToTopic snappy-compresses already-SSZ-encoded data and publishes it to a topic.
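`PublishAttestation` routes each attestation by `validator_id % committee_count`, so validators spread round-robin across the attestation subnets. A quick stdlib illustration of that mapping (validator and subnet counts here are made up for the example):

```go
package main

import "fmt"

// subnetID mirrors SubnetID in topics.go: a validator's attestations always
// land on the subnet given by its index modulo the committee count.
func subnetID(validatorID, committeeCount uint64) uint64 {
	return validatorID % committeeCount
}

func main() {
	// With 3 subnets, validators 0..5 map to subnets 0,1,2,0,1,2.
	for v := uint64(0); v < 6; v++ {
		fmt.Printf("validator %d -> attestation_%d\n", v, subnetID(v, 3))
	}
}
```

Because the mapping is deterministic, any node can compute which subnet topic to subscribe to for a given validator without coordination.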
+func (h *Host) publishToTopic(ctx context.Context, topic string, sszData []byte) error {
+	compressed := SnappyRawEncode(sszData)
+	t, ok := h.topics[topic]
+	if !ok {
+		return fmt.Errorf("not subscribed to topic: %s", topic)
+	}
+	return t.Publish(ctx, compressed)
+}
diff --git a/p2p/reqresp.go b/p2p/reqresp.go
new file mode 100644
index 0000000..ed917bc
--- /dev/null
+++ b/p2p/reqresp.go
@@ -0,0 +1,267 @@
+package p2p
+
+import (
+	"bytes"
+	"context"
+	"encoding/binary"
+	"fmt"
+	"io"
+	"time"
+
+	"github.com/libp2p/go-libp2p/core/network"
+	"github.com/libp2p/go-libp2p/core/peer"
+	"github.com/libp2p/go-libp2p/core/protocol"
+
+	"github.com/geanlabs/gean/logger"
+	"github.com/geanlabs/gean/types"
+)
+
+// Protocol IDs per leanSpec.
+const (
+	StatusProtocol       = "/leanconsensus/req/status/1/ssz_snappy"
+	BlocksByRootProtocol = "/leanconsensus/req/blocks_by_root/1/ssz_snappy"
+)
+
+// Request/response timeout.
+const ReqRespTimeout = 15 * time.Second
+
+// StatusMessage is exchanged on peer connection.
+// SSZ wire format: finalized.root (32) | finalized.slot (8) | head.root (32) | head.slot (8) = 80 bytes.
+type StatusMessage struct {
+	FinalizedRoot [32]byte
+	FinalizedSlot uint64
+	HeadRoot      [32]byte
+	HeadSlot      uint64
+}
+
+// MarshalSSZ encodes StatusMessage to SSZ (80 bytes): two Checkpoints,
+// finalized then head, where Checkpoint SSZ is { root: H256, slot: u64 }.
+func (s *StatusMessage) MarshalSSZ() []byte {
+	buf := make([]byte, 80)
+	copy(buf[0:32], s.FinalizedRoot[:])
+	putUint64LE(buf[32:40], s.FinalizedSlot)
+	copy(buf[40:72], s.HeadRoot[:])
+	putUint64LE(buf[72:80], s.HeadSlot)
+	return buf
+}
+
+// UnmarshalSSZ decodes StatusMessage from SSZ.
+func (s *StatusMessage) UnmarshalSSZ(buf []byte) error {
+	if len(buf) < 80 {
+		return fmt.Errorf("status message too short: %d, need 80", len(buf))
+	}
+	copy(s.FinalizedRoot[:], buf[0:32])
+	s.FinalizedSlot = getUint64LE(buf[32:40])
+	copy(s.HeadRoot[:], buf[40:72])
+	s.HeadSlot = getUint64LE(buf[72:80])
+	return nil
+}
+
+func putUint64LE(buf []byte, v uint64) {
+	for i := 0; i < 8; i++ {
+		buf[i] = byte(v >> (i * 8))
+	}
+}
+
+func getUint64LE(buf []byte) uint64 {
+	var v uint64
+	for i := 0; i < 8; i++ {
+		v |= uint64(buf[i]) << (i * 8)
+	}
+	return v
+}
+
+// RegisterReqRespHandlers registers stream handlers for status and blocks_by_root.
+func (h *Host) RegisterReqRespHandlers(statusFn func() *StatusMessage, blockByRootFn func(root [32]byte) *types.SignedBlockWithAttestation) {
+	// Status handler.
+	h.host.SetStreamHandler(protocol.ID(StatusProtocol), func(s network.Stream) {
+		defer s.Close()
+		handleStatusRequest(s, statusFn)
+	})
+
+	// BlocksByRoot handler.
+	h.host.SetStreamHandler(protocol.ID(BlocksByRootProtocol), func(s network.Stream) {
+		defer s.Close()
+		handleBlocksByRootRequest(s, blockByRootFn)
+	})
+}
+
+func handleStatusRequest(s network.Stream, statusFn func() *StatusMessage) {
+	// Read request payload.
+	reqBuf, err := io.ReadAll(io.LimitReader(s, int64(MaxCompressedPayloadSize)))
+	if err != nil {
+		logger.Warn(logger.Network, "status: read request failed: %v", err)
+		return
+	}
+
+	if len(reqBuf) > 0 {
+		// Decode peer's status (optional — we respond regardless).
+		if payload, err := DecodeReqRespPayload(reqBuf); err == nil {
+			peerStatus := &StatusMessage{}
+			peerStatus.UnmarshalSSZ(payload)
+			logger.Info(logger.Network, "status: peer at slot %d finalized=%d", peerStatus.HeadSlot, peerStatus.FinalizedSlot)
+		}
+	}
+
+	// Send our status.
+	status := statusFn()
+	resp := EncodeResponse(RespSuccess, status.MarshalSSZ())
+	s.Write(resp)
+}
+
+func handleBlocksByRootRequest(s network.Stream, blockByRootFn func(root [32]byte) *types.SignedBlockWithAttestation) {
+	reqBuf, err := io.ReadAll(io.LimitReader(s, int64(MaxCompressedPayloadSize)))
+	if err != nil {
+		logger.Warn(logger.Network, "blocks_by_root: read request failed: %v", err)
+		return
+	}
+
+	payload, err := DecodeReqRespPayload(reqBuf)
+	if err != nil {
+		logger.Error(logger.Network, "blocks_by_root: decode request failed: %v", err)
+		return
+	}
+
+	roots, err := DecodeBlocksByRootRequest(payload)
+	if err != nil {
+		logger.Error(logger.Network, "blocks_by_root: %v", err)
+		return
+	}
+
+	logger.Info(logger.Network, "blocks_by_root: peer requested %d roots", len(roots))
+
+	for _, root := range roots {
+		block := blockByRootFn(root)
+		if block == nil {
+			continue // silently skip missing blocks (per spec)
+		}
+
+		blockData, err := block.MarshalSSZ()
+		if err != nil {
+			s.Write(EncodeResponse(RespServerError, []byte("marshal failed")))
+			continue
+		}
+
+		s.Write(EncodeResponse(RespSuccess, blockData))
+	}
+}
+
+// SendStatusRequest sends a status request to a peer and returns their status.
+func (h *Host) SendStatusRequest(ctx context.Context, peerID peer.ID, ourStatus *StatusMessage) (*StatusMessage, error) {
+	ctx, cancel := context.WithTimeout(ctx, ReqRespTimeout)
+	defer cancel()
+
+	s, err := h.host.NewStream(ctx, peerID, protocol.ID(StatusProtocol))
+	if err != nil {
+		return nil, fmt.Errorf("open status stream: %w", err)
+	}
+	defer s.Close()
+
+	// Send our status as the request payload.
+	reqPayload := EncodeReqRespPayload(ourStatus.MarshalSSZ())
+	if _, err := s.Write(reqPayload); err != nil {
+		return nil, fmt.Errorf("write status request: %w", err)
+	}
+	s.CloseWrite()
+
+	// Read response.
+	code, respData, err := DecodeResponse(s)
+	if err != nil {
+		return nil, fmt.Errorf("read status response: %w", err)
+	}
+	if code != RespSuccess {
+		return nil, fmt.Errorf("status response error: code=%d", code)
+	}
+
+	peerStatus := &StatusMessage{}
+	if err := peerStatus.UnmarshalSSZ(respData); err != nil {
+		return nil, fmt.Errorf("unmarshal status: %w", err)
+	}
+	return peerStatus, nil
+}
+
+// FetchBlocksByRoot requests blocks from a peer by their roots.
+// Returns successfully decoded blocks (partial success allowed).
+func (h *Host) FetchBlocksByRoot(ctx context.Context, peerID peer.ID, roots [][32]byte) ([]*types.SignedBlockWithAttestation, error) {
+	ctx, cancel := context.WithTimeout(ctx, ReqRespTimeout)
+	defer cancel()
+
+	s, err := h.host.NewStream(ctx, peerID, protocol.ID(BlocksByRootProtocol))
+	if err != nil {
+		return nil, fmt.Errorf("open blocks_by_root stream: %w", err)
+	}
+	defer s.Close()
+
+	// Encode as SSZ container: BlocksByRootRequest { roots: List[Root, 1024] }.
+	reqPayload := EncodeReqRespPayload(EncodeBlocksByRootRequest(roots))
+	if _, err := s.Write(reqPayload); err != nil {
+		return nil, fmt.Errorf("write blocks request: %w", err)
+	}
+	s.CloseWrite()
+
+	// Read multi-chunk response.
+	var blocks []*types.SignedBlockWithAttestation
+	respBuf, err := io.ReadAll(io.LimitReader(s, int64(MaxCompressedPayloadSize)*int64(len(roots))))
+	if err != nil {
+		return nil, fmt.Errorf("read blocks response: %w", err)
+	}
+
+	reader := bytes.NewReader(respBuf)
+	for reader.Len() > 0 {
+		code, blockData, err := DecodeResponse(reader)
+		if err != nil {
+			break // end of stream
+		}
+		if code != RespSuccess || blockData == nil {
+			continue // skip errors, partial success
+		}
+		block := &types.SignedBlockWithAttestation{}
+		if err := block.UnmarshalSSZ(blockData); err != nil {
+			continue
+		}
+		blocks = append(blocks, block)
+	}
+
+	return blocks, nil
+}
+
+// EncodeBlocksByRootRequest encodes roots as SSZ container: BlocksByRootRequest { roots: List[Root, 1024] }.
+// SSZ container with one variable-length field: 4-byte offset + concatenated roots.
+// Matches leanSpec networking/reqresp/message.py BlocksByRootRequest.
+func EncodeBlocksByRootRequest(roots [][32]byte) []byte {
+	rootsData := make([]byte, len(roots)*32)
+	for i, root := range roots {
+		copy(rootsData[i*32:], root[:])
+	}
+	// SSZ container: fixed part is 4-byte offset pointing past fixed section.
+	container := make([]byte, 4+len(rootsData))
+	binary.LittleEndian.PutUint32(container[:4], 4)
+	copy(container[4:], rootsData)
+	return container
+}
+
+// DecodeBlocksByRootRequest decodes an SSZ container: BlocksByRootRequest { roots: List[Root, 1024] }.
+// Returns the extracted roots.
+func DecodeBlocksByRootRequest(payload []byte) ([][32]byte, error) {
+	if len(payload) < 4 {
+		return nil, fmt.Errorf("payload too short: %d bytes", len(payload))
+	}
+
+	offset := binary.LittleEndian.Uint32(payload[:4])
+	if offset != 4 {
+		return nil, fmt.Errorf("unexpected SSZ offset %d, expected 4", offset)
+	}
+
+	rootsData := payload[4:]
+	if len(rootsData)%32 != 0 {
+		return nil, fmt.Errorf("roots data size %d not multiple of 32", len(rootsData))
+	}
+
+	numRoots := len(rootsData) / 32
+	roots := make([][32]byte, numRoots)
+	for i := 0; i < numRoots; i++ {
+		copy(roots[i][:], rootsData[i*32:(i+1)*32])
+	}
+	return roots, nil
+}
diff --git a/p2p/topics.go b/p2p/topics.go
new file mode 100644
index 0000000..c079162
--- /dev/null
+++ b/p2p/topics.go
@@ -0,0 +1,40 @@
+package p2p
+
+import "fmt"
+
+// Network name per leanSpec (L192).
+const NetworkName = "devnet0"
+
+// Topic kind constants per leanSpec.
+const (
+	BlockTopicKind       = "block"
+	AttestationTopicKind = "attestation"
+	AggregationTopicKind = "aggregation"
+)
+
+// TopicString builds a gossipsub topic string.
+// Format: /leanconsensus/{network}/{kind}/ssz_snappy
+func TopicString(kind string) string {
+	return fmt.Sprintf("/leanconsensus/%s/%s/ssz_snappy", NetworkName, kind)
+}
+
+// BlockTopic returns the block gossipsub topic.
+func BlockTopic() string {
+	return TopicString(BlockTopicKind)
+}
+
+// AttestationSubnetTopic returns the attestation topic for a given subnet.
+// Format: /leanconsensus/{network}/attestation_{subnet_id}/ssz_snappy
+func AttestationSubnetTopic(subnetID uint64) string {
+	return TopicString(fmt.Sprintf("%s_%d", AttestationTopicKind, subnetID))
+}
+
+// AggregationTopic returns the aggregation gossipsub topic.
+func AggregationTopic() string {
+	return TopicString(AggregationTopicKind)
+}
+
+// SubnetID computes the subnet for a validator.
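The single `TopicString` template in topics.go covers all three gossip kinds; a standalone sketch of the strings it produces, matching what `TestTopicStrings` asserts (the helper name is illustrative):

```go
package main

import "fmt"

// topicString mirrors TopicString for the devnet0 network:
// /leanconsensus/{network}/{kind}/ssz_snappy
func topicString(kind string) string {
	return fmt.Sprintf("/leanconsensus/%s/%s/ssz_snappy", "devnet0", kind)
}

func main() {
	fmt.Println(topicString("block"))
	fmt.Println(topicString("aggregation"))
	fmt.Println(topicString(fmt.Sprintf("attestation_%d", 3))) // per-subnet topic
}
```

Keeping the network name in the topic string means nodes on different devnets never exchange gossip even if they connect, since gossipsub only delivers messages for exactly-matching topic strings.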
+func SubnetID(validatorID, committeeCount uint64) uint64 {
+	return validatorID % committeeCount
+}
diff --git a/scripts/gen_node_keys/main.go b/scripts/gen_node_keys/main.go
deleted file mode 100644
index dd0cbcf..0000000
--- a/scripts/gen_node_keys/main.go
+++ /dev/null
@@ -1,67 +0,0 @@
-package main
-
-import (
-	"crypto/rand"
-	"flag"
-	"fmt"
-	"os"
-
-	"github.com/libp2p/go-libp2p/core/crypto"
-	"github.com/libp2p/go-libp2p/core/peer"
-	"gopkg.in/yaml.v3"
-)
-
-type bootnodeEntry struct {
-	Multiaddr string `yaml:"multiaddr"`
-}
-
-func main() {
-	nodes := flag.Int("nodes", 3, "Number of node keys to generate")
-	ip := flag.String("ip", "127.0.0.1", "IP to embed in generated multiaddrs")
-	basePort := flag.Int("base-port", 9000, "Base TCP port for node multiaddrs (nodeN uses base-port+N)")
-	outPath := flag.String("out", "nodes.yaml", "Output path for nodes.yaml")
-	flag.Parse()
-
-	entries := make([]bootnodeEntry, 0, *nodes)
-
-	for i := 0; i < *nodes; i++ {
-		filename := fmt.Sprintf("node%d.key", i)
-
-		priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
-		if err != nil {
-			fmt.Fprintf(os.Stderr, "failed to generate secp256k1 key for node%d: %v\n", i, err)
-			os.Exit(1)
-		}
-
-		bytes, err := crypto.MarshalPrivateKey(priv)
-		if err != nil {
-			fmt.Fprintf(os.Stderr, "failed to marshal private key for node%d: %v\n", i, err)
-			os.Exit(1)
-		}
-
-		if err := os.WriteFile(filename, bytes, 0600); err != nil {
-			fmt.Fprintf(os.Stderr, "failed to write %s: %v\n", filename, err)
-			os.Exit(1)
-		}
-
-		pid, err := peer.IDFromPrivateKey(priv)
-		if err != nil {
-			fmt.Fprintf(os.Stderr, "failed to derive peer ID for node%d: %v\n", i, err)
-			os.Exit(1)
-		}
-
-		addr := fmt.Sprintf("/ip4/%s/tcp/%d/p2p/%s", *ip, *basePort+i, pid.String())
-		entries = append(entries, bootnodeEntry{Multiaddr: addr})
-		fmt.Fprintf(os.Stderr, "node%d: %s\n", i, pid.String())
-	}
-
-	yamlBytes, err := yaml.Marshal(entries)
-	if err != nil {
-		fmt.Fprintf(os.Stderr, "failed to marshal nodes.yaml: %v\n", err)
-		os.Exit(1)
-	}
-	if err := os.WriteFile(*outPath, yamlBytes, 0644); err != nil {
-		fmt.Fprintf(os.Stderr, "failed to write %s: %v\n", *outPath, err)
-		os.Exit(1)
-	}
-}
diff --git a/spectests/converters.go b/spectests/converters.go
deleted file mode 100644
index 501d903..0000000
--- a/spectests/converters.go
+++ /dev/null
@@ -1,177 +0,0 @@
-package spectests
-
-import (
-	"github.com/geanlabs/gean/chain/statetransition"
-	"github.com/geanlabs/gean/types"
-)
-
-// convertState converts a fixture JSON state to a domain State.
-func convertState(fs FixtureState) *types.State {
-	config := &types.Config{GenesisTime: fs.Config.GenesisTime}
-
-	header := &types.BlockHeader{
-		Slot:          fs.LatestBlockHeader.Slot,
-		ProposerIndex: fs.LatestBlockHeader.ProposerIndex,
-		ParentRoot:    [32]byte(fs.LatestBlockHeader.ParentRoot),
-		StateRoot:     [32]byte(fs.LatestBlockHeader.StateRoot),
-		BodyRoot:      [32]byte(fs.LatestBlockHeader.BodyRoot),
-	}
-
-	latestJustified := &types.Checkpoint{
-		Root: [32]byte(fs.LatestJustified.Root),
-		Slot: fs.LatestJustified.Slot,
-	}
-	latestFinalized := &types.Checkpoint{
-		Root: [32]byte(fs.LatestFinalized.Root),
-		Slot: fs.LatestFinalized.Slot,
-	}
-
-	hashes := make([][32]byte, len(fs.HistoricalBlockHashes.Data))
-	for i, h := range fs.HistoricalBlockHashes.Data {
-		hashes[i] = [32]byte(h)
-	}
-
-	justifiedSlots := buildBitlist(fs.JustifiedSlots.Data)
-
-	validators := make([]*types.Validator, len(fs.Validators.Data))
-	for i, v := range fs.Validators.Data {
-		validators[i] = &types.Validator{
-			Pubkey: [52]byte(v.Pubkey),
-			Index:  v.Index,
-		}
-	}
-
-	justificationsRoots := make([][32]byte, len(fs.JustificationsRoots.Data))
-	for i, r := range fs.JustificationsRoots.Data {
-		justificationsRoots[i] = [32]byte(r)
-	}
-
-	justificationsValidators := buildBoolBitlist(fs.JustificationsValidators.Data)
-
-	return &types.State{
-		Config:                   config,
-		Slot:                     fs.Slot,
-		LatestBlockHeader:        header,
-		LatestJustified:          latestJustified,
-		LatestFinalized:          latestFinalized,
-		HistoricalBlockHashes:    hashes,
-		JustifiedSlots:           justifiedSlots,
-		Validators:               validators,
-		JustificationsRoots:      justificationsRoots,
-		JustificationsValidators: justificationsValidators,
-	}
-}
-
-// convertBlock converts a fixture JSON block to a domain Block.
-func convertBlock(fb FixtureBlock) *types.Block {
-	atts := make([]*types.AggregatedAttestation, len(fb.Body.Attestations.Data))
-	for i, a := range fb.Body.Attestations.Data {
-		atts[i] = convertAggregatedAttestation(a)
-	}
-	return &types.Block{
-		Slot:          fb.Slot,
-		ProposerIndex: fb.ProposerIndex,
-		ParentRoot:    [32]byte(fb.ParentRoot),
-		StateRoot:     [32]byte(fb.StateRoot),
-		Body:          &types.BlockBody{Attestations: atts},
-	}
-}
-
-// convertAttestation converts a fixture attestation to a domain Attestation.
-func convertAttestation(fa FixtureAttestation) *types.Attestation {
-	return &types.Attestation{
-		ValidatorID: fa.ValidatorID,
-		Data: &types.AttestationData{
-			Slot: fa.Data.Slot,
-			Head: &types.Checkpoint{
-				Root: [32]byte(fa.Data.Head.Root),
-				Slot: fa.Data.Head.Slot,
-			},
-			Target: &types.Checkpoint{
-				Root: [32]byte(fa.Data.Target.Root),
-				Slot: fa.Data.Target.Slot,
-			},
-			Source: &types.Checkpoint{
-				Root: [32]byte(fa.Data.Source.Root),
-				Slot: fa.Data.Source.Slot,
-			},
-		},
-	}
-}
-
-// convertSignedAttestation converts a fixture signed attestation to a domain SignedAttestation.
-// Uses a zero signature since fixture tests skip signature verification.
-func convertSignedAttestation(fa FixtureSignedAttestation) *types.SignedAttestation {
-	return &types.SignedAttestation{
-		ValidatorID: fa.ValidatorID,
-		Message: &types.AttestationData{
-			Slot: fa.Data.Slot,
-			Head: &types.Checkpoint{
-				Root: [32]byte(fa.Data.Head.Root),
-				Slot: fa.Data.Head.Slot,
-			},
-			Target: &types.Checkpoint{
-				Root: [32]byte(fa.Data.Target.Root),
-				Slot: fa.Data.Target.Slot,
-			},
-			Source: &types.Checkpoint{
-				Root: [32]byte(fa.Data.Source.Root),
-				Slot: fa.Data.Source.Slot,
-			},
-		},
-	}
-}
-
-func convertAggregatedAttestation(fa FixtureAggregatedAttestation) *types.AggregatedAttestation {
-	bits := []byte{0x01} // empty bitlist with sentinel
-	if fa.AggregationBits != nil {
-		bits = buildBoolBitlist(fa.AggregationBits.Data)
-	} else if fa.ValidatorID != nil {
-		bits = buildSingleBitlist(*fa.ValidatorID)
-	}
-
-	return &types.AggregatedAttestation{
-		AggregationBits: bits,
-		Data: &types.AttestationData{
-			Slot: fa.Data.Slot,
-			Head: &types.Checkpoint{
-				Root: [32]byte(fa.Data.Head.Root),
-				Slot: fa.Data.Head.Slot,
-			},
-			Target: &types.Checkpoint{
-				Root: [32]byte(fa.Data.Target.Root),
-				Slot: fa.Data.Target.Slot,
-			},
-			Source: &types.Checkpoint{
-				Root: [32]byte(fa.Data.Source.Root),
-				Slot: fa.Data.Source.Slot,
-			},
-		},
-	}
-}
-
-// buildBitlist converts a slice of uint64 (0 or 1 values) to an SSZ bitlist.
-func buildBitlist(bits []uint64) []byte {
-	bl := []byte{0x01} // empty bitlist with sentinel
-	for _, b := range bits {
-		bl = statetransition.AppendBit(bl, b != 0)
-	}
-	return bl
-}
-
-// buildBoolBitlist converts a slice of bools to an SSZ bitlist.
-func buildBoolBitlist(bits []bool) []byte {
-	bl := []byte{0x01} // empty bitlist with sentinel
-	for _, b := range bits {
-		bl = statetransition.AppendBit(bl, b)
-	}
-	return bl
-}
-
-func buildSingleBitlist(index uint64) []byte {
-	bl := []byte{0x01} // empty bitlist with sentinel
-	for i := uint64(0); i <= index; i++ {
-		bl = statetransition.AppendBit(bl, i == index)
-	}
-	return bl
-}
diff --git a/spectests/fc_spectests_test.go b/spectests/fc_spectests_test.go
deleted file mode 100644
index 90c3431..0000000
--- a/spectests/fc_spectests_test.go
+++ /dev/null
@@ -1,372 +0,0 @@
-//go:build skip_sig_verify
-
-package spectests
-
-import (
-	"encoding/json"
-	"os"
-	"path/filepath"
-	"testing"
-
-	"github.com/geanlabs/gean/chain/forkchoice"
-	"github.com/geanlabs/gean/storage/memory"
-	"github.com/geanlabs/gean/types"
-)
-
-const fcFixtureDir = "../leanSpec/fixtures/consensus/fork_choice"
-
-func TestForkChoice(t *testing.T) {
-	files := findJSONFiles(t, fcFixtureDir)
-
-	for _, file := range files {
-		file := file
-		relPath, _ := filepath.Rel(fcFixtureDir, file)
-		t.Run(relPath, func(t *testing.T) {
-			runForkChoiceFixture(t, file)
-		})
-	}
-}
-
-// findJSONFiles is defined in stf_spectests_test.go
-
-func runForkChoiceFixture(t *testing.T, path string) {
-	t.Helper()
-
-	data, err := os.ReadFile(path)
-	if err != nil {
-		t.Fatalf("failed to read fixture: %v", err)
-	}
-
-	var fixture ForkChoiceFixture
-	if err := json.Unmarshal(data, &fixture); err != nil {
-		t.Fatalf("failed to unmarshal fixture: %v", err)
-	}
-
-	for testName, tc := range fixture {
-		tc := tc
-		t.Run(testName, func(t *testing.T) {
-			if tc.Info.FixtureFormat != "fork_choice_test" {
-				t.Skipf("unsupported fixture format: %s", tc.Info.FixtureFormat)
-			}
-
-			anchorState := convertState(tc.AnchorState)
-			anchorBlock := convertBlock(tc.AnchorBlock)
-
-			store := forkchoice.NewStore(anchorState, anchorBlock, memory.New())
-			genesisTime := anchorState.Config.GenesisTime
-
-			// Block registry for label→root resolution.
-			blockRegistry := make(map[string][32]byte)
-
-			for stepIdx, step := range tc.Steps {
-				var currentBlockRoot *[32]byte
-				switch step.StepType {
-				case "block":
-					if step.Block == nil {
-						t.Fatalf("[%s] step %d: block step missing block data", testName, stepIdx)
-					}
-					blockRoot := processBlockStep(t, testName, stepIdx, store, step, blockRegistry, genesisTime)
-					currentBlockRoot = &blockRoot
-
-				case "tick":
-					if step.Time == nil {
-						t.Fatalf("[%s] step %d: tick step missing time", testName, stepIdx)
-					}
-					store.AdvanceTime(*step.Time, false)
-
-				case "attestation":
-					if step.Attestation == nil {
-						t.Fatalf("[%s] step %d: attestation step missing attestation data", testName, stepIdx)
-					}
-					sa := convertSignedAttestation(*step.Attestation)
-					store.ProcessAttestation(sa)
-
-				default:
-					t.Fatalf("[%s] step %d: unsupported step type %q", testName, stepIdx, step.StepType)
-				}
-
-				// Validate post-step checks.
-				if step.Checks != nil {
-					validateStoreChecks(t, testName, stepIdx, store, step.Checks, blockRegistry, currentBlockRoot)
-				}
-			}
-		})
-	}
-}
-
-func processBlockStep(t *testing.T, testName string, stepIdx int, store *forkchoice.Store, step ForkChoiceStep, blockRegistry map[string][32]byte, genesisTime uint64) [32]byte {
-	t.Helper()
-
-	block := convertBlock(step.Block.Block)
-	blockRoot, err := block.HashTreeRoot()
-	if err != nil {
-		t.Fatalf("[%s] step %d: failed to compute block root: %v", testName, stepIdx, err)
-	}
-
-	// Advance time to the block's slot before processing.
-	blockTime := block.Slot*types.SecondsPerSlot + genesisTime
-	store.AdvanceTime(blockTime, true)
-
-	// Build the signed block envelope.
-	var proposerAtt *types.Attestation
-	if step.Block.ProposerAttestation != nil {
-		proposerAtt = convertAttestation(*step.Block.ProposerAttestation)
-	} else {
-		status := store.GetStatus()
-		proposerAtt = &types.Attestation{
-			ValidatorID: block.ProposerIndex,
-			Data: &types.AttestationData{
-				Slot: block.Slot,
-				Head: &types.Checkpoint{
-					Root: blockRoot,
-					Slot: block.Slot,
-				},
-				Target: &types.Checkpoint{
-					Root: status.JustifiedRoot,
-					Slot: status.JustifiedSlot,
-				},
-				Source: &types.Checkpoint{
-					Root: status.JustifiedRoot,
-					Slot: status.JustifiedSlot,
-				},
-			},
-		}
-	}
-
-	envelope := &types.SignedBlockWithAttestation{
-		Message: &types.BlockWithAttestation{
-			Block:               block,
-			ProposerAttestation: proposerAtt,
-		},
-		Signature: makeZeroBlockSignatures(len(block.Body.Attestations)),
-	}
-
-	err = store.ProcessBlock(envelope)
-
-	if step.Valid {
-		if err != nil {
-			t.Fatalf("[%s] step %d: expected valid block but got error: %v", testName, stepIdx, err)
-		}
-	} else {
-		if err == nil {
-			t.Fatalf("[%s] step %d: expected invalid block but processing succeeded", testName, stepIdx)
-		}
-	}
-
-	return blockRoot
-}
-
-func validateStoreChecks(t *testing.T, testName string, stepIdx int, store *forkchoice.Store, checks *StoreChecks, blockRegistry map[string][32]byte, currentBlockRoot *[32]byte) {
-	t.Helper()
-
-	status := store.GetStatus()
-	justifiedRoot := status.JustifiedRoot
-
-	if checks.HeadSlot != nil {
-		if status.HeadSlot != *checks.HeadSlot {
-			t.Errorf("[%s] step %d: headSlot mismatch: got %d, want %d",
-				testName, stepIdx, status.HeadSlot, *checks.HeadSlot)
-		}
-	}
-
-	if checks.HeadRoot != nil {
-		expected := [32]byte(*checks.HeadRoot)
-		if status.Head != expected {
-			t.Errorf("[%s] step %d: headRoot mismatch: got %x, want %x",
-				testName, stepIdx, status.Head, expected)
-		}
-	}
-
-	if checks.HeadRootLabel != nil {
-		label := *checks.HeadRootLabel
-		labelRoot := status.Head
-		if checks.HeadRoot != nil {
-			labelRoot = [32]byte(*checks.HeadRoot)
-		}
-
existingRoot, exists := blockRegistry[label] - if !exists { - blockRegistry[label] = labelRoot - } else if existingRoot != labelRoot { - t.Errorf("[%s] step %d: headRootLabel %q remapped: got %x, want %x", - testName, stepIdx, label, labelRoot, existingRoot) - } - if status.Head != blockRegistry[label] { - t.Errorf("[%s] step %d: headRootLabel %q mismatch: got %x, want %x", - testName, stepIdx, label, status.Head, blockRegistry[label]) - } - } - - if checks.LatestJustifiedSlot != nil { - if status.JustifiedSlot != *checks.LatestJustifiedSlot { - t.Errorf("[%s] step %d: latestJustified.slot mismatch: got %d, want %d", - testName, stepIdx, status.JustifiedSlot, *checks.LatestJustifiedSlot) - } - } - - if checks.LatestJustifiedRoot != nil { - expected := [32]byte(*checks.LatestJustifiedRoot) - if justifiedRoot != expected { - t.Errorf("[%s] step %d: latestJustified.root mismatch: got %x, want %x", - testName, stepIdx, justifiedRoot, expected) - } - } - if checks.LatestJustifiedRootLabel != nil { - label := *checks.LatestJustifiedRootLabel - labelRoot := justifiedRoot - if checks.LatestJustifiedRoot != nil { - labelRoot = [32]byte(*checks.LatestJustifiedRoot) - } - existingRoot, exists := blockRegistry[label] - if !exists { - blockRegistry[label] = labelRoot - } else if existingRoot != labelRoot { - t.Errorf("[%s] step %d: latestJustifiedRootLabel %q remapped: got %x, want %x", - testName, stepIdx, label, labelRoot, existingRoot) - } - if justifiedRoot != blockRegistry[label] { - t.Errorf("[%s] step %d: latestJustifiedRootLabel %q mismatch: got %x, want %x", - testName, stepIdx, label, justifiedRoot, blockRegistry[label]) - } - } - - if checks.LatestFinalizedSlot != nil { - if status.FinalizedSlot != *checks.LatestFinalizedSlot { - t.Errorf("[%s] step %d: latestFinalized.slot mismatch: got %d, want %d", - testName, stepIdx, status.FinalizedSlot, *checks.LatestFinalizedSlot) - } - } - - if checks.LatestFinalizedRoot != nil { - expected := 
[32]byte(*checks.LatestFinalizedRoot) - if status.FinalizedRoot != expected { - t.Errorf("[%s] step %d: latestFinalized.root mismatch: got %x, want %x", - testName, stepIdx, status.FinalizedRoot, expected) - } - } - if checks.LatestFinalizedRootLabel != nil { - label := *checks.LatestFinalizedRootLabel - labelRoot := status.FinalizedRoot - if checks.LatestFinalizedRoot != nil { - labelRoot = [32]byte(*checks.LatestFinalizedRoot) - } - existingRoot, exists := blockRegistry[label] - if !exists { - blockRegistry[label] = labelRoot - } else if existingRoot != labelRoot { - t.Errorf("[%s] step %d: latestFinalizedRootLabel %q remapped: got %x, want %x", - testName, stepIdx, label, labelRoot, existingRoot) - } - if status.FinalizedRoot != blockRegistry[label] { - t.Errorf("[%s] step %d: latestFinalizedRootLabel %q mismatch: got %x, want %x", - testName, stepIdx, label, status.FinalizedRoot, blockRegistry[label]) - } - } - - if len(checks.AttestationChecks) > 0 { - for _, ac := range checks.AttestationChecks { - var sa *types.SignedAttestation - var found bool - var locationName string - if ac.Location == "known" { - sa, found = store.GetKnownAttestation(ac.Validator) - locationName = "latest_known_attestations" - } else { - sa, found = store.GetNewAttestation(ac.Validator) - locationName = "latest_new_attestations" - } - - if !found { - t.Errorf("[%s] step %d: validator %d not found in %s", - testName, stepIdx, ac.Validator, locationName) - continue - } - - if ac.AttestationSlot != nil && sa.Message.Slot != *ac.AttestationSlot { - t.Errorf("[%s] step %d: validator %d %s attestation slot: got %d, want %d", - testName, stepIdx, ac.Validator, locationName, sa.Message.Slot, *ac.AttestationSlot) - } - if ac.HeadSlot != nil && sa.Message.Head.Slot != *ac.HeadSlot { - t.Errorf("[%s] step %d: validator %d %s head slot: got %d, want %d", - testName, stepIdx, ac.Validator, locationName, sa.Message.Head.Slot, *ac.HeadSlot) - } - if ac.SourceSlot != nil && sa.Message.Source.Slot != 
*ac.SourceSlot { - t.Errorf("[%s] step %d: validator %d %s source slot: got %d, want %d", - testName, stepIdx, ac.Validator, locationName, sa.Message.Source.Slot, *ac.SourceSlot) - } - if ac.TargetSlot != nil && sa.Message.Target.Slot != *ac.TargetSlot { - t.Errorf("[%s] step %d: validator %d %s target slot: got %d, want %d", - testName, stepIdx, ac.Validator, locationName, sa.Message.Target.Slot, *ac.TargetSlot) - } - } - } - - if len(checks.LexicographicHeadAmong) > 0 { - validateLexicographicHead(t, testName, stepIdx, store, checks.LexicographicHeadAmong, blockRegistry, currentBlockRoot) - } -} - -func validateLexicographicHead( - t *testing.T, - testName string, - stepIdx int, - store *forkchoice.Store, - labels []string, - blockRegistry map[string][32]byte, - currentBlockRoot *[32]byte, -) { - t.Helper() - - headRoot := store.GetStatus().Head - - missing := make([]string, 0, len(labels)) - for _, label := range labels { - if _, ok := blockRegistry[label]; !ok { - missing = append(missing, label) - } - } - if len(missing) > 0 { - // Devnet-1 fixtures may omit one competing fork label and imply it is - // the block introduced by the current step. - if currentBlockRoot != nil && len(missing) == 1 { - blockRegistry[missing[0]] = *currentBlockRoot - } else { - t.Fatalf("[%s] step %d: unresolved lexicographic labels %v", testName, stepIdx, missing) - } - } - - resolved := make([][32]byte, 0, len(labels)) - for _, label := range labels { - resolved = append(resolved, blockRegistry[label]) - } - - highestRoot := resolved[0] - for _, root := range resolved[1:] { - if hashGreater(root, highestRoot) { - highestRoot = root - } - } - - if headRoot != highestRoot { - t.Errorf("[%s] step %d: lexicographic tiebreaker failed for labels %v: head=%x, expected highest=%x", - testName, stepIdx, labels, headRoot, highestRoot) - } -} - -// hashGreater returns true if a > b lexicographically. 
-func hashGreater(a, b [32]byte) bool { - for i := 0; i < 32; i++ { - if a[i] > b[i] { - return true - } - if a[i] < b[i] { - return false - } - } - return false -} - -func makeZeroBlockSignatures(attestationCount int) types.BlockSignatures { - return types.BlockSignatures{ - AttestationSignatures: make([]*types.AggregatedSignatureProof, attestationCount), - } -} diff --git a/spectests/fixture.go b/spectests/fixture.go new file mode 100644 index 0000000..bffe5e4 --- /dev/null +++ b/spectests/fixture.go @@ -0,0 +1,442 @@ +//go:build spectests + +package spectests + +import ( + "encoding/hex" + "encoding/json" + "fmt" + "strings" + + "github.com/geanlabs/gean/types" +) + +// TestFixture wraps the top-level JSON (key = test name, value = test data). +type TestFixture map[string]StateTransitionTest + +type StateTransitionTest struct { + Network string `json:"network"` + LeanEnv string `json:"leanEnv"` + Pre TestState `json:"pre"` + Blocks []TestBlock `json:"blocks"` + Post *TestPostState `json:"post"` + ExpectException string `json:"expectException"` + ExpectExceptionMessage string `json:"expectExceptionMessage"` +} + +type TestState struct { + Config TestConfig `json:"config"` + Slot uint64 `json:"slot"` + LatestBlockHeader TestBlockHeader `json:"latestBlockHeader"` + LatestJustified TestCheckpoint `json:"latestJustified"` + LatestFinalized TestCheckpoint `json:"latestFinalized"` + HistoricalBlockHashes TestDataList `json:"historicalBlockHashes"` + JustifiedSlots TestDataList `json:"justifiedSlots"` + Validators TestValidatorList `json:"validators"` + JustificationsRoots TestDataList `json:"justificationsRoots"` + JustificationsValidators TestDataList `json:"justificationsValidators"` +} + +type TestConfig struct { + GenesisTime uint64 `json:"genesisTime"` +} + +type TestBlockHeader struct { + Slot uint64 `json:"slot"` + ProposerIndex uint64 `json:"proposerIndex"` + ParentRoot string `json:"parentRoot"` + StateRoot string `json:"stateRoot"` + BodyRoot string 
`json:"bodyRoot"` +} + +type TestCheckpoint struct { + Root string `json:"root"` + Slot uint64 `json:"slot"` +} + +type TestBlock struct { + Slot uint64 `json:"slot"` + ProposerIndex uint64 `json:"proposerIndex"` + ParentRoot string `json:"parentRoot"` + StateRoot string `json:"stateRoot"` + Body TestBlockBody `json:"body"` +} + +type TestBlockBody struct { + Attestations TestDataList `json:"attestations"` +} + +// TestDataList wraps the { "data": [...] } pattern used in fixtures. +type TestDataList struct { + Data []json.RawMessage `json:"data"` +} + +type TestValidator struct { + Pubkey string `json:"pubkey"` + Index uint64 `json:"index"` +} + +type TestValidatorList struct { + Data []TestValidator `json:"data"` +} + +type TestAggregatedAttestation struct { + AggregationBits TestDataList `json:"aggregationBits"` + Data TestAttData `json:"data"` +} + +type TestAttData struct { + Slot uint64 `json:"slot"` + Head TestCheckpoint `json:"head"` + Target TestCheckpoint `json:"target"` + Source TestCheckpoint `json:"source"` +} + +type TestPostState struct { + Slot *uint64 `json:"slot"` + LatestBlockHeaderSlot *uint64 `json:"latestBlockHeaderSlot"` + LatestBlockHeaderStateRoot *string `json:"latestBlockHeaderStateRoot"` + LatestBlockHeaderProposerIndex *uint64 `json:"latestBlockHeaderProposerIndex"` + LatestBlockHeaderParentRoot *string `json:"latestBlockHeaderParentRoot"` + LatestBlockHeaderBodyRoot *string `json:"latestBlockHeaderBodyRoot"` + LatestJustifiedSlot *uint64 `json:"latestJustifiedSlot"` + LatestJustifiedRoot *string `json:"latestJustifiedRoot"` + LatestFinalizedSlot *uint64 `json:"latestFinalizedSlot"` + LatestFinalizedRoot *string `json:"latestFinalizedRoot"` + HistoricalBlockHashesCount *uint64 `json:"historicalBlockHashesCount"` + ValidatorCount *uint64 `json:"validatorCount"` + ConfigGenesisTime *uint64 `json:"configGenesisTime"` + HistoricalBlockHashes *TestDataList `json:"historicalBlockHashes"` + JustifiedSlots *TestDataList `json:"justifiedSlots"` + 
JustificationsRoots *TestDataList `json:"justificationsRoots"` + JustificationsValidators *TestDataList `json:"justificationsValidators"` +} + +func parseHexRoot(s string) [32]byte { + s = strings.TrimPrefix(s, "0x") + b, err := hex.DecodeString(s) + if err != nil { + panic(fmt.Sprintf("parseHexRoot: invalid hex %q: %v", s, err)) + } + var root [32]byte + copy(root[:], b) + return root +} + +func parseHexBytes(s string) []byte { + s = strings.TrimPrefix(s, "0x") + b, err := hex.DecodeString(s) + if err != nil { + panic(fmt.Sprintf("parseHexBytes: invalid hex %q: %v", s, err)) + } + return b +} + +func parseHexPubkey(s string) [types.PubkeySize]byte { + b := parseHexBytes(s) + var pk [types.PubkeySize]byte + copy(pk[:], b) + return pk +} + +// ToState converts test JSON pre-state to gean's types.State. +func (ts *TestState) ToState() *types.State { + state := &types.State{ + Config: &types.ChainConfig{ + GenesisTime: ts.Config.GenesisTime, + }, + Slot: ts.Slot, + LatestBlockHeader: &types.BlockHeader{ + Slot: ts.LatestBlockHeader.Slot, + ProposerIndex: ts.LatestBlockHeader.ProposerIndex, + ParentRoot: parseHexRoot(ts.LatestBlockHeader.ParentRoot), + StateRoot: parseHexRoot(ts.LatestBlockHeader.StateRoot), + BodyRoot: parseHexRoot(ts.LatestBlockHeader.BodyRoot), + }, + LatestJustified: &types.Checkpoint{ + Root: parseHexRoot(ts.LatestJustified.Root), + Slot: ts.LatestJustified.Slot, + }, + LatestFinalized: &types.Checkpoint{ + Root: parseHexRoot(ts.LatestFinalized.Root), + Slot: ts.LatestFinalized.Slot, + }, + } + + // Validators + for _, v := range ts.Validators.Data { + state.Validators = append(state.Validators, &types.Validator{ + Pubkey: parseHexPubkey(v.Pubkey), + Index: v.Index, + }) + } + + // HistoricalBlockHashes: array of hex strings + for _, raw := range ts.HistoricalBlockHashes.Data { + var s string + if err := json.Unmarshal(raw, &s); err != nil { + panic(fmt.Sprintf("HistoricalBlockHashes: %v", err)) + } + b := parseHexBytes(s) + // Ensure 32 bytes + h 
:= make([]byte, 32)
+		copy(h, b)
+		state.HistoricalBlockHashes = append(state.HistoricalBlockHashes, h)
+	}
+
+	// JustifiedSlots: array of booleans, one flag per slot.
+	// Convert to SSZ bitlist.
+	state.JustifiedSlots = parseBoolBitlist(ts.JustifiedSlots.Data)
+
+	// JustificationsRoots: array of hex strings
+	for _, raw := range ts.JustificationsRoots.Data {
+		var s string
+		if err := json.Unmarshal(raw, &s); err != nil {
+			panic(fmt.Sprintf("JustificationsRoots: %v", err))
+		}
+		b := parseHexBytes(s)
+		h := make([]byte, 32)
+		copy(h, b)
+		state.JustificationsRoots = append(state.JustificationsRoots, h)
+	}
+
+	// JustificationsValidators: array of booleans -> bitlist
+	state.JustificationsValidators = parseBoolBitlist(ts.JustificationsValidators.Data)
+
+	return state
+}
+
+// parseBoolBitlist converts a JSON array of booleans to an SSZ bitlist.
+// An empty array returns a minimal bitlist (just the delimiter bit).
+func parseBoolBitlist(data []json.RawMessage) []byte {
+	length := uint64(len(data))
+	if length == 0 {
+		return types.NewBitlistSSZ(0)
+	}
+	bl := types.NewBitlistSSZ(length)
+	for i, raw := range data {
+		var val bool
+		if err := json.Unmarshal(raw, &val); err != nil {
+			// Try parsing as integer (0/1)
+			var intVal int
+			if err2 := json.Unmarshal(raw, &intVal); err2 != nil {
+				panic(fmt.Sprintf("parseBoolBitlist index %d: %v / %v", i, err, err2))
+			}
+			val = intVal != 0
+		}
+		if val {
+			types.BitlistSet(bl, uint64(i))
+		}
+	}
+	return bl
+}
+
+// ToBlock converts test JSON block to gean's types.Block.
+func (tb *TestBlock) ToBlock() *types.Block { + block := &types.Block{ + Slot: tb.Slot, + ProposerIndex: tb.ProposerIndex, + ParentRoot: parseHexRoot(tb.ParentRoot), + StateRoot: parseHexRoot(tb.StateRoot), + Body: &types.BlockBody{ + Attestations: make([]*types.AggregatedAttestation, 0), + }, + } + + for _, raw := range tb.Body.Attestations.Data { + var ta TestAggregatedAttestation + if err := json.Unmarshal(raw, &ta); err != nil { + panic(fmt.Sprintf("attestation unmarshal: %v", err)) + } + + att := &types.AggregatedAttestation{ + AggregationBits: parseBoolBitlist(ta.AggregationBits.Data), + Data: &types.AttestationData{ + Slot: ta.Data.Slot, + Head: &types.Checkpoint{ + Root: parseHexRoot(ta.Data.Head.Root), + Slot: ta.Data.Head.Slot, + }, + Target: &types.Checkpoint{ + Root: parseHexRoot(ta.Data.Target.Root), + Slot: ta.Data.Target.Slot, + }, + Source: &types.Checkpoint{ + Root: parseHexRoot(ta.Data.Source.Root), + Slot: ta.Data.Source.Slot, + }, + }, + } + block.Body.Attestations = append(block.Body.Attestations, att) + } + + return block +} + +// Validate checks fields in the post-state expectation against actual state. +// Only non-nil fields are checked (selective validation). 
+func (tp *TestPostState) Validate(state *types.State) error { + if tp.Slot != nil { + if state.Slot != *tp.Slot { + return fmt.Errorf("slot: got %d, want %d", state.Slot, *tp.Slot) + } + } + + if tp.LatestBlockHeaderSlot != nil { + if state.LatestBlockHeader.Slot != *tp.LatestBlockHeaderSlot { + return fmt.Errorf("latestBlockHeader.slot: got %d, want %d", + state.LatestBlockHeader.Slot, *tp.LatestBlockHeaderSlot) + } + } + + if tp.LatestBlockHeaderStateRoot != nil { + want := parseHexRoot(*tp.LatestBlockHeaderStateRoot) + if state.LatestBlockHeader.StateRoot != want { + return fmt.Errorf("latestBlockHeader.stateRoot: got 0x%x, want 0x%x", + state.LatestBlockHeader.StateRoot, want) + } + } + + if tp.LatestBlockHeaderProposerIndex != nil { + if state.LatestBlockHeader.ProposerIndex != *tp.LatestBlockHeaderProposerIndex { + return fmt.Errorf("latestBlockHeader.proposerIndex: got %d, want %d", + state.LatestBlockHeader.ProposerIndex, *tp.LatestBlockHeaderProposerIndex) + } + } + + if tp.LatestBlockHeaderParentRoot != nil { + want := parseHexRoot(*tp.LatestBlockHeaderParentRoot) + if state.LatestBlockHeader.ParentRoot != want { + return fmt.Errorf("latestBlockHeader.parentRoot: got 0x%x, want 0x%x", + state.LatestBlockHeader.ParentRoot, want) + } + } + + if tp.LatestBlockHeaderBodyRoot != nil { + want := parseHexRoot(*tp.LatestBlockHeaderBodyRoot) + if state.LatestBlockHeader.BodyRoot != want { + return fmt.Errorf("latestBlockHeader.bodyRoot: got 0x%x, want 0x%x", + state.LatestBlockHeader.BodyRoot, want) + } + } + + if tp.LatestJustifiedSlot != nil { + if state.LatestJustified.Slot != *tp.LatestJustifiedSlot { + return fmt.Errorf("latestJustified.slot: got %d, want %d", + state.LatestJustified.Slot, *tp.LatestJustifiedSlot) + } + } + + if tp.LatestJustifiedRoot != nil { + want := parseHexRoot(*tp.LatestJustifiedRoot) + if state.LatestJustified.Root != want { + return fmt.Errorf("latestJustified.root: got 0x%x, want 0x%x", + state.LatestJustified.Root, want) + } + } + 
+ if tp.LatestFinalizedSlot != nil { + if state.LatestFinalized.Slot != *tp.LatestFinalizedSlot { + return fmt.Errorf("latestFinalized.slot: got %d, want %d", + state.LatestFinalized.Slot, *tp.LatestFinalizedSlot) + } + } + + if tp.LatestFinalizedRoot != nil { + want := parseHexRoot(*tp.LatestFinalizedRoot) + if state.LatestFinalized.Root != want { + return fmt.Errorf("latestFinalized.root: got 0x%x, want 0x%x", + state.LatestFinalized.Root, want) + } + } + + if tp.HistoricalBlockHashesCount != nil { + got := uint64(len(state.HistoricalBlockHashes)) + if got != *tp.HistoricalBlockHashesCount { + return fmt.Errorf("historicalBlockHashes count: got %d, want %d", + got, *tp.HistoricalBlockHashesCount) + } + } + + if tp.ValidatorCount != nil { + got := state.NumValidators() + if got != *tp.ValidatorCount { + return fmt.Errorf("validator count: got %d, want %d", got, *tp.ValidatorCount) + } + } + + if tp.ConfigGenesisTime != nil { + if state.Config.GenesisTime != *tp.ConfigGenesisTime { + return fmt.Errorf("config.genesisTime: got %d, want %d", + state.Config.GenesisTime, *tp.ConfigGenesisTime) + } + } + + if tp.HistoricalBlockHashes != nil { + wantLen := len(tp.HistoricalBlockHashes.Data) + gotLen := len(state.HistoricalBlockHashes) + if gotLen != wantLen { + return fmt.Errorf("historicalBlockHashes length: got %d, want %d", gotLen, wantLen) + } + for i, raw := range tp.HistoricalBlockHashes.Data { + var s string + if err := json.Unmarshal(raw, &s); err != nil { + return fmt.Errorf("historicalBlockHashes[%d] unmarshal: %v", i, err) + } + want := parseHexBytes(s) + got := state.HistoricalBlockHashes[i] + if !bytesEqual(got, want) { + return fmt.Errorf("historicalBlockHashes[%d]: got 0x%x, want 0x%x", i, got, want) + } + } + } + + if tp.JustifiedSlots != nil { + wantBitlist := parseBoolBitlist(tp.JustifiedSlots.Data) + if !bytesEqual(state.JustifiedSlots, wantBitlist) { + return fmt.Errorf("justifiedSlots mismatch: got %x, want %x", + state.JustifiedSlots, wantBitlist) + 
} + } + + if tp.JustificationsRoots != nil { + wantLen := len(tp.JustificationsRoots.Data) + gotLen := len(state.JustificationsRoots) + if gotLen != wantLen { + return fmt.Errorf("justificationsRoots length: got %d, want %d", gotLen, wantLen) + } + for i, raw := range tp.JustificationsRoots.Data { + var s string + if err := json.Unmarshal(raw, &s); err != nil { + return fmt.Errorf("justificationsRoots[%d] unmarshal: %v", i, err) + } + want := parseHexBytes(s) + got := state.JustificationsRoots[i] + if !bytesEqual(got, want) { + return fmt.Errorf("justificationsRoots[%d]: got 0x%x, want 0x%x", i, got, want) + } + } + } + + if tp.JustificationsValidators != nil { + wantBitlist := parseBoolBitlist(tp.JustificationsValidators.Data) + if !bytesEqual(state.JustificationsValidators, wantBitlist) { + return fmt.Errorf("justificationsValidators mismatch: got %x, want %x", + state.JustificationsValidators, wantBitlist) + } + } + + return nil +} + +func bytesEqual(a, b []byte) bool { + if len(a) != len(b) { + return false + } + for i := range a { + if a[i] != b[i] { + return false + } + } + return true +} diff --git a/spectests/fixture_types.go b/spectests/fixture_types.go deleted file mode 100644 index d4afb42..0000000 --- a/spectests/fixture_types.go +++ /dev/null @@ -1,224 +0,0 @@ -package spectests - -import ( - "encoding/hex" - "encoding/json" - "fmt" - "strings" -) - -// HexRoot is a 32-byte root that deserializes from "0x..." hex strings. -type HexRoot [32]byte - -func (h *HexRoot) UnmarshalJSON(data []byte) error { - var s string - if err := json.Unmarshal(data, &s); err != nil { - return err - } - s = strings.TrimPrefix(s, "0x") - b, err := hex.DecodeString(s) - if err != nil { - return fmt.Errorf("invalid hex root: %w", err) - } - if len(b) != 32 { - return fmt.Errorf("root must be 32 bytes, got %d", len(b)) - } - copy(h[:], b) - return nil -} - -// HexPubkey is a 52-byte XMSS public key that deserializes from "0x..." hex strings. 
-type HexPubkey [52]byte - -func (h *HexPubkey) UnmarshalJSON(data []byte) error { - var s string - if err := json.Unmarshal(data, &s); err != nil { - return err - } - s = strings.TrimPrefix(s, "0x") - b, err := hex.DecodeString(s) - if err != nil { - return fmt.Errorf("invalid hex pubkey: %w", err) - } - if len(b) != 52 { - return fmt.Errorf("pubkey must be 52 bytes, got %d", len(b)) - } - copy(h[:], b) - return nil -} - -// Container wraps the {"data": [...]} pattern used in leanSpec JSON fixtures. -type Container[T any] struct { - Data []T `json:"data"` -} - -// --- Shared fixture types --- - -type FixtureInfo struct { - Hash string `json:"hash"` - Comment string `json:"comment"` - TestID string `json:"testId"` - Description string `json:"description"` - FixtureFormat string `json:"fixtureFormat"` -} - -type FixtureConfig struct { - GenesisTime uint64 `json:"genesisTime"` -} - -type FixtureCheckpoint struct { - Root HexRoot `json:"root"` - Slot uint64 `json:"slot"` -} - -type FixtureBlockHeader struct { - Slot uint64 `json:"slot"` - ProposerIndex uint64 `json:"proposerIndex"` - ParentRoot HexRoot `json:"parentRoot"` - StateRoot HexRoot `json:"stateRoot"` - BodyRoot HexRoot `json:"bodyRoot"` -} - -type FixtureValidator struct { - Pubkey HexPubkey `json:"pubkey"` - Index uint64 `json:"index"` -} - -type FixtureState struct { - Config FixtureConfig `json:"config"` - Slot uint64 `json:"slot"` - LatestBlockHeader FixtureBlockHeader `json:"latestBlockHeader"` - LatestJustified FixtureCheckpoint `json:"latestJustified"` - LatestFinalized FixtureCheckpoint `json:"latestFinalized"` - HistoricalBlockHashes Container[HexRoot] `json:"historicalBlockHashes"` - JustifiedSlots Container[uint64] `json:"justifiedSlots"` - Validators Container[FixtureValidator] `json:"validators"` - JustificationsRoots Container[HexRoot] `json:"justificationsRoots"` - JustificationsValidators Container[bool] `json:"justificationsValidators"` -} - -type FixtureBlockBody struct { - Attestations 
Container[FixtureAggregatedAttestation] `json:"attestations"` -} - -type FixtureBlock struct { - Slot uint64 `json:"slot"` - ProposerIndex uint64 `json:"proposerIndex"` - ParentRoot HexRoot `json:"parentRoot"` - StateRoot HexRoot `json:"stateRoot"` - Body FixtureBlockBody `json:"body"` -} - -type FixtureAttestationData struct { - Slot uint64 `json:"slot"` - Head FixtureCheckpoint `json:"head"` - Target FixtureCheckpoint `json:"target"` - Source FixtureCheckpoint `json:"source"` -} - -type FixtureAttestation struct { - ValidatorID uint64 `json:"validatorId"` - Data FixtureAttestationData `json:"data"` -} - -// FixtureAggregatedAttestation supports devnet-2 aggregated attestation fixtures. -// Some fixture sources may still provide a single validator_id instead of aggregation_bits. -type FixtureAggregatedAttestation struct { - AggregationBits *Container[bool] `json:"aggregationBits"` - ValidatorID *uint64 `json:"validatorId"` - Data FixtureAttestationData `json:"data"` -} - -type FixtureSignedAttestation struct { - ValidatorID uint64 `json:"validatorId"` - Data FixtureAttestationData `json:"data"` -} - -// --- State Transition fixture types --- - -// StateTransitionFixture is the root JSON object: test_name -> test case. -type StateTransitionFixture map[string]StateTransitionTestCase - -type StateTransitionTestCase struct { - Network string `json:"network"` - Pre FixtureState `json:"pre"` - Blocks []FixtureBlock `json:"blocks"` - Post *PostState `json:"post"` - ExpectException *string `json:"expectException"` - Info FixtureInfo `json:"_info"` -} - -// PostState contains optional expected fields for selective validation. -// Nil pointer fields are not checked. 
-type PostState struct { - Slot *uint64 `json:"slot"` - LatestJustifiedSlot *uint64 `json:"latestJustifiedSlot"` - LatestJustifiedRoot *HexRoot `json:"latestJustifiedRoot"` - LatestFinalizedSlot *uint64 `json:"latestFinalizedSlot"` - LatestFinalizedRoot *HexRoot `json:"latestFinalizedRoot"` - ValidatorCount *uint64 `json:"validatorCount"` - ConfigGenesisTime *uint64 `json:"configGenesisTime"` - LatestBlockHeaderSlot *uint64 `json:"latestBlockHeaderSlot"` - LatestBlockHeaderProposerIndex *uint64 `json:"latestBlockHeaderProposerIndex"` - LatestBlockHeaderParentRoot *HexRoot `json:"latestBlockHeaderParentRoot"` - LatestBlockHeaderStateRoot *HexRoot `json:"latestBlockHeaderStateRoot"` - LatestBlockHeaderBodyRoot *HexRoot `json:"latestBlockHeaderBodyRoot"` - HistoricalBlockHashesCount *uint64 `json:"historicalBlockHashesCount"` - HistoricalBlockHashes *Container[HexRoot] `json:"historicalBlockHashes"` - JustifiedSlots *Container[uint64] `json:"justifiedSlots"` - JustificationsRoots *Container[HexRoot] `json:"justificationsRoots"` - JustificationsValidators *Container[bool] `json:"justificationsValidators"` -} - -// --- Fork Choice fixture types --- - -// ForkChoiceFixture is the root JSON object: test_name -> test case. 
-type ForkChoiceFixture map[string]ForkChoiceTestCase
-
-type ForkChoiceTestCase struct {
-	Network     string           `json:"network"`
-	AnchorState FixtureState     `json:"anchorState"`
-	AnchorBlock FixtureBlock     `json:"anchorBlock"`
-	Steps       []ForkChoiceStep `json:"steps"`
-	MaxSlot     uint64           `json:"maxSlot"`
-	Info        FixtureInfo      `json:"_info"`
-}
-
-type ForkChoiceStep struct {
-	StepType    string                    `json:"stepType"`
-	Valid       bool                      `json:"valid"`
-	Checks      *StoreChecks              `json:"checks"`
-	Block       *BlockStepData            `json:"block"`
-	Time        *uint64                   `json:"time"`
-	Attestation *FixtureSignedAttestation `json:"attestation"`
-}
-
-type BlockStepData struct {
-	Block               FixtureBlock        `json:"block"`
-	ProposerAttestation *FixtureAttestation `json:"proposerAttestation"`
-}
-
-// StoreChecks contains optional expected fields for selective fork choice validation.
-type StoreChecks struct {
-	Time                     *uint64            `json:"time"`
-	HeadSlot                 *uint64            `json:"headSlot"`
-	HeadRoot                 *HexRoot           `json:"headRoot"`
-	HeadRootLabel            *string            `json:"headRootLabel"`
-	LatestJustifiedSlot      *uint64            `json:"latestJustifiedSlot"`
-	LatestJustifiedRoot      *HexRoot           `json:"latestJustifiedRoot"`
-	LatestJustifiedRootLabel *string            `json:"latestJustifiedRootLabel"`
-	LatestFinalizedSlot      *uint64            `json:"latestFinalizedSlot"`
-	LatestFinalizedRoot      *HexRoot           `json:"latestFinalizedRoot"`
-	LatestFinalizedRootLabel *string            `json:"latestFinalizedRootLabel"`
-	AttestationChecks        []AttestationCheck `json:"attestationChecks"`
-	LexicographicHeadAmong   []string           `json:"lexicographicHeadAmong"`
-}
-
-type AttestationCheck struct {
-	Validator       uint64  `json:"validator"`
-	AttestationSlot *uint64 `json:"attestationSlot"`
-	HeadSlot        *uint64 `json:"headSlot"`
-	SourceSlot      *uint64 `json:"sourceSlot"`
-	TargetSlot      *uint64 `json:"targetSlot"`
-	Location        string  `json:"location"` // "new" or "known"
-}
diff --git a/spectests/forkchoice_test.go b/spectests/forkchoice_test.go
new file mode 100644
index 0000000..99afd3a
--- /dev/null
+++ b/spectests/forkchoice_test.go
@@ -0,0 +1,514 @@
+//go:build spectests
+
+package spectests
+
+import (
+	"encoding/hex"
+	"encoding/json"
+	"fmt"
+	"os"
+	"path/filepath"
+	"strings"
+	"testing"
+
+	"github.com/geanlabs/gean/forkchoice"
+	"github.com/geanlabs/gean/logger"
+	"github.com/geanlabs/gean/node"
+	"github.com/geanlabs/gean/storage"
+	"github.com/geanlabs/gean/types"
+)
+
+// --- Fixture types for fork choice tests ---
+
+type fcFixture map[string]fcTest
+
+type fcTest struct {
+	Network     string   `json:"network"`
+	LeanEnv     string   `json:"leanEnv"`
+	AnchorState fcState  `json:"anchorState"`
+	AnchorBlock fcBlock  `json:"anchorBlock"`
+	Steps       []fcStep `json:"steps"`
+}
+
+type fcState struct {
+	Config                   fcConfig        `json:"config"`
+	Slot                     uint64          `json:"slot"`
+	LatestBlockHeader        fcBlockHeader   `json:"latestBlockHeader"`
+	LatestJustified          fcCheckpoint    `json:"latestJustified"`
+	LatestFinalized          fcCheckpoint    `json:"latestFinalized"`
+	HistoricalBlockHashes    fcDataList      `json:"historicalBlockHashes"`
+	JustifiedSlots           fcDataList      `json:"justifiedSlots"`
+	Validators               fcValidatorList `json:"validators"`
+	JustificationsRoots      fcDataList      `json:"justificationsRoots"`
+	JustificationsValidators fcDataList      `json:"justificationsValidators"`
+}
+
+type fcConfig struct {
+	GenesisTime uint64 `json:"genesisTime"`
+}
+
+type fcBlockHeader struct {
+	Slot          uint64 `json:"slot"`
+	ProposerIndex uint64 `json:"proposerIndex"`
+	ParentRoot    string `json:"parentRoot"`
+	StateRoot     string `json:"stateRoot"`
+	BodyRoot      string `json:"bodyRoot"`
+}
+
+type fcCheckpoint struct {
+	Root string `json:"root"`
+	Slot uint64 `json:"slot"`
+}
+
+type fcDataList struct {
+	Data []json.RawMessage `json:"data"`
+}
+
+type fcValidator struct {
+	Pubkey string `json:"pubkey"`
+	Index  uint64 `json:"index"`
+}
+
+type fcValidatorList struct {
+	Data []fcValidator `json:"data"`
+}
+
+type fcBlock struct {
+	Slot          uint64      `json:"slot"`
+	ProposerIndex uint64      `json:"proposerIndex"`
+	ParentRoot    string      `json:"parentRoot"`
+	StateRoot     string      `json:"stateRoot"`
+	Body          fcBlockBody `json:"body"`
+}
+
+type fcBlockBody struct {
+	Attestations fcDataList `json:"attestations"`
+}
+
+type fcStep struct {
+	StepType string       `json:"stepType"`
+	Valid    bool         `json:"valid"`
+	Block    *fcStepBlock `json:"block,omitempty"`
+	Checks   *fcChecks    `json:"checks,omitempty"`
+	Time     *uint64      `json:"time,omitempty"`
+}
+
+type fcStepBlock struct {
+	Block               fcBlock        `json:"block"`
+	ProposerAttestation *fcAttestation `json:"proposerAttestation,omitempty"`
+	BlockRootLabel      string         `json:"blockRootLabel,omitempty"`
+}
+
+type fcAttestation struct {
+	ValidatorID uint64    `json:"validatorId"`
+	Data        fcAttData `json:"data"`
+}
+
+type fcAttData struct {
+	Slot   uint64       `json:"slot"`
+	Head   fcCheckpoint `json:"head"`
+	Target fcCheckpoint `json:"target"`
+	Source fcCheckpoint `json:"source"`
+}
+
+type fcAggregatedAttestation struct {
+	AggregationBits fcDataList `json:"aggregationBits"`
+	Data            fcAttData  `json:"data"`
+}
+
+type fcChecks struct {
+	HeadSlot      *uint64 `json:"headSlot,omitempty"`
+	HeadRoot      *string `json:"headRoot,omitempty"`
+	HeadRootLabel *string `json:"headRootLabel,omitempty"`
+}
+
+// --- Parsing helpers ---
+
+func fcParseHexRoot(s string) [32]byte {
+	s = strings.TrimPrefix(s, "0x")
+	b, err := hex.DecodeString(s)
+	if err != nil {
+		panic(fmt.Sprintf("fcParseHexRoot: invalid hex %q: %v", s, err))
+	}
+	var root [32]byte
+	copy(root[:], b)
+	return root
+}
+
+func fcParseHexBytes(s string) []byte {
+	s = strings.TrimPrefix(s, "0x")
+	b, err := hex.DecodeString(s)
+	if err != nil {
+		panic(fmt.Sprintf("fcParseHexBytes: invalid hex %q: %v", s, err))
+	}
+	return b
+}
+
+func fcParseHexPubkey(s string) [types.PubkeySize]byte {
+	b := fcParseHexBytes(s)
+	var pk [types.PubkeySize]byte
+	copy(pk[:], b)
+	return pk
+}
+
+func fcParseBoolBitlist(data []json.RawMessage) []byte {
+	length := uint64(len(data))
+	if length == 0 {
+		return types.NewBitlistSSZ(0)
+	}
+	bl := types.NewBitlistSSZ(length)
+	for i, raw := range data {
+		var val bool
+		if err := json.Unmarshal(raw, &val); err != nil {
+			var intVal int
+			if err2 := json.Unmarshal(raw, &intVal); err2 != nil {
+				panic(fmt.Sprintf("fcParseBoolBitlist index %d: %v / %v", i, err, err2))
+			}
+			val = intVal != 0
+		}
+		if val {
+			types.BitlistSet(bl, uint64(i))
+		}
+	}
+	return bl
+}
+
+// toState converts fixture anchor state to types.State.
+func (fs *fcState) toState() *types.State {
+	state := &types.State{
+		Config: &types.ChainConfig{
+			GenesisTime: fs.Config.GenesisTime,
+		},
+		Slot: fs.Slot,
+		LatestBlockHeader: &types.BlockHeader{
+			Slot:          fs.LatestBlockHeader.Slot,
+			ProposerIndex: fs.LatestBlockHeader.ProposerIndex,
+			ParentRoot:    fcParseHexRoot(fs.LatestBlockHeader.ParentRoot),
+			StateRoot:     fcParseHexRoot(fs.LatestBlockHeader.StateRoot),
+			BodyRoot:      fcParseHexRoot(fs.LatestBlockHeader.BodyRoot),
+		},
+		LatestJustified: &types.Checkpoint{
+			Root: fcParseHexRoot(fs.LatestJustified.Root),
+			Slot: fs.LatestJustified.Slot,
+		},
+		LatestFinalized: &types.Checkpoint{
+			Root: fcParseHexRoot(fs.LatestFinalized.Root),
+			Slot: fs.LatestFinalized.Slot,
+		},
+	}
+
+	for _, v := range fs.Validators.Data {
+		state.Validators = append(state.Validators, &types.Validator{
+			Pubkey: fcParseHexPubkey(v.Pubkey),
+			Index:  v.Index,
+		})
+	}
+
+	for _, raw := range fs.HistoricalBlockHashes.Data {
+		var s string
+		if err := json.Unmarshal(raw, &s); err != nil {
+			panic(fmt.Sprintf("HistoricalBlockHashes: %v", err))
+		}
+		b := fcParseHexBytes(s)
+		h := make([]byte, 32)
+		copy(h, b)
+		state.HistoricalBlockHashes = append(state.HistoricalBlockHashes, h)
+	}
+
+	state.JustifiedSlots = fcParseBoolBitlist(fs.JustifiedSlots.Data)
+
+	for _, raw := range fs.JustificationsRoots.Data {
+		var s string
+		if err := json.Unmarshal(raw, &s); err != nil {
+			panic(fmt.Sprintf("JustificationsRoots: %v", err))
+		}
+		b := fcParseHexBytes(s)
+		h := make([]byte, 32)
+		copy(h, b)
+		state.JustificationsRoots = append(state.JustificationsRoots, h)
+	}
+
+	state.JustificationsValidators = fcParseBoolBitlist(fs.JustificationsValidators.Data)
+
+	return state
+}
+
+// toBlock converts a fixture block to types.Block.
+func (fb *fcBlock) toBlock() *types.Block {
+	block := &types.Block{
+		Slot:          fb.Slot,
+		ProposerIndex: fb.ProposerIndex,
+		ParentRoot:    fcParseHexRoot(fb.ParentRoot),
+		StateRoot:     fcParseHexRoot(fb.StateRoot),
+		Body: &types.BlockBody{
+			Attestations: make([]*types.AggregatedAttestation, 0),
+		},
+	}
+
+	for _, raw := range fb.Body.Attestations.Data {
+		var ta fcAggregatedAttestation
+		if err := json.Unmarshal(raw, &ta); err != nil {
+			panic(fmt.Sprintf("attestation unmarshal: %v", err))
+		}
+		att := &types.AggregatedAttestation{
+			AggregationBits: fcParseBoolBitlist(ta.AggregationBits.Data),
+			Data: &types.AttestationData{
+				Slot: ta.Data.Slot,
+				Head: &types.Checkpoint{
+					Root: fcParseHexRoot(ta.Data.Head.Root),
+					Slot: ta.Data.Head.Slot,
+				},
+				Target: &types.Checkpoint{
+					Root: fcParseHexRoot(ta.Data.Target.Root),
+					Slot: ta.Data.Target.Slot,
+				},
+				Source: &types.Checkpoint{
+					Root: fcParseHexRoot(ta.Data.Source.Root),
+					Slot: ta.Data.Source.Slot,
+				},
+			},
+		}
+		block.Body.Attestations = append(block.Body.Attestations, att)
+	}
+
+	return block
+}
+
+// toAttestation converts a fixture proposer attestation to types.Attestation.
+func (fa *fcAttestation) toAttestation() *types.Attestation {
+	return &types.Attestation{
+		ValidatorID: fa.ValidatorID,
+		Data: &types.AttestationData{
+			Slot: fa.Data.Slot,
+			Head: &types.Checkpoint{
+				Root: fcParseHexRoot(fa.Data.Head.Root),
+				Slot: fa.Data.Head.Slot,
+			},
+			Target: &types.Checkpoint{
+				Root: fcParseHexRoot(fa.Data.Target.Root),
+				Slot: fa.Data.Target.Slot,
+			},
+			Source: &types.Checkpoint{
+				Root: fcParseHexRoot(fa.Data.Source.Root),
+				Slot: fa.Data.Source.Slot,
+			},
+		},
+	}
+}
+
+// --- Test runner ---
+
+func TestSpecForkChoice(t *testing.T) {
+	logger.Quiet = true
+	defer func() { logger.Quiet = false }()
+
+	fixtureDir := "../leanSpec/fixtures/consensus/fork_choice"
+
+	var files []string
+	err := filepath.Walk(fixtureDir, func(path string, info os.FileInfo, err error) error {
+		if err != nil {
+			return err
+		}
+		if !info.IsDir() && filepath.Ext(path) == ".json" {
+			files = append(files, path)
+		}
+		return nil
+	})
+	if err != nil {
+		t.Fatalf("walking fixture dir %s: %v", fixtureDir, err)
+	}
+
+	if len(files) == 0 {
+		t.Fatalf("no fixture files found in %s", fixtureDir)
+	}
+
+	for _, file := range files {
+		file := file
+		relPath, _ := filepath.Rel(fixtureDir, file)
+		t.Run(relPath, func(t *testing.T) {
+			data, err := os.ReadFile(file)
+			if err != nil {
+				t.Fatalf("reading %s: %v", file, err)
+			}
+
+			var fixture fcFixture
+			if err := json.Unmarshal(data, &fixture); err != nil {
+				t.Fatalf("unmarshalling %s: %v", file, err)
+			}
+
+			for testName, tt := range fixture {
+				tt := tt
+				t.Run(testName, func(t *testing.T) {
+					runForkChoiceTest(t, &tt)
+				})
+			}
+		})
+	}
+}
+
+func runForkChoiceTest(t *testing.T, tt *fcTest) {
+	t.Helper()
+
+	// 1. Convert anchor state and block.
+	anchorState := tt.AnchorState.toState()
+	anchorBlock := tt.AnchorBlock.toBlock()
+
+	// Compute anchor block root.
+	anchorRoot, err := anchorBlock.HashTreeRoot()
+	if err != nil {
+		t.Fatalf("computing anchor block root: %v", err)
+	}
+
+	// 2. Initialize store with in-memory backend.
+	backend := storage.NewInMemoryBackend()
+	s := node.NewConsensusStore(backend)
+
+	// Store config from anchor state.
+	s.SetConfig(anchorState.Config)
+
+	// Store anchor state + block header.
+	anchorHeader := &types.BlockHeader{
+		Slot:          anchorBlock.Slot,
+		ProposerIndex: anchorBlock.ProposerIndex,
+		ParentRoot:    anchorBlock.ParentRoot,
+		StateRoot:     anchorBlock.StateRoot,
+	}
+	if anchorBlock.Body != nil {
+		bodyRoot, _ := anchorBlock.Body.HashTreeRoot()
+		anchorHeader.BodyRoot = bodyRoot
+	}
+
+	// Cache state root in anchor state's latest block header.
+	anchorState.LatestBlockHeader.StateRoot = anchorBlock.StateRoot
+
+	s.InsertBlockHeader(anchorRoot, anchorHeader)
+	s.InsertState(anchorRoot, anchorState)
+	s.InsertLiveChainEntry(anchorBlock.Slot, anchorRoot, anchorBlock.ParentRoot)
+	s.SetHead(anchorRoot)
+	s.SetLatestJustified(&types.Checkpoint{Root: anchorRoot, Slot: anchorBlock.Slot})
+	s.SetLatestFinalized(&types.Checkpoint{Root: anchorRoot, Slot: anchorBlock.Slot})
+
+	// Store anchor as signed block.
+	anchorSigned := &types.SignedBlockWithAttestation{
+		Block: &types.BlockWithAttestation{
+			Block:               anchorBlock,
+			ProposerAttestation: nil,
+		},
+		Signature: nil,
+	}
+	s.StorePendingBlock(anchorRoot, anchorSigned)
+
+	// 3. Initialize fork choice with anchor.
+	fc := forkchoice.New(anchorBlock.Slot, anchorRoot)
+
+	// Label -> root map for resolving blockRootLabel references.
+	labelRoots := make(map[string][32]byte)
+
+	// 4. Process steps.
+	for i, step := range tt.Steps {
+		switch step.StepType {
+		case "block":
+			if step.Block == nil {
+				t.Fatalf("step %d: block step without block data", i)
+			}
+
+			block := step.Block.Block.toBlock()
+
+			var proposerAtt *types.Attestation
+			if step.Block.ProposerAttestation != nil {
+				proposerAtt = step.Block.ProposerAttestation.toAttestation()
+			}
+
+			signedBlock := &types.SignedBlockWithAttestation{
+				Block: &types.BlockWithAttestation{
+					Block:               block,
+					ProposerAttestation: proposerAtt,
+				},
+				Signature: nil,
+			}
+
+			// Process block through store (no signature verification).
+			if err := node.OnBlockWithoutVerification(s, signedBlock); err != nil {
+				if step.Valid {
+					t.Fatalf("step %d: OnBlockWithoutVerification failed: %v", i, err)
+				}
+				continue
+			}
+
+			// Compute block root and register label.
+			blockRoot, _ := block.HashTreeRoot()
+			if step.Block.BlockRootLabel != "" {
+				labelRoots[step.Block.BlockRootLabel] = blockRoot
+			}
+
+			// Register block in fork choice.
+			fc.OnBlock(block.Slot, blockRoot, block.ParentRoot)
+
+			// Update head: extract known attestations, feed to fork choice, compute head.
+			attestations := s.ExtractLatestKnownAttestations()
+			justifiedRoot := s.LatestJustified().Root
+
+			for vid, data := range attestations {
+				idx := fc.NodeIndex(data.Head.Root)
+				if idx >= 0 {
+					fc.Votes.SetKnown(vid, idx, data.Slot, data)
+				}
+			}
+
+			newHead := fc.UpdateHead(justifiedRoot)
+			s.SetHead(newHead)
+
+			// Process proposer attestation AFTER updateHead.
+			node.ProcessProposerAttestation(s, signedBlock, false)
+
+			// Promote new payloads to known (so next updateHead sees them).
+			s.PromoteNewToKnown()
+
+			// Validate checks if present.
+			if step.Checks != nil {
+				validateChecks(t, i, step.Checks, s, fc, labelRoots)
+			}
+
+		case "tick":
+			if step.Time == nil {
+				t.Fatalf("step %d: tick step without time", i)
+			}
+			s.SetTime(*step.Time)
+
+		default:
+			t.Fatalf("step %d: unknown step type %q", i, step.StepType)
+		}
+	}
+}
+
+func validateChecks(t *testing.T, stepIdx int, checks *fcChecks, s *node.ConsensusStore, fc *forkchoice.ForkChoice, labelRoots map[string][32]byte) {
+	t.Helper()
+
+	headRoot := s.Head()
+
+	if checks.HeadSlot != nil {
+		headHeader := s.GetBlockHeader(headRoot)
+		if headHeader == nil {
+			t.Fatalf("step %d check: head block header not found for root 0x%x", stepIdx, headRoot)
+		}
+		if headHeader.Slot != *checks.HeadSlot {
+			t.Fatalf("step %d check: headSlot got %d, want %d", stepIdx, headHeader.Slot, *checks.HeadSlot)
+		}
+	}
+
+	if checks.HeadRoot != nil {
+		wantRoot := fcParseHexRoot(*checks.HeadRoot)
+		if headRoot != wantRoot {
+			t.Fatalf("step %d check: headRoot got 0x%x, want 0x%x", stepIdx, headRoot, wantRoot)
+		}
+	}
+
+	if checks.HeadRootLabel != nil {
+		if labelRoot, ok := labelRoots[*checks.HeadRootLabel]; ok {
+			if headRoot != labelRoot {
+				t.Fatalf("step %d check: headRootLabel %q got 0x%x, want 0x%x",
+					stepIdx, *checks.HeadRootLabel, headRoot, labelRoot)
+			}
+		}
+	}
+}
diff --git a/spectests/signatures_test.go b/spectests/signatures_test.go
new file mode 100644
index 0000000..2171562
--- /dev/null
+++ b/spectests/signatures_test.go
@@ -0,0 +1,326 @@
+//go:build spectests
+
+package spectests
+
+import (
+	"encoding/hex"
+	"encoding/json"
+	"fmt"
+	"os"
+	"path/filepath"
+	"strings"
+	"testing"
+
+	"github.com/geanlabs/gean/logger"
+	"github.com/geanlabs/gean/node"
+	"github.com/geanlabs/gean/storage"
+	"github.com/geanlabs/gean/types"
+)
+
+// Fixture types for signature verification tests.
+
+type sigFixture map[string]sigTest
+
+type sigTest struct {
+	Network                    string   `json:"network"`
+	LeanEnv                    string   `json:"leanEnv"`
+	AnchorState                sigState `json:"anchorState"`
+	SignedBlockWithAttestation sigSBA   `json:"signedBlockWithAttestation"`
+	ExpectException            *string  `json:"expectException"`
+}
+
+type sigState struct {
+	Config                   fcConfig        `json:"config"`
+	Slot                     uint64          `json:"slot"`
+	LatestBlockHeader        fcBlockHeader   `json:"latestBlockHeader"`
+	LatestJustified          fcCheckpoint    `json:"latestJustified"`
+	LatestFinalized          fcCheckpoint    `json:"latestFinalized"`
+	HistoricalBlockHashes    fcDataList      `json:"historicalBlockHashes"`
+	JustifiedSlots           fcDataList      `json:"justifiedSlots"`
+	Validators               fcValidatorList `json:"validators"`
+	JustificationsRoots      fcDataList      `json:"justificationsRoots"`
+	JustificationsValidators fcDataList      `json:"justificationsValidators"`
+}
+
+type sigSBA struct {
+	Message   sigMessage   `json:"message"`
+	Signature sigSignature `json:"signature"`
+}
+
+type sigMessage struct {
+	Block               fcBlock        `json:"block"`
+	ProposerAttestation *fcAttestation `json:"proposerAttestation,omitempty"`
+}
+
+type sigSignature struct {
+	ProposerSignature     string        `json:"proposerSignature"`
+	AttestationSignatures sigAttSigList `json:"attestationSignatures"`
+}
+
+type sigAttSigList struct {
+	Data []sigAttSigProof `json:"data"`
+}
+
+type sigAttSigProof struct {
+	Participants sigBoolList  `json:"participants"`
+	ProofData    sigProofData `json:"proofData"`
+}
+
+type sigBoolList struct {
+	Data []json.RawMessage `json:"data"`
+}
+
+type sigProofData struct {
+	Data string `json:"data"`
+}
+
+// Parsing helpers (duplicated from fork choice tests to keep file self-contained).
+
+func sigParseHexRoot(s string) [32]byte {
+	s = strings.TrimPrefix(s, "0x")
+	b, err := hex.DecodeString(s)
+	if err != nil {
+		panic(fmt.Sprintf("sigParseHexRoot: invalid hex %q: %v", s, err))
+	}
+	var root [32]byte
+	copy(root[:], b)
+	return root
+}
+
+func sigParseHexBytes(s string) []byte {
+	s = strings.TrimPrefix(s, "0x")
+	b, err := hex.DecodeString(s)
+	if err != nil {
+		panic(fmt.Sprintf("sigParseHexBytes: invalid hex %q: %v", s, err))
+	}
+	return b
+}
+
+func sigParseHexPubkey(s string) [types.PubkeySize]byte {
+	b := sigParseHexBytes(s)
+	var pk [types.PubkeySize]byte
+	copy(pk[:], b)
+	return pk
+}
+
+func sigParseBoolBitlist(data []json.RawMessage) []byte {
+	length := uint64(len(data))
+	if length == 0 {
+		return types.NewBitlistSSZ(0)
+	}
+	bl := types.NewBitlistSSZ(length)
+	for i, raw := range data {
+		var val bool
+		if err := json.Unmarshal(raw, &val); err != nil {
+			var intVal int
+			if err2 := json.Unmarshal(raw, &intVal); err2 != nil {
+				panic(fmt.Sprintf("sigParseBoolBitlist index %d: %v / %v", i, err, err2))
+			}
+			val = intVal != 0
+		}
+		if val {
+			types.BitlistSet(bl, uint64(i))
+		}
+	}
+	return bl
+}
+
+func sigParseHexSignature(s string) [types.SignatureSize]byte {
+	b := sigParseHexBytes(s)
+	var sig [types.SignatureSize]byte
+	copy(sig[:], b)
+	return sig
+}
+
+// toState converts fixture anchor state to types.State.
+func (fs *sigState) toState() *types.State {
+	state := &types.State{
+		Config: &types.ChainConfig{
+			GenesisTime: fs.Config.GenesisTime,
+		},
+		Slot: fs.Slot,
+		LatestBlockHeader: &types.BlockHeader{
+			Slot:          fs.LatestBlockHeader.Slot,
+			ProposerIndex: fs.LatestBlockHeader.ProposerIndex,
+			ParentRoot:    sigParseHexRoot(fs.LatestBlockHeader.ParentRoot),
+			StateRoot:     sigParseHexRoot(fs.LatestBlockHeader.StateRoot),
+			BodyRoot:      sigParseHexRoot(fs.LatestBlockHeader.BodyRoot),
+		},
+		LatestJustified: &types.Checkpoint{
+			Root: sigParseHexRoot(fs.LatestJustified.Root),
+			Slot: fs.LatestJustified.Slot,
+		},
+		LatestFinalized: &types.Checkpoint{
+			Root: sigParseHexRoot(fs.LatestFinalized.Root),
+			Slot: fs.LatestFinalized.Slot,
+		},
+	}
+
+	for _, v := range fs.Validators.Data {
+		state.Validators = append(state.Validators, &types.Validator{
+			Pubkey: sigParseHexPubkey(v.Pubkey),
+			Index:  v.Index,
+		})
+	}
+
+	for _, raw := range fs.HistoricalBlockHashes.Data {
+		var s string
+		if err := json.Unmarshal(raw, &s); err != nil {
+			panic(fmt.Sprintf("HistoricalBlockHashes: %v", err))
+		}
+		b := sigParseHexBytes(s)
+		h := make([]byte, 32)
+		copy(h, b)
+		state.HistoricalBlockHashes = append(state.HistoricalBlockHashes, h)
+	}
+
+	state.JustifiedSlots = sigParseBoolBitlist(fs.JustifiedSlots.Data)
+
+	for _, raw := range fs.JustificationsRoots.Data {
+		var s string
+		if err := json.Unmarshal(raw, &s); err != nil {
+			panic(fmt.Sprintf("JustificationsRoots: %v", err))
+		}
+		b := sigParseHexBytes(s)
+		h := make([]byte, 32)
+		copy(h, b)
+		state.JustificationsRoots = append(state.JustificationsRoots, h)
+	}
+
+	state.JustificationsValidators = sigParseBoolBitlist(fs.JustificationsValidators.Data)
+
+	return state
+}
+
+// toSignedBlock converts fixture signed block with attestation to types.SignedBlockWithAttestation.
+func (sba *sigSBA) toSignedBlock() *types.SignedBlockWithAttestation {
+	block := sba.Message.Block.toBlock()
+
+	var proposerAtt *types.Attestation
+	if sba.Message.ProposerAttestation != nil {
+		proposerAtt = sba.Message.ProposerAttestation.toAttestation()
+	}
+
+	proposerSig := sigParseHexSignature(sba.Signature.ProposerSignature)
+
+	var attSigs []*types.AggregatedSignatureProof
+	for _, proof := range sba.Signature.AttestationSignatures.Data {
+		participants := sigParseBoolBitlist(proof.Participants.Data)
+		proofData := sigParseHexBytes(proof.ProofData.Data)
+		attSigs = append(attSigs, &types.AggregatedSignatureProof{
+			Participants: participants,
+			ProofData:    proofData,
+		})
+	}
+
+	return &types.SignedBlockWithAttestation{
+		Block: &types.BlockWithAttestation{
+			Block:               block,
+			ProposerAttestation: proposerAtt,
+		},
+		Signature: &types.BlockSignatures{
+			ProposerSignature:     proposerSig,
+			AttestationSignatures: attSigs,
+		},
+	}
+}
+
+// Test runner.
+
+func TestSpecSignatures(t *testing.T) {
+	logger.Quiet = true
+	defer func() { logger.Quiet = false }()
+
+	fixtureDir := "../leanSpec/fixtures/consensus/verify_signatures"
+
+	var files []string
+	err := filepath.Walk(fixtureDir, func(path string, info os.FileInfo, err error) error {
+		if err != nil {
+			return err
+		}
+		if !info.IsDir() && filepath.Ext(path) == ".json" {
+			files = append(files, path)
+		}
+		return nil
+	})
+	if err != nil {
+		t.Fatalf("walking fixture dir %s: %v", fixtureDir, err)
+	}
+
+	if len(files) == 0 {
+		t.Fatalf("no fixture files found in %s", fixtureDir)
+	}
+
+	for _, file := range files {
+		file := file
+		relPath, _ := filepath.Rel(fixtureDir, file)
+		t.Run(relPath, func(t *testing.T) {
+			data, err := os.ReadFile(file)
+			if err != nil {
+				t.Fatalf("reading %s: %v", file, err)
+			}
+
+			var fixture sigFixture
+			if err := json.Unmarshal(data, &fixture); err != nil {
+				t.Fatalf("unmarshalling %s: %v", file, err)
+			}
+
+			for testName, tt := range fixture {
+				tt := tt
+				t.Run(testName, func(t *testing.T) {
+					runSignatureTest(t, &tt)
+				})
+			}
+		})
+	}
+}
+
+func runSignatureTest(t *testing.T, tt *sigTest) {
+	t.Helper()
+
+	// 1. Convert anchor state.
+	anchorState := tt.AnchorState.toState()
+
+	// 2. Fill state root in header if zero (genesis case), then compute anchor block root.
+	// Matches initStoreFromState in main.go.
+	stateRoot, _ := anchorState.HashTreeRoot()
+	header := anchorState.LatestBlockHeader
+	if header.StateRoot == types.ZeroRoot {
+		header.StateRoot = stateRoot
+	}
+	anchorRoot, err := header.HashTreeRoot()
+	if err != nil {
+		t.Fatalf("computing anchor block root: %v", err)
+	}
+
+	// 3. Initialize store with in-memory backend.
+	backend := storage.NewInMemoryBackend()
+	s := node.NewConsensusStore(backend)
+
+	s.SetConfig(anchorState.Config)
+	s.InsertBlockHeader(anchorRoot, header)
+	s.InsertState(anchorRoot, anchorState)
+	s.InsertLiveChainEntry(header.Slot, anchorRoot, header.ParentRoot)
+	s.SetHead(anchorRoot)
+	s.SetLatestJustified(&types.Checkpoint{Root: anchorRoot, Slot: header.Slot})
+	s.SetLatestFinalized(&types.Checkpoint{Root: anchorRoot, Slot: header.Slot})
+
+	// 4. Convert fixture signed block.
+	signedBlock := tt.SignedBlockWithAttestation.toSignedBlock()
+
+	// 5. Call OnBlock WITH signature verification.
+	err = node.OnBlock(s, signedBlock, nil)
+
+	// 6. Check result against expectation.
+	expectFailure := tt.ExpectException != nil
+
+	if expectFailure {
+		if err == nil {
+			t.Fatalf("expected failure (expectException=%q) but OnBlock succeeded", *tt.ExpectException)
+		}
+	} else {
+		if err != nil {
+			t.Fatalf("expected success but OnBlock failed: %v", err)
+		}
+	}
+}
diff --git a/spectests/stf_spectests_test.go b/spectests/stf_spectests_test.go
deleted file mode 100644
index cee370b..0000000
--- a/spectests/stf_spectests_test.go
+++ /dev/null
@@ -1,220 +0,0 @@
-//go:build skip_sig_verify
-
-package spectests
-
-import (
-	"encoding/json"
-	"fmt"
-	"os"
-	"path/filepath"
-	"testing"
-
-	"github.com/geanlabs/gean/chain/statetransition"
-	"github.com/geanlabs/gean/types"
-)
-
-const stfFixtureDir = "../leanSpec/fixtures/consensus/state_transition"
-
-func TestStateTransition(t *testing.T) {
-	files := findJSONFiles(t, stfFixtureDir)
-
-	for _, file := range files {
-		file := file
-		relPath, _ := filepath.Rel(stfFixtureDir, file)
-		t.Run(relPath, func(t *testing.T) {
-			runStateTransitionFixture(t, file)
-		})
-	}
-}
-
-func findJSONFiles(t *testing.T, root string) []string {
-	t.Helper()
-	var files []string
-	err := filepath.WalkDir(root, func(path string, d os.DirEntry, err error) error {
-		if err != nil {
-			return err
-		}
-		if !d.IsDir() && filepath.Ext(path) == ".json" {
-			files = append(files, path)
-		}
-		return nil
-	})
-	if err != nil {
-		t.Fatalf("failed to walk fixture directory %s: %v", root, err)
-	}
-	if len(files) == 0 {
-		t.Fatalf("no fixture files found in %s — run 'make leanSpec/fixtures' first", root)
-	}
-	return files
-}
-
-func runStateTransitionFixture(t *testing.T, path string) {
-	t.Helper()
-
-	data, err := os.ReadFile(path)
-	if err != nil {
-		t.Fatalf("failed to read fixture: %v", err)
-	}
-
-	var fixture StateTransitionFixture
-	if err := json.Unmarshal(data, &fixture); err != nil {
-		t.Fatalf("failed to unmarshal fixture: %v", err)
-	}
-
-	for testName, tc := range fixture {
-		tc := tc
-		t.Run(testName, func(t *testing.T) {
-			if tc.Info.FixtureFormat != "state_transition_test" {
-				t.Skipf("unsupported fixture format: %s", tc.Info.FixtureFormat)
-			}
-
-			state := convertState(tc.Pre)
-			expectFailure := tc.ExpectException != nil || tc.Post == nil
-
-			var transitionErr error
-			for _, fb := range tc.Blocks {
-				block := convertBlock(fb)
-				state, transitionErr = statetransition.StateTransition(state, block)
-				if transitionErr != nil {
-					break
-				}
-			}
-
-			if expectFailure {
-				if transitionErr == nil && len(tc.Blocks) > 0 {
-					t.Fatalf("[%s] expected failure but state transition succeeded", testName)
-				}
-				return
-			}
-
-			if transitionErr != nil {
-				t.Fatalf("[%s] unexpected state transition error: %v", testName, transitionErr)
-			}
-
-			validatePostState(t, testName, state, tc.Post)
-		})
-	}
-}
-
-func validatePostState(t *testing.T, testName string, state *types.State, post *PostState) {
-	t.Helper()
-	if post == nil {
-		return
-	}
-
-	check := func(field string, got, want interface{}) {
-		if fmt.Sprintf("%v", got) != fmt.Sprintf("%v", want) {
-			t.Errorf("[%s] %s mismatch: got %v, want %v", testName, field, got, want)
-		}
-	}
-
-	if post.Slot != nil {
-		check("slot", state.Slot, *post.Slot)
-	}
-	if post.LatestJustifiedSlot != nil {
-		check("latestJustified.slot", state.LatestJustified.Slot, *post.LatestJustifiedSlot)
-	}
-	if post.LatestJustifiedRoot != nil {
-		check("latestJustified.root", state.LatestJustified.Root, [32]byte(*post.LatestJustifiedRoot))
-	}
-	if post.LatestFinalizedSlot != nil {
-		check("latestFinalized.slot", state.LatestFinalized.Slot, *post.LatestFinalizedSlot)
-	}
-	if post.LatestFinalizedRoot != nil {
-		check("latestFinalized.root", state.LatestFinalized.Root, [32]byte(*post.LatestFinalizedRoot))
-	}
-	if post.ValidatorCount != nil {
-		check("validatorCount", uint64(len(state.Validators)), *post.ValidatorCount)
-	}
-	if post.ConfigGenesisTime != nil {
-		check("config.genesisTime", state.Config.GenesisTime, *post.ConfigGenesisTime)
-	}
-	if post.LatestBlockHeaderSlot != nil {
-		check("latestBlockHeader.slot", state.LatestBlockHeader.Slot, *post.LatestBlockHeaderSlot)
-	}
-	if post.LatestBlockHeaderProposerIndex != nil {
-		check("latestBlockHeader.proposerIndex", state.LatestBlockHeader.ProposerIndex, *post.LatestBlockHeaderProposerIndex)
-	}
-	if post.LatestBlockHeaderParentRoot != nil {
-		check("latestBlockHeader.parentRoot", state.LatestBlockHeader.ParentRoot, [32]byte(*post.LatestBlockHeaderParentRoot))
-	}
-	if post.LatestBlockHeaderStateRoot != nil {
-		check("latestBlockHeader.stateRoot", state.LatestBlockHeader.StateRoot, [32]byte(*post.LatestBlockHeaderStateRoot))
-	}
-	if post.LatestBlockHeaderBodyRoot != nil {
-		check("latestBlockHeader.bodyRoot", state.LatestBlockHeader.BodyRoot, [32]byte(*post.LatestBlockHeaderBodyRoot))
-	}
-	if post.HistoricalBlockHashesCount != nil {
-		check("historicalBlockHashes.count", uint64(len(state.HistoricalBlockHashes)), *post.HistoricalBlockHashesCount)
-	}
-	if post.HistoricalBlockHashes != nil {
-		expected := make([][32]byte, len(post.HistoricalBlockHashes.Data))
-		for i, h := range post.HistoricalBlockHashes.Data {
-			expected[i] = [32]byte(h)
-		}
-		if len(state.HistoricalBlockHashes) != len(expected) {
-			t.Errorf("[%s] historicalBlockHashes length mismatch: got %d, want %d",
-				testName, len(state.HistoricalBlockHashes), len(expected))
-		} else {
-			for i := range expected {
-				if state.HistoricalBlockHashes[i] != expected[i] {
-					t.Errorf("[%s] historicalBlockHashes[%d] mismatch: got %x, want %x",
-						testName, i, state.HistoricalBlockHashes[i], expected[i])
-				}
-			}
-		}
-	}
-	if post.JustifiedSlots != nil {
-		expectedBitlist := buildBitlist(post.JustifiedSlots.Data)
-		actualLen := statetransition.BitlistLen(state.JustifiedSlots)
-		expectedLen := statetransition.BitlistLen(expectedBitlist)
-		if actualLen != expectedLen {
-			t.Errorf("[%s] justifiedSlots length mismatch: got %d bits, want %d bits",
-				testName, actualLen, expectedLen)
-		} else {
-			for i := 0; i < actualLen; i++ {
-				a := statetransition.GetBit(state.JustifiedSlots, uint64(i))
-				e := statetransition.GetBit(expectedBitlist, uint64(i))
-				if a != e {
-					t.Errorf("[%s] justifiedSlots[%d] mismatch: got %v, want %v",
-						testName, i, a, e)
-				}
-			}
-		}
-	}
-	if post.JustificationsRoots != nil {
-		expected := make([][32]byte, len(post.JustificationsRoots.Data))
-		for i, r := range post.JustificationsRoots.Data {
-			expected[i] = [32]byte(r)
-		}
-		if len(state.JustificationsRoots) != len(expected) {
-			t.Errorf("[%s] justificationsRoots length mismatch: got %d, want %d",
-				testName, len(state.JustificationsRoots), len(expected))
-		} else {
-			for i := range expected {
-				if state.JustificationsRoots[i] != expected[i] {
-					t.Errorf("[%s] justificationsRoots[%d] mismatch: got %x, want %x",
-						testName, i, state.JustificationsRoots[i], expected[i])
-				}
-			}
-		}
-	}
-	if post.JustificationsValidators != nil {
-		expectedBitlist := buildBoolBitlist(post.JustificationsValidators.Data)
-		actualLen := statetransition.BitlistLen(state.JustificationsValidators)
-		expectedLen := statetransition.BitlistLen(expectedBitlist)
-		if actualLen != expectedLen {
-			t.Errorf("[%s] justificationsValidators length mismatch: got %d bits, want %d bits",
-				testName, actualLen, expectedLen)
-		} else {
-			for i := 0; i < actualLen; i++ {
-				a := statetransition.GetBit(state.JustificationsValidators, uint64(i))
-				e := statetransition.GetBit(expectedBitlist, uint64(i))
-				if a != e {
-					t.Errorf("[%s] justificationsValidators[%d] mismatch: got %v, want %v",
-						testName, i, a, e)
-				}
-			}
-		}
-	}
-}
diff --git a/spectests/stf_test.go b/spectests/stf_test.go
new file mode 100644
index 0000000..32bf4b8
--- /dev/null
+++ b/spectests/stf_test.go
@@ -0,0 +1,93 @@
+//go:build spectests
+
+package spectests
+
+import (
+	"encoding/json"
+	"os"
+	"path/filepath"
+	"testing"
+
+	"github.com/geanlabs/gean/logger"
+	"github.com/geanlabs/gean/statetransition"
+)
+
+func TestSpecStateTransition(t *testing.T) {
+	logger.Quiet = true
+	defer func() { logger.Quiet = false }()
+
+	fixtureDir := "../leanSpec/fixtures/consensus/state_transition"
+
+	var files []string
+	err := filepath.Walk(fixtureDir, func(path string, info os.FileInfo, err error) error {
+		if err != nil {
+			return err
+		}
+		if !info.IsDir() && filepath.Ext(path) == ".json" {
+			files = append(files, path)
+		}
+		return nil
+	})
+	if err != nil {
+		t.Fatalf("walking fixture dir %s: %v", fixtureDir, err)
+	}
+
+	if len(files) == 0 {
+		t.Fatalf("no fixture files found in %s", fixtureDir)
+	}
+
+	for _, file := range files {
+		file := file
+		relPath, _ := filepath.Rel(fixtureDir, file)
+		t.Run(relPath, func(t *testing.T) {
+			data, err := os.ReadFile(file)
+			if err != nil {
+				t.Fatalf("reading %s: %v", file, err)
+			}
+
+			var fixture TestFixture
+			if err := json.Unmarshal(data, &fixture); err != nil {
+				t.Fatalf("unmarshalling %s: %v", file, err)
+			}
+
+			for testName, tt := range fixture {
+				tt := tt
+				t.Run(testName, func(t *testing.T) {
+					runStateTransitionTest(t, &tt)
+				})
+			}
+		})
+	}
+}
+
+func runStateTransitionTest(t *testing.T, tt *StateTransitionTest) {
+	t.Helper()
+
+	expectError := tt.ExpectException != "" || tt.Post == nil
+
+	state := tt.Pre.ToState()
+
+	var lastErr error
+	for i, tb := range tt.Blocks {
+		block := tb.ToBlock()
+		if err := statetransition.StateTransition(state, block); err != nil {
+			lastErr = err
+			if expectError {
+				t.Logf("block %d (slot %d): expected error: %v", i, tb.Slot, err)
+				return
+			}
+			t.Fatalf("block %d (slot %d): unexpected error: %v", i, tb.Slot, err)
+		}
+	}
+
+	if expectError && lastErr == nil {
+		t.Fatal("expected error but all blocks processed successfully")
+	}
+
+	// No blocks and no error expected: genesis validation test.
+	if tt.Post != nil {
+		if err := tt.Post.Validate(state); err != nil {
+			t.Fatalf("post-state validation failed: %v", err)
+		}
+	}
+}
diff --git a/statetransition/attestations.go b/statetransition/attestations.go
new file mode 100644
index 0000000..7cf40e0
--- /dev/null
+++ b/statetransition/attestations.go
@@ -0,0 +1,286 @@
+package statetransition
+
+import (
+	"sort"
+
+	"github.com/geanlabs/gean/types"
+)
+
+// ProcessAttestations processes all aggregated attestations in a block body,
+// updating justification and finalization state.
+func ProcessAttestations(state *types.State, attestations []*types.AggregatedAttestation) error {
+	validatorCount := int(state.NumValidators())
+	if validatorCount == 0 {
+		return nil
+	}
+
+	// Precondition: justifications_roots must not contain zero hashes (spec state.py L389).
+	for _, root := range state.JustificationsRoots {
+		var r [32]byte
+		copy(r[:], root)
+		if types.IsZeroRoot(r) {
+			return ErrZeroHashInJustificationRoots
+		}
+	}
+
+	// Reconstruct pending justifications from flat SSZ storage into a map.
+	// Key: target root, Value: per-validator vote booleans.
+	justifications := reconstructJustifications(state, validatorCount)
+
+	// Build root → slot lookup for finalization pruning.
+	rootToSlot := buildRootToSlot(state)
+
+	for _, agg := range attestations {
+		source := agg.Data.Source
+		target := agg.Data.Target
+
+		if !isValidVote(state, source, target) {
+			continue
+		}
+
+		// Get or create vote tracking for this target root.
+		votes, exists := justifications[target.Root]
+		if !exists {
+			votes = make([]bool, validatorCount)
+			justifications[target.Root] = votes
+		}
+
+		// Reject oversized aggregation_bits (spec would crash on OOB).
+		bitsLen := types.BitlistLen(agg.AggregationBits)
+		if bitsLen > uint64(validatorCount) {
+			continue
+		}
+
+		// Mark validators as having voted.
+ for i := uint64(0); i < bitsLen; i++ { + if types.BitlistGet(agg.AggregationBits, i) { + votes[i] = true + } + } + + // Check supermajority: 3 * votes >= 2 * validators. + voteCount := countTrue(votes) + if 3*voteCount >= 2*validatorCount { + // Justify the target. + state.LatestJustified = target + setSlotJustified(state, state.LatestFinalized.Slot, target.Slot) + + // Remove from pending (now justified). + delete(justifications, target.Root) + + // Try to finalize source. + tryFinalize(state, source, target, &justifications, rootToSlot) + } + } + + // Serialize back to flat SSZ storage (sorted for determinism). + serializeJustifications(state, justifications, validatorCount) + + return nil +} + +// isValidVote checks the 6 validation rules for an attestation vote. +func isValidVote(state *types.State, source, target *types.Checkpoint) bool { + finalizedSlot := state.LatestFinalized.Slot + + // 1. Source must already be justified. + if !isSlotJustified(state, finalizedSlot, source.Slot) { + return false + } + // 2. Target must not already be justified. + if isSlotJustified(state, finalizedSlot, target.Slot) { + return false + } + // 3. Neither root can be zero. + if types.IsZeroRoot(source.Root) || types.IsZeroRoot(target.Root) { + return false + } + // 4. Both checkpoints must exist in historical_block_hashes. + if !checkpointExists(state, source) || !checkpointExists(state, target) { + return false + } + // 5. Time flows forward. + if target.Slot <= source.Slot { + return false + } + // 6. Target is justifiable after finalized (3SF-mini). + if !SlotIsJustifiableAfter(target.Slot, finalizedSlot) { + return false + } + return true +} + +// tryFinalize attempts to advance finalization from source to target. +// Finalization succeeds when there are no justifiable slots between +// source.slot and target.slot (exclusive). 
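That gap condition can be sketched standalone. The helper names below are illustrative rather than package API, and the justifiability rule (delta <= 5, perfect square, or pronic) is reimplemented locally:

```go
package main

import (
	"fmt"
	"math"
)

// slotIsJustifiableAfter mirrors the 3SF-mini rule used in this package:
// the distance past finalized must be <= 5, a perfect square, or pronic.
func slotIsJustifiableAfter(slot, finalized uint64) bool {
	if slot <= finalized {
		return false
	}
	d := slot - finalized
	if d <= 5 {
		return true
	}
	s := uint64(math.Sqrt(float64(d)))
	if s*s == d {
		return true
	}
	v := 4*d + 1 // d pronic iff 4d+1 is an odd perfect square
	sv := uint64(math.Sqrt(float64(v)))
	return sv*sv == v && sv%2 == 1
}

// canFinalize reports whether justifying target lets source finalize:
// no justifiable slot may sit strictly between source and target.
func canFinalize(source, target, finalized uint64) bool {
	for s := source + 1; s < target; s++ {
		if slotIsJustifiableAfter(s, finalized) {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(canFinalize(5, 6, 0)) // true: nothing between adjacent slots
	fmt.Println(canFinalize(5, 9, 0)) // false: slot 6 (pronic, 2*3) is justifiable
}
```

Intuitively, if some slot in the gap could still be justified, finalizing past it would cut off a valid justification path, so the gap must be empty of justifiable slots.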
+func tryFinalize( + state *types.State, + source, target *types.Checkpoint, + justifications *map[[32]byte][]bool, + rootToSlot map[[32]byte]uint64, +) { + // Check for any justifiable slot in the gap. + for s := source.Slot + 1; s < target.Slot; s++ { + if SlotIsJustifiableAfter(s, state.LatestFinalized.Slot) { + return // gap exists, cannot finalize + } + } + + oldFinalizedSlot := state.LatestFinalized.Slot + state.LatestFinalized = source + + // Shift justified_slots window forward. + delta := state.LatestFinalized.Slot - oldFinalizedSlot + shiftJustifiedSlots(state, delta) + + // Prune justifications whose roots are at or below the new finalized slot. + for root := range *justifications { + slot, found := rootToSlot[root] + if !found || slot <= state.LatestFinalized.Slot { + delete(*justifications, root) + } + } +} + +// --- justified_slots operations --- + +// isSlotJustified checks if a slot is justified. +// Slots at or before finalized are implicitly justified. +func isSlotJustified(state *types.State, finalizedSlot, slot uint64) bool { + if slot <= finalizedSlot { + return true + } + relIndex := slot - finalizedSlot - 1 + return types.BitlistGet(state.JustifiedSlots, relIndex) +} + +// setSlotJustified marks a slot as justified. +func setSlotJustified(state *types.State, finalizedSlot, slot uint64) { + if slot <= finalizedSlot { + return + } + relIndex := slot - finalizedSlot - 1 + jsLen := types.BitlistLen(state.JustifiedSlots) + if relIndex >= jsLen { + state.JustifiedSlots = types.BitlistExtend(state.JustifiedSlots, relIndex+1) + } + types.BitlistSet(state.JustifiedSlots, relIndex) +} + +// shiftJustifiedSlots drops `delta` bits from the front when finalization advances. 
+func shiftJustifiedSlots(state *types.State, delta uint64) { + if delta == 0 { + return + } + oldLen := types.BitlistLen(state.JustifiedSlots) + if delta >= oldLen { + state.JustifiedSlots = types.NewBitlistSSZ(0) + return + } + newLen := oldLen - delta + newBits := types.NewBitlistSSZ(newLen) + for i := uint64(0); i < newLen; i++ { + if types.BitlistGet(state.JustifiedSlots, i+delta) { + types.BitlistSet(newBits, i) + } + } + state.JustifiedSlots = newBits +} + +// --- helpers --- + +func checkpointExists(state *types.State, cp *types.Checkpoint) bool { + slot := cp.Slot + if slot >= uint64(len(state.HistoricalBlockHashes)) { + return false + } + var stored [32]byte + copy(stored[:], state.HistoricalBlockHashes[slot]) + return stored == cp.Root +} + +func countTrue(votes []bool) int { + count := 0 + for _, v := range votes { + if v { + count++ + } + } + return count +} + +// reconstructJustifications converts flat SSZ storage into a vote map. +func reconstructJustifications(state *types.State, validatorCount int) map[[32]byte][]bool { + justifications := make(map[[32]byte][]bool) + for i, rootBytes := range state.JustificationsRoots { + var root [32]byte + copy(root[:], rootBytes) + votes := make([]bool, validatorCount) + for v := 0; v < validatorCount; v++ { + bitIdx := uint64(i*validatorCount + v) + if types.BitlistGet(state.JustificationsValidators, bitIdx) { + votes[v] = true + } + } + justifications[root] = votes + } + return justifications +} + +// buildRootToSlot maps each root to its latest slot in historical_block_hashes. 
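The relative indexing used by isSlotJustified/setSlotJustified, and the window shift applied when finalization advances, can be modeled with a plain `[]bool` in place of the package's SSZ bitlist. This is a sketch of the invariant, not the package's types:

```go
package main

import "fmt"

// window models justified_slots: index 0 is the first slot after the latest
// finalized slot, so bit (slot - finalized - 1) tracks slot's justification.
type window struct {
	finalized uint64
	bits      []bool
}

func (w *window) isJustified(slot uint64) bool {
	if slot <= w.finalized {
		return true // at or before finalized: implicitly justified
	}
	rel := slot - w.finalized - 1
	return rel < uint64(len(w.bits)) && w.bits[rel]
}

func (w *window) setJustified(slot uint64) {
	if slot <= w.finalized {
		return
	}
	rel := slot - w.finalized - 1
	for uint64(len(w.bits)) <= rel {
		w.bits = append(w.bits, false)
	}
	w.bits[rel] = true
}

// advance moves the finalized boundary forward, dropping bits that now fall
// at or below it — the shiftJustifiedSlots operation.
func (w *window) advance(newFinalized uint64) {
	delta := newFinalized - w.finalized
	if delta >= uint64(len(w.bits)) {
		w.bits = nil
	} else {
		w.bits = w.bits[delta:]
	}
	w.finalized = newFinalized
}

func main() {
	w := &window{finalized: 0}
	w.setJustified(3)
	fmt.Println(w.isJustified(3), w.isJustified(2)) // true false
	w.advance(2)
	fmt.Println(w.isJustified(3)) // true: slot 3's bit shifted from index 2 to 0
}
```

The point of the shift is that a slot's bit survives finalization advances: its absolute slot number stays justified even though its relative index changes.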
+func buildRootToSlot(state *types.State) map[[32]byte]uint64 { + rootToSlot := make(map[[32]byte]uint64) + start := state.LatestFinalized.Slot + 1 + for i := start; i < uint64(len(state.HistoricalBlockHashes)); i++ { + var root [32]byte + copy(root[:], state.HistoricalBlockHashes[i]) + if existing, ok := rootToSlot[root]; !ok || i > existing { + rootToSlot[root] = i + } + } + return rootToSlot +} + +// serializeJustifications converts vote map back to flat SSZ storage. +// Roots are sorted for deterministic output. +func serializeJustifications(state *types.State, justifications map[[32]byte][]bool, validatorCount int) { + // Sort roots for deterministic output. + roots := make([][32]byte, 0, len(justifications)) + for root := range justifications { + roots = append(roots, root) + } + sort.Slice(roots, func(i, j int) bool { + for k := 0; k < 32; k++ { + if roots[i][k] != roots[j][k] { + return roots[i][k] < roots[j][k] + } + } + return false + }) + + // Rebuild justifications_roots. + sszRoots := make([][]byte, len(roots)) + for i, root := range roots { + r := make([]byte, 32) + copy(r, root[:]) + sszRoots[i] = r + } + state.JustificationsRoots = sszRoots + + // Rebuild justifications_validators. + totalBits := uint64(len(roots)) * uint64(validatorCount) + if totalBits == 0 { + state.JustificationsValidators = types.NewBitlistSSZ(0) + return + } + bits := types.NewBitlistSSZ(totalBits) + for i, root := range roots { + votes := justifications[root] + for v := 0; v < validatorCount; v++ { + if votes[v] { + types.BitlistSet(bits, uint64(i*validatorCount+v)) + } + } + } + state.JustificationsValidators = bits +} diff --git a/statetransition/block.go b/statetransition/block.go new file mode 100644 index 0000000..4289270 --- /dev/null +++ b/statetransition/block.go @@ -0,0 +1,100 @@ +package statetransition + +import ( + "github.com/geanlabs/gean/types" +) + +// ProcessBlock validates a block and applies it to the state. 
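The flat layout that reconstructJustifications and serializeJustifications round-trip — bit `i*validatorCount + v` holds validator v's vote on the i-th root — can be shown self-contained with `[]bool` standing in for the SSZ bitlist (helper names are illustrative):

```go
package main

import "fmt"

const validatorCount = 4

// pack flattens per-root vote slices into one bit array, root-major:
// bit index = rootIdx*validatorCount + validatorIdx.
func pack(votesByRoot [][]bool) []bool {
	bits := make([]bool, len(votesByRoot)*validatorCount)
	for i, votes := range votesByRoot {
		for v, set := range votes {
			bits[i*validatorCount+v] = set
		}
	}
	return bits
}

// unpack reverses pack, recovering one vote slice per root.
func unpack(bits []bool, numRoots int) [][]bool {
	out := make([][]bool, numRoots)
	for i := range out {
		out[i] = append([]bool(nil), bits[i*validatorCount:(i+1)*validatorCount]...)
	}
	return out
}

func main() {
	votes := [][]bool{{true, false, true, false}, {false, true, true, true}}
	bits := pack(votes)
	fmt.Println(bits[0], bits[5]) // true true: root 0/validator 0, root 1/validator 1
}
```

Because the bit position of every vote depends on its root's index, the roots must be serialized in a deterministic (sorted) order — otherwise two nodes with identical vote maps would hash different states.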
+func ProcessBlock(state *types.State, block *types.Block) error { + if err := ProcessBlockHeader(state, block); err != nil { + return err + } + return ProcessAttestations(state, block.Body.Attestations) +} + +// ProcessBlockHeader validates the block header and updates the state. +func ProcessBlockHeader(state *types.State, block *types.Block) error { + numValidators := state.NumValidators() + if numValidators == 0 { + return ErrNoValidators + } + + // Block slot must match state slot (after process_slots). + if block.Slot != state.Slot { + return &SlotMismatchError{StateSlot: state.Slot, BlockSlot: block.Slot} + } + + // Block must be newer than parent. + parentHeader := state.LatestBlockHeader + if block.Slot <= parentHeader.Slot { + return &ParentSlotIsNewerError{ParentSlot: parentHeader.Slot, BlockSlot: block.Slot} + } + + // Proposer must be correct: slot % num_validators. + expectedProposer := block.Slot % numValidators + if block.ProposerIndex != expectedProposer { + return &InvalidProposerError{Expected: expectedProposer, Found: block.ProposerIndex} + } + + // Parent root must match hash of latest block header. + parentRoot, err := parentHeader.HashTreeRoot() + if err != nil { + return err + } + if block.ParentRoot != parentRoot { + return &InvalidParentError{Expected: parentRoot, Found: block.ParentRoot} + } + + // Genesis parent special case: initialize justified/finalized checkpoints. + if parentHeader.Slot == 0 { + state.LatestJustified.Root = parentRoot + state.LatestFinalized.Root = parentRoot + } + + // Guard against overflowing historical_block_hashes. + numEmptySlots := block.Slot - parentHeader.Slot - 1 + newEntries := 1 + numEmptySlots + if uint64(len(state.HistoricalBlockHashes))+newEntries > types.HistoricalRootsLimit { + return &SlotGapTooLargeError{ + Gap: newEntries, + Current: state.Slot, + Max: types.HistoricalRootsLimit, + } + } + + // Append parent root + zeros for skipped slots to historical_block_hashes. 
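The historical_block_hashes bookkeeping for skipped slots reduces to simple arithmetic: one entry for the parent root plus one zero entry per empty slot. A minimal sketch (`entriesAppended` is a hypothetical helper, not package API):

```go
package main

import "fmt"

// entriesAppended mirrors the overflow guard above: a block at blockSlot whose
// parent sits at parentSlot appends 1 parent-root entry plus
// (blockSlot - parentSlot - 1) zero entries for the skipped slots.
func entriesAppended(parentSlot, blockSlot uint64) uint64 {
	numEmptySlots := blockSlot - parentSlot - 1
	return 1 + numEmptySlots
}

func main() {
	fmt.Println(entriesAppended(0, 1)) // 1: no gap, just the parent root
	fmt.Println(entriesAppended(0, 4)) // 4: parent root + 3 zero hashes
}
```

The second case matches TestProcessBlockHeaderSkippedSlots later in this patch, which expects four historical entries for a slot-4 block built on genesis.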
+ parentRootBytes := parentRoot[:] + state.HistoricalBlockHashes = append(state.HistoricalBlockHashes, parentRootBytes) + for i := uint64(0); i < numEmptySlots; i++ { + state.HistoricalBlockHashes = append(state.HistoricalBlockHashes, make([]byte, 32)) + } + + // Extend justified_slots to cover slots up to block.slot - 1, relative to finalized boundary. + // Matches leanSpec state.py extend_to_slot(finalized_slot, last_materialized_slot) + + lastMaterializedSlot := block.Slot - 1 + if lastMaterializedSlot > state.LatestFinalized.Slot { + requiredLen := lastMaterializedSlot - state.LatestFinalized.Slot + currentLen := types.BitlistLen(state.JustifiedSlots) + if requiredLen > currentLen { + state.JustifiedSlots = types.BitlistExtend(state.JustifiedSlots, requiredLen) + } + } + + // Compute body root for the new block header. + bodyRoot, err := block.Body.HashTreeRoot() + if err != nil { + return err + } + + // Update latest block header (state_root intentionally zeroed; filled in process_slots). 
+ state.LatestBlockHeader = &types.BlockHeader{ + Slot: block.Slot, + ProposerIndex: block.ProposerIndex, + ParentRoot: block.ParentRoot, + StateRoot: types.ZeroRoot, + BodyRoot: bodyRoot, + } + + return nil +} diff --git a/statetransition/errors.go b/statetransition/errors.go new file mode 100644 index 0000000..bbbceff --- /dev/null +++ b/statetransition/errors.go @@ -0,0 +1,70 @@ +package statetransition + +import "fmt" + +type StateSlotIsNewerError struct { + TargetSlot uint64 + CurrentSlot uint64 +} + +func (e *StateSlotIsNewerError) Error() string { + return fmt.Sprintf("state slot %d >= target slot %d", e.CurrentSlot, e.TargetSlot) +} + +type SlotMismatchError struct { + StateSlot uint64 + BlockSlot uint64 +} + +func (e *SlotMismatchError) Error() string { + return fmt.Sprintf("state slot %d != block slot %d", e.StateSlot, e.BlockSlot) +} + +type ParentSlotIsNewerError struct { + ParentSlot uint64 + BlockSlot uint64 +} + +func (e *ParentSlotIsNewerError) Error() string { + return fmt.Sprintf("parent slot %d >= block slot %d", e.ParentSlot, e.BlockSlot) +} + +type InvalidProposerError struct { + Expected uint64 + Found uint64 +} + +func (e *InvalidProposerError) Error() string { + return fmt.Sprintf("invalid proposer: expected %d, got %d", e.Expected, e.Found) +} + +type InvalidParentError struct { + Expected [32]byte + Found [32]byte +} + +func (e *InvalidParentError) Error() string { + return fmt.Sprintf("invalid parent root: expected %x, got %x", e.Expected[:4], e.Found[:4]) +} + +type StateRootMismatchError struct { + Expected [32]byte + Computed [32]byte +} + +func (e *StateRootMismatchError) Error() string { + return fmt.Sprintf("state root mismatch: block has %x, computed %x", e.Expected[:4], e.Computed[:4]) +} + +type SlotGapTooLargeError struct { + Gap uint64 + Current uint64 + Max uint64 +} + +func (e *SlotGapTooLargeError) Error() string { + return fmt.Sprintf("slot gap %d at slot %d exceeds max %d", e.Gap, e.Current, e.Max) +} + +var 
ErrNoValidators = fmt.Errorf("state has no validators") +var ErrZeroHashInJustificationRoots = fmt.Errorf("zero hash found in justifications_roots") diff --git a/statetransition/justifiable.go b/statetransition/justifiable.go new file mode 100644 index 0000000..0a62358 --- /dev/null +++ b/statetransition/justifiable.go @@ -0,0 +1,33 @@ +package statetransition + +import "math" + +// SlotIsJustifiableAfter returns true if slot can be justified given the finalized slot. +// 3SF-mini rules: distance must be <= 5, a perfect square, or a pronic number. +func SlotIsJustifiableAfter(slot, finalizedSlot uint64) bool { + if slot <= finalizedSlot { + return false + } + delta := slot - finalizedSlot + + // Rule 1: first 5 slots after finalization are always justifiable + if delta <= 5 { + return true + } + + // Rule 2: perfect square (1, 4, 9, 16, 25, ...) + sqrt := uint64(math.Sqrt(float64(delta))) + if sqrt*sqrt == delta { + return true + } + + // Rule 3: pronic number n*(n+1) (2, 6, 12, 20, 30, ...) 
+	// Check: 4*delta + 1 is an odd perfect square.
+	val := 4*delta + 1
+	sqrtVal := uint64(math.Sqrt(float64(val)))
+	if sqrtVal*sqrtVal == val && sqrtVal%2 == 1 {
+		return true
+	}
+
+	return false
+}
diff --git a/statetransition/justifiable_test.go b/statetransition/justifiable_test.go
new file mode 100644
index 0000000..9ba14dc
--- /dev/null
+++ b/statetransition/justifiable_test.go
@@ -0,0 +1,54 @@
+package statetransition
+
+import "testing"
+
+func TestSlotIsJustifiableAfter(t *testing.T) {
+	tests := []struct {
+		slot, finalized uint64
+		want            bool
+	}{
+		// Rule 1: delta <= 5
+		{1, 0, true}, // delta=1
+		{5, 0, true}, // delta=5
+		{6, 1, true}, // delta=5
+
+		// Rule 2: perfect squares
+		{9, 0, true},   // delta=9 = 3^2
+		{16, 0, true},  // delta=16 = 4^2
+		{25, 0, true},  // delta=25 = 5^2
+		{36, 0, true},  // delta=36 = 6^2
+		{100, 0, true}, // delta=100 = 10^2
+
+		// Rule 3: pronic numbers n*(n+1)
+		{6, 0, true},  // delta=6 = 2*3
+		{12, 0, true}, // delta=12 = 3*4
+		{20, 0, true}, // delta=20 = 4*5
+		{30, 0, true}, // delta=30 = 5*6
+		{42, 0, true}, // delta=42 = 6*7
+
+		// NOT justifiable: delta > 5, not square, not pronic
+		{7, 0, false},
+		{8, 0, false},
+		{10, 0, false},
+		{11, 0, false},
+		{13, 0, false},
+		{14, 0, false},
+		{15, 0, false},
+		{17, 0, false},
+		{18, 0, false},
+		{19, 0, false},
+		{21, 0, false},
+
+		// Edge: slot <= finalized
+		{0, 0, false},
+		{5, 10, false},
+	}
+
+	for _, tt := range tests {
+		got := SlotIsJustifiableAfter(tt.slot, tt.finalized)
+		if got != tt.want {
+			t.Errorf("SlotIsJustifiableAfter(%d, %d) = %v, want %v",
+				tt.slot, tt.finalized, got, tt.want)
+		}
+	}
+}
diff --git a/statetransition/slots.go b/statetransition/slots.go
new file mode 100644
index 0000000..9fb858c
--- /dev/null
+++ b/statetransition/slots.go
@@ -0,0 +1,26 @@
+package statetransition
+
+import (
+	"github.com/geanlabs/gean/types"
+)
+
+// ProcessSlots advances the
state to targetSlot, caching the state root in the +// block header if it hasn't been set yet. +func ProcessSlots(state *types.State, targetSlot uint64) error { + if state.Slot >= targetSlot { + return &StateSlotIsNewerError{TargetSlot: targetSlot, CurrentSlot: state.Slot} + } + + // Cache state root in latest_block_header if zero (first call after genesis). + if state.LatestBlockHeader.StateRoot == types.ZeroRoot { + root, err := state.HashTreeRoot() + if err != nil { + return err + } + state.LatestBlockHeader.StateRoot = root + } + + // Advance directly to target slot. + state.Slot = targetSlot + return nil +} diff --git a/statetransition/transition.go b/statetransition/transition.go new file mode 100644 index 0000000..33631a3 --- /dev/null +++ b/statetransition/transition.go @@ -0,0 +1,33 @@ +package statetransition + +import ( + "github.com/geanlabs/gean/types" +) + +// StateTransition applies a block to a state, producing the post-block state. +// Steps: process_slots → process_block → verify state_root. +func StateTransition(state *types.State, block *types.Block) error { + // 1. Advance through empty slots to the block's slot. + if err := ProcessSlots(state, block.Slot); err != nil { + return err + } + + // 2. Validate header and process attestations. + if err := ProcessBlock(state, block); err != nil { + return err + } + + // 3. Verify computed state root matches the block's claim. 
+ computedRoot, err := state.HashTreeRoot() + if err != nil { + return err + } + if computedRoot != block.StateRoot { + return &StateRootMismatchError{ + Expected: block.StateRoot, + Computed: computedRoot, + } + } + + return nil +} diff --git a/statetransition/transition_test.go b/statetransition/transition_test.go new file mode 100644 index 0000000..a195b14 --- /dev/null +++ b/statetransition/transition_test.go @@ -0,0 +1,184 @@ +package statetransition + +import ( + "testing" + + "github.com/geanlabs/gean/types" +) + +// makeGenesisState creates a minimal genesis state with n validators. +func makeGenesisState(n int) *types.State { + validators := make([]*types.Validator, n) + for i := 0; i < n; i++ { + var pubkey [types.PubkeySize]byte + pubkey[0] = byte(i + 1) + validators[i] = &types.Validator{Pubkey: pubkey, Index: uint64(i)} + } + return &types.State{ + Config: &types.ChainConfig{GenesisTime: 1000}, + Slot: 0, + LatestBlockHeader: &types.BlockHeader{Slot: 0}, + LatestJustified: &types.Checkpoint{}, + LatestFinalized: &types.Checkpoint{}, + HistoricalBlockHashes: nil, + JustifiedSlots: types.NewBitlistSSZ(0), + Validators: validators, + JustificationsRoots: nil, + JustificationsValidators: types.NewBitlistSSZ(0), + } +} + +func TestProcessSlotsAdvancesSlot(t *testing.T) { + state := makeGenesisState(3) + if err := ProcessSlots(state, 5); err != nil { + t.Fatal(err) + } + if state.Slot != 5 { + t.Fatalf("expected slot 5, got %d", state.Slot) + } +} + +func TestProcessSlotsCachesStateRoot(t *testing.T) { + state := makeGenesisState(3) + if state.LatestBlockHeader.StateRoot != types.ZeroRoot { + t.Fatal("genesis header should have zero state root") + } + if err := ProcessSlots(state, 1); err != nil { + t.Fatal(err) + } + if state.LatestBlockHeader.StateRoot == types.ZeroRoot { + t.Fatal("state root should be cached after process_slots") + } +} + +func TestProcessSlotsRejectsOlderSlot(t *testing.T) { + state := makeGenesisState(3) + state.Slot = 10 + err := 
ProcessSlots(state, 5) + if err == nil { + t.Fatal("should reject target slot < current slot") + } + if _, ok := err.(*StateSlotIsNewerError); !ok { + t.Fatalf("expected StateSlotIsNewerError, got %T", err) + } +} + +func TestProcessBlockHeaderValidatesSlot(t *testing.T) { + state := makeGenesisState(3) + state.Slot = 1 + + block := &types.Block{ + Slot: 2, // doesn't match state.Slot + ProposerIndex: 0, + Body: &types.BlockBody{}, + } + err := ProcessBlockHeader(state, block) + if err == nil { + t.Fatal("should reject slot mismatch") + } +} + +func TestProcessBlockHeaderValidatesProposer(t *testing.T) { + state := makeGenesisState(3) + // Advance state to slot 1 via process_slots. + ProcessSlots(state, 1) + + parentRoot, _ := state.LatestBlockHeader.HashTreeRoot() + block := &types.Block{ + Slot: 1, + ProposerIndex: 2, // wrong: slot 1 % 3 = 1 + ParentRoot: parentRoot, + Body: &types.BlockBody{}, + } + err := ProcessBlockHeader(state, block) + if err == nil { + t.Fatal("should reject wrong proposer") + } + if _, ok := err.(*InvalidProposerError); !ok { + t.Fatalf("expected InvalidProposerError, got %T: %v", err, err) + } +} + +func TestProcessBlockHeaderValidatesParentRoot(t *testing.T) { + state := makeGenesisState(3) + ProcessSlots(state, 1) + + block := &types.Block{ + Slot: 1, + ProposerIndex: 1, // correct: 1 % 3 = 1 + ParentRoot: [32]byte{0xff}, // wrong parent root + Body: &types.BlockBody{}, + } + err := ProcessBlockHeader(state, block) + if err == nil { + t.Fatal("should reject wrong parent root") + } + if _, ok := err.(*InvalidParentError); !ok { + t.Fatalf("expected InvalidParentError, got %T: %v", err, err) + } +} + +func TestProcessBlockHeaderUpdatesState(t *testing.T) { + state := makeGenesisState(3) + ProcessSlots(state, 1) + + parentRoot, _ := state.LatestBlockHeader.HashTreeRoot() + block := &types.Block{ + Slot: 1, + ProposerIndex: 1, // 1 % 3 = 1 + ParentRoot: parentRoot, + Body: &types.BlockBody{}, + } + if err := ProcessBlockHeader(state, 
block); err != nil { + t.Fatal(err) + } + + // Block header should be updated. + if state.LatestBlockHeader.Slot != 1 { + t.Fatalf("expected header slot 1, got %d", state.LatestBlockHeader.Slot) + } + if state.LatestBlockHeader.StateRoot != types.ZeroRoot { + t.Fatal("new header should have zero state root") + } + + // Historical block hashes should have parent root. + if len(state.HistoricalBlockHashes) != 1 { + t.Fatalf("expected 1 historical hash, got %d", len(state.HistoricalBlockHashes)) + } +} + +func TestProcessBlockHeaderSkippedSlots(t *testing.T) { + state := makeGenesisState(3) + ProcessSlots(state, 4) + + parentRoot, _ := state.LatestBlockHeader.HashTreeRoot() + block := &types.Block{ + Slot: 4, + ProposerIndex: 1, // 4 % 3 = 1 + ParentRoot: parentRoot, + Body: &types.BlockBody{}, + } + if err := ProcessBlockHeader(state, block); err != nil { + t.Fatal(err) + } + + // Parent was at slot 0, block at slot 4: 1 parent + 3 empty = 4 entries. + if len(state.HistoricalBlockHashes) != 4 { + t.Fatalf("expected 4 historical hashes, got %d", len(state.HistoricalBlockHashes)) + } + + // First should be parent root, rest should be zero. + var zeroHash [32]byte + var first [32]byte + copy(first[:], state.HistoricalBlockHashes[0]) + if first == zeroHash { + t.Fatal("first entry should be parent root, not zero") + } + for i := 1; i < 4; i++ { + var h [32]byte + copy(h[:], state.HistoricalBlockHashes[i]) + if h != zeroHash { + t.Fatalf("entry %d should be zero for skipped slot", i) + } + } +} diff --git a/storage/bolt/bolt.go b/storage/bolt/bolt.go deleted file mode 100644 index 7e5f4b8..0000000 --- a/storage/bolt/bolt.go +++ /dev/null @@ -1,207 +0,0 @@ -package bolt - -import ( - "fmt" - "log" - - "github.com/geanlabs/gean/types" - bolt "go.etcd.io/bbolt" -) - -var ( - blocksBucket = []byte("blocks") - signedBlockBucket = []byte("signed_blocks") - statesBucket = []byte("states") -) - -// Store is a bbolt-backed implementation of storage.Store. 
-type Store struct { - db *bolt.DB -} - -// New opens (or creates) a bbolt database at path and initialises all buckets. -func New(path string) (*Store, error) { - db, err := bolt.Open(path, 0600, nil) - if err != nil { - return nil, fmt.Errorf("open bolt db: %w", err) - } - - err = db.Update(func(tx *bolt.Tx) error { - for _, b := range [][]byte{blocksBucket, signedBlockBucket, statesBucket} { - if _, err := tx.CreateBucketIfNotExists(b); err != nil { - return err - } - } - return nil - }) - if err != nil { - db.Close() - return nil, fmt.Errorf("create buckets: %w", err) - } - - return &Store{db: db}, nil -} - -// Close closes the underlying bbolt database. -func (s *Store) Close() error { - return s.db.Close() -} - -// --- storage.Store implementation --- - -func (s *Store) GetBlock(root [32]byte) (*types.Block, bool) { - var blk types.Block - found := s.get(blocksBucket, root[:], &blk) - if !found { - return nil, false - } - return &blk, true -} - -func (s *Store) PutBlock(root [32]byte, block *types.Block) { - s.put(blocksBucket, root[:], block) -} - -func (s *Store) DeleteBlock(root [32]byte) { - s.delete(blocksBucket, root[:]) -} - -func (s *Store) GetSignedBlock(root [32]byte) (*types.SignedBlockWithAttestation, bool) { - var sb types.SignedBlockWithAttestation - found := s.get(signedBlockBucket, root[:], &sb) - if !found { - return nil, false - } - return &sb, true -} - -func (s *Store) PutSignedBlock(root [32]byte, sb *types.SignedBlockWithAttestation) { - s.put(signedBlockBucket, root[:], sb) -} - -func (s *Store) DeleteSignedBlock(root [32]byte) { - s.delete(signedBlockBucket, root[:]) -} - -func (s *Store) GetState(root [32]byte) (*types.State, bool) { - var st types.State - found := s.get(statesBucket, root[:], &st) - if !found { - return nil, false - } - return &st, true -} - -func (s *Store) PutState(root [32]byte, state *types.State) { - s.put(statesBucket, root[:], state) -} - -func (s *Store) DeleteState(root [32]byte) { - s.delete(statesBucket, 
root[:]) -} - -func (s *Store) GetAllBlocks() map[[32]byte]*types.Block { - result := make(map[[32]byte]*types.Block) - s.db.View(func(tx *bolt.Tx) error { - b := tx.Bucket(blocksBucket) - return b.ForEach(func(k, v []byte) error { - var blk types.Block - if err := blk.UnmarshalSSZ(v); err != nil { - return nil // skip corrupt entries - } - var key [32]byte - copy(key[:], k) - result[key] = &blk - return nil - }) - }) - return result -} - -func (s *Store) GetAllStates() map[[32]byte]*types.State { - result := make(map[[32]byte]*types.State) - s.db.View(func(tx *bolt.Tx) error { - b := tx.Bucket(statesBucket) - return b.ForEach(func(k, v []byte) error { - var st types.State - if err := st.UnmarshalSSZ(v); err != nil { - return nil // skip corrupt entries - } - var key [32]byte - copy(key[:], k) - result[key] = &st - return nil - }) - }) - return result -} - -func (s *Store) ForEachBlock(fn func(root [32]byte, block *types.Block) bool) { - s.db.View(func(tx *bolt.Tx) error { - b := tx.Bucket(blocksBucket) - return b.ForEach(func(k, v []byte) error { - var blk types.Block - if err := blk.UnmarshalSSZ(v); err != nil { - return nil // skip corrupt entries - } - var key [32]byte - copy(key[:], k) - if !fn(key, &blk) { - return fmt.Errorf("stop") // break iteration - } - return nil - }) - }) -} - -// --- SSZ helpers --- - -type sszMarshaler interface { - MarshalSSZ() ([]byte, error) -} - -type sszUnmarshaler interface { - UnmarshalSSZ([]byte) error -} - -func (s *Store) put(bucket, key []byte, val sszMarshaler) { - data, err := val.MarshalSSZ() - if err != nil { - log.Fatalf("bolt: marshal ssz: %v", err) - } - err = s.db.Update(func(tx *bolt.Tx) error { - return tx.Bucket(bucket).Put(key, data) - }) - if err != nil { - log.Fatalf("bolt: write %s: %v", bucket, err) - } -} - -func (s *Store) delete(bucket, key []byte) { - err := s.db.Update(func(tx *bolt.Tx) error { - return tx.Bucket(bucket).Delete(key) - }) - if err != nil { - log.Printf("bolt: delete from %s: %v", 
bucket, err) - } -} - -func (s *Store) get(bucket, key []byte, dst sszUnmarshaler) bool { - var found bool - s.db.View(func(tx *bolt.Tx) error { - v := tx.Bucket(bucket).Get(key) - if v == nil { - return nil - } - // Copy value since bolt memory is only valid inside tx. - buf := make([]byte, len(v)) - copy(buf, v) - if err := dst.UnmarshalSSZ(buf); err != nil { - log.Printf("bolt: unmarshal from %s: %v", bucket, err) - return nil - } - found = true - return nil - }) - return found -} diff --git a/storage/bolt/bolt_test.go b/storage/bolt/bolt_test.go deleted file mode 100644 index 9cfeac5..0000000 --- a/storage/bolt/bolt_test.go +++ /dev/null @@ -1,151 +0,0 @@ -package bolt_test - -import ( - "path/filepath" - "testing" - - boltstore "github.com/geanlabs/gean/storage/bolt" - "github.com/geanlabs/gean/types" -) - -func newTestStore(t *testing.T) *boltstore.Store { - t.Helper() - s, err := boltstore.New(filepath.Join(t.TempDir(), "test.db")) - if err != nil { - t.Fatalf("open bolt store: %v", err) - } - t.Cleanup(func() { s.Close() }) - return s -} - -func TestPutGetBlock(t *testing.T) { - s := newTestStore(t) - root := [32]byte{1} - block := &types.Block{Slot: 5, Body: &types.BlockBody{}} - - s.PutBlock(root, block) - - got, ok := s.GetBlock(root) - if !ok { - t.Fatal("expected block to be found") - } - if got.Slot != 5 { - t.Fatalf("block slot = %d, want 5", got.Slot) - } -} - -func TestPutGetState(t *testing.T) { - s := newTestStore(t) - root := [32]byte{2} - state := &types.State{ - Slot: 10, - Config: &types.Config{GenesisTime: 1000}, - LatestJustified: &types.Checkpoint{}, - LatestFinalized: &types.Checkpoint{}, - LatestBlockHeader: &types.BlockHeader{}, - JustifiedSlots: []byte{0x01}, - JustificationsValidators: []byte{0x01}, - } - - s.PutState(root, state) - - got, ok := s.GetState(root) - if !ok { - t.Fatal("expected state to be found") - } - if got.Slot != 10 { - t.Fatalf("state slot = %d, want 10", got.Slot) - } -} - -func TestPutGetSignedBlock(t 
*testing.T) { - s := newTestStore(t) - root := [32]byte{3} - sb := &types.SignedBlockWithAttestation{ - Message: &types.BlockWithAttestation{ - Block: &types.Block{Slot: 7, Body: &types.BlockBody{}}, - }, - } - - s.PutSignedBlock(root, sb) - - got, ok := s.GetSignedBlock(root) - if !ok { - t.Fatal("expected signed block to be found") - } - if got.Message.Block.Slot != 7 { - t.Fatalf("signed block slot = %d, want 7", got.Message.Block.Slot) - } -} - -func TestGetMissingBlockReturnsFalse(t *testing.T) { - s := newTestStore(t) - _, ok := s.GetBlock([32]byte{0xff}) - if ok { - t.Fatal("expected missing block to return false") - } -} - -func TestGetMissingStateReturnsFalse(t *testing.T) { - s := newTestStore(t) - _, ok := s.GetState([32]byte{0xff}) - if ok { - t.Fatal("expected missing state to return false") - } -} - -func TestGetMissingSignedBlockReturnsFalse(t *testing.T) { - s := newTestStore(t) - _, ok := s.GetSignedBlock([32]byte{0xff}) - if ok { - t.Fatal("expected missing signed block to return false") - } -} - -func TestGetAllBlocksCopiesMap(t *testing.T) { - s := newTestStore(t) - root := [32]byte{1} - block := &types.Block{Slot: 1, Body: &types.BlockBody{}} - s.PutBlock(root, block) - - all := s.GetAllBlocks() - delete(all, root) - - _, ok := s.GetBlock(root) - if !ok { - t.Fatal("deleting from GetAllBlocks result should not affect store") - } -} - -func TestGetAllStatesCopiesMap(t *testing.T) { - s := newTestStore(t) - root := [32]byte{1} - state := &types.State{ - Slot: 1, - Config: &types.Config{GenesisTime: 1000}, - LatestJustified: &types.Checkpoint{}, - LatestFinalized: &types.Checkpoint{}, - LatestBlockHeader: &types.BlockHeader{}, - JustifiedSlots: []byte{0x01}, - JustificationsValidators: []byte{0x01}, - } - s.PutState(root, state) - - all := s.GetAllStates() - delete(all, root) - - _, ok := s.GetState(root) - if !ok { - t.Fatal("deleting from GetAllStates result should not affect store") - } -} - -func TestClose(t *testing.T) { - s, err := 
boltstore.New(filepath.Join(t.TempDir(), "close.db")) - if err != nil { - t.Fatalf("open: %v", err) - } - if err := s.Close(); err != nil { - t.Fatalf("close: %v", err) - } -} diff --git a/storage/interface.go b/storage/interface.go index c563a5c..c1001a1 100644 --- a/storage/interface.go +++ b/storage/interface.go @@ -1,21 +1,58 @@ package storage -import "github.com/geanlabs/gean/types" - -// Store is a storage interface for blocks and states. -type Store interface { - GetBlock(root [32]byte) (*types.Block, bool) - PutBlock(root [32]byte, block *types.Block) - DeleteBlock(root [32]byte) - GetSignedBlock(root [32]byte) (*types.SignedBlockWithAttestation, bool) - PutSignedBlock(root [32]byte, sb *types.SignedBlockWithAttestation) - DeleteSignedBlock(root [32]byte) - GetState(root [32]byte) (*types.State, bool) - PutState(root [32]byte, state *types.State) - DeleteState(root [32]byte) - GetAllBlocks() map[[32]byte]*types.Block - GetAllStates() map[[32]byte]*types.State - // ForEachBlock iterates over all blocks without copying the full map. - // Return false from fn to stop iteration early. - ForEachBlock(fn func(root [32]byte, block *types.Block) bool) +// Backend is a pluggable storage backend. +type Backend interface { + // BeginRead returns a read-only view of the storage. + BeginRead() (ReadView, error) + + // BeginWrite returns an atomic write batch. + BeginWrite() (WriteBatch, error) + + // EstimateTableBytes returns estimated live data size for a table. + EstimateTableBytes(table Table) uint64 + + // Close releases backend resources. + Close() error +} + +// ReadView provides read-only access to storage. +type ReadView interface { + // Get retrieves a value by key from a table. Returns nil if not found. + Get(table Table, key []byte) ([]byte, error) + + // PrefixIterator iterates over all entries with a given key prefix. + PrefixIterator(table Table, prefix []byte) (Iterator, error) +} + +// WriteBatch provides atomic batched writes. 
+type WriteBatch interface { + // PutBatch inserts multiple key-value pairs into a table. + PutBatch(table Table, entries []KV) error + + // DeleteBatch removes multiple keys from a table. + DeleteBatch(table Table, keys [][]byte) error + + // Commit atomically writes all batched operations. + Commit() error +} + +// Iterator yields key-value pairs. +type Iterator interface { + // Next advances the iterator. Returns false when exhausted. + Next() bool + + // Key returns the current key. + Key() []byte + + // Value returns the current value. + Value() []byte + + // Close releases iterator resources. + Close() +} + +// KV is a key-value pair for batch operations. +type KV struct { + Key []byte + Value []byte } diff --git a/storage/keys.go b/storage/keys.go new file mode 100644 index 0000000..3b55c9e --- /dev/null +++ b/storage/keys.go @@ -0,0 +1,36 @@ +package storage + +import "encoding/binary" + +// Metadata keys rs L62-72. +var ( + KeyTime = []byte("time") + KeyConfig = []byte("config") + KeyHead = []byte("head") + KeySafeTarget = []byte("safe_target") + KeyLatestJustified = []byte("latest_justified") + KeyLatestFinalized = []byte("latest_finalized") +) + +// Retention constants rs L75-78. +const ( + BlocksToKeep = 21_600 // ~1 day at 4s slots + StatesToKeep = 3_000 // ~3.3 hours at 4s slots +) + +// EncodeLiveChainKey encodes a LiveChain key: slot (8 bytes big-endian) || root (32 bytes). +// Big-endian ensures lexicographic ordering matches numeric ordering. +func EncodeLiveChainKey(slot uint64, root [32]byte) []byte { + key := make([]byte, 8+32) + binary.BigEndian.PutUint64(key[:8], slot) + copy(key[8:], root[:]) + return key +} + +// DecodeLiveChainKey decodes a LiveChain key into (slot, root). 
+func DecodeLiveChainKey(key []byte) (uint64, [32]byte) { + slot := binary.BigEndian.Uint64(key[:8]) + var root [32]byte + copy(root[:], key[8:]) + return slot, root +} diff --git a/storage/memory.go b/storage/memory.go new file mode 100644 index 0000000..49e0f86 --- /dev/null +++ b/storage/memory.go @@ -0,0 +1,177 @@ +package storage + +import ( + "bytes" + "sort" + "sync" +) + +// InMemoryBackend is a thread-safe in-memory storage backend for tests. +type InMemoryBackend struct { + mu sync.RWMutex + tables map[Table]map[string][]byte +} + +// NewInMemoryBackend creates a new in-memory backend with all tables initialized. +func NewInMemoryBackend() *InMemoryBackend { + tables := make(map[Table]map[string][]byte) + for _, t := range AllTables { + tables[t] = make(map[string][]byte) + } + return &InMemoryBackend{tables: tables} +} + +func (b *InMemoryBackend) BeginRead() (ReadView, error) { + return &inMemoryReadView{backend: b}, nil +} + +func (b *InMemoryBackend) BeginWrite() (WriteBatch, error) { + return &inMemoryWriteBatch{backend: b}, nil +} + +func (b *InMemoryBackend) EstimateTableBytes(table Table) uint64 { + b.mu.RLock() + defer b.mu.RUnlock() + t, ok := b.tables[table] + if !ok { + return 0 + } + var total uint64 + for k, v := range t { + total += uint64(len(k) + len(v)) + } + return total +} + +func (b *InMemoryBackend) Close() error { return nil } + +// CountEntries returns the number of entries in a table (for tests). 
+func (b *InMemoryBackend) CountEntries(table Table) int { + b.mu.RLock() + defer b.mu.RUnlock() + return len(b.tables[table]) +} + +// --- ReadView --- + +type inMemoryReadView struct { + backend *InMemoryBackend +} + +func (v *inMemoryReadView) Get(table Table, key []byte) ([]byte, error) { + v.backend.mu.RLock() + defer v.backend.mu.RUnlock() + t, ok := v.backend.tables[table] + if !ok { + return nil, nil + } + val, ok := t[string(key)] + if !ok { + return nil, nil + } + cp := make([]byte, len(val)) + copy(cp, val) + return cp, nil +} + +func (v *inMemoryReadView) PrefixIterator(table Table, prefix []byte) (Iterator, error) { + v.backend.mu.RLock() + defer v.backend.mu.RUnlock() + t, ok := v.backend.tables[table] + if !ok { + return &sliceIterator{}, nil + } + + var entries []KV + for k, val := range t { + kb := []byte(k) + if bytes.HasPrefix(kb, prefix) { + kcopy := make([]byte, len(kb)) + copy(kcopy, kb) + vcopy := make([]byte, len(val)) + copy(vcopy, val) + entries = append(entries, KV{Key: kcopy, Value: vcopy}) + } + } + // Sort by key for deterministic iteration. 
+ sort.Slice(entries, func(i, j int) bool { + return bytes.Compare(entries[i].Key, entries[j].Key) < 0 + }) + return &sliceIterator{entries: entries, pos: -1}, nil +} + +// --- WriteBatch --- + +type inMemoryWriteBatch struct { + backend *InMemoryBackend + ops []batchOp +} + +type batchOp struct { + table Table + key string + value []byte // nil = delete +} + +func (b *inMemoryWriteBatch) PutBatch(table Table, entries []KV) error { + for _, e := range entries { + val := make([]byte, len(e.Value)) + copy(val, e.Value) + b.ops = append(b.ops, batchOp{table: table, key: string(e.Key), value: val}) + } + return nil +} + +func (b *inMemoryWriteBatch) DeleteBatch(table Table, keys [][]byte) error { + for _, k := range keys { + b.ops = append(b.ops, batchOp{table: table, key: string(k), value: nil}) + } + return nil +} + +func (b *inMemoryWriteBatch) Commit() error { + b.backend.mu.Lock() + defer b.backend.mu.Unlock() + for _, op := range b.ops { + t, ok := b.backend.tables[op.table] + if !ok { + t = make(map[string][]byte) + b.backend.tables[op.table] = t + } + if op.value == nil { + delete(t, op.key) + } else { + t[op.key] = op.value + } + } + b.ops = nil + return nil +} + +// --- sliceIterator --- + +type sliceIterator struct { + entries []KV + pos int +} + +func (it *sliceIterator) Next() bool { + it.pos++ + return it.pos < len(it.entries) +} + +func (it *sliceIterator) Key() []byte { + if it.pos < 0 || it.pos >= len(it.entries) { + return nil + } + return it.entries[it.pos].Key +} + +func (it *sliceIterator) Value() []byte { + if it.pos < 0 || it.pos >= len(it.entries) { + return nil + } + return it.entries[it.pos].Value +} + +func (it *sliceIterator) Close() {} diff --git a/storage/memory/memory.go b/storage/memory/memory.go deleted file mode 100644 index 121762a..0000000 --- a/storage/memory/memory.go +++ /dev/null @@ -1,111 +0,0 @@ -package memory - -import ( - "sync" - - "github.com/geanlabs/gean/types" -) - -// Store is an in-memory implementation of 
storage.Store. -type Store struct { - mu sync.RWMutex - blocks map[[32]byte]*types.Block - signedBlocks map[[32]byte]*types.SignedBlockWithAttestation - states map[[32]byte]*types.State -} - -// New creates a new in-memory store. -func New() *Store { - return &Store{ - blocks: make(map[[32]byte]*types.Block), - signedBlocks: make(map[[32]byte]*types.SignedBlockWithAttestation), - states: make(map[[32]byte]*types.State), - } -} - -func (m *Store) GetBlock(root [32]byte) (*types.Block, bool) { - m.mu.RLock() - defer m.mu.RUnlock() - b, ok := m.blocks[root] - return b, ok -} - -func (m *Store) PutBlock(root [32]byte, block *types.Block) { - m.mu.Lock() - defer m.mu.Unlock() - m.blocks[root] = block -} - -func (m *Store) DeleteBlock(root [32]byte) { - m.mu.Lock() - defer m.mu.Unlock() - delete(m.blocks, root) -} - -func (m *Store) GetSignedBlock(root [32]byte) (*types.SignedBlockWithAttestation, bool) { - m.mu.RLock() - defer m.mu.RUnlock() - sb, ok := m.signedBlocks[root] - return sb, ok -} - -func (m *Store) PutSignedBlock(root [32]byte, sb *types.SignedBlockWithAttestation) { - m.mu.Lock() - defer m.mu.Unlock() - m.signedBlocks[root] = sb -} - -func (m *Store) DeleteSignedBlock(root [32]byte) { - m.mu.Lock() - defer m.mu.Unlock() - delete(m.signedBlocks, root) -} - -func (m *Store) GetState(root [32]byte) (*types.State, bool) { - m.mu.RLock() - defer m.mu.RUnlock() - s, ok := m.states[root] - return s, ok -} - -func (m *Store) PutState(root [32]byte, state *types.State) { - m.mu.Lock() - defer m.mu.Unlock() - m.states[root] = state -} - -func (m *Store) DeleteState(root [32]byte) { - m.mu.Lock() - defer m.mu.Unlock() - delete(m.states, root) -} - -func (m *Store) GetAllBlocks() map[[32]byte]*types.Block { - m.mu.RLock() - defer m.mu.RUnlock() - cp := make(map[[32]byte]*types.Block, len(m.blocks)) - for k, v := range m.blocks { - cp[k] = v - } - return cp -} - -func (m *Store) ForEachBlock(fn func(root [32]byte, block *types.Block) bool) { - m.mu.RLock() - defer 
m.mu.RUnlock() - for root, block := range m.blocks { - if !fn(root, block) { - return - } - } -} - -func (m *Store) GetAllStates() map[[32]byte]*types.State { - m.mu.RLock() - defer m.mu.RUnlock() - cp := make(map[[32]byte]*types.State, len(m.states)) - for k, v := range m.states { - cp[k] = v - } - return cp -} diff --git a/storage/memory/memory_test.go b/storage/memory/memory_test.go deleted file mode 100644 index 0cd7063..0000000 --- a/storage/memory/memory_test.go +++ /dev/null @@ -1,87 +0,0 @@ -package memory_test - -import ( - "testing" - - "github.com/geanlabs/gean/storage/memory" - "github.com/geanlabs/gean/types" -) - -func TestPutGetBlock(t *testing.T) { - s := memory.New() - root := [32]byte{1} - block := &types.Block{Slot: 5} - - s.PutBlock(root, block) - - got, ok := s.GetBlock(root) - if !ok { - t.Fatal("expected block to be found") - } - if got.Slot != 5 { - t.Fatalf("block slot = %d, want 5", got.Slot) - } -} - -func TestPutGetState(t *testing.T) { - s := memory.New() - root := [32]byte{2} - state := &types.State{Slot: 10} - - s.PutState(root, state) - - got, ok := s.GetState(root) - if !ok { - t.Fatal("expected state to be found") - } - if got.Slot != 10 { - t.Fatalf("state slot = %d, want 10", got.Slot) - } -} - -func TestGetMissingBlockReturnsFalse(t *testing.T) { - s := memory.New() - _, ok := s.GetBlock([32]byte{0xff}) - if ok { - t.Fatal("expected missing block to return false") - } -} - -func TestGetMissingStateReturnsFalse(t *testing.T) { - s := memory.New() - _, ok := s.GetState([32]byte{0xff}) - if ok { - t.Fatal("expected missing state to return false") - } -} - -func TestGetAllBlocksCopiesMap(t *testing.T) { - s := memory.New() - root := [32]byte{1} - block := &types.Block{Slot: 1} - s.PutBlock(root, block) - - all := s.GetAllBlocks() - // Mutating the returned map should not affect the store. 
- delete(all, root) - - _, ok := s.GetBlock(root) - if !ok { - t.Fatal("deleting from GetAllBlocks result should not affect store") - } -} - -func TestGetAllStatesCopiesMap(t *testing.T) { - s := memory.New() - root := [32]byte{1} - state := &types.State{Slot: 1} - s.PutState(root, state) - - all := s.GetAllStates() - delete(all, root) - - _, ok := s.GetState(root) - if !ok { - t.Fatal("deleting from GetAllStates result should not affect store") - } -} diff --git a/storage/pebble.go b/storage/pebble.go new file mode 100644 index 0000000..0953c7c --- /dev/null +++ b/storage/pebble.go @@ -0,0 +1,188 @@ +package storage + +import ( + "bytes" + "fmt" + "path/filepath" + + "github.com/cockroachdb/pebble" +) + +// PebbleBackend is a persistent storage backend using CockroachDB's Pebble. +// +// Pebble doesn't have column families, so we prefix keys with the table name +// to achieve table isolation: "{table_name}\x00{key}". +type PebbleBackend struct { + db *pebble.DB +} + +// NewPebbleBackend opens or creates a Pebble database at the given path. +func NewPebbleBackend(dir string) (*PebbleBackend, error) { + db, err := pebble.Open(filepath.Clean(dir), &pebble.Options{}) + if err != nil { + return nil, fmt.Errorf("pebble open: %w", err) + } + return &PebbleBackend{db: db}, nil +} + +func (p *PebbleBackend) BeginRead() (ReadView, error) { + return &pebbleReadView{db: p.db}, nil +} + +func (p *PebbleBackend) BeginWrite() (WriteBatch, error) { + return &pebbleWriteBatch{db: p.db, batch: p.db.NewBatch()}, nil +} + +func (p *PebbleBackend) EstimateTableBytes(table Table) uint64 { + // Pebble doesn't have per-prefix size estimates like RocksDB column families. + return 0 +} + +func (p *PebbleBackend) Close() error { + return p.db.Close() +} + +// tableKey creates a prefixed key: "{table}\x00{key}". 
+func tableKey(table Table, key []byte) []byte { + prefix := []byte(table) + result := make([]byte, len(prefix)+1+len(key)) + copy(result, prefix) + result[len(prefix)] = 0x00 + copy(result[len(prefix)+1:], key) + return result +} + +// tablePrefix returns the prefix for all keys in a table: "{table}\x00". +func tablePrefix(table Table) []byte { + prefix := []byte(table) + result := make([]byte, len(prefix)+1) + copy(result, prefix) + result[len(prefix)] = 0x00 + return result +} + +// stripTablePrefix removes the table prefix from a key. +func stripTablePrefix(table Table, fullKey []byte) []byte { + prefixLen := len([]byte(table)) + 1 + if len(fullKey) <= prefixLen { + return nil + } + return fullKey[prefixLen:] +} + +// --- ReadView --- + +type pebbleReadView struct { + db *pebble.DB +} + +func (v *pebbleReadView) Get(table Table, key []byte) ([]byte, error) { + val, closer, err := v.db.Get(tableKey(table, key)) + if err == pebble.ErrNotFound { + return nil, nil + } + if err != nil { + return nil, err + } + defer closer.Close() + cp := make([]byte, len(val)) + copy(cp, val) + return cp, nil +} + +func (v *pebbleReadView) PrefixIterator(table Table, prefix []byte) (Iterator, error) { + fullPrefix := tableKey(table, prefix) + iter, err := v.db.NewIter(&pebble.IterOptions{ + LowerBound: fullPrefix, + UpperBound: prefixUpperBound(fullPrefix), + }) + if err != nil { + return nil, err + } + iter.First() + return &pebbleIterator{iter: iter, table: table}, nil +} + +// prefixUpperBound computes the upper bound for a prefix scan. +// Increments the last byte; if it overflows, the prefix has no upper bound. 
+func prefixUpperBound(prefix []byte) []byte { + if len(prefix) == 0 { + return nil + } + upper := make([]byte, len(prefix)) + copy(upper, prefix) + for i := len(upper) - 1; i >= 0; i-- { + upper[i]++ + if upper[i] != 0 { + return upper + } + } + return nil // all 0xFF — no upper bound +} + +// --- WriteBatch --- + +type pebbleWriteBatch struct { + db *pebble.DB + batch *pebble.Batch +} + +func (b *pebbleWriteBatch) PutBatch(table Table, entries []KV) error { + for _, e := range entries { + if err := b.batch.Set(tableKey(table, e.Key), e.Value, nil); err != nil { + return err + } + } + return nil +} + +func (b *pebbleWriteBatch) DeleteBatch(table Table, keys [][]byte) error { + for _, k := range keys { + if err := b.batch.Delete(tableKey(table, k), nil); err != nil { + return err + } + } + return nil +} + +func (b *pebbleWriteBatch) Commit() error { + return b.batch.Commit(pebble.NoSync) +} + +// --- Iterator --- + +type pebbleIterator struct { + iter *pebble.Iterator + table Table + started bool +} + +func (it *pebbleIterator) Next() bool { + if !it.started { + it.started = true + return it.iter.Valid() + } + if !it.iter.Valid() { + return false + } + it.iter.Next() + return it.iter.Valid() +} + +func (it *pebbleIterator) Key() []byte { + if !it.iter.Valid() { + return nil + } + return stripTablePrefix(it.table, bytes.Clone(it.iter.Key())) +} + +func (it *pebbleIterator) Value() []byte { + if !it.iter.Valid() { + return nil + } + return bytes.Clone(it.iter.Value()) +} + +func (it *pebbleIterator) Close() { + it.iter.Close() +} diff --git a/storage/pebble_test.go b/storage/pebble_test.go new file mode 100644 index 0000000..8e468fc --- /dev/null +++ b/storage/pebble_test.go @@ -0,0 +1,171 @@ +package storage + +import ( + "bytes" + "os" + "testing" +) + +func tempDir(t *testing.T) string { + t.Helper() + dir, err := os.MkdirTemp("", "gean-pebble-test-*") + if err != nil { + t.Fatal(err) + } + t.Cleanup(func() { os.RemoveAll(dir) }) + return dir +} + +func 
TestPebblePutAndGet(t *testing.T) { + b, err := NewPebbleBackend(tempDir(t)) + if err != nil { + t.Fatal(err) + } + defer b.Close() + + wb, _ := b.BeginWrite() + wb.PutBatch(TableBlockHeaders, []KV{ + {Key: []byte("root1"), Value: []byte("header1")}, + {Key: []byte("root2"), Value: []byte("header2")}, + }) + wb.Commit() + + rv, _ := b.BeginRead() + val, _ := rv.Get(TableBlockHeaders, []byte("root1")) + if string(val) != "header1" { + t.Fatalf("expected header1, got %s", string(val)) + } + val, _ = rv.Get(TableBlockHeaders, []byte("root2")) + if string(val) != "header2" { + t.Fatalf("expected header2, got %s", string(val)) + } +} + +func TestPebbleGetMissing(t *testing.T) { + b, err := NewPebbleBackend(tempDir(t)) + if err != nil { + t.Fatal(err) + } + defer b.Close() + + rv, _ := b.BeginRead() + val, _ := rv.Get(TableBlockHeaders, []byte("nonexistent")) + if val != nil { + t.Fatal("expected nil for missing key") + } +} + +func TestPebbleDelete(t *testing.T) { + b, err := NewPebbleBackend(tempDir(t)) + if err != nil { + t.Fatal(err) + } + defer b.Close() + + wb, _ := b.BeginWrite() + wb.PutBatch(TableStates, []KV{ + {Key: []byte("k1"), Value: []byte("v1")}, + {Key: []byte("k2"), Value: []byte("v2")}, + }) + wb.Commit() + + wb2, _ := b.BeginWrite() + wb2.DeleteBatch(TableStates, [][]byte{[]byte("k1")}) + wb2.Commit() + + rv, _ := b.BeginRead() + val, _ := rv.Get(TableStates, []byte("k1")) + if val != nil { + t.Fatal("k1 should be deleted") + } + val, _ = rv.Get(TableStates, []byte("k2")) + if string(val) != "v2" { + t.Fatal("k2 should still exist") + } +} + +func TestPebbleTableIsolation(t *testing.T) { + b, err := NewPebbleBackend(tempDir(t)) + if err != nil { + t.Fatal(err) + } + defer b.Close() + + wb, _ := b.BeginWrite() + wb.PutBatch(TableBlockHeaders, []KV{ + {Key: []byte("root"), Value: []byte("header")}, + }) + wb.Commit() + + rv, _ := b.BeginRead() + val, _ := rv.Get(TableStates, []byte("root")) + if val != nil { + t.Fatal("tables should be isolated") + } +} 
+ +func TestPebblePrefixIterator(t *testing.T) { + b, err := NewPebbleBackend(tempDir(t)) + if err != nil { + t.Fatal(err) + } + defer b.Close() + + wb, _ := b.BeginWrite() + wb.PutBatch(TableLiveChain, []KV{ + {Key: []byte("aa_1"), Value: []byte("v1")}, + {Key: []byte("aa_2"), Value: []byte("v2")}, + {Key: []byte("bb_1"), Value: []byte("v3")}, + }) + wb.Commit() + + rv, _ := b.BeginRead() + it, err := rv.PrefixIterator(TableLiveChain, []byte("aa")) + if err != nil { + t.Fatal(err) + } + defer it.Close() + + count := 0 + for it.Next() { + if !bytes.HasPrefix(it.Key(), []byte("aa")) { + t.Fatalf("key %s doesn't have prefix aa", string(it.Key())) + } + count++ + } + if count != 2 { + t.Fatalf("expected 2 entries with prefix aa, got %d", count) + } +} + +func TestPebblePersistence(t *testing.T) { + dir := tempDir(t) + + // Write data. + { + b, err := NewPebbleBackend(dir) + if err != nil { + t.Fatal(err) + } + wb, _ := b.BeginWrite() + wb.PutBatch(TableMetadata, []KV{ + {Key: []byte("key"), Value: []byte("value")}, + }) + wb.Commit() + b.Close() + } + + // Reopen and read. 
+ { + b, err := NewPebbleBackend(dir) + if err != nil { + t.Fatal(err) + } + defer b.Close() + rv, _ := b.BeginRead() + val, _ := rv.Get(TableMetadata, []byte("key")) + if string(val) != "value" { + t.Fatalf("expected value after reopen, got %s", string(val)) + } + } +} diff --git a/storage/storage_test.go b/storage/storage_test.go new file mode 100644 index 0000000..58e34c2 --- /dev/null +++ b/storage/storage_test.go @@ -0,0 +1,180 @@ +package storage + +import ( + "bytes" + "testing" +) + +func TestInMemoryPutAndGet(t *testing.T) { + b := NewInMemoryBackend() + wb, _ := b.BeginWrite() + wb.PutBatch(TableBlockHeaders, []KV{ + {Key: []byte("root1"), Value: []byte("header1")}, + {Key: []byte("root2"), Value: []byte("header2")}, + }) + wb.Commit() + + rv, _ := b.BeginRead() + val, err := rv.Get(TableBlockHeaders, []byte("root1")) + if err != nil { + t.Fatal(err) + } + if string(val) != "header1" { + t.Fatalf("expected header1, got %s", string(val)) + } + + val, err = rv.Get(TableBlockHeaders, []byte("root2")) + if err != nil { + t.Fatal(err) + } + if string(val) != "header2" { + t.Fatalf("expected header2, got %s", string(val)) + } +} + +func TestInMemoryGetMissing(t *testing.T) { + b := NewInMemoryBackend() + rv, _ := b.BeginRead() + val, err := rv.Get(TableBlockHeaders, []byte("nonexistent")) + if err != nil { + t.Fatal(err) + } + if val != nil { + t.Fatal("expected nil for missing key") + } +} + +func TestInMemoryDelete(t *testing.T) { + b := NewInMemoryBackend() + wb, _ := b.BeginWrite() + wb.PutBatch(TableStates, []KV{ + {Key: []byte("k1"), Value: []byte("v1")}, + {Key: []byte("k2"), Value: []byte("v2")}, + }) + wb.Commit() + + if b.CountEntries(TableStates) != 2 { + t.Fatal("expected 2 entries") + } + + wb2, _ := b.BeginWrite() + wb2.DeleteBatch(TableStates, [][]byte{[]byte("k1")}) + wb2.Commit() + + if b.CountEntries(TableStates) != 1 { + t.Fatal("expected 1 entry after delete") + } +} + +func TestInMemoryPrefixIterator(t *testing.T) { + b := 
NewInMemoryBackend() + wb, _ := b.BeginWrite() + wb.PutBatch(TableLiveChain, []KV{ + {Key: []byte("aa_1"), Value: []byte("v1")}, + {Key: []byte("aa_2"), Value: []byte("v2")}, + {Key: []byte("bb_1"), Value: []byte("v3")}, + }) + wb.Commit() + + rv, _ := b.BeginRead() + it, err := rv.PrefixIterator(TableLiveChain, []byte("aa")) + if err != nil { + t.Fatal(err) + } + defer it.Close() + + count := 0 + for it.Next() { + if !bytes.HasPrefix(it.Key(), []byte("aa")) { + t.Fatalf("key %s doesn't have prefix aa", string(it.Key())) + } + count++ + } + if count != 2 { + t.Fatalf("expected 2 entries with prefix aa, got %d", count) + } +} + +func TestInMemoryAtomicCommit(t *testing.T) { + b := NewInMemoryBackend() + + // Write batch but don't commit + wb, _ := b.BeginWrite() + wb.PutBatch(TableMetadata, []KV{ + {Key: []byte("key"), Value: []byte("val")}, + }) + // Don't commit — data should not be visible. + + rv, _ := b.BeginRead() + val, _ := rv.Get(TableMetadata, []byte("key")) + if val != nil { + t.Fatal("uncommitted write should not be visible") + } + + // Now commit. + wb.Commit() + rv2, _ := b.BeginRead() + val2, _ := rv2.Get(TableMetadata, []byte("key")) + if string(val2) != "val" { + t.Fatal("committed write should be visible") + } +} + +func TestInMemoryTableIsolation(t *testing.T) { + b := NewInMemoryBackend() + wb, _ := b.BeginWrite() + wb.PutBatch(TableBlockHeaders, []KV{ + {Key: []byte("root"), Value: []byte("header")}, + }) + wb.Commit() + + rv, _ := b.BeginRead() + // Same key, different table — should not exist. 
+ val, _ := rv.Get(TableStates, []byte("root")) + if val != nil { + t.Fatal("tables should be isolated") + } +} + +func TestLiveChainKeyEncoding(t *testing.T) { + root := [32]byte{0xab, 0xcd} + key := EncodeLiveChainKey(42, root) + + slot, decoded := DecodeLiveChainKey(key) + if slot != 42 { + t.Fatalf("expected slot 42, got %d", slot) + } + if decoded != root { + t.Fatal("root mismatch") + } +} + +func TestLiveChainKeyOrdering(t *testing.T) { + // Big-endian encoding ensures lexicographic order matches numeric order. + rootA := [32]byte{1} + rootB := [32]byte{2} + key1 := EncodeLiveChainKey(10, rootA) + key2 := EncodeLiveChainKey(20, rootB) + key3 := EncodeLiveChainKey(10, rootB) + + if bytes.Compare(key1, key2) >= 0 { + t.Fatal("slot 10 should sort before slot 20") + } + if bytes.Compare(key1, key3) >= 0 { + t.Fatal("same slot, rootA should sort before rootB") + } +} + +func TestEstimateTableBytes(t *testing.T) { + b := NewInMemoryBackend() + wb, _ := b.BeginWrite() + wb.PutBatch(TableMetadata, []KV{ + {Key: []byte("k"), Value: []byte("value")}, + }) + wb.Commit() + + size := b.EstimateTableBytes(TableMetadata) + if size == 0 { + t.Fatal("should report non-zero size") + } +} diff --git a/storage/tables.go b/storage/tables.go new file mode 100644 index 0000000..9b8cdcb --- /dev/null +++ b/storage/tables.go @@ -0,0 +1,23 @@ +package storage + +// Table represents a logical storage table. +type Table string + +const ( + TableBlockHeaders Table = "block_headers" + TableBlockBodies Table = "block_bodies" + TableBlockSignatures Table = "block_signatures" + TableStates Table = "states" + TableMetadata Table = "metadata" + TableLiveChain Table = "live_chain" +) + +// AllTables lists all storage tables. 
+var AllTables = []Table{ + TableBlockHeaders, + TableBlockBodies, + TableBlockSignatures, + TableStates, + TableMetadata, + TableLiveChain, +} diff --git a/types/aggregated_attestation.go b/types/aggregated_attestation.go deleted file mode 100644 index a88ed49..0000000 --- a/types/aggregated_attestation.go +++ /dev/null @@ -1,16 +0,0 @@ -package types - -// XMSSSignatureSize is the fixed size of an individual XMSS signature. -const XMSSSignatureSize = 3112 - -// AggregatedAttestation contains attestation data and participant bitlist. -type AggregatedAttestation struct { - AggregationBits []byte `ssz:"bitlist" ssz-max:"4096"` - Data *AttestationData -} - -// AggregatedSignatureProof carries the participants bitlist and proof payload. -type AggregatedSignatureProof struct { - Participants []byte `ssz:"bitlist" ssz-max:"4096"` - ProofData []byte `ssz-max:"1048576"` // ByteListMiB in leanSpec -} diff --git a/types/aggregated_attestation_encoding.go b/types/aggregated_attestation_encoding.go deleted file mode 100644 index bdc335c..0000000 --- a/types/aggregated_attestation_encoding.go +++ /dev/null @@ -1,261 +0,0 @@ -// Code generated by fastssz. DO NOT EDIT. 
-// Hash: 13e17ee916722e5a044ab4ad4cb964b606dda40bcb9ce66b227a3d82f050aeb5 -// Version: 0.1.3 -package types - -import ( - ssz "github.com/ferranbt/fastssz" -) - -// MarshalSSZ ssz marshals the AggregatedAttestation object -func (a *AggregatedAttestation) MarshalSSZ() ([]byte, error) { - return ssz.MarshalSSZ(a) -} - -// MarshalSSZTo ssz marshals the AggregatedAttestation object to a target array -func (a *AggregatedAttestation) MarshalSSZTo(buf []byte) (dst []byte, err error) { - dst = buf - offset := int(132) - - // Offset (0) 'AggregationBits' - dst = ssz.WriteOffset(dst, offset) - offset += len(a.AggregationBits) - - // Field (1) 'Data' - if a.Data == nil { - a.Data = new(AttestationData) - } - if dst, err = a.Data.MarshalSSZTo(dst); err != nil { - return - } - - // Field (0) 'AggregationBits' - if size := len(a.AggregationBits); size > 4096 { - err = ssz.ErrBytesLengthFn("AggregatedAttestation.AggregationBits", size, 4096) - return - } - dst = append(dst, a.AggregationBits...) - - return -} - -// UnmarshalSSZ ssz unmarshals the AggregatedAttestation object -func (a *AggregatedAttestation) UnmarshalSSZ(buf []byte) error { - var err error - size := uint64(len(buf)) - if size < 132 { - return ssz.ErrSize - } - - tail := buf - var o0 uint64 - - // Offset (0) 'AggregationBits' - if o0 = ssz.ReadOffset(buf[0:4]); o0 > size { - return ssz.ErrOffset - } - - if o0 < 132 { - return ssz.ErrInvalidVariableOffset - } - - // Field (1) 'Data' - if a.Data == nil { - a.Data = new(AttestationData) - } - if err = a.Data.UnmarshalSSZ(buf[4:132]); err != nil { - return err - } - - // Field (0) 'AggregationBits' - { - buf = tail[o0:] - if err = ssz.ValidateBitlist(buf, 4096); err != nil { - return err - } - if cap(a.AggregationBits) == 0 { - a.AggregationBits = make([]byte, 0, len(buf)) - } - a.AggregationBits = append(a.AggregationBits, buf...) 
- } - return err -} - -// SizeSSZ returns the ssz encoded size in bytes for the AggregatedAttestation object -func (a *AggregatedAttestation) SizeSSZ() (size int) { - size = 132 - - // Field (0) 'AggregationBits' - size += len(a.AggregationBits) - - return -} - -// HashTreeRoot ssz hashes the AggregatedAttestation object -func (a *AggregatedAttestation) HashTreeRoot() ([32]byte, error) { - return ssz.HashWithDefaultHasher(a) -} - -// HashTreeRootWith ssz hashes the AggregatedAttestation object with a hasher -func (a *AggregatedAttestation) HashTreeRootWith(hh ssz.HashWalker) (err error) { - indx := hh.Index() - - // Field (0) 'AggregationBits' - if len(a.AggregationBits) == 0 { - err = ssz.ErrEmptyBitlist - return - } - hh.PutBitlist(a.AggregationBits, 4096) - - // Field (1) 'Data' - if a.Data == nil { - a.Data = new(AttestationData) - } - if err = a.Data.HashTreeRootWith(hh); err != nil { - return - } - - hh.Merkleize(indx) - return -} - -// GetTree ssz hashes the AggregatedAttestation object -func (a *AggregatedAttestation) GetTree() (*ssz.Node, error) { - return ssz.ProofTree(a) -} - -// MarshalSSZ ssz marshals the AggregatedSignatureProof object -func (a *AggregatedSignatureProof) MarshalSSZ() ([]byte, error) { - return ssz.MarshalSSZ(a) -} - -// MarshalSSZTo ssz marshals the AggregatedSignatureProof object to a target array -func (a *AggregatedSignatureProof) MarshalSSZTo(buf []byte) (dst []byte, err error) { - dst = buf - offset := int(8) - - // Offset (0) 'Participants' - dst = ssz.WriteOffset(dst, offset) - offset += len(a.Participants) - - // Offset (1) 'ProofData' - dst = ssz.WriteOffset(dst, offset) - offset += len(a.ProofData) - - // Field (0) 'Participants' - if size := len(a.Participants); size > 4096 { - err = ssz.ErrBytesLengthFn("AggregatedSignatureProof.Participants", size, 4096) - return - } - dst = append(dst, a.Participants...) 
- - // Field (1) 'ProofData' - if size := len(a.ProofData); size > 1048576 { - err = ssz.ErrBytesLengthFn("AggregatedSignatureProof.ProofData", size, 1048576) - return - } - dst = append(dst, a.ProofData...) - - return -} - -// UnmarshalSSZ ssz unmarshals the AggregatedSignatureProof object -func (a *AggregatedSignatureProof) UnmarshalSSZ(buf []byte) error { - var err error - size := uint64(len(buf)) - if size < 8 { - return ssz.ErrSize - } - - tail := buf - var o0, o1 uint64 - - // Offset (0) 'Participants' - if o0 = ssz.ReadOffset(buf[0:4]); o0 > size { - return ssz.ErrOffset - } - - if o0 < 8 { - return ssz.ErrInvalidVariableOffset - } - - // Offset (1) 'ProofData' - if o1 = ssz.ReadOffset(buf[4:8]); o1 > size || o0 > o1 { - return ssz.ErrOffset - } - - // Field (0) 'Participants' - { - buf = tail[o0:o1] - if err = ssz.ValidateBitlist(buf, 4096); err != nil { - return err - } - if cap(a.Participants) == 0 { - a.Participants = make([]byte, 0, len(buf)) - } - a.Participants = append(a.Participants, buf...) - } - - // Field (1) 'ProofData' - { - buf = tail[o1:] - if len(buf) > 1048576 { - return ssz.ErrBytesLength - } - if cap(a.ProofData) == 0 { - a.ProofData = make([]byte, 0, len(buf)) - } - a.ProofData = append(a.ProofData, buf...) 
- } - return err -} - -// SizeSSZ returns the ssz encoded size in bytes for the AggregatedSignatureProof object -func (a *AggregatedSignatureProof) SizeSSZ() (size int) { - size = 8 - - // Field (0) 'Participants' - size += len(a.Participants) - - // Field (1) 'ProofData' - size += len(a.ProofData) - - return -} - -// HashTreeRoot ssz hashes the AggregatedSignatureProof object -func (a *AggregatedSignatureProof) HashTreeRoot() ([32]byte, error) { - return ssz.HashWithDefaultHasher(a) -} - -// HashTreeRootWith ssz hashes the AggregatedSignatureProof object with a hasher -func (a *AggregatedSignatureProof) HashTreeRootWith(hh ssz.HashWalker) (err error) { - indx := hh.Index() - - // Field (0) 'Participants' - if len(a.Participants) == 0 { - err = ssz.ErrEmptyBitlist - return - } - hh.PutBitlist(a.Participants, 4096) - - // Field (1) 'ProofData' - { - elemIndx := hh.Index() - byteLen := uint64(len(a.ProofData)) - if byteLen > 1048576 { - err = ssz.ErrIncorrectListSize - return - } - hh.Append(a.ProofData) - hh.MerkleizeWithMixin(elemIndx, byteLen, (1048576+31)/32) - } - - hh.Merkleize(indx) - return -} - -// GetTree ssz hashes the AggregatedSignatureProof object -func (a *AggregatedSignatureProof) GetTree() (*ssz.Node, error) { - return ssz.ProofTree(a) -} diff --git a/types/attestation.go b/types/attestation.go new file mode 100644 index 0000000..64ddeaa --- /dev/null +++ b/types/attestation.go @@ -0,0 +1,34 @@ +package types + +// AttestationData is the content of a validator's vote. +type AttestationData struct { + Slot uint64 `json:"slot"` + Head *Checkpoint `json:"head"` + Target *Checkpoint `json:"target"` + Source *Checkpoint `json:"source"` +} + +// Attestation is a single validator's unsigned vote. +type Attestation struct { + ValidatorID uint64 `json:"validator_id"` + Data *AttestationData `json:"data"` +} + +// SignedAttestation is an individual validator attestation with XMSS signature. 
+type SignedAttestation struct { + ValidatorID uint64 `json:"validator_id"` + Data *AttestationData `json:"data"` + Signature [SignatureSize]byte `json:"signature" ssz-size:"3112"` +} + +// AggregatedAttestation is a combined vote from multiple validators. +type AggregatedAttestation struct { + AggregationBits []byte `json:"aggregation_bits" ssz:"bitlist" ssz-max:"4096"` + Data *AttestationData `json:"data"` +} + +// SignedAggregatedAttestation carries an aggregated vote with a zkVM proof. +type SignedAggregatedAttestation struct { + Data *AttestationData `json:"data"` + Proof *AggregatedSignatureProof `json:"proof"` +} diff --git a/types/attestation_encoding.go b/types/attestation_encoding.go new file mode 100644 index 0000000..1884907 --- /dev/null +++ b/types/attestation_encoding.go @@ -0,0 +1,670 @@ +// Code generated by fastssz. DO NOT EDIT. +// Hash: 52b67ea3b7cb79482579d5c3ff659d3473b41bec2511e9c0fd826505bb9f9f72 +// Version: 0.1.3 +package types + +import ( + ssz "github.com/ferranbt/fastssz" +) + +// MarshalSSZ ssz marshals the AttestationData object +func (a *AttestationData) MarshalSSZ() ([]byte, error) { + return ssz.MarshalSSZ(a) +} + +// MarshalSSZTo ssz marshals the AttestationData object to a target array +func (a *AttestationData) MarshalSSZTo(buf []byte) (dst []byte, err error) { + dst = buf + + // Field (0) 'Slot' + dst = ssz.MarshalUint64(dst, a.Slot) + + // Field (1) 'Head' + if a.Head == nil { + a.Head = new(Checkpoint) + } + if dst, err = a.Head.MarshalSSZTo(dst); err != nil { + return + } + + // Field (2) 'Target' + if a.Target == nil { + a.Target = new(Checkpoint) + } + if dst, err = a.Target.MarshalSSZTo(dst); err != nil { + return + } + + // Field (3) 'Source' + if a.Source == nil { + a.Source = new(Checkpoint) + } + if dst, err = a.Source.MarshalSSZTo(dst); err != nil { + return + } + + return +} + +// UnmarshalSSZ ssz unmarshals the AttestationData object +func (a *AttestationData) UnmarshalSSZ(buf []byte) error { + var err error + size 
:= uint64(len(buf)) + if size != 128 { + return ssz.ErrSize + } + + // Field (0) 'Slot' + a.Slot = ssz.UnmarshallUint64(buf[0:8]) + + // Field (1) 'Head' + if a.Head == nil { + a.Head = new(Checkpoint) + } + if err = a.Head.UnmarshalSSZ(buf[8:48]); err != nil { + return err + } + + // Field (2) 'Target' + if a.Target == nil { + a.Target = new(Checkpoint) + } + if err = a.Target.UnmarshalSSZ(buf[48:88]); err != nil { + return err + } + + // Field (3) 'Source' + if a.Source == nil { + a.Source = new(Checkpoint) + } + if err = a.Source.UnmarshalSSZ(buf[88:128]); err != nil { + return err + } + + return err +} + +// SizeSSZ returns the ssz encoded size in bytes for the AttestationData object +func (a *AttestationData) SizeSSZ() (size int) { + size = 128 + return +} + +// HashTreeRoot ssz hashes the AttestationData object +func (a *AttestationData) HashTreeRoot() ([32]byte, error) { + return ssz.HashWithDefaultHasher(a) +} + +// HashTreeRootWith ssz hashes the AttestationData object with a hasher +func (a *AttestationData) HashTreeRootWith(hh ssz.HashWalker) (err error) { + indx := hh.Index() + + // Field (0) 'Slot' + hh.PutUint64(a.Slot) + + // Field (1) 'Head' + if a.Head == nil { + a.Head = new(Checkpoint) + } + if err = a.Head.HashTreeRootWith(hh); err != nil { + return + } + + // Field (2) 'Target' + if a.Target == nil { + a.Target = new(Checkpoint) + } + if err = a.Target.HashTreeRootWith(hh); err != nil { + return + } + + // Field (3) 'Source' + if a.Source == nil { + a.Source = new(Checkpoint) + } + if err = a.Source.HashTreeRootWith(hh); err != nil { + return + } + + hh.Merkleize(indx) + return +} + +// GetTree ssz hashes the AttestationData object +func (a *AttestationData) GetTree() (*ssz.Node, error) { + return ssz.ProofTree(a) +} + +// MarshalSSZ ssz marshals the Attestation object +func (a *Attestation) MarshalSSZ() ([]byte, error) { + return ssz.MarshalSSZ(a) +} + +// MarshalSSZTo ssz marshals the Attestation object to a target array +func (a 
*Attestation) MarshalSSZTo(buf []byte) (dst []byte, err error) { + dst = buf + + // Field (0) 'ValidatorID' + dst = ssz.MarshalUint64(dst, a.ValidatorID) + + // Field (1) 'Data' + if a.Data == nil { + a.Data = new(AttestationData) + } + if dst, err = a.Data.MarshalSSZTo(dst); err != nil { + return + } + + return +} + +// UnmarshalSSZ ssz unmarshals the Attestation object +func (a *Attestation) UnmarshalSSZ(buf []byte) error { + var err error + size := uint64(len(buf)) + if size != 136 { + return ssz.ErrSize + } + + // Field (0) 'ValidatorID' + a.ValidatorID = ssz.UnmarshallUint64(buf[0:8]) + + // Field (1) 'Data' + if a.Data == nil { + a.Data = new(AttestationData) + } + if err = a.Data.UnmarshalSSZ(buf[8:136]); err != nil { + return err + } + + return err +} + +// SizeSSZ returns the ssz encoded size in bytes for the Attestation object +func (a *Attestation) SizeSSZ() (size int) { + size = 136 + return +} + +// HashTreeRoot ssz hashes the Attestation object +func (a *Attestation) HashTreeRoot() ([32]byte, error) { + return ssz.HashWithDefaultHasher(a) +} + +// HashTreeRootWith ssz hashes the Attestation object with a hasher +func (a *Attestation) HashTreeRootWith(hh ssz.HashWalker) (err error) { + indx := hh.Index() + + // Field (0) 'ValidatorID' + hh.PutUint64(a.ValidatorID) + + // Field (1) 'Data' + if a.Data == nil { + a.Data = new(AttestationData) + } + if err = a.Data.HashTreeRootWith(hh); err != nil { + return + } + + hh.Merkleize(indx) + return +} + +// GetTree ssz hashes the Attestation object +func (a *Attestation) GetTree() (*ssz.Node, error) { + return ssz.ProofTree(a) +} + +// MarshalSSZ ssz marshals the SignedAttestation object +func (s *SignedAttestation) MarshalSSZ() ([]byte, error) { + return ssz.MarshalSSZ(s) +} + +// MarshalSSZTo ssz marshals the SignedAttestation object to a target array +func (s *SignedAttestation) MarshalSSZTo(buf []byte) (dst []byte, err error) { + dst = buf + + // Field (0) 'ValidatorID' + dst = ssz.MarshalUint64(dst, 
s.ValidatorID) + + // Field (1) 'Data' + if s.Data == nil { + s.Data = new(AttestationData) + } + if dst, err = s.Data.MarshalSSZTo(dst); err != nil { + return + } + + // Field (2) 'Signature' + dst = append(dst, s.Signature[:]...) + + return +} + +// UnmarshalSSZ ssz unmarshals the SignedAttestation object +func (s *SignedAttestation) UnmarshalSSZ(buf []byte) error { + var err error + size := uint64(len(buf)) + if size != 3248 { + return ssz.ErrSize + } + + // Field (0) 'ValidatorID' + s.ValidatorID = ssz.UnmarshallUint64(buf[0:8]) + + // Field (1) 'Data' + if s.Data == nil { + s.Data = new(AttestationData) + } + if err = s.Data.UnmarshalSSZ(buf[8:136]); err != nil { + return err + } + + // Field (2) 'Signature' + copy(s.Signature[:], buf[136:3248]) + + return err +} + +// SizeSSZ returns the ssz encoded size in bytes for the SignedAttestation object +func (s *SignedAttestation) SizeSSZ() (size int) { + size = 3248 + return +} + +// HashTreeRoot ssz hashes the SignedAttestation object +func (s *SignedAttestation) HashTreeRoot() ([32]byte, error) { + return ssz.HashWithDefaultHasher(s) +} + +// HashTreeRootWith ssz hashes the SignedAttestation object with a hasher +func (s *SignedAttestation) HashTreeRootWith(hh ssz.HashWalker) (err error) { + indx := hh.Index() + + // Field (0) 'ValidatorID' + hh.PutUint64(s.ValidatorID) + + // Field (1) 'Data' + if s.Data == nil { + s.Data = new(AttestationData) + } + if err = s.Data.HashTreeRootWith(hh); err != nil { + return + } + + // Field (2) 'Signature' + hh.PutBytes(s.Signature[:]) + + hh.Merkleize(indx) + return +} + +// GetTree ssz hashes the SignedAttestation object +func (s *SignedAttestation) GetTree() (*ssz.Node, error) { + return ssz.ProofTree(s) +} + +// MarshalSSZ ssz marshals the AggregatedAttestation object +func (a *AggregatedAttestation) MarshalSSZ() ([]byte, error) { + return ssz.MarshalSSZ(a) +} + +// MarshalSSZTo ssz marshals the AggregatedAttestation object to a target array +func (a 
*AggregatedAttestation) MarshalSSZTo(buf []byte) (dst []byte, err error) { + dst = buf + offset := int(132) + + // Offset (0) 'AggregationBits' + dst = ssz.WriteOffset(dst, offset) + + // Field (1) 'Data' + if a.Data == nil { + a.Data = new(AttestationData) + } + if dst, err = a.Data.MarshalSSZTo(dst); err != nil { + return + } + + // Field (0) 'AggregationBits' + if size := len(a.AggregationBits); size > 4096 { + err = ssz.ErrBytesLengthFn("AggregatedAttestation.AggregationBits", size, 4096) + return + } + dst = append(dst, a.AggregationBits...) + + return +} + +// UnmarshalSSZ ssz unmarshals the AggregatedAttestation object +func (a *AggregatedAttestation) UnmarshalSSZ(buf []byte) error { + var err error + size := uint64(len(buf)) + if size < 132 { + return ssz.ErrSize + } + + tail := buf + var o0 uint64 + + // Offset (0) 'AggregationBits' + if o0 = ssz.ReadOffset(buf[0:4]); o0 > size { + return ssz.ErrOffset + } + + if o0 != 132 { + return ssz.ErrInvalidVariableOffset + } + + // Field (1) 'Data' + if a.Data == nil { + a.Data = new(AttestationData) + } + if err = a.Data.UnmarshalSSZ(buf[4:132]); err != nil { + return err + } + + // Field (0) 'AggregationBits' + { + buf = tail[o0:] + if err = ssz.ValidateBitlist(buf, 4096); err != nil { + return err + } + if cap(a.AggregationBits) == 0 { + a.AggregationBits = make([]byte, 0, len(buf)) + } + a.AggregationBits = append(a.AggregationBits, buf...) 
+ } + return err +} + +// SizeSSZ returns the ssz encoded size in bytes for the AggregatedAttestation object +func (a *AggregatedAttestation) SizeSSZ() (size int) { + size = 132 + + // Field (0) 'AggregationBits' + size += len(a.AggregationBits) + + return +} + +// HashTreeRoot ssz hashes the AggregatedAttestation object +func (a *AggregatedAttestation) HashTreeRoot() ([32]byte, error) { + return ssz.HashWithDefaultHasher(a) +} + +// HashTreeRootWith ssz hashes the AggregatedAttestation object with a hasher +func (a *AggregatedAttestation) HashTreeRootWith(hh ssz.HashWalker) (err error) { + indx := hh.Index() + + // Field (0) 'AggregationBits' + if len(a.AggregationBits) == 0 { + err = ssz.ErrEmptyBitlist + return + } + hh.PutBitlist(a.AggregationBits, 4096) + + // Field (1) 'Data' + if a.Data == nil { + a.Data = new(AttestationData) + } + if err = a.Data.HashTreeRootWith(hh); err != nil { + return + } + + hh.Merkleize(indx) + return +} + +// GetTree ssz hashes the AggregatedAttestation object +func (a *AggregatedAttestation) GetTree() (*ssz.Node, error) { + return ssz.ProofTree(a) +} + +// MarshalSSZ ssz marshals the SignedAggregatedAttestation object +func (s *SignedAggregatedAttestation) MarshalSSZ() ([]byte, error) { + return ssz.MarshalSSZ(s) +} + +// MarshalSSZTo ssz marshals the SignedAggregatedAttestation object to a target array +func (s *SignedAggregatedAttestation) MarshalSSZTo(buf []byte) (dst []byte, err error) { + dst = buf + offset := int(132) + + // Field (0) 'Data' + if s.Data == nil { + s.Data = new(AttestationData) + } + if dst, err = s.Data.MarshalSSZTo(dst); err != nil { + return + } + + // Offset (1) 'Proof' + dst = ssz.WriteOffset(dst, offset) + + // Field (1) 'Proof' + if dst, err = s.Proof.MarshalSSZTo(dst); err != nil { + return + } + + return +} + +// UnmarshalSSZ ssz unmarshals the SignedAggregatedAttestation object +func (s *SignedAggregatedAttestation) UnmarshalSSZ(buf []byte) error { + var err error + size := uint64(len(buf)) + if 
size < 132 { + return ssz.ErrSize + } + + tail := buf + var o1 uint64 + + // Field (0) 'Data' + if s.Data == nil { + s.Data = new(AttestationData) + } + if err = s.Data.UnmarshalSSZ(buf[0:128]); err != nil { + return err + } + + // Offset (1) 'Proof' + if o1 = ssz.ReadOffset(buf[128:132]); o1 > size { + return ssz.ErrOffset + } + + if o1 != 132 { + return ssz.ErrInvalidVariableOffset + } + + // Field (1) 'Proof' + { + buf = tail[o1:] + if s.Proof == nil { + s.Proof = new(AggregatedSignatureProof) + } + if err = s.Proof.UnmarshalSSZ(buf); err != nil { + return err + } + } + return err +} + +// SizeSSZ returns the ssz encoded size in bytes for the SignedAggregatedAttestation object +func (s *SignedAggregatedAttestation) SizeSSZ() (size int) { + size = 132 + + // Field (1) 'Proof' + if s.Proof == nil { + s.Proof = new(AggregatedSignatureProof) + } + size += s.Proof.SizeSSZ() + + return +} + +// HashTreeRoot ssz hashes the SignedAggregatedAttestation object +func (s *SignedAggregatedAttestation) HashTreeRoot() ([32]byte, error) { + return ssz.HashWithDefaultHasher(s) +} + +// HashTreeRootWith ssz hashes the SignedAggregatedAttestation object with a hasher +func (s *SignedAggregatedAttestation) HashTreeRootWith(hh ssz.HashWalker) (err error) { + indx := hh.Index() + + // Field (0) 'Data' + if s.Data == nil { + s.Data = new(AttestationData) + } + if err = s.Data.HashTreeRootWith(hh); err != nil { + return + } + + // Field (1) 'Proof' + if err = s.Proof.HashTreeRootWith(hh); err != nil { + return + } + + hh.Merkleize(indx) + return +} + +// GetTree ssz hashes the SignedAggregatedAttestation object +func (s *SignedAggregatedAttestation) GetTree() (*ssz.Node, error) { + return ssz.ProofTree(s) +} + +// MarshalSSZ ssz marshals the AggregatedSignatureProof object +func (a *AggregatedSignatureProof) MarshalSSZ() ([]byte, error) { + return ssz.MarshalSSZ(a) +} + +// MarshalSSZTo ssz marshals the AggregatedSignatureProof object to a target array +func (a 
*AggregatedSignatureProof) MarshalSSZTo(buf []byte) (dst []byte, err error) { + dst = buf + offset := int(8) + + // Offset (0) 'Participants' + dst = ssz.WriteOffset(dst, offset) + offset += len(a.Participants) + + // Offset (1) 'ProofData' + dst = ssz.WriteOffset(dst, offset) + + // Field (0) 'Participants' + if size := len(a.Participants); size > 4096 { + err = ssz.ErrBytesLengthFn("AggregatedSignatureProof.Participants", size, 4096) + return + } + dst = append(dst, a.Participants...) + + // Field (1) 'ProofData' + if size := len(a.ProofData); size > 1048576 { + err = ssz.ErrBytesLengthFn("AggregatedSignatureProof.ProofData", size, 1048576) + return + } + dst = append(dst, a.ProofData...) + + return +} + +// UnmarshalSSZ ssz unmarshals the AggregatedSignatureProof object +func (a *AggregatedSignatureProof) UnmarshalSSZ(buf []byte) error { + var err error + size := uint64(len(buf)) + if size < 8 { + return ssz.ErrSize + } + + tail := buf + var o0, o1 uint64 + + // Offset (0) 'Participants' + if o0 = ssz.ReadOffset(buf[0:4]); o0 > size { + return ssz.ErrOffset + } + + if o0 != 8 { + return ssz.ErrInvalidVariableOffset + } + + // Offset (1) 'ProofData' + if o1 = ssz.ReadOffset(buf[4:8]); o1 > size || o0 > o1 { + return ssz.ErrOffset + } + + // Field (0) 'Participants' + { + buf = tail[o0:o1] + if err = ssz.ValidateBitlist(buf, 4096); err != nil { + return err + } + if cap(a.Participants) == 0 { + a.Participants = make([]byte, 0, len(buf)) + } + a.Participants = append(a.Participants, buf...) + } + + // Field (1) 'ProofData' + { + buf = tail[o1:] + if len(buf) > 1048576 { + return ssz.ErrBytesLength + } + if cap(a.ProofData) == 0 { + a.ProofData = make([]byte, 0, len(buf)) + } + a.ProofData = append(a.ProofData, buf...) 
+ } + return err +} + +// SizeSSZ returns the ssz encoded size in bytes for the AggregatedSignatureProof object +func (a *AggregatedSignatureProof) SizeSSZ() (size int) { + size = 8 + + // Field (0) 'Participants' + size += len(a.Participants) + + // Field (1) 'ProofData' + size += len(a.ProofData) + + return +} + +// HashTreeRoot ssz hashes the AggregatedSignatureProof object +func (a *AggregatedSignatureProof) HashTreeRoot() ([32]byte, error) { + return ssz.HashWithDefaultHasher(a) +} + +// HashTreeRootWith ssz hashes the AggregatedSignatureProof object with a hasher +func (a *AggregatedSignatureProof) HashTreeRootWith(hh ssz.HashWalker) (err error) { + indx := hh.Index() + + // Field (0) 'Participants' + if len(a.Participants) == 0 { + err = ssz.ErrEmptyBitlist + return + } + hh.PutBitlist(a.Participants, 4096) + + // Field (1) 'ProofData' + { + elemIndx := hh.Index() + byteLen := uint64(len(a.ProofData)) + if byteLen > 1048576 { + err = ssz.ErrIncorrectListSize + return + } + hh.Append(a.ProofData) + hh.MerkleizeWithMixin(elemIndx, byteLen, (1048576+31)/32) + } + + hh.Merkleize(indx) + return +} + +// GetTree ssz hashes the AggregatedSignatureProof object +func (a *AggregatedSignatureProof) GetTree() (*ssz.Node, error) { + return ssz.ProofTree(a) +} diff --git a/types/bitlist.go b/types/bitlist.go new file mode 100644 index 0000000..45b323b --- /dev/null +++ b/types/bitlist.go @@ -0,0 +1,93 @@ +package types + +// Bitlist helpers for working with SSZ bitlists encoded as []byte. +// SSZ bitlists use a delimiter bit: the highest set bit marks the end of data. + +// BitlistLen returns the number of data bits in a SSZ-encoded bitlist. +func BitlistLen(b []byte) uint64 { + if len(b) == 0 { + return 0 + } + for i := len(b) - 1; i >= 0; i-- { + if b[i] == 0 { + continue + } + for bit := 7; bit >= 0; bit-- { + if b[i]&(1<<bit) != 0 { + return uint64(i)*8 + uint64(bit) + } + } + } + return 0 +} + +// BitlistGet returns the value of the bit at index i. +func BitlistGet(b []byte, i uint64) bool { + if i/8 >= uint64(len(b)) { + return false + } + return b[i/8]&(1<<(i%8)) != 0 +} + +// BitlistSet sets bit at index i to 1. 
+func BitlistSet(b []byte, i uint64) { + if i/8 >= uint64(len(b)) { + return + } + b[i/8] |= 1 << (i % 8) +} + +// BitlistCount returns the number of set data bits. +func BitlistCount(b []byte) uint64 { + length := BitlistLen(b) + var count uint64 + for i := uint64(0); i < length; i++ { + if BitlistGet(b, i) { + count++ + } + } + return count +} + +// NewBitlistSSZ creates a new SSZ-encoded bitlist with the given number of data bits. +func NewBitlistSSZ(length uint64) []byte { + if length == 0 { + return []byte{0x01} + } + numBytes := (length + 8) / 8 + data := make([]byte, numBytes) + data[length/8] |= 1 << (length % 8) + return data +} + +// BitlistIndices returns all set bit indices in a bitlist. +func BitlistIndices(b []byte) []uint64 { + length := BitlistLen(b) + var indices []uint64 + for i := uint64(0); i < length; i++ { + if BitlistGet(b, i) { + indices = append(indices, i) + } + } + return indices +} + +// BitlistExtend grows a bitlist to newLen, preserving existing bits. +func BitlistExtend(b []byte, newLen uint64) []byte { + oldLen := BitlistLen(b) + if newLen <= oldLen { + return b + } + // Clear old delimiter + if oldLen < uint64(len(b))*8 { + b[oldLen/8] &^= 1 << (oldLen % 8) + } + // Grow + needed := (newLen + 8) / 8 + for uint64(len(b)) < needed { + b = append(b, 0) + } + // New delimiter + b[newLen/8] |= 1 << (newLen % 8) + return b +} diff --git a/types/bitlist_test.go b/types/bitlist_test.go new file mode 100644 index 0000000..edb0119 --- /dev/null +++ b/types/bitlist_test.go @@ -0,0 +1,68 @@ +package types + +import "testing" + +func TestNewBitlistSSZ(t *testing.T) { + bl := NewBitlistSSZ(8) + if BitlistLen(bl) != 8 { + t.Fatalf("expected length 8, got %d", BitlistLen(bl)) + } + for i := uint64(0); i < 8; i++ { + if BitlistGet(bl, i) { + t.Fatalf("bit %d should be false", i) + } + } +} + +func TestBitlistSetAndGet(t *testing.T) { + bl := NewBitlistSSZ(16) + BitlistSet(bl, 0) + BitlistSet(bl, 5) + BitlistSet(bl, 15) + if !BitlistGet(bl, 0) || 
!BitlistGet(bl, 5) || !BitlistGet(bl, 15) { + t.Fatal("set bits not readable") + } + if BitlistGet(bl, 1) || BitlistGet(bl, 14) { + t.Fatal("unset bits should be false") + } + if BitlistCount(bl) != 3 { + t.Fatalf("expected count 3, got %d", BitlistCount(bl)) + } +} + +func TestBitlistExtend(t *testing.T) { + bl := NewBitlistSSZ(4) + BitlistSet(bl, 1) + BitlistSet(bl, 3) + bl = BitlistExtend(bl, 10) + if BitlistLen(bl) != 10 { + t.Fatalf("expected length 10, got %d", BitlistLen(bl)) + } + if !BitlistGet(bl, 1) || !BitlistGet(bl, 3) { + t.Fatal("original bits lost after extend") + } + if BitlistGet(bl, 5) { + t.Fatal("extended bits should be false") + } +} + +func TestBitlistEmpty(t *testing.T) { + bl := NewBitlistSSZ(0) + if BitlistLen(bl) != 0 { + t.Fatalf("expected length 0, got %d", BitlistLen(bl)) + } + if BitlistCount(bl) != 0 { + t.Fatalf("expected count 0, got %d", BitlistCount(bl)) + } +} + +func TestBitlistFromRawBytes(t *testing.T) { + // 3 bits [true, false, true] + delimiter = 0b00001101 = 0x0d + bl := []byte{0x0d} + if BitlistLen(bl) != 3 { + t.Fatalf("expected length 3, got %d", BitlistLen(bl)) + } + if !BitlistGet(bl, 0) || BitlistGet(bl, 1) || !BitlistGet(bl, 2) { + t.Fatal("incorrect bits") + } +} diff --git a/types/block.go b/types/block.go index 8b68dee..660c1ae 100644 --- a/types/block.go +++ b/types/block.go @@ -1,42 +1,48 @@ package types -// BlockHeader contains metadata for a block. +// BlockHeader contains block metadata without the body. type BlockHeader struct { - Slot uint64 - ProposerIndex uint64 - ParentRoot [32]byte `ssz-size:"32"` - StateRoot [32]byte `ssz-size:"32"` - BodyRoot [32]byte `ssz-size:"32"` + Slot uint64 `json:"slot"` + ProposerIndex uint64 `json:"proposer_index"` + ParentRoot [RootSize]byte `json:"parent_root" ssz-size:"32"` + StateRoot [RootSize]byte `json:"state_root" ssz-size:"32"` + BodyRoot [RootSize]byte `json:"body_root" ssz-size:"32"` } -// BlockBody contains the payload of a block. 
+// BlockBody contains the attestations included in a block. type BlockBody struct { - Attestations []*AggregatedAttestation `ssz-max:"4096"` + Attestations []*AggregatedAttestation `json:"attestations" ssz-max:"4096"` } -// Block is a complete block including header fields and body. +// Block is the core block structure proposed by a validator. type Block struct { - Slot uint64 - ProposerIndex uint64 - ParentRoot [32]byte `ssz-size:"32"` - StateRoot [32]byte `ssz-size:"32"` - Body *BlockBody + Slot uint64 `json:"slot"` + ProposerIndex uint64 `json:"proposer_index"` + ParentRoot [RootSize]byte `json:"parent_root" ssz-size:"32"` + StateRoot [RootSize]byte `json:"state_root" ssz-size:"32"` + Body *BlockBody `json:"body"` } -// BlockWithAttestation wraps a block and the proposer's own attestation. +// BlockWithAttestation pairs a block with the proposer's own attestation. type BlockWithAttestation struct { - Block *Block - ProposerAttestation *Attestation + Block *Block `json:"block"` + ProposerAttestation *Attestation `json:"proposer_attestation"` } -// BlockSignatures contains per-aggregated-attestation proofs and proposer sig. +// AggregatedSignatureProof is a zkVM proof that a set of validators signed. +type AggregatedSignatureProof struct { + Participants []byte `json:"participants" ssz:"bitlist" ssz-max:"4096"` + ProofData []byte `json:"proof_data" ssz-max:"1048576"` +} + +// BlockSignatures carries the XMSS signatures for a block. type BlockSignatures struct { - AttestationSignatures []*AggregatedSignatureProof `ssz-max:"4096"` - ProposerSignature [3112]byte `ssz-size:"3112"` + AttestationSignatures []*AggregatedSignatureProof `json:"attestation_signatures" ssz-max:"4096"` + ProposerSignature [SignatureSize]byte `json:"proposer_signature" ssz-size:"3112"` } -// SignedBlockWithAttestation is the gossip/wire envelope for blocks. +// SignedBlockWithAttestation is the complete signed block as gossiped on the network. 
type SignedBlockWithAttestation struct { - Message *BlockWithAttestation - Signature BlockSignatures + Block *BlockWithAttestation `json:"block"` + Signature *BlockSignatures `json:"signature"` } diff --git a/types/block_encoding.go b/types/block_encoding.go index 89003ef..bcabaa1 100644 --- a/types/block_encoding.go +++ b/types/block_encoding.go @@ -1,5 +1,5 @@ // Code generated by fastssz. DO NOT EDIT. -// Hash: 13e17ee916722e5a044ab4ad4cb964b606dda40bcb9ce66b227a3d82f050aeb5 +// Hash: 52b67ea3b7cb79482579d5c3ff659d3473b41bec2511e9c0fd826505bb9f9f72 // Version: 0.1.3 package types @@ -111,10 +111,6 @@ func (b *BlockBody) MarshalSSZTo(buf []byte) (dst []byte, err error) { // Offset (0) 'Attestations' dst = ssz.WriteOffset(dst, offset) - for ii := 0; ii < len(b.Attestations); ii++ { - offset += 4 - offset += b.Attestations[ii].SizeSSZ() - } // Field (0) 'Attestations' if size := len(b.Attestations); size > 4096 { @@ -153,7 +149,7 @@ func (b *BlockBody) UnmarshalSSZ(buf []byte) error { return ssz.ErrOffset } - if o0 < 4 { + if o0 != 4 { return ssz.ErrInvalidVariableOffset } @@ -252,10 +248,6 @@ func (b *Block) MarshalSSZTo(buf []byte) (dst []byte, err error) { // Offset (4) 'Body' dst = ssz.WriteOffset(dst, offset) - if b.Body == nil { - b.Body = new(BlockBody) - } - offset += b.Body.SizeSSZ() // Field (4) 'Body' if dst, err = b.Body.MarshalSSZTo(dst); err != nil { @@ -293,7 +285,7 @@ func (b *Block) UnmarshalSSZ(buf []byte) error { return ssz.ErrOffset } - if o4 < 84 { + if o4 != 84 { return ssz.ErrInvalidVariableOffset } @@ -370,10 +362,6 @@ func (b *BlockWithAttestation) MarshalSSZTo(buf []byte) (dst []byte, err error) // Offset (0) 'Block' dst = ssz.WriteOffset(dst, offset) - if b.Block == nil { - b.Block = new(Block) - } - offset += b.Block.SizeSSZ() // Field (1) 'ProposerAttestation' if b.ProposerAttestation == nil { @@ -407,7 +395,7 @@ func (b *BlockWithAttestation) UnmarshalSSZ(buf []byte) error { return ssz.ErrOffset } - if o0 < 140 { + if o0 != 140 { 
return ssz.ErrInvalidVariableOffset } @@ -488,10 +476,6 @@ func (b *BlockSignatures) MarshalSSZTo(buf []byte) (dst []byte, err error) { // Offset (0) 'AttestationSignatures' dst = ssz.WriteOffset(dst, offset) - for ii := 0; ii < len(b.AttestationSignatures); ii++ { - offset += 4 - offset += b.AttestationSignatures[ii].SizeSSZ() - } // Field (1) 'ProposerSignature' dst = append(dst, b.ProposerSignature[:]...) @@ -533,7 +517,7 @@ func (b *BlockSignatures) UnmarshalSSZ(buf []byte) error { return ssz.ErrOffset } - if o0 < 3116 { + if o0 != 3116 { return ssz.ErrInvalidVariableOffset } @@ -624,19 +608,18 @@ func (s *SignedBlockWithAttestation) MarshalSSZTo(buf []byte) (dst []byte, err e dst = buf offset := int(8) - // Offset (0) 'Message' + // Offset (0) 'Block' dst = ssz.WriteOffset(dst, offset) - if s.Message == nil { - s.Message = new(BlockWithAttestation) + if s.Block == nil { + s.Block = new(BlockWithAttestation) } - offset += s.Message.SizeSSZ() + offset += s.Block.SizeSSZ() // Offset (1) 'Signature' dst = ssz.WriteOffset(dst, offset) - offset += s.Signature.SizeSSZ() - // Field (0) 'Message' - if dst, err = s.Message.MarshalSSZTo(dst); err != nil { + // Field (0) 'Block' + if dst, err = s.Block.MarshalSSZTo(dst); err != nil { return } @@ -659,12 +642,12 @@ func (s *SignedBlockWithAttestation) UnmarshalSSZ(buf []byte) error { tail := buf var o0, o1 uint64 - // Offset (0) 'Message' + // Offset (0) 'Block' if o0 = ssz.ReadOffset(buf[0:4]); o0 > size { return ssz.ErrOffset } - if o0 < 8 { + if o0 != 8 { return ssz.ErrInvalidVariableOffset } @@ -673,13 +656,13 @@ func (s *SignedBlockWithAttestation) UnmarshalSSZ(buf []byte) error { return ssz.ErrOffset } - // Field (0) 'Message' + // Field (0) 'Block' { buf = tail[o0:o1] - if s.Message == nil { - s.Message = new(BlockWithAttestation) + if s.Block == nil { + s.Block = new(BlockWithAttestation) } - if err = s.Message.UnmarshalSSZ(buf); err != nil { + if err = s.Block.UnmarshalSSZ(buf); err != nil { return err } } @@ 
-687,6 +670,9 @@ func (s *SignedBlockWithAttestation) UnmarshalSSZ(buf []byte) error { // Field (1) 'Signature' { buf = tail[o1:] + if s.Signature == nil { + s.Signature = new(BlockSignatures) + } if err = s.Signature.UnmarshalSSZ(buf); err != nil { return err } @@ -698,13 +684,16 @@ func (s *SignedBlockWithAttestation) UnmarshalSSZ(buf []byte) error { func (s *SignedBlockWithAttestation) SizeSSZ() (size int) { size = 8 - // Field (0) 'Message' - if s.Message == nil { - s.Message = new(BlockWithAttestation) + // Field (0) 'Block' + if s.Block == nil { + s.Block = new(BlockWithAttestation) } - size += s.Message.SizeSSZ() + size += s.Block.SizeSSZ() // Field (1) 'Signature' + if s.Signature == nil { + s.Signature = new(BlockSignatures) + } size += s.Signature.SizeSSZ() return @@ -719,8 +708,8 @@ func (s *SignedBlockWithAttestation) HashTreeRoot() ([32]byte, error) { func (s *SignedBlockWithAttestation) HashTreeRootWith(hh ssz.HashWalker) (err error) { indx := hh.Index() - // Field (0) 'Message' - if err = s.Message.HashTreeRootWith(hh); err != nil { + // Field (0) 'Block' + if err = s.Block.HashTreeRootWith(hh); err != nil { return } diff --git a/types/checkpoint.go b/types/checkpoint.go index 59baa09..4efe9fb 100644 --- a/types/checkpoint.go +++ b/types/checkpoint.go @@ -1,7 +1,7 @@ package types -// Checkpoint represents a checkpoint in the chain's history. +// Checkpoint is a finality marker: a block root at a specific slot. type Checkpoint struct { - Root [32]byte `ssz-size:"32"` - Slot uint64 + Root [RootSize]byte `json:"root" ssz-size:"32"` + Slot uint64 `json:"slot"` } diff --git a/types/checkpoint_encoding.go b/types/checkpoint_encoding.go index a8edc58..802ff5e 100644 --- a/types/checkpoint_encoding.go +++ b/types/checkpoint_encoding.go @@ -1,5 +1,5 @@ // Code generated by fastssz. DO NOT EDIT. 
-// Hash: 13e17ee916722e5a044ab4ad4cb964b606dda40bcb9ce66b227a3d82f050aeb5 +// Hash: 52b67ea3b7cb79482579d5c3ff659d3473b41bec2511e9c0fd826505bb9f9f72 // Version: 0.1.3 package types diff --git a/types/config.go b/types/config.go index 4e4ee01..4e53098 100644 --- a/types/config.go +++ b/types/config.go @@ -1,6 +1,6 @@ package types -// Config holds genesis configuration. -type Config struct { - GenesisTime uint64 +// ChainConfig holds minimal consensus configuration embedded in the beacon state. +type ChainConfig struct { + GenesisTime uint64 `json:"genesis_time"` } diff --git a/types/config_encoding.go b/types/config_encoding.go index 9ec7a1d..15dfa0d 100644 --- a/types/config_encoding.go +++ b/types/config_encoding.go @@ -1,5 +1,5 @@ // Code generated by fastssz. DO NOT EDIT. -// Hash: 13e17ee916722e5a044ab4ad4cb964b606dda40bcb9ce66b227a3d82f050aeb5 +// Hash: 52b67ea3b7cb79482579d5c3ff659d3473b41bec2511e9c0fd826505bb9f9f72 // Version: 0.1.3 package types @@ -7,13 +7,13 @@ import ( ssz "github.com/ferranbt/fastssz" ) -// MarshalSSZ ssz marshals the Config object -func (c *Config) MarshalSSZ() ([]byte, error) { +// MarshalSSZ ssz marshals the ChainConfig object +func (c *ChainConfig) MarshalSSZ() ([]byte, error) { return ssz.MarshalSSZ(c) } -// MarshalSSZTo ssz marshals the Config object to a target array -func (c *Config) MarshalSSZTo(buf []byte) (dst []byte, err error) { +// MarshalSSZTo ssz marshals the ChainConfig object to a target array +func (c *ChainConfig) MarshalSSZTo(buf []byte) (dst []byte, err error) { dst = buf // Field (0) 'GenesisTime' @@ -22,8 +22,8 @@ func (c *Config) MarshalSSZTo(buf []byte) (dst []byte, err error) { return } -// UnmarshalSSZ ssz unmarshals the Config object -func (c *Config) UnmarshalSSZ(buf []byte) error { +// UnmarshalSSZ ssz unmarshals the ChainConfig object +func (c *ChainConfig) UnmarshalSSZ(buf []byte) error { var err error size := uint64(len(buf)) if size != 8 { @@ -36,19 +36,19 @@ func (c *Config) UnmarshalSSZ(buf []byte) 
error { return err } -// SizeSSZ returns the ssz encoded size in bytes for the Config object -func (c *Config) SizeSSZ() (size int) { +// SizeSSZ returns the ssz encoded size in bytes for the ChainConfig object +func (c *ChainConfig) SizeSSZ() (size int) { size = 8 return } -// HashTreeRoot ssz hashes the Config object -func (c *Config) HashTreeRoot() ([32]byte, error) { +// HashTreeRoot ssz hashes the ChainConfig object +func (c *ChainConfig) HashTreeRoot() ([32]byte, error) { return ssz.HashWithDefaultHasher(c) } -// HashTreeRootWith ssz hashes the Config object with a hasher -func (c *Config) HashTreeRootWith(hh ssz.HashWalker) (err error) { +// HashTreeRootWith ssz hashes the ChainConfig object with a hasher +func (c *ChainConfig) HashTreeRootWith(hh ssz.HashWalker) (err error) { indx := hh.Index() // Field (0) 'GenesisTime' @@ -58,7 +58,7 @@ func (c *Config) HashTreeRootWith(hh ssz.HashWalker) (err error) { return } -// GetTree ssz hashes the Config object -func (c *Config) GetTree() (*ssz.Node, error) { +// GetTree ssz hashes the ChainConfig object +func (c *ChainConfig) GetTree() (*ssz.Node, error) { return ssz.ProofTree(c) } diff --git a/types/constants.go b/types/constants.go index 0582b14..0517467 100644 --- a/types/constants.go +++ b/types/constants.go @@ -1,15 +1,27 @@ package types -// Protocol constants from the reference spec. const ( + // Timing SecondsPerSlot = 4 IntervalsPerSlot = 5 MillisecondsPerSlot = SecondsPerSlot * 1000 MillisecondsPerInterval = MillisecondsPerSlot / IntervalsPerSlot // 800 - JustificationLookback = 3 - MaxRequestBlocks = 1024 - SlotsPerEpoch = 32 -) -// ZeroHash is a 32-byte zero hash used as genesis parent and padding. 
-var ZeroHash [32]byte + // Limits + HistoricalRootsLimit = 1 << 18 // 262144 + ValidatorRegistryLimit = 1 << 12 // 4096 + AttestationCommitteeCount = 1 + JustificationLookbackSlots = 3 + + // Derived limits + JustificationValidatorsLimit = HistoricalRootsLimit * ValidatorRegistryLimit // 1073741824 + + // Byte sizes + PubkeySize = 52 + SignatureSize = 3112 + RootSize = 32 + ByteListMiBMax = 1 << 20 // 1048576 + + // Sync + SyncToleranceSlots = 2 +) diff --git a/types/generate.go b/types/generate.go deleted file mode 100644 index bcb8542..0000000 --- a/types/generate.go +++ /dev/null @@ -1,5 +0,0 @@ -package types - -// NOTE: State encoding is checked in from previous generation and unchanged for devnet-2. -// fastssz v0.1.3 currently panics when regenerating State in this package. -//go:generate sszgen --path . --objs Checkpoint,Config,Validator,AttestationData,Attestation,SignedAttestation,AggregatedAttestation,AggregatedSignatureProof,SignedAggregatedAttestation,BlockHeader,BlockBody,Block,BlockWithAttestation,BlockSignatures,SignedBlockWithAttestation diff --git a/types/helpers.go b/types/helpers.go new file mode 100644 index 0000000..9a05e4c --- /dev/null +++ b/types/helpers.go @@ -0,0 +1,31 @@ +package types + +import "encoding/hex" + +var ZeroRoot [RootSize]byte + +// IsZeroRoot returns true if the root is all zeros. +func IsZeroRoot(root [RootSize]byte) bool { + return root == ZeroRoot +} + +// IsProposer returns true if validatorIndex is the proposer for the given slot. +func IsProposer(slot, validatorIndex, numValidators uint64) bool { + if numValidators == 0 { + return false + } + return slot%numValidators == validatorIndex +} + +// ProposerIndex returns the proposer for a slot, or -1 if no validators. +func ProposerIndex(slot, numValidators uint64) int64 { + if numValidators == 0 { + return -1 + } + return int64(slot % numValidators) +} + +// ShortRoot returns the first 4 bytes of a root as hex for logging. 
+func ShortRoot(root [RootSize]byte) string { + return "0x" + hex.EncodeToString(root[:4]) +} diff --git a/types/helpers_test.go b/types/helpers_test.go new file mode 100644 index 0000000..40c7e5f --- /dev/null +++ b/types/helpers_test.go @@ -0,0 +1,53 @@ +package types + +import "testing" + +func TestIsProposer(t *testing.T) { + tests := []struct { + slot, validator, numValidators uint64 + want bool + }{ + {0, 0, 5, true}, + {1, 1, 5, true}, + {4, 4, 5, true}, + {5, 0, 5, true}, // wraps around + {6, 1, 5, true}, + {0, 1, 5, false}, + {1, 0, 5, false}, + {0, 0, 0, false}, // no validators + } + for _, tt := range tests { + got := IsProposer(tt.slot, tt.validator, tt.numValidators) + if got != tt.want { + t.Errorf("IsProposer(%d, %d, %d) = %v, want %v", + tt.slot, tt.validator, tt.numValidators, got, tt.want) + } + } +} + +func TestProposerIndex(t *testing.T) { + if ProposerIndex(7, 3) != 1 { + t.Fatal("expected proposer 1 for slot 7 with 3 validators") + } + if ProposerIndex(0, 0) != -1 { + t.Fatal("expected -1 for 0 validators") + } +} + +func TestIsZeroRoot(t *testing.T) { + if !IsZeroRoot(ZeroRoot) { + t.Fatal("ZeroRoot should be zero") + } + nonZero := [RootSize]byte{1} + if IsZeroRoot(nonZero) { + t.Fatal("non-zero root should not be zero") + } +} + +func TestShortRoot(t *testing.T) { + root := [RootSize]byte{0xab, 0xcd, 0xef, 0x01} + s := ShortRoot(root) + if s != "0xabcdef01" { + t.Fatalf("expected 0xabcdef01, got %s", s) + } +} diff --git a/types/signed_aggregated_attestation.go b/types/signed_aggregated_attestation.go deleted file mode 100644 index 1d32043..0000000 --- a/types/signed_aggregated_attestation.go +++ /dev/null @@ -1,8 +0,0 @@ -package types - -// SignedAggregatedAttestation is the gossip container for aggregated attestations. -// Published on the "aggregation" topic by aggregator nodes at interval 2. 
-type SignedAggregatedAttestation struct { - Data *AttestationData - Proof *AggregatedSignatureProof -} diff --git a/types/signed_aggregated_attestation_encoding.go b/types/signed_aggregated_attestation_encoding.go deleted file mode 100644 index b3cee43..0000000 --- a/types/signed_aggregated_attestation_encoding.go +++ /dev/null @@ -1,126 +0,0 @@ -// Code generated by fastssz. DO NOT EDIT. -// Hash: aeb5298883701763283ca3a5630096026f28750d62f9609c9bb75b7f9f6d9b98 -// Version: 0.1.3 -package types - -import ( - ssz "github.com/ferranbt/fastssz" -) - -// MarshalSSZ ssz marshals the SignedAggregatedAttestation object -func (s *SignedAggregatedAttestation) MarshalSSZ() ([]byte, error) { - return ssz.MarshalSSZ(s) -} - -// MarshalSSZTo ssz marshals the SignedAggregatedAttestation object to a target array -func (s *SignedAggregatedAttestation) MarshalSSZTo(buf []byte) (dst []byte, err error) { - dst = buf - offset := int(132) - - // Field (0) 'Data' - if s.Data == nil { - s.Data = new(AttestationData) - } - if dst, err = s.Data.MarshalSSZTo(dst); err != nil { - return - } - - // Offset (1) 'Proof' - dst = ssz.WriteOffset(dst, offset) - if s.Proof == nil { - s.Proof = new(AggregatedSignatureProof) - } - offset += s.Proof.SizeSSZ() - - // Field (1) 'Proof' - if dst, err = s.Proof.MarshalSSZTo(dst); err != nil { - return - } - - return -} - -// UnmarshalSSZ ssz unmarshals the SignedAggregatedAttestation object -func (s *SignedAggregatedAttestation) UnmarshalSSZ(buf []byte) error { - var err error - size := uint64(len(buf)) - if size < 132 { - return ssz.ErrSize - } - - tail := buf - var o1 uint64 - - // Field (0) 'Data' - if s.Data == nil { - s.Data = new(AttestationData) - } - if err = s.Data.UnmarshalSSZ(buf[0:128]); err != nil { - return err - } - - // Offset (1) 'Proof' - if o1 = ssz.ReadOffset(buf[128:132]); o1 > size { - return ssz.ErrOffset - } - - if o1 < 132 { - return ssz.ErrInvalidVariableOffset - } - - // Field (1) 'Proof' - { - buf = tail[o1:] - if s.Proof == 
nil { - s.Proof = new(AggregatedSignatureProof) - } - if err = s.Proof.UnmarshalSSZ(buf); err != nil { - return err - } - } - return err -} - -// SizeSSZ returns the ssz encoded size in bytes for the SignedAggregatedAttestation object -func (s *SignedAggregatedAttestation) SizeSSZ() (size int) { - size = 132 - - // Field (1) 'Proof' - if s.Proof == nil { - s.Proof = new(AggregatedSignatureProof) - } - size += s.Proof.SizeSSZ() - - return -} - -// HashTreeRoot ssz hashes the SignedAggregatedAttestation object -func (s *SignedAggregatedAttestation) HashTreeRoot() ([32]byte, error) { - return ssz.HashWithDefaultHasher(s) -} - -// HashTreeRootWith ssz hashes the SignedAggregatedAttestation object with a hasher -func (s *SignedAggregatedAttestation) HashTreeRootWith(hh ssz.HashWalker) (err error) { - indx := hh.Index() - - // Field (0) 'Data' - if s.Data == nil { - s.Data = new(AttestationData) - } - if err = s.Data.HashTreeRootWith(hh); err != nil { - return - } - - // Field (1) 'Proof' - if err = s.Proof.HashTreeRootWith(hh); err != nil { - return - } - - hh.Merkleize(indx) - return -} - -// GetTree ssz hashes the SignedAggregatedAttestation object -func (s *SignedAggregatedAttestation) GetTree() (*ssz.Node, error) { - return ssz.ProofTree(s) -} diff --git a/types/slot.go b/types/slot.go deleted file mode 100644 index ae632c8..0000000 --- a/types/slot.go +++ /dev/null @@ -1,52 +0,0 @@ -package types - -import "math" - -// IsJustifiableAfter checks if a slot is a valid candidate for justification -// after a given finalized slot according to 3SF-mini rules. -// -// A slot is justifiable if its distance (delta) from the finalized slot is: -// 1. Less than or equal to 5 -// 2. A perfect square (e.g., 9, 16, 25...) -// 3. A pronic number n*(n+1) (e.g., 6, 12, 20, 30...) 
-func IsJustifiableAfter(slot, finalizedSlot uint64) bool { - if slot < finalizedSlot { - return false - } - - delta := slot - finalizedSlot - - // Rule 1: first 5 slots always justifiable - if delta <= 5 { - return true - } - - // Rule 2: perfect square - s := isqrt(delta) - if s*s == delta { - return true - } - - // Rule 3: pronic number n*(n+1) - if s*(s+1) == delta { - return true - } - - return false -} - -// isqrt returns the integer square root of n (floor(sqrt(n))). -func isqrt(n uint64) uint64 { - if n == 0 { - return 0 - } - x := uint64(math.Sqrt(float64(n))) - // Correct for float64 imprecision near large values. - if (x+1)*(x+1) <= n { - return x + 1 - } - if x*x > n { - return x - 1 - } - return x -} diff --git a/types/ssz_compliance_test.go b/types/ssz_compliance_test.go new file mode 100644 index 0000000..703452f --- /dev/null +++ b/types/ssz_compliance_test.go @@ -0,0 +1,244 @@ +package types + +import ( + "crypto/sha256" + "encoding/hex" + "testing" +) + +// These tests verify hash_tree_root correctness against leanSpec's test vectors. +// Reference: leanSpec/tests/lean_spec/subspecs/ssz/test_hash.py + +// chunk pads hex payload to 32 bytes (right-padded with zeros). +func chunk(hexPayload string) [32]byte { + var c [32]byte + b, _ := hex.DecodeString(hexPayload) + copy(c[:], b) + return c +} + +// h computes SHA256(a || b) for two 32-byte chunks. +func h(a, b [32]byte) [32]byte { + var combined [64]byte + copy(combined[:32], a[:]) + copy(combined[32:], b[:]) + return sha256.Sum256(combined[:]) +} + +func rootHex(root [32]byte) string { + return hex.EncodeToString(root[:]) +} + +// Test uint64 hash_tree_root: little-endian padded to 32 bytes. +// Verified: leanSpec test_hash.py test_hash_tree_root_basic_uint +func TestHashTreeRootUint64Compliance(t *testing.T) { + // ChainConfig has a single uint64 field, so its root = merkleize([padded_uint64]) + // For a 1-field container, merkleize of 1 chunk = the chunk itself. 
+ cfg := &ChainConfig{GenesisTime: 0} + root, err := cfg.HashTreeRoot() + if err != nil { + t.Fatal(err) + } + // genesis_time=0 → chunk is all zeros → root should be all zeros + if root != [32]byte{} { + t.Fatalf("ChainConfig(0) root should be zero, got %s", rootHex(root)) + } + + cfg2 := &ChainConfig{GenesisTime: 1} + root2, err := cfg2.HashTreeRoot() + if err != nil { + t.Fatal(err) + } + // genesis_time=1 → LE bytes = 0x0100000000000000, padded to 32 + expected := chunk("0100000000000000") + if root2 != expected { + t.Fatalf("ChainConfig(1) expected %s, got %s", rootHex(expected), rootHex(root2)) + } +} + +// Test Checkpoint hash_tree_root: container with 2 fields (root, slot). +// Merkle tree: h(field0_chunk, field1_chunk) +// Verified: leanSpec Small container pattern from test_hash.py +func TestCheckpointHashTreeRootCompliance(t *testing.T) { + cp := &Checkpoint{ + Root: [32]byte{0xab}, + Slot: 0, + } + root, err := cp.HashTreeRoot() + if err != nil { + t.Fatal(err) + } + // Field 0: root = [0xab, 0, 0, ..., 0] (already 32 bytes, identity) + // Field 1: slot = 0 → chunk of all zeros + // Merkle: h(field0, field1) since 2 fields → depth 1 + field0 := chunk("ab") + field1 := chunk("") + expected := h(field0, field1) + if root != expected { + t.Fatalf("Checkpoint(0xab,0) expected %s, got %s", rootHex(expected), rootHex(root)) + } +} + +// Test Checkpoint with non-zero slot. +func TestCheckpointHashTreeRootWithSlot(t *testing.T) { + cp := &Checkpoint{ + Root: [32]byte{}, + Slot: 42, + } + root, err := cp.HashTreeRoot() + if err != nil { + t.Fatal(err) + } + field0 := chunk("") // root = zero + field1 := chunk("2a00000000000000") // slot = 42 LE + expected := h(field0, field1) + if root != expected { + t.Fatalf("Checkpoint(0,42) expected %s, got %s", rootHex(expected), rootHex(root)) + } +} + +// Test BlockHeader hash_tree_root: 5-field container. +// 5 fields → pad to 8 → depth 3 merkle tree. 
+func TestBlockHeaderHashTreeRootCompliance(t *testing.T) { + hdr := &BlockHeader{ + Slot: 1, + ProposerIndex: 0, + ParentRoot: [32]byte{0x01}, + StateRoot: [32]byte{0x02}, + BodyRoot: [32]byte{0x03}, + } + root, err := hdr.HashTreeRoot() + if err != nil { + t.Fatal(err) + } + // 5 fields padded to 8: + f0 := chunk("0100000000000000") // slot=1 LE + f1 := chunk("") // proposer_index=0 + f2 := chunk("01") // parent_root + f3 := chunk("02") // state_root + f4 := chunk("03") // body_root + f5 := chunk("") // padding + f6 := chunk("") // padding + f7 := chunk("") // padding + + expected := h( + h(h(f0, f1), h(f2, f3)), + h(h(f4, f5), h(f6, f7)), + ) + if root != expected { + t.Fatalf("BlockHeader expected %s, got %s", rootHex(expected), rootHex(root)) + } +} + +// Test Validator hash_tree_root: 2-field container. +// pubkey (52 bytes) → 2 chunks (32 + 20 padded), then merkleize to get field root. +func TestValidatorHashTreeRootCompliance(t *testing.T) { + v := &Validator{ + Pubkey: [52]byte{}, // all zeros + Index: 0, + } + root, err := v.HashTreeRoot() + if err != nil { + t.Fatal(err) + } + // Field 0: pubkey = 52 zero bytes → chunks: [zeros_32, zeros_20_padded_to_32] + // Merkleize 2 chunks → h(chunk0, chunk1) = h(zeros, zeros) = ZERO_HASHES[1] + pubkeyChunk0 := chunk("") + pubkeyChunk1 := chunk("") + pubkeyRoot := h(pubkeyChunk0, pubkeyChunk1) + // Field 1: index = 0 + indexRoot := chunk("") + // Container: h(pubkey_root, index_root) + expected := h(pubkeyRoot, indexRoot) + if root != expected { + t.Fatalf("Validator(zeros) expected %s, got %s", rootHex(expected), rootHex(root)) + } +} + +// Test AttestationData hash_tree_root: 4-field container. +// 4 fields → pad to 4 → depth 2 merkle tree. 
+func TestAttestationDataHashTreeRootCompliance(t *testing.T) { + d := &AttestationData{ + Slot: 0, + Head: &Checkpoint{Root: [32]byte{}, Slot: 0}, + Target: &Checkpoint{Root: [32]byte{}, Slot: 0}, + Source: &Checkpoint{Root: [32]byte{}, Slot: 0}, + } + root, err := d.HashTreeRoot() + if err != nil { + t.Fatal(err) + } + // All fields are zero → all field roots are h(zeros, zeros) or zeros + slotRoot := chunk("") + cpRoot := h(chunk(""), chunk("")) // Checkpoint(zero, 0) + expected := h( + h(slotRoot, cpRoot), + h(cpRoot, cpRoot), + ) + if root != expected { + t.Fatalf("AttestationData(zeros) expected %s, got %s", rootHex(expected), rootHex(root)) + } +} + +// Test determinism: same input always produces same root. +func TestHashTreeRootDeterminism(t *testing.T) { + s := &State{ + Config: &ChainConfig{GenesisTime: 1704085200}, + Slot: 100, + LatestBlockHeader: &BlockHeader{Slot: 99, ProposerIndex: 2, ParentRoot: [32]byte{0xff}}, + LatestJustified: &Checkpoint{Root: [32]byte{0xaa}, Slot: 50}, + LatestFinalized: &Checkpoint{Root: [32]byte{0xbb}, Slot: 30}, + HistoricalBlockHashes: [][]byte{make([]byte, 32), make([]byte, 32)}, + JustifiedSlots: NewBitlistSSZ(100), + Validators: []*Validator{{Pubkey: [52]byte{1}, Index: 0}, {Pubkey: [52]byte{2}, Index: 1}}, + JustificationsRoots: [][]byte{make([]byte, 32)}, + JustificationsValidators: NewBitlistSSZ(10), + } + root1, err := s.HashTreeRoot() + if err != nil { + t.Fatal(err) + } + root2, err := s.HashTreeRoot() + if err != nil { + t.Fatal(err) + } + if root1 != root2 { + t.Fatalf("hash_tree_root not deterministic: %s vs %s", rootHex(root1), rootHex(root2)) + } + if root1 == [32]byte{} { + t.Fatal("state root should not be zero for non-trivial state") + } +} + +// Test SSZ encoding size matches expected fixed sizes. 
+func TestSSZSizeCompliance(t *testing.T) { + // Checkpoint: 32 (root) + 8 (slot) = 40 bytes + cp := &Checkpoint{} + if cp.SizeSSZ() != 40 { + t.Fatalf("Checkpoint size: expected 40, got %d", cp.SizeSSZ()) + } + + // ChainConfig: 8 bytes + cfg := &ChainConfig{} + if cfg.SizeSSZ() != 8 { + t.Fatalf("ChainConfig size: expected 8, got %d", cfg.SizeSSZ()) + } + + // Validator: 52 (pubkey) + 8 (index) = 60 bytes + v := &Validator{} + if v.SizeSSZ() != 60 { + t.Fatalf("Validator size: expected 60, got %d", v.SizeSSZ()) + } + + // BlockHeader: 8+8+32+32+32 = 112 bytes + hdr := &BlockHeader{} + if hdr.SizeSSZ() != 112 { + t.Fatalf("BlockHeader size: expected 112, got %d", hdr.SizeSSZ()) + } + + // AttestationData: 8 + 40 + 40 + 40 = 128 bytes + ad := &AttestationData{Head: &Checkpoint{}, Target: &Checkpoint{}, Source: &Checkpoint{}} + if ad.SizeSSZ() != 128 { + t.Fatalf("AttestationData size: expected 128, got %d", ad.SizeSSZ()) + } +} diff --git a/types/ssz_test.go b/types/ssz_test.go new file mode 100644 index 0000000..1b41e69 --- /dev/null +++ b/types/ssz_test.go @@ -0,0 +1,199 @@ +package types + +import ( + "testing" +) + +func TestCheckpointSSZRoundtrip(t *testing.T) { + c := &Checkpoint{Root: [32]byte{0xab, 0xcd}, Slot: 42} + data, err := c.MarshalSSZ() + if err != nil { + t.Fatal(err) + } + c2 := &Checkpoint{} + if err := c2.UnmarshalSSZ(data); err != nil { + t.Fatal(err) + } + if c.Root != c2.Root || c.Slot != c2.Slot { + t.Fatal("roundtrip mismatch") + } +} + +func TestCheckpointHashTreeRoot(t *testing.T) { + c := &Checkpoint{Root: [32]byte{1}, Slot: 100} + root, err := c.HashTreeRoot() + if err != nil { + t.Fatal(err) + } + if root == [32]byte{} { + t.Fatal("root should not be zero") + } + // Deterministic + root2, _ := c.HashTreeRoot() + if root != root2 { + t.Fatal("hash_tree_root not deterministic") + } +} + +func TestBlockHeaderSSZRoundtrip(t *testing.T) { + h := &BlockHeader{ + Slot: 5, ProposerIndex: 2, + ParentRoot: [32]byte{1}, StateRoot: [32]byte{2}, 
BodyRoot: [32]byte{3}, + } + data, err := h.MarshalSSZ() + if err != nil { + t.Fatal(err) + } + h2 := &BlockHeader{} + if err := h2.UnmarshalSSZ(data); err != nil { + t.Fatal(err) + } + if h.Slot != h2.Slot || h.ProposerIndex != h2.ProposerIndex || + h.ParentRoot != h2.ParentRoot || h.StateRoot != h2.StateRoot || h.BodyRoot != h2.BodyRoot { + t.Fatal("roundtrip mismatch") + } +} + +func TestBlockHeaderHashTreeRoot(t *testing.T) { + h := &BlockHeader{ + Slot: 1, ProposerIndex: 0, + ParentRoot: [32]byte{0x01}, StateRoot: [32]byte{0x02}, BodyRoot: [32]byte{0x03}, + } + root, err := h.HashTreeRoot() + if err != nil { + t.Fatal(err) + } + if root == [32]byte{} { + t.Fatal("root should not be zero") + } +} + +func TestValidatorSSZRoundtrip(t *testing.T) { + v := &Validator{Pubkey: [52]byte{1, 2, 3}, Index: 7} + data, err := v.MarshalSSZ() + if err != nil { + t.Fatal(err) + } + v2 := &Validator{} + if err := v2.UnmarshalSSZ(data); err != nil { + t.Fatal(err) + } + if v.Pubkey != v2.Pubkey || v.Index != v2.Index { + t.Fatal("roundtrip mismatch") + } +} + +func TestAttestationDataSSZRoundtrip(t *testing.T) { + d := &AttestationData{ + Slot: 5, + Head: &Checkpoint{Root: [32]byte{1}, Slot: 5}, + Target: &Checkpoint{Root: [32]byte{2}, Slot: 4}, + Source: &Checkpoint{Root: [32]byte{3}, Slot: 3}, + } + data, err := d.MarshalSSZ() + if err != nil { + t.Fatal(err) + } + d2 := &AttestationData{} + if err := d2.UnmarshalSSZ(data); err != nil { + t.Fatal(err) + } + if d.Slot != d2.Slot || d.Head.Slot != d2.Head.Slot || d.Target.Root != d2.Target.Root { + t.Fatal("roundtrip mismatch") + } +} + +func TestAttestationDataHashTreeRoot(t *testing.T) { + d := &AttestationData{ + Slot: 5, + Head: &Checkpoint{Root: [32]byte{1}, Slot: 5}, + Target: &Checkpoint{Root: [32]byte{2}, Slot: 4}, + Source: &Checkpoint{Root: [32]byte{3}, Slot: 3}, + } + root, err := d.HashTreeRoot() + if err != nil { + t.Fatal(err) + } + if root == [32]byte{} { + t.Fatal("root should not be zero") + } + // Different 
data should produce different root + d.Slot = 99 + root2, _ := d.HashTreeRoot() + if root == root2 { + t.Fatal("different data should produce different roots") + } +} + +func TestChainConfigSSZRoundtrip(t *testing.T) { + c := &ChainConfig{GenesisTime: 1704085200} + data, err := c.MarshalSSZ() + if err != nil { + t.Fatal(err) + } + c2 := &ChainConfig{} + if err := c2.UnmarshalSSZ(data); err != nil { + t.Fatal(err) + } + if c.GenesisTime != c2.GenesisTime { + t.Fatal("roundtrip mismatch") + } +} + +func TestStateSSZRoundtrip(t *testing.T) { + s := &State{ + Config: &ChainConfig{GenesisTime: 1000}, + Slot: 10, + LatestBlockHeader: &BlockHeader{Slot: 9}, + LatestJustified: &Checkpoint{Slot: 5}, + LatestFinalized: &Checkpoint{Slot: 3}, + HistoricalBlockHashes: [][]byte{ + make([]byte, 32), + }, + JustifiedSlots: NewBitlistSSZ(10), + Validators: []*Validator{{Pubkey: [52]byte{1}, Index: 0}}, + JustificationsRoots: [][]byte{make([]byte, 32)}, + JustificationsValidators: NewBitlistSSZ(5), + } + data, err := s.MarshalSSZ() + if err != nil { + t.Fatal(err) + } + s2 := &State{} + if err := s2.UnmarshalSSZ(data); err != nil { + t.Fatal(err) + } + if s.Slot != s2.Slot || s.Config.GenesisTime != s2.Config.GenesisTime { + t.Fatal("roundtrip mismatch") + } + if len(s.Validators) != len(s2.Validators) { + t.Fatal("validator count mismatch") + } +} + +func TestStateHashTreeRoot(t *testing.T) { + s := &State{ + Config: &ChainConfig{GenesisTime: 1000}, + Slot: 10, + LatestBlockHeader: &BlockHeader{Slot: 9}, + LatestJustified: &Checkpoint{Slot: 5}, + LatestFinalized: &Checkpoint{Slot: 3}, + HistoricalBlockHashes: [][]byte{make([]byte, 32)}, + JustifiedSlots: NewBitlistSSZ(10), + Validators: []*Validator{{Pubkey: [52]byte{1}, Index: 0}}, + JustificationsRoots: [][]byte{make([]byte, 32)}, + JustificationsValidators: NewBitlistSSZ(5), + } + root, err := s.HashTreeRoot() + if err != nil { + t.Fatal(err) + } + if root == [32]byte{} { + t.Fatal("state root should not be zero") + } + // 
Deterministic + root2, _ := s.HashTreeRoot() + if root != root2 { + t.Fatal("hash_tree_root not deterministic") + } +} diff --git a/types/state.go b/types/state.go index 75363f3..d147ff4 100644 --- a/types/state.go +++ b/types/state.go @@ -1,74 +1,20 @@ package types -// SSZ limits matching the reference spec. -const ( - HistoricalRootsLimit = 1 << 18 // 262144 - ValidatorRegistryLimit = 1 << 12 // 4096 - JustificationValsLimit = HistoricalRootsLimit * ValidatorRegistryLimit // 1073741824 -) - -// Validator represents a validator in the registry. -type Validator struct { - Pubkey [52]byte `ssz-size:"52"` - Index uint64 -} - -// State is the main consensus state object. +// State is the full beacon consensus state at a given slot. type State struct { - Config *Config `json:"config"` + Config *ChainConfig `json:"config"` Slot uint64 `json:"slot"` LatestBlockHeader *BlockHeader `json:"latest_block_header"` LatestJustified *Checkpoint `json:"latest_justified"` LatestFinalized *Checkpoint `json:"latest_finalized"` - HistoricalBlockHashes [][32]byte `json:"historical_block_hashes" ssz-max:"262144"` - JustifiedSlots []byte `json:"justified_slots" ssz:"bitlist" ssz-max:"262144"` - Validators []*Validator `json:"validators" ssz-max:"4096"` - JustificationsRoots [][32]byte `json:"justifications_roots" ssz-max:"262144"` - JustificationsValidators []byte `json:"justifications_validators" ssz:"bitlist" ssz-max:"1073741824"` + HistoricalBlockHashes [][]byte `json:"historical_block_hashes" ssz-max:"262144" ssz-size:"?,32"` + JustifiedSlots []byte `json:"justified_slots" ssz:"bitlist" ssz-max:"262144"` + Validators []*Validator `json:"validators" ssz-max:"4096"` + JustificationsRoots [][]byte `json:"justifications_roots" ssz-max:"262144" ssz-size:"?,32"` + JustificationsValidators []byte `json:"justifications_validators" ssz:"bitlist" ssz-max:"1073741824"` } -// Copy returns a deep copy of the state. 
-func (s *State) Copy() *State { - out := &State{ - Slot: s.Slot, - } - - if s.Config != nil { - out.Config = &Config{GenesisTime: s.Config.GenesisTime} - } - if s.LatestBlockHeader != nil { - h := *s.LatestBlockHeader - out.LatestBlockHeader = &h - } - if s.LatestJustified != nil { - out.LatestJustified = &Checkpoint{Root: s.LatestJustified.Root, Slot: s.LatestJustified.Slot} - } - if s.LatestFinalized != nil { - out.LatestFinalized = &Checkpoint{Root: s.LatestFinalized.Root, Slot: s.LatestFinalized.Slot} - } - if s.HistoricalBlockHashes != nil { - out.HistoricalBlockHashes = make([][32]byte, len(s.HistoricalBlockHashes)) - copy(out.HistoricalBlockHashes, s.HistoricalBlockHashes) - } - if s.JustifiedSlots != nil { - out.JustifiedSlots = make([]byte, len(s.JustifiedSlots)) - copy(out.JustifiedSlots, s.JustifiedSlots) - } - if s.Validators != nil { - out.Validators = make([]*Validator, len(s.Validators)) - for i, v := range s.Validators { - cp := *v - out.Validators[i] = &cp - } - } - if s.JustificationsRoots != nil { - out.JustificationsRoots = make([][32]byte, len(s.JustificationsRoots)) - copy(out.JustificationsRoots, s.JustificationsRoots) - } - if s.JustificationsValidators != nil { - out.JustificationsValidators = make([]byte, len(s.JustificationsValidators)) - copy(out.JustificationsValidators, s.JustificationsValidators) - } - - return out +// NumValidators returns the validator count. +func (s *State) NumValidators() uint64 { + return uint64(len(s.Validators)) } diff --git a/types/state_encoding.go b/types/state_encoding.go index 50c8087..48008a8 100644 --- a/types/state_encoding.go +++ b/types/state_encoding.go @@ -1,5 +1,5 @@ // Code generated by fastssz. DO NOT EDIT. 
-// Hash: 4ca450d8ebae4ea8e8a0f7213b44a8308d018191c333d5a7612a17cea1a4b4ac +// Hash: 52b67ea3b7cb79482579d5c3ff659d3473b41bec2511e9c0fd826505bb9f9f72 // Version: 0.1.3 package types @@ -7,71 +7,6 @@ import ( ssz "github.com/ferranbt/fastssz" ) -// MarshalSSZ ssz marshals the Validator object -func (v *Validator) MarshalSSZ() ([]byte, error) { - return ssz.MarshalSSZ(v) -} - -// MarshalSSZTo ssz marshals the Validator object to a target array -func (v *Validator) MarshalSSZTo(buf []byte) (dst []byte, err error) { - dst = buf - - // Field (0) 'Pubkey' - dst = append(dst, v.Pubkey[:]...) - - // Field (1) 'Index' - dst = ssz.MarshalUint64(dst, v.Index) - - return -} - -// UnmarshalSSZ ssz unmarshals the Validator object -func (v *Validator) UnmarshalSSZ(buf []byte) error { - var err error - size := uint64(len(buf)) - if size != 60 { - return ssz.ErrSize - } - - // Field (0) 'Pubkey' - copy(v.Pubkey[:], buf[0:52]) - - // Field (1) 'Index' - v.Index = ssz.UnmarshallUint64(buf[52:60]) - - return err -} - -// SizeSSZ returns the ssz encoded size in bytes for the Validator object -func (v *Validator) SizeSSZ() (size int) { - size = 60 - return -} - -// HashTreeRoot ssz hashes the Validator object -func (v *Validator) HashTreeRoot() ([32]byte, error) { - return ssz.HashWithDefaultHasher(v) -} - -// HashTreeRootWith ssz hashes the Validator object with a hasher -func (v *Validator) HashTreeRootWith(hh ssz.HashWalker) (err error) { - indx := hh.Index() - - // Field (0) 'Pubkey' - hh.PutBytes(v.Pubkey[:]) - - // Field (1) 'Index' - hh.PutUint64(v.Index) - - hh.Merkleize(indx) - return -} - -// GetTree ssz hashes the Validator object -func (v *Validator) GetTree() (*ssz.Node, error) { - return ssz.ProofTree(v) -} - // MarshalSSZ ssz marshals the State object func (s *State) MarshalSSZ() ([]byte, error) { return ssz.MarshalSSZ(s) @@ -84,7 +19,7 @@ func (s *State) MarshalSSZTo(buf []byte) (dst []byte, err error) { // Field (0) 'Config' if s.Config == nil { - s.Config = new(Config) 
+ s.Config = new(ChainConfig) } if dst, err = s.Config.MarshalSSZTo(dst); err != nil { return @@ -142,7 +77,11 @@ func (s *State) MarshalSSZTo(buf []byte) (dst []byte, err error) { return } for ii := 0; ii < len(s.HistoricalBlockHashes); ii++ { - dst = append(dst, s.HistoricalBlockHashes[ii][:]...) + if size := len(s.HistoricalBlockHashes[ii]); size != 32 { + err = ssz.ErrBytesLengthFn("State.HistoricalBlockHashes[ii]", size, 32) + return + } + dst = append(dst, s.HistoricalBlockHashes[ii]...) } // Field (6) 'JustifiedSlots' @@ -169,7 +108,11 @@ func (s *State) MarshalSSZTo(buf []byte) (dst []byte, err error) { return } for ii := 0; ii < len(s.JustificationsRoots); ii++ { - dst = append(dst, s.JustificationsRoots[ii][:]...) + if size := len(s.JustificationsRoots[ii]); size != 32 { + err = ssz.ErrBytesLengthFn("State.JustificationsRoots[ii]", size, 32) + return + } + dst = append(dst, s.JustificationsRoots[ii]...) } // Field (9) 'JustificationsValidators' @@ -195,7 +138,7 @@ func (s *State) UnmarshalSSZ(buf []byte) error { // Field (0) 'Config' if s.Config == nil { - s.Config = new(Config) + s.Config = new(ChainConfig) } if err = s.Config.UnmarshalSSZ(buf[0:8]); err != nil { return err @@ -264,9 +207,12 @@ func (s *State) UnmarshalSSZ(buf []byte) error { if err != nil { return err } - s.HistoricalBlockHashes = make([][32]byte, num) + s.HistoricalBlockHashes = make([][]byte, num) for ii := 0; ii < num; ii++ { - copy(s.HistoricalBlockHashes[ii][:], buf[ii*32:(ii+1)*32]) + if cap(s.HistoricalBlockHashes[ii]) == 0 { + s.HistoricalBlockHashes[ii] = make([]byte, 0, len(buf[ii*32:(ii+1)*32])) + } + s.HistoricalBlockHashes[ii] = append(s.HistoricalBlockHashes[ii], buf[ii*32:(ii+1)*32]...) 
} } @@ -307,9 +253,12 @@ func (s *State) UnmarshalSSZ(buf []byte) error { if err != nil { return err } - s.JustificationsRoots = make([][32]byte, num) + s.JustificationsRoots = make([][]byte, num) for ii := 0; ii < num; ii++ { - copy(s.JustificationsRoots[ii][:], buf[ii*32:(ii+1)*32]) + if cap(s.JustificationsRoots[ii]) == 0 { + s.JustificationsRoots[ii] = make([]byte, 0, len(buf[ii*32:(ii+1)*32])) + } + s.JustificationsRoots[ii] = append(s.JustificationsRoots[ii], buf[ii*32:(ii+1)*32]...) } } @@ -360,7 +309,7 @@ func (s *State) HashTreeRootWith(hh ssz.HashWalker) (err error) { // Field (0) 'Config' if s.Config == nil { - s.Config = new(Config) + s.Config = new(ChainConfig) } if err = s.Config.HashTreeRootWith(hh); err != nil { return @@ -401,7 +350,11 @@ func (s *State) HashTreeRootWith(hh ssz.HashWalker) (err error) { } subIndx := hh.Index() for _, i := range s.HistoricalBlockHashes { - hh.Append(i[:]) + if len(i) != 32 { + err = ssz.ErrBytesLength + return + } + hh.Append(i) } numItems := uint64(len(s.HistoricalBlockHashes)) hh.MerkleizeWithMixin(subIndx, numItems, 262144) @@ -438,7 +391,11 @@ func (s *State) HashTreeRootWith(hh ssz.HashWalker) (err error) { } subIndx := hh.Index() for _, i := range s.JustificationsRoots { - hh.Append(i[:]) + if len(i) != 32 { + err = ssz.ErrBytesLength + return + } + hh.Append(i) } numItems := uint64(len(s.JustificationsRoots)) hh.MerkleizeWithMixin(subIndx, numItems, 262144) diff --git a/types/validator.go b/types/validator.go new file mode 100644 index 0000000..289dd7d --- /dev/null +++ b/types/validator.go @@ -0,0 +1,8 @@ +package types + +// Validator represents a consensus validator. Devnet-3 uses a single pubkey +// for both attestation and proposal duties. 
+type Validator struct { + Pubkey [PubkeySize]byte `json:"pubkey" ssz-size:"52"` + Index uint64 `json:"index"` +} diff --git a/types/validator_encoding.go b/types/validator_encoding.go new file mode 100644 index 0000000..0e6ae4b --- /dev/null +++ b/types/validator_encoding.go @@ -0,0 +1,73 @@ +// Code generated by fastssz. DO NOT EDIT. +// Hash: 52b67ea3b7cb79482579d5c3ff659d3473b41bec2511e9c0fd826505bb9f9f72 +// Version: 0.1.3 +package types + +import ( + ssz "github.com/ferranbt/fastssz" +) + +// MarshalSSZ ssz marshals the Validator object +func (v *Validator) MarshalSSZ() ([]byte, error) { + return ssz.MarshalSSZ(v) +} + +// MarshalSSZTo ssz marshals the Validator object to a target array +func (v *Validator) MarshalSSZTo(buf []byte) (dst []byte, err error) { + dst = buf + + // Field (0) 'Pubkey' + dst = append(dst, v.Pubkey[:]...) + + // Field (1) 'Index' + dst = ssz.MarshalUint64(dst, v.Index) + + return +} + +// UnmarshalSSZ ssz unmarshals the Validator object +func (v *Validator) UnmarshalSSZ(buf []byte) error { + var err error + size := uint64(len(buf)) + if size != 60 { + return ssz.ErrSize + } + + // Field (0) 'Pubkey' + copy(v.Pubkey[:], buf[0:52]) + + // Field (1) 'Index' + v.Index = ssz.UnmarshallUint64(buf[52:60]) + + return err +} + +// SizeSSZ returns the ssz encoded size in bytes for the Validator object +func (v *Validator) SizeSSZ() (size int) { + size = 60 + return +} + +// HashTreeRoot ssz hashes the Validator object +func (v *Validator) HashTreeRoot() ([32]byte, error) { + return ssz.HashWithDefaultHasher(v) +} + +// HashTreeRootWith ssz hashes the Validator object with a hasher +func (v *Validator) HashTreeRootWith(hh ssz.HashWalker) (err error) { + indx := hh.Index() + + // Field (0) 'Pubkey' + hh.PutBytes(v.Pubkey[:]) + + // Field (1) 'Index' + hh.PutUint64(v.Index) + + hh.Merkleize(indx) + return +} + +// GetTree ssz hashes the Validator object +func (v *Validator) GetTree() (*ssz.Node, error) { + return ssz.ProofTree(v) +} diff --git 
a/types/vote.go b/types/vote.go deleted file mode 100644 index ccbc930..0000000 --- a/types/vote.go +++ /dev/null @@ -1,23 +0,0 @@ -package types - -// AttestationData contains the vote data for a validator's attestation. -type AttestationData struct { - Slot uint64 - Head *Checkpoint - Target *Checkpoint - Source *Checkpoint -} - -// Attestation wraps a validator ID and attestation data (unsigned, goes in block body). -type Attestation struct { - ValidatorID uint64 - Data *AttestationData -} - -// SignedAttestation is the gossip envelope for attestations. -// Devnet-2 format is flattened: validator_id + attestation_data + signature. -type SignedAttestation struct { - ValidatorID uint64 - Message *AttestationData - Signature [3112]byte `ssz-size:"3112"` -} diff --git a/types/vote_encoding.go b/types/vote_encoding.go deleted file mode 100644 index 6e54160..0000000 --- a/types/vote_encoding.go +++ /dev/null @@ -1,305 +0,0 @@ -// Code generated by fastssz. DO NOT EDIT. -// Hash: 13e17ee916722e5a044ab4ad4cb964b606dda40bcb9ce66b227a3d82f050aeb5 -// Version: 0.1.3 -package types - -import ( - ssz "github.com/ferranbt/fastssz" -) - -// MarshalSSZ ssz marshals the AttestationData object -func (a *AttestationData) MarshalSSZ() ([]byte, error) { - return ssz.MarshalSSZ(a) -} - -// MarshalSSZTo ssz marshals the AttestationData object to a target array -func (a *AttestationData) MarshalSSZTo(buf []byte) (dst []byte, err error) { - dst = buf - - // Field (0) 'Slot' - dst = ssz.MarshalUint64(dst, a.Slot) - - // Field (1) 'Head' - if a.Head == nil { - a.Head = new(Checkpoint) - } - if dst, err = a.Head.MarshalSSZTo(dst); err != nil { - return - } - - // Field (2) 'Target' - if a.Target == nil { - a.Target = new(Checkpoint) - } - if dst, err = a.Target.MarshalSSZTo(dst); err != nil { - return - } - - // Field (3) 'Source' - if a.Source == nil { - a.Source = new(Checkpoint) - } - if dst, err = a.Source.MarshalSSZTo(dst); err != nil { - return - } - - return -} - -// UnmarshalSSZ 
ssz unmarshals the AttestationData object -func (a *AttestationData) UnmarshalSSZ(buf []byte) error { - var err error - size := uint64(len(buf)) - if size != 128 { - return ssz.ErrSize - } - - // Field (0) 'Slot' - a.Slot = ssz.UnmarshallUint64(buf[0:8]) - - // Field (1) 'Head' - if a.Head == nil { - a.Head = new(Checkpoint) - } - if err = a.Head.UnmarshalSSZ(buf[8:48]); err != nil { - return err - } - - // Field (2) 'Target' - if a.Target == nil { - a.Target = new(Checkpoint) - } - if err = a.Target.UnmarshalSSZ(buf[48:88]); err != nil { - return err - } - - // Field (3) 'Source' - if a.Source == nil { - a.Source = new(Checkpoint) - } - if err = a.Source.UnmarshalSSZ(buf[88:128]); err != nil { - return err - } - - return err -} - -// SizeSSZ returns the ssz encoded size in bytes for the AttestationData object -func (a *AttestationData) SizeSSZ() (size int) { - size = 128 - return -} - -// HashTreeRoot ssz hashes the AttestationData object -func (a *AttestationData) HashTreeRoot() ([32]byte, error) { - return ssz.HashWithDefaultHasher(a) -} - -// HashTreeRootWith ssz hashes the AttestationData object with a hasher -func (a *AttestationData) HashTreeRootWith(hh ssz.HashWalker) (err error) { - indx := hh.Index() - - // Field (0) 'Slot' - hh.PutUint64(a.Slot) - - // Field (1) 'Head' - if a.Head == nil { - a.Head = new(Checkpoint) - } - if err = a.Head.HashTreeRootWith(hh); err != nil { - return - } - - // Field (2) 'Target' - if a.Target == nil { - a.Target = new(Checkpoint) - } - if err = a.Target.HashTreeRootWith(hh); err != nil { - return - } - - // Field (3) 'Source' - if a.Source == nil { - a.Source = new(Checkpoint) - } - if err = a.Source.HashTreeRootWith(hh); err != nil { - return - } - - hh.Merkleize(indx) - return -} - -// GetTree ssz hashes the AttestationData object -func (a *AttestationData) GetTree() (*ssz.Node, error) { - return ssz.ProofTree(a) -} - -// MarshalSSZ ssz marshals the Attestation object -func (a *Attestation) MarshalSSZ() ([]byte, error) { 
- return ssz.MarshalSSZ(a) -} - -// MarshalSSZTo ssz marshals the Attestation object to a target array -func (a *Attestation) MarshalSSZTo(buf []byte) (dst []byte, err error) { - dst = buf - - // Field (0) 'ValidatorID' - dst = ssz.MarshalUint64(dst, a.ValidatorID) - - // Field (1) 'Data' - if a.Data == nil { - a.Data = new(AttestationData) - } - if dst, err = a.Data.MarshalSSZTo(dst); err != nil { - return - } - - return -} - -// UnmarshalSSZ ssz unmarshals the Attestation object -func (a *Attestation) UnmarshalSSZ(buf []byte) error { - var err error - size := uint64(len(buf)) - if size != 136 { - return ssz.ErrSize - } - - // Field (0) 'ValidatorID' - a.ValidatorID = ssz.UnmarshallUint64(buf[0:8]) - - // Field (1) 'Data' - if a.Data == nil { - a.Data = new(AttestationData) - } - if err = a.Data.UnmarshalSSZ(buf[8:136]); err != nil { - return err - } - - return err -} - -// SizeSSZ returns the ssz encoded size in bytes for the Attestation object -func (a *Attestation) SizeSSZ() (size int) { - size = 136 - return -} - -// HashTreeRoot ssz hashes the Attestation object -func (a *Attestation) HashTreeRoot() ([32]byte, error) { - return ssz.HashWithDefaultHasher(a) -} - -// HashTreeRootWith ssz hashes the Attestation object with a hasher -func (a *Attestation) HashTreeRootWith(hh ssz.HashWalker) (err error) { - indx := hh.Index() - - // Field (0) 'ValidatorID' - hh.PutUint64(a.ValidatorID) - - // Field (1) 'Data' - if a.Data == nil { - a.Data = new(AttestationData) - } - if err = a.Data.HashTreeRootWith(hh); err != nil { - return - } - - hh.Merkleize(indx) - return -} - -// GetTree ssz hashes the Attestation object -func (a *Attestation) GetTree() (*ssz.Node, error) { - return ssz.ProofTree(a) -} - -// MarshalSSZ ssz marshals the SignedAttestation object -func (s *SignedAttestation) MarshalSSZ() ([]byte, error) { - return ssz.MarshalSSZ(s) -} - -// MarshalSSZTo ssz marshals the SignedAttestation object to a target array -func (s *SignedAttestation) MarshalSSZTo(buf 
[]byte) (dst []byte, err error) { - dst = buf - - // Field (0) 'ValidatorID' - dst = ssz.MarshalUint64(dst, s.ValidatorID) - - // Field (1) 'Message' - if s.Message == nil { - s.Message = new(AttestationData) - } - if dst, err = s.Message.MarshalSSZTo(dst); err != nil { - return - } - - // Field (2) 'Signature' - dst = append(dst, s.Signature[:]...) - - return -} - -// UnmarshalSSZ ssz unmarshals the SignedAttestation object -func (s *SignedAttestation) UnmarshalSSZ(buf []byte) error { - var err error - size := uint64(len(buf)) - if size != 3248 { - return ssz.ErrSize - } - - // Field (0) 'ValidatorID' - s.ValidatorID = ssz.UnmarshallUint64(buf[0:8]) - - // Field (1) 'Message' - if s.Message == nil { - s.Message = new(AttestationData) - } - if err = s.Message.UnmarshalSSZ(buf[8:136]); err != nil { - return err - } - - // Field (2) 'Signature' - copy(s.Signature[:], buf[136:3248]) - - return err -} - -// SizeSSZ returns the ssz encoded size in bytes for the SignedAttestation object -func (s *SignedAttestation) SizeSSZ() (size int) { - size = 3248 - return -} - -// HashTreeRoot ssz hashes the SignedAttestation object -func (s *SignedAttestation) HashTreeRoot() ([32]byte, error) { - return ssz.HashWithDefaultHasher(s) -} - -// HashTreeRootWith ssz hashes the SignedAttestation object with a hasher -func (s *SignedAttestation) HashTreeRootWith(hh ssz.HashWalker) (err error) { - indx := hh.Index() - - // Field (0) 'ValidatorID' - hh.PutUint64(s.ValidatorID) - - // Field (1) 'Message' - if s.Message == nil { - s.Message = new(AttestationData) - } - if err = s.Message.HashTreeRootWith(hh); err != nil { - return - } - - // Field (2) 'Signature' - hh.PutBytes(s.Signature[:]) - - hh.Merkleize(indx) - return -} - -// GetTree ssz hashes the SignedAttestation object -func (s *SignedAttestation) GetTree() (*ssz.Node, error) { - return ssz.ProofTree(s) -} diff --git a/validators.yaml b/validators.yaml deleted file mode 100644 index 713ce1e..0000000 --- a/validators.yaml +++ 
/dev/null @@ -1,7 +0,0 @@ -assignments: - - node_name: node0 - validators: [0, 1] - - node_name: node1 - validators: [2, 3] - - node_name: node2 - validators: [4] diff --git a/xmss/aggregate_test.go b/xmss/aggregate_test.go new file mode 100644 index 0000000..efe64a2 --- /dev/null +++ b/xmss/aggregate_test.go @@ -0,0 +1,101 @@ +package xmss + +import ( + "testing" +) + +// TestAggregateSignaturesRoundtrip generates keys, signs, serializes signatures +// to SSZ bytes, deserializes back, and aggregates — same path as the engine. +func TestAggregateSignaturesRoundtrip(t *testing.T) { + // Generate two keypairs. + kp1, err := GenerateKeyPair("agg-test-validator-0", 0, 1<<18) + if err != nil { + t.Fatalf("keygen 0: %v", err) + } + defer kp1.Close() + + kp2, err := GenerateKeyPair("agg-test-validator-1", 0, 1<<18) + if err != nil { + t.Fatalf("keygen 1: %v", err) + } + defer kp2.Close() + + // Sign the same message at slot 0. + var message [32]byte + message[0] = 0xab + + sig1Bytes, err := kp1.Sign(0, message) + if err != nil { + t.Fatalf("sign 0: %v", err) + } + sig2Bytes, err := kp2.Sign(0, message) + if err != nil { + t.Fatalf("sign 1: %v", err) + } + + // Verify individually (same path as onGossipAttestation). + pk1Bytes, _ := kp1.PublicKeyBytes() + pk2Bytes, _ := kp2.PublicKeyBytes() + + valid, err := VerifySignatureSSZ(pk1Bytes, 0, message, sig1Bytes) + if err != nil || !valid { + t.Fatalf("verify sig1: valid=%v err=%v", valid, err) + } + valid, err = VerifySignatureSSZ(pk2Bytes, 0, message, sig2Bytes) + if err != nil || !valid { + t.Fatalf("verify sig2: valid=%v err=%v", valid, err) + } + + // Now parse pubkeys and signatures to opaque handles (same path as AggregateCommitteeSignatures). 
+ cpk1, err := ParsePublicKey(pk1Bytes) + if err != nil { + t.Fatalf("parse pk1: %v", err) + } + defer FreePublicKey(cpk1) + + cpk2, err := ParsePublicKey(pk2Bytes) + if err != nil { + t.Fatalf("parse pk2: %v", err) + } + defer FreePublicKey(cpk2) + + csig1, err := ParseSignature(sig1Bytes[:]) + if err != nil { + t.Fatalf("parse sig1: %v", err) + } + defer FreeSignature(csig1) + + csig2, err := ParseSignature(sig2Bytes[:]) + if err != nil { + t.Fatalf("parse sig2: %v", err) + } + defer FreeSignature(csig2) + + // Aggregate. + EnsureProverReady() + + proofBytes, err := AggregateSignatures( + []CPubKey{cpk1, cpk2}, + []CSig{csig1, csig2}, + message, + 0, + ) + if err != nil { + t.Fatalf("aggregate failed: %v", err) + } + if len(proofBytes) == 0 { + t.Fatal("empty proof") + } + + t.Logf("aggregation succeeded: proof size = %d bytes", len(proofBytes)) + + // Verify the aggregated proof. + EnsureVerifierReady() + + err = VerifyAggregatedSignature(proofBytes, []CPubKey{cpk1, cpk2}, message, 0) + if err != nil { + t.Fatalf("verify aggregated failed: %v", err) + } + + t.Log("aggregated verification succeeded") +} diff --git a/xmss/block_roundtrip_test.go b/xmss/block_roundtrip_test.go new file mode 100644 index 0000000..dc21a5d --- /dev/null +++ b/xmss/block_roundtrip_test.go @@ -0,0 +1,101 @@ +package xmss + +import ( + "testing" + + "github.com/geanlabs/gean/types" +) + +// TestProposerSigThroughBlockSSZ simulates the exact P2P path: +// Node1 signs → builds SignedBlockWithAttestation → SSZ marshal → SSZ unmarshal → +// extract ProposerSignature → ParseSignature → aggregate. 
+func TestProposerSigThroughBlockSSZ(t *testing.T) { + kp, err := GenerateKeyPair("block-roundtrip-0", 0, 1<<18) + if err != nil { + t.Fatalf("keygen: %v", err) + } + defer kp.Close() + + pkBytes, _ := kp.PublicKeyBytes() + + // Sign a message (attestation data root) + var attDataRoot [32]byte + attDataRoot[0] = 0xab + attDataRoot[1] = 0xcd + + sig, err := kp.Sign(1, attDataRoot) // slot 1 + if err != nil { + t.Fatalf("sign: %v", err) + } + + // Build a SignedBlockWithAttestation with this signature as ProposerSignature + signedBlock := &types.SignedBlockWithAttestation{ + Block: &types.BlockWithAttestation{ + Block: &types.Block{ + Slot: 1, + ProposerIndex: 0, + Body: &types.BlockBody{}, + }, + ProposerAttestation: &types.Attestation{ + ValidatorID: 0, + Data: &types.AttestationData{ + Slot: 1, + Head: &types.Checkpoint{}, + Target: &types.Checkpoint{}, + Source: &types.Checkpoint{}, + }, + }, + }, + Signature: &types.BlockSignatures{ + ProposerSignature: sig, + }, + } + + // SSZ marshal (simulates P2P send) + encoded, err := signedBlock.MarshalSSZ() + if err != nil { + t.Fatalf("marshal: %v", err) + } + + // SSZ unmarshal (simulates P2P receive) + decoded := &types.SignedBlockWithAttestation{} + if err := decoded.UnmarshalSSZ(encoded); err != nil { + t.Fatalf("unmarshal: %v", err) + } + + // Extract ProposerSignature from decoded block (this is what processProposerAttestation does) + extractedSig := decoded.Signature.ProposerSignature + + // Verify the raw bytes match + if extractedSig != sig { + t.Fatal("signature bytes changed through block SSZ round-trip") + } + + // Parse to C handle (what processProposerAttestation does) + csig, err := ParseSignature(extractedSig[:]) + if err != nil { + t.Fatalf("parse sig: %v", err) + } + defer FreeSignature(csig) + + cpk, err := ParsePublicKey(pkBytes) + if err != nil { + t.Fatalf("parse pk: %v", err) + } + defer FreePublicKey(cpk) + + // Aggregate with slot=1 and the attestation data root as message + EnsureProverReady() 
+ proof, err := AggregateSignatures([]CPubKey{cpk}, []CSig{csig}, attDataRoot, 1) + if err != nil { + t.Fatalf("aggregate FAILED: %v", err) + } + t.Logf("aggregate succeeded: proof=%d bytes", len(proof)) + + // Verify + EnsureVerifierReady() + if err := VerifyAggregatedSignature(proof, []CPubKey{cpk}, attDataRoot, 1); err != nil { + t.Fatalf("verify FAILED: %v", err) + } + t.Log("verify succeeded") +} diff --git a/xmss/ffi.go b/xmss/ffi.go new file mode 100644 index 0000000..455741f --- /dev/null +++ b/xmss/ffi.go @@ -0,0 +1,346 @@ +package xmss + +// CGo bindings to gean's Rust glue libraries (hashsig-glue + multisig-glue). +// +// Build the FFI libraries first: +// make ffi +// +// Static libraries are compiled to crypto/rust/target/release/. + +// #cgo CFLAGS: -I. +// #cgo linux LDFLAGS: -L${SRCDIR}/rust/target/release -lhashsig_glue -lmultisig_glue -lm -ldl -lpthread +// #cgo darwin LDFLAGS: -L${SRCDIR}/rust/target/release -lhashsig_glue -lmultisig_glue -lm -ldl -lpthread -framework CoreFoundation -framework SystemConfiguration -framework Security +// +// #include <stdlib.h> +// #include <stdint.h> +// #include <stdbool.h> +// +// // Opaque types from hashsig-glue +// typedef struct KeyPair KeyPair; +// typedef struct PublicKey PublicKey; +// typedef struct PrivateKey PrivateKey; +// typedef struct Signature Signature; +// +// // Opaque type from multisig-glue +// typedef struct Devnet2XmssAggregateSignature Devnet2XmssAggregateSignature; +// +// // --- hashsig-glue FFI (rs) --- +// +// KeyPair* hashsig_keypair_from_ssz( +// const uint8_t* private_key_ptr, size_t private_key_len, +// const uint8_t* public_key_ptr, size_t public_key_len); +// void hashsig_keypair_free(KeyPair* keypair); +// const PublicKey* hashsig_keypair_get_public_key(const KeyPair* keypair); +// const PrivateKey* hashsig_keypair_get_private_key(const KeyPair* keypair); +// +// PublicKey* hashsig_public_key_from_ssz(const uint8_t* public_key_ptr, size_t public_key_len); +// void hashsig_public_key_free(PublicKey* public_key);
+// +// Signature* hashsig_sign(const PrivateKey* private_key, const uint8_t* message_ptr, uint32_t epoch); +// void hashsig_signature_free(Signature* signature); +// size_t hashsig_signature_to_bytes(const Signature* signature, uint8_t* buffer, size_t buffer_len); +// Signature* hashsig_signature_from_ssz(const uint8_t* signature_ptr, size_t signature_len); +// +// int hashsig_verify(const PublicKey* public_key, const uint8_t* message_ptr, +// uint32_t epoch, const Signature* signature); +// int hashsig_verify_ssz(const uint8_t* pubkey_bytes, size_t pubkey_len, +// const uint8_t* message, uint32_t epoch, +// const uint8_t* signature_bytes, size_t signature_len); +// +// size_t hashsig_public_key_to_bytes(const PublicKey* public_key, uint8_t* buffer, size_t buffer_len); +// size_t hashsig_private_key_to_bytes(const PrivateKey* private_key, uint8_t* buffer, size_t buffer_len); +// size_t hashsig_message_length(); +// +// KeyPair* hashsig_keypair_generate(const char* seed_phrase, +// size_t activation_epoch, size_t num_active_epochs); +// +// // --- multisig-glue FFI (rs) --- +// +// void xmss_setup_prover(); +// void xmss_setup_verifier(); +// +// const Devnet2XmssAggregateSignature* xmss_aggregate( +// const PublicKey* const* public_keys, size_t num_keys, +// const Signature* const* signatures, size_t num_sigs, +// const uint8_t* message_hash_ptr, uint32_t epoch); +// +// bool xmss_verify_aggregated( +// const PublicKey* const* public_keys, size_t num_keys, +// const uint8_t* message_hash_ptr, +// const Devnet2XmssAggregateSignature* agg_sig, uint32_t epoch); +// +// void xmss_free_aggregate_signature(Devnet2XmssAggregateSignature* agg_sig); +// size_t xmss_aggregate_signature_to_bytes( +// const Devnet2XmssAggregateSignature* agg_sig, +// uint8_t* buffer, size_t buffer_len); +// Devnet2XmssAggregateSignature* xmss_aggregate_signature_from_bytes( +// const uint8_t* bytes, size_t bytes_len); +import "C" + +import ( + "errors" + "fmt" + "sync" + "unsafe" + + 
"github.com/geanlabs/gean/types" +) + +// Size constants. +const ( + MessageLength = 32 + MaxProofSize = 1 << 20 // 1 MiB (ByteListMiB) + SignatureBuffer = 4000 // buffer for SSZ signature serialization + PubkeyBuffer = 256 // buffer for SSZ pubkey serialization +) + +// Errors. +var ( + ErrEmptyInput = errors.New("empty input") + ErrCountMismatch = errors.New("public key count does not match signature count") + ErrAggregationFailed = errors.New("signature aggregation failed") + ErrSerializationFailed = errors.New("proof serialization failed") + ErrProofTooBig = errors.New("aggregated proof exceeds 1 MiB") + ErrVerificationFailed = errors.New("aggregated signature verification failed") + ErrDeserializeFailed = errors.New("proof deserialization failed") + ErrSigningFailed = errors.New("signing failed") + ErrInvalidSignature = errors.New("signature verification returned invalid") + ErrSignatureError = errors.New("signature verification error (malformed data)") + ErrPubkeyParseFailed = errors.New("public key parsing failed") + ErrKeypairParseFailed = errors.New("keypair parsing failed") +) + +// Lazy initialization guards +var ( + proverOnce sync.Once + verifierOnce sync.Once +) + +// EnsureProverReady initializes the aggregation prover (expensive, runs once). +// Matches multisig-glue xmss_setup_prover with PROVER_INIT Once guard. +func EnsureProverReady() { + proverOnce.Do(func() { + C.xmss_setup_prover() + }) +} + +// EnsureVerifierReady initializes the aggregation verifier (runs once). +// Matches multisig-glue xmss_setup_verifier with VERIFIER_INIT Once guard. +func EnsureVerifierReady() { + verifierOnce.Do(func() { + C.xmss_setup_verifier() + }) +} + +// VerifySignatureSSZ verifies an individual XMSS signature from raw bytes. +// No aggregation VM setup needed — single sig verification is standalone. +// Returns (valid, error). Error means malformed input, not invalid signature. 
+func VerifySignatureSSZ(pubkey [types.PubkeySize]byte, slot uint32, message [32]byte, signature [types.SignatureSize]byte) (bool, error) { + // No EnsureVerifierReady() — that's for aggregation only. + // Single sig verify is stateless (matches old gean pattern). + + result := C.hashsig_verify_ssz( + (*C.uint8_t)(unsafe.Pointer(&pubkey[0])), C.size_t(types.PubkeySize), + (*C.uint8_t)(unsafe.Pointer(&message[0])), + C.uint32_t(slot), + (*C.uint8_t)(unsafe.Pointer(&signature[0])), C.size_t(types.SignatureSize), + ) + + switch result { + case 1: + return true, nil + case 0: + return false, nil + default: + return false, ErrSignatureError + } +} + +// AggregateSignatures aggregates multiple XMSS signatures into a single ZK proof. +// Takes arrays of opaque PublicKey/Signature pointers from resolved gossip signatures. +// Returns SSZ-encoded proof bytes (max 1 MiB). +func AggregateSignatures( + pubkeys []CPubKey, + sigs []CSig, + message [32]byte, + slot uint32, +) ([]byte, error) { + if len(pubkeys) == 0 { + return nil, ErrEmptyInput + } + if len(pubkeys) != len(sigs) { + return nil, fmt.Errorf("%w: %d pubkeys, %d sigs", ErrCountMismatch, len(pubkeys), len(sigs)) + } + + EnsureProverReady() + + // Convert []CPubKey ([]unsafe.Pointer) to []*C.PublicKey for FFI. + cPubkeys := make([]*C.PublicKey, len(pubkeys)) + for i, pk := range pubkeys { + cPubkeys[i] = (*C.PublicKey)(pk) + } + cSigs := make([]*C.Signature, len(sigs)) + for i, s := range sigs { + cSigs[i] = (*C.Signature)(s) + } + + aggSig := C.xmss_aggregate( + (**C.PublicKey)(unsafe.Pointer(&cPubkeys[0])), + C.size_t(len(cPubkeys)), + (**C.Signature)(unsafe.Pointer(&cSigs[0])), + C.size_t(len(cSigs)), + (*C.uint8_t)(unsafe.Pointer(&message[0])), + C.uint32_t(slot), + ) + if aggSig == nil { + return nil, ErrAggregationFailed + } + defer C.xmss_free_aggregate_signature((*C.Devnet2XmssAggregateSignature)(unsafe.Pointer(aggSig))) + + // Serialize to SSZ bytes using pooled buffer. 
+ bufPtr := getProofBuf() + buf := *bufPtr + n := C.xmss_aggregate_signature_to_bytes( + aggSig, + (*C.uint8_t)(unsafe.Pointer(&buf[0])), + C.size_t(len(buf)), + ) + if n == 0 { + putProofBuf(bufPtr) + return nil, ErrSerializationFailed + } + if int(n) > MaxProofSize { + putProofBuf(bufPtr) + return nil, ErrProofTooBig + } + + // Copy used bytes to a right-sized slice, return pooled buffer. + result := make([]byte, int(n)) + copy(result, buf[:n]) + putProofBuf(bufPtr) + return result, nil +} + +// VerifyAggregatedSignature verifies an aggregated XMSS proof. +// Takes SSZ proof bytes + array of pubkey pointers for participating validators. +func VerifyAggregatedSignature( + proofData []byte, + pubkeys []CPubKey, + message [32]byte, + slot uint32, +) error { + if len(proofData) == 0 || len(pubkeys) == 0 { + return ErrEmptyInput + } + + EnsureVerifierReady() + + // Deserialize proof from SSZ bytes. + aggSig := C.xmss_aggregate_signature_from_bytes( + (*C.uint8_t)(unsafe.Pointer(&proofData[0])), + C.size_t(len(proofData)), + ) + if aggSig == nil { + return ErrDeserializeFailed + } + defer C.xmss_free_aggregate_signature(aggSig) + + cPubkeys := make([]*C.PublicKey, len(pubkeys)) + for i, pk := range pubkeys { + cPubkeys[i] = (*C.PublicKey)(pk) + } + + valid := C.xmss_verify_aggregated( + (**C.PublicKey)(unsafe.Pointer(&cPubkeys[0])), + C.size_t(len(cPubkeys)), + (*C.uint8_t)(unsafe.Pointer(&message[0])), + (*C.Devnet2XmssAggregateSignature)(unsafe.Pointer(aggSig)), + C.uint32_t(slot), + ) + if !valid { + return ErrVerificationFailed + } + + return nil +} + +// CPubKey is an opaque handle to a C PublicKey, exported as unsafe.Pointer +// so other packages can hold and pass it without importing C types. +type CPubKey = unsafe.Pointer + +// CSig is an opaque handle to a C Signature. +type CSig = unsafe.Pointer + +// ParsePublicKey creates an opaque PublicKey handle from SSZ-encoded bytes. +// Caller must call FreePublicKey when done. 
+func ParsePublicKey(pubkeyBytes [types.PubkeySize]byte) (CPubKey, error) { + pk := C.hashsig_public_key_from_ssz( + (*C.uint8_t)(unsafe.Pointer(&pubkeyBytes[0])), + C.size_t(types.PubkeySize), + ) + if pk == nil { + return nil, ErrPubkeyParseFailed + } + return unsafe.Pointer(pk), nil +} + +// FreePublicKey frees a PublicKey handle created by ParsePublicKey. +func FreePublicKey(pk CPubKey) { + if pk != nil { + C.hashsig_public_key_free((*C.PublicKey)(pk)) + } +} + +// ParseSignature creates an opaque Signature handle from SSZ-encoded bytes. +// Caller must call FreeSignature when done. +func ParseSignature(sigBytes []byte) (CSig, error) { + if len(sigBytes) == 0 { + return nil, ErrInvalidSignature + } + sig := C.hashsig_signature_from_ssz( + (*C.uint8_t)(unsafe.Pointer(&sigBytes[0])), + C.size_t(len(sigBytes)), + ) + if sig == nil { + return nil, ErrInvalidSignature + } + return unsafe.Pointer(sig), nil +} + +// FreeSignature frees a Signature handle created by ParseSignature. +func FreeSignature(sig CSig) { + if sig != nil { + C.hashsig_signature_free((*C.Signature)(sig)) + } +} + +// GenerateKeyPair generates a new XMSS keypair from a seed phrase. +// Used for testing. The returned ValidatorKeyPair must be closed when done. +// Matches hashsig-glue hashsig_keypair_generate. +func GenerateKeyPair(seedPhrase string, activationEpoch, numActiveEpochs uint64) (*ValidatorKeyPair, error) { + cSeed := C.CString(seedPhrase) + defer C.free(unsafe.Pointer(cSeed)) + + kp := C.hashsig_keypair_generate(cSeed, C.size_t(activationEpoch), C.size_t(numActiveEpochs)) + if kp == nil { + return nil, ErrKeypairParseFailed + } + return &ValidatorKeyPair{handle: kp, Index: 0}, nil +} + +// PublicKeyBytes returns the SSZ-encoded public key bytes from a ValidatorKeyPair. 
+func (kp *ValidatorKeyPair) PublicKeyBytes() ([types.PubkeySize]byte, error) { + var result [types.PubkeySize]byte + var buf [PubkeyBuffer]byte + + n := C.hashsig_public_key_to_bytes( + kp.PublicKeyPtr(), + (*C.uint8_t)(unsafe.Pointer(&buf[0])), + C.size_t(len(buf)), + ) + if n == 0 || int(n) != types.PubkeySize { + return result, fmt.Errorf("pubkey serialization failed: got %d bytes", n) + } + copy(result[:], buf[:n]) + return result, nil +} diff --git a/xmss/ffi_test.go b/xmss/ffi_test.go new file mode 100644 index 0000000..69be732 --- /dev/null +++ b/xmss/ffi_test.go @@ -0,0 +1,184 @@ +package xmss + +import ( + "encoding/hex" + "testing" +) + +const reamPublicKeyHex = "7bbaf95bd653c827b5775e00b973b24d50ab4743db3373244f29c95fdf4ccc628788ba2b5b9d635acdb25770e8ceef66bfdecd0a" + +const reamSignatureHex = "240000006590c5180f52a57ef4d12153ace9dd1bb346ea1299402d32978bd56f2804000004000000903b927d2ee9cf14087b2164dd418115962e322a6da10a58821ee20dd54f3c6a3f6dba14ebc2340b1aa7cc5647e08d0151024229312973331f669e1a268c8a2429d2353393f0d67a6b02da35e589da5bb099ea1ef931e9770ccae83a31f1454b359a4f4227fa1f17895918649115f3416bfa4e7976c5736170bc4e2acaf58342bf8e7472a2d6871e93bbbd01ebb6f30d51ccc3150e1d1e6c7bbfe15ea42cac218a8b94745184ce1f1098d62a9ea7a20662de0464f58822501084da7cbb58833ef9adfe59c17ac121676d92103a52903fbb70c1694717c4695140411977dfcb0e3da29612e27c590ef56967046817cb3727def54b4544f4031427be0a056ba7633334fd4104b0ba550521b61e16ecad72b5fda17de3e7a03bf683f55ecefd215235c6481b5a6f873bc8134373f9fafc3053164e5f435c3d2ad790221601ce274a4117f3104cadb00feb8baf79dd48904c5c1e0c1627c17c41ee7ca3760051ec163eee3e38a7bffa5779570a6cc078a630f5494f4917214954f1da636decff784335944a0674aa82096e5136063ef59c6962e95308074fed4bf6a81301d38a7919149bb24d221e5c44c12f82127c551413fef4f40b6f5ad646f4ca4578baf6f11d3325fe356168925b1f75690c17dbfd74d0d39756106c8a6d10c3bd355f27de621c72ca76734e523ebb8e647ef9fd216093f6bf086f075e4dcc607b5f6ff0603ca71787665a504621bdaf9e0b2924516972dc9c0ee1198f4ae3d6f7109c0c0f4b1708262685c785
0709275e6e7f14cf4b74647a04005c501c376d3a179014dc69d26716199b3811773d77800948ad6a6c620d1d2cd0d038283e10c659bd6c3635fa8634482cc14c1d476fa23a40f94d0dcb542b2230bb4f02833713357b8f783b9ffaa50c32ef9933265a072bbc34b671800a8807fae7b235d3f2cc62804a4e3efecbb420122e6a6b62154976faff37329de17d7691a5213d5475b92132e48d657993d12c5eb5d0164f6af8589c199c1b61cf4619c60d72358097e43e5c5a676914748541e8d705626ac3654342f749361577056efe50e65e232fd91c8588576db75cd96e004994278441bc2abac25a51cbdc095764bf7a64ffe05b06cea7bb446f6de72149bf850d795ddd15e464bf7377985f422e937e4ad00563360340c93fd8d0c2695aaac71776f9476c4e574a7645285f5ca714cc021d88eb317d9c0177305bfa06f35622433447803322bf3833b617852b5cbd1d7abbca563f124886792d0133298de40c6d4a8c802faa83da52211bdd5f13c1a1440416b40acc044774e083394d5e803a6a4a3123549d208c340448655b633d43478488717ebc9f3a09b514852c709cc77ace7bba083cc3826d3542ec663a55750838b83369521b40683a268413a302f90adb9a2954466d8d432b316021228f9763e76e7a704866b66d0471626ae5f19c345147b64233266b7a58c7db0c246f370b91b490297ce07130166c424624d96b2adcb3a460fb0cce015f9e33194cc4a128f9153e1ddb24e1509e21793796b5b11fadc22218db97f3650ca8ec0e6003086fab38cf251b71ae2dece3716f3d50e26873ef8346c9951742953e0f720420af17b530c1720f8f606e19a0c223d7059b6da670d12fd555313205312f7c151af068bb0b3d0f80333f2f73159e63a9c7cb46a88ede5ac4ddc20700bee377ec01f568cc179a6ccfc2e86d03912b779e2c4e5cc3d73d3b53f18001dee0032b276d456b822d8b242d6d5e27b794ed14a2478e1c13eb5d0866990e4d75a27b1cca956737cac4b16813ca29191720f632f9826e5cd2d32216a9580646bdcc1f0ff114e0066165f1368a324f27a97c3032a697f3317229782dfe053445921286144fbe7d7e68ac441a91c5cc2543ac6908cc171a67cdb0a638e19bb32f3dc6576b8b4aeb5aa4866a18a458972c1521345fbf885238afa6862b0ee4eb62b1c7b94026b8926f2fae2f469b8bd02a6706995733b9321edaff6c429311f72ec541c65fccf9646f4e2ceb7c58d7f26e8280641dde221f0cb38cee6ccb9bd7238641fb11c576b02a2b47553858208a7ac9b6805a80474c38e3295d00f147ed3deb3d30503ec4076b2543bf307194b2787be3d53f3227ca3b815541709bbbc77a11eff92f5cccca7c7617e303c1b430799018aa0a94c7b152b154956a7e8745
07895e1735edc2683bb328dd576ff11a2e3fc07f1ba086831e534dab1c0320920156155f72cc7e90795c268c2486d1dc4921204c53082dfb1b830bd9238794881c985ab247274990541cfa8f0e5e582e2caf44f7598f068272eb4ee81c5c04663504b745194c8c570169de83626063e20a6847b82c77931f321cd3ac5cbac802476969255a13a2fe60707f960f5df8095c89cc786ffbe1b86a3e0b7f607f98982b9f10591c9c7bd14d216bac36de88254da273a3601048cb47cb5c2954f34bae09d33d814162bf3d6da2de8c7a51331d437a8012420f6c1323b063e4085e32ee4d2864a14a66bffc29efbca2328416901fe998be14033cb00b104d6732bdc8a963357b9d1290043229e321c25f549c467415472e4061a30b616b23700cc930597e117438293330de628b7c465a8d79b150f0cf913a43e92e6c87e8e644412d0815c0086c46e3028b74d7967a6cf6f5332c4fea10320e90061dc0df4b3f01d0934772291c0cc9b8622b1ee9e120a0178613586b1370f71ef508bcc70c0728894621146a707a9c802321ccf7e50d5d4f8a21c5ced8480117bb11404cbc6eac8ec928507e912474c3fa4ba7dbd22781c28761d19e736f63e0c659c94f243b87271e2424505c2118c30c0dcfd7b719d2a6a549f28da317e1cbeb5748da595c86ada266aed25d185753686e94914c68af6f4c5b833bd91033bbcb6b89cad52b9b65e110683dc96be38d690780253c49bf454d783aa25c0838e00929155c943d7c80c83d0a22c65e6e49c760f50b0623f074ee19138c08671cd6a846faf4237cbc4cee192106ed245451652c591b5a04fd6537255f54a261f7df231fe13a803f617a4719ba15831c4fa84864f581772539091726fd0d3112a55781440796562a9324936110004039a3e3ec26078f76243e1be93df0166270055dcb65a94f2c14c5813145f4bd680c02c33e042d98d17e1c5a9c22d095ed30ad754144ae7f5150ac0df842d4e9415f849f1b36f2c1ff520be3d0721aab9b31c249df28aac8d9378326184262f53307bc77bc6ca59c3349bb29b90b7464ab666f563e5ac741a6390e6d634620fbb33182f958482746ff1138f53e55b9d1a119800e8d6fe04a46044781f813817514338a60a8044b3333249cc9c93cdf6c8537de140943f4907f7926f5a81ed20f1526fd9447412d81e75a05d93b7610e2e27d851b1163cb96e242c08796493f564e0e51a45e17e74b2619dafa0855922b714b84f1266bfc094e494c29175298d9a44eb5cf2d3a696c744f15a1f21d737fb45ed8af333157e88b1c89018d6322f91e751827c030a6fb9d03554dee606a39cc6b79147906c18b376b1a39c20249a64c2b0b95e626204ee24dcfa2665bd5ca0405df11391d15b8604130c3de32f7c43726c10bd5
334b16df17f17456605008cb3cf926f0166209bc358562f32293073012bc74250445e39e6cc14aa2613d77de1143e15c2e101cca14b9543d44b248a17c72279b743b0e18483a2331228bd57a3d5ecfa03f9318d269cbf992765ea15678046038749a04c35638c7984cbb90c40b7b94e657bc70a6760520fb4cbe1bbe62f5f7c121268168361963cf3bf032f71e01d9555655faeb58e995c916d7b0850f6813830583717d36e9b4a319ccc4797386f072485834f4772d291a3976f1fa799ba0d007822c011ad959895a4b29c11f3f9f07277a1ce94d1c5d9b0150a1e919528c17340031a204be28ff69ad90c13e4b81f354ef8ec11df62e9c62c1de715981bd4f4095a72d2eb68a5811bc059f0f58a5fa539a9a857e68604248d2fd5c28cabdf11ae4a4692218a18d49d38ff76a6e2b8956b884ac0a5af9cc6680bfb62cf3d83a14d031f0404cc8930898bda055a934624650064e40665ce21754c72b6be8152662b5e29571be85b01cb7728045c92a5a37764a99062abb535c21260612b3026d1145be6c1b3d5e917ee457102fcfc8c5771d011d61f591271220a01e1dc11d1b441217577a5b5cc113421cd740bbe47556719bb51375eda8173d54706cafe98b6b9ad0f9639708e87e77fcbb2bb822ce59f0bdba7746ca286c5b98447d4ca2f027b827ea4c7987e96a0429696bf9cfa21d6add9b79f2eddf2e6fe1c23c118ce2035ca0e3022270b628" + +const reamSlot uint32 = 5 + +func decodeHex(t *testing.T, s string) []byte { + t.Helper() + b, err := hex.DecodeString(s) + if err != nil { + t.Fatalf("hex decode: %v", err) + } + return b +} + +func TestVerifySignatureSSZValid(t *testing.T) { + pkBytes := decodeHex(t, reamPublicKeyHex) + sigBytes := decodeHex(t, reamSignatureHex) + + var pubkey [52]byte + copy(pubkey[:], pkBytes) + + var sig [3112]byte + copy(sig[:], sigBytes) + + var message [32]byte // all zeros + + valid, err := VerifySignatureSSZ(pubkey, reamSlot, message, sig) + if err != nil { + t.Fatalf("verify returned error: %v", err) + } + if !valid { + t.Fatal("expected valid signature") + } +} + +func TestVerifySignatureSSZWrongSlot(t *testing.T) { + pkBytes := decodeHex(t, reamPublicKeyHex) + sigBytes := decodeHex(t, reamSignatureHex) + + var pubkey [52]byte + copy(pubkey[:], pkBytes) + + var sig [3112]byte + copy(sig[:], sigBytes) + + var message [32]byte + + valid, err := 
VerifySignatureSSZ(pubkey, reamSlot+1, message, sig) + if err != nil { + t.Fatalf("verify returned error: %v", err) + } + if valid { + t.Fatal("expected invalid signature with wrong slot") + } +} + +func TestVerifySignatureSSZWrongMessage(t *testing.T) { + pkBytes := decodeHex(t, reamPublicKeyHex) + sigBytes := decodeHex(t, reamSignatureHex) + + var pubkey [52]byte + copy(pubkey[:], pkBytes) + + var sig [3112]byte + copy(sig[:], sigBytes) + + var message [32]byte + for i := range message { + message[i] = 0xff + } + + valid, err := VerifySignatureSSZ(pubkey, reamSlot, message, sig) + if err != nil { + t.Fatalf("verify returned error: %v", err) + } + if valid { + t.Fatal("expected invalid signature with wrong message") + } +} + +func TestParsePublicKeyRoundtrip(t *testing.T) { + pkBytes := decodeHex(t, reamPublicKeyHex) + var pubkey [52]byte + copy(pubkey[:], pkBytes) + + pk, err := ParsePublicKey(pubkey) + if err != nil { + t.Fatalf("parse failed: %v", err) + } + defer FreePublicKey(pk) + + if pk == nil { + t.Fatal("expected non-nil public key") + } +} + +func TestParseSignatureRoundtrip(t *testing.T) { + sigBytes := decodeHex(t, reamSignatureHex) + + sig, err := ParseSignature(sigBytes) + if err != nil { + t.Fatalf("parse failed: %v", err) + } + defer FreeSignature(sig) + + if sig == nil { + t.Fatal("expected non-nil signature") + } +} + +func TestKeyGenerateSignVerifyRoundtrip(t *testing.T) { + // Generate a keypair via FFI. + kp, err := GenerateKeyPair("gean-test-seed-phrase", 0, 1<<18) + if err != nil { + t.Fatalf("key generation failed: %v", err) + } + defer kp.Close() + + // Get pubkey bytes for verification. + pubkey, err := kp.PublicKeyBytes() + if err != nil { + t.Fatalf("pubkey serialization failed: %v", err) + } + + // Sign a message at slot 0. + var message [32]byte + message[0] = 0xab + message[31] = 0xcd + + sig, err := kp.Sign(0, message) + if err != nil { + t.Fatalf("sign failed: %v", err) + } + + // Verify with correct slot and message. 
+ valid, err := VerifySignatureSSZ(pubkey, 0, message, sig) + if err != nil { + t.Fatalf("verify error: %v", err) + } + if !valid { + t.Fatal("signature should be valid") + } + + // Verify with wrong slot — must fail. + valid, err = VerifySignatureSSZ(pubkey, 1, message, sig) + if err != nil { + t.Fatalf("verify error: %v", err) + } + if valid { + t.Fatal("signature should be invalid with wrong slot") + } + + // Verify with wrong message — must fail. + var wrongMsg [32]byte + wrongMsg[0] = 0xff + valid, err = VerifySignatureSSZ(pubkey, 0, wrongMsg, sig) + if err != nil { + t.Fatalf("verify error: %v", err) + } + if valid { + t.Fatal("signature should be invalid with wrong message") + } +} + +func TestVerifySignatureSSZMalformedPubkey(t *testing.T) { + var pubkey [52]byte // all zeros — invalid + var sig [3112]byte + var message [32]byte + + _, err := VerifySignatureSSZ(pubkey, 0, message, sig) + // Should return error or invalid, not panic + if err != nil { + return // error is acceptable for malformed input + } + // returning false is also fine +} diff --git a/xmss/keys.go b/xmss/keys.go new file mode 100644 index 0000000..e46e367 --- /dev/null +++ b/xmss/keys.go @@ -0,0 +1,231 @@ +package xmss + +// Key management for XMSS validators. 
+
+// #include <stddef.h>
+// #include <stdint.h>
+// typedef struct KeyPair KeyPair;
+// typedef struct PublicKey PublicKey;
+// typedef struct PrivateKey PrivateKey;
+// typedef struct Signature Signature;
+//
+// KeyPair* hashsig_keypair_from_ssz(
+//     const uint8_t* private_key_ptr, size_t private_key_len,
+//     const uint8_t* public_key_ptr, size_t public_key_len);
+// void hashsig_keypair_free(KeyPair* keypair);
+// const PublicKey* hashsig_keypair_get_public_key(const KeyPair* keypair);
+// const PrivateKey* hashsig_keypair_get_private_key(const KeyPair* keypair);
+// Signature* hashsig_sign(const PrivateKey* private_key, const uint8_t* message_ptr, uint32_t epoch);
+// void hashsig_signature_free(Signature* signature);
+// size_t hashsig_signature_to_bytes(const Signature* signature, uint8_t* buffer, size_t buffer_len);
+import "C"
+
+import (
+	"encoding/hex"
+	"fmt"
+	"os"
+	"path/filepath"
+	"strings"
+	"unsafe"
+
+	"github.com/geanlabs/gean/types"
+	"gopkg.in/yaml.v3"
+)
+
+// ValidatorKeyPair holds a loaded XMSS keypair for a validator.
+// The opaque C pointer is owned by this struct and freed on Close.
+type ValidatorKeyPair struct {
+	handle *C.KeyPair
+	Index  uint64
+}
+
+// PublicKeyPtr returns a borrowed pointer to the embedded public key.
+// Valid only while the ValidatorKeyPair is alive — do NOT free it.
+func (kp *ValidatorKeyPair) PublicKeyPtr() *C.PublicKey {
+	return C.hashsig_keypair_get_public_key(kp.handle)
+}
+
+// PrivateKeyPtr returns a borrowed pointer to the embedded private key.
+// Valid only while the ValidatorKeyPair is alive.
+func (kp *ValidatorKeyPair) PrivateKeyPtr() *C.PrivateKey {
+	return C.hashsig_keypair_get_private_key(kp.handle)
+}
+
+// Sign signs a 32-byte message at the given slot.
+// Returns the SSZ-encoded 3112-byte signature.
+func (kp *ValidatorKeyPair) Sign(slot uint32, message [32]byte) ([types.SignatureSize]byte, error) { + var result [types.SignatureSize]byte + + sigPtr := C.hashsig_sign( + kp.PrivateKeyPtr(), + (*C.uint8_t)(unsafe.Pointer(&message[0])), + C.uint32_t(slot), + ) + if sigPtr == nil { + return result, fmt.Errorf("%w: validator %d slot %d", ErrSigningFailed, kp.Index, slot) + } + defer C.hashsig_signature_free(sigPtr) + + // Serialize to fixed-size SSZ bytes. + buf := make([]byte, SignatureBuffer) + n := C.hashsig_signature_to_bytes( + sigPtr, + (*C.uint8_t)(unsafe.Pointer(&buf[0])), + C.size_t(len(buf)), + ) + if n == 0 || int(n) != types.SignatureSize { + return result, fmt.Errorf("signature serialization failed: wrote %d bytes, expected %d", n, types.SignatureSize) + } + + copy(result[:], buf[:n]) + return result, nil +} + +// Close frees the underlying C keypair. +func (kp *ValidatorKeyPair) Close() { + if kp.handle != nil { + C.hashsig_keypair_free(kp.handle) + kp.handle = nil + } +} + +// KeyManager holds all validator keypairs for this node. +type KeyManager struct { + keys map[uint64]*ValidatorKeyPair // validator_id -> keypair +} + +// NewKeyManager creates a KeyManager from loaded keypairs. +func NewKeyManager(keys map[uint64]*ValidatorKeyPair) *KeyManager { + return &KeyManager{keys: keys} +} + +// ValidatorIDs returns all validator indices managed by this node. +func (km *KeyManager) ValidatorIDs() []uint64 { + ids := make([]uint64, 0, len(km.keys)) + for id := range km.keys { + ids = append(ids, id) + } + return ids +} + +// Get returns the keypair for a validator, or nil if not found. +func (km *KeyManager) Get(validatorID uint64) *ValidatorKeyPair { + return km.keys[validatorID] +} + +// SignAttestation signs attestation data for a validator. +// Message = HashTreeRoot(attestationData), slot from the data. 
+func (km *KeyManager) SignAttestation(validatorID uint64, data *types.AttestationData) ([types.SignatureSize]byte, error) { + kp, ok := km.keys[validatorID] + if !ok { + return [types.SignatureSize]byte{}, fmt.Errorf("validator %d not found in key manager", validatorID) + } + + msgRoot, err := data.HashTreeRoot() + if err != nil { + return [types.SignatureSize]byte{}, fmt.Errorf("hash tree root failed: %w", err) + } + + slot := uint32(data.Slot) + if uint64(slot) != data.Slot { + return [types.SignatureSize]byte{}, fmt.Errorf("slot %d overflows uint32", data.Slot) + } + + return kp.Sign(slot, msgRoot) +} + +// SignBlock signs a block root for a validator (proposer signature). +func (km *KeyManager) SignBlock(validatorID uint64, slot uint64, blockRoot [32]byte) ([types.SignatureSize]byte, error) { + kp, ok := km.keys[validatorID] + if !ok { + return [types.SignatureSize]byte{}, fmt.Errorf("validator %d not found in key manager", validatorID) + } + + s := uint32(slot) + if uint64(s) != slot { + return [types.SignatureSize]byte{}, fmt.Errorf("slot %d overflows uint32", slot) + } + + return kp.Sign(s, blockRoot) +} + +// Close frees all keypairs. +func (km *KeyManager) Close() { + for _, kp := range km.keys { + kp.Close() + } +} + +// --- Key loading from YAML + files --- + +// annotatedValidator represents a validator entry from annotated_validators.yaml. +type annotatedValidator struct { + Index uint64 `yaml:"index"` + PubkeyHex string `yaml:"pubkey_hex"` + PrivkeyFile string `yaml:"privkey_file"` +} + +// LoadValidatorKeys loads XMSS keypairs from annotated_validators.yaml + key files. 
+// +// annotatedPath: path to annotated_validators.yaml +// keysDir: directory containing validator_*_sk.ssz files +// nodeID: e.g., "gean_0" +func LoadValidatorKeys(annotatedPath, keysDir, nodeID string) (*KeyManager, error) { + data, err := os.ReadFile(annotatedPath) + if err != nil { + return nil, fmt.Errorf("read annotated validators: %w", err) + } + + // File is map[node_id][]annotatedValidator. + var allValidators map[string][]annotatedValidator + if err := yaml.Unmarshal(data, &allValidators); err != nil { + return nil, fmt.Errorf("parse annotated validators: %w", err) + } + + validators, ok := allValidators[nodeID] + if !ok { + return nil, fmt.Errorf("node ID %q not found in annotated validators", nodeID) + } + + keys := make(map[uint64]*ValidatorKeyPair, len(validators)) + + for _, v := range validators { + // Resolve privkey file path (relative to keysDir). + skPath := v.PrivkeyFile + if !filepath.IsAbs(skPath) { + skPath = filepath.Join(keysDir, skPath) + } + + // Read raw SSZ secret key bytes. + skBytes, err := os.ReadFile(skPath) + if err != nil { + return nil, fmt.Errorf("read secret key for validator %d: %w", v.Index, err) + } + + // Decode pubkey from hex. + pkHex := strings.TrimPrefix(strings.TrimSpace(v.PubkeyHex), "0x") + pkBytes, err := hex.DecodeString(pkHex) + if err != nil { + return nil, fmt.Errorf("decode pubkey hex for validator %d: %w", v.Index, err) + } + if len(pkBytes) != types.PubkeySize { + return nil, fmt.Errorf("pubkey for validator %d has %d bytes, expected %d", v.Index, len(pkBytes), types.PubkeySize) + } + + // Create keypair via FFI. 
+ handle := C.hashsig_keypair_from_ssz( + (*C.uint8_t)(unsafe.Pointer(&skBytes[0])), C.size_t(len(skBytes)), + (*C.uint8_t)(unsafe.Pointer(&pkBytes[0])), C.size_t(len(pkBytes)), + ) + if handle == nil { + return nil, fmt.Errorf("%w: validator %d", ErrKeypairParseFailed, v.Index) + } + + keys[v.Index] = &ValidatorKeyPair{ + handle: handle, + Index: v.Index, + } + } + + return NewKeyManager(keys), nil +} diff --git a/xmss/leanmultisig-ffi/.gitignore b/xmss/leanmultisig-ffi/.gitignore deleted file mode 100644 index ea8c4bf..0000000 --- a/xmss/leanmultisig-ffi/.gitignore +++ /dev/null @@ -1 +0,0 @@ -/target diff --git a/xmss/leanmultisig-ffi/Cargo.toml b/xmss/leanmultisig-ffi/Cargo.toml deleted file mode 100644 index 38b3932..0000000 --- a/xmss/leanmultisig-ffi/Cargo.toml +++ /dev/null @@ -1,17 +0,0 @@ -[package] -name = "leanmultisig-ffi" -version = "0.1.0" -edition = "2024" -rust-version = "1.87" - -[lib] -name = "leanmultisig_ffi" -crate-type = ["cdylib", "staticlib"] - -[dependencies] -rec_aggregation = { git = "https://github.com/leanEthereum/leanMultisig.git", rev = "e4474138487eeb1ed7c2e1013674fe80ac9f3165" } -leansig = { git = "https://github.com/leanEthereum/leanSig", rev = "73bedc26ed961b110df7ac2e234dc11361a4bf25" } -ssz = { package = "ethereum_ssz", version = "0.10.0" } - -[build-dependencies] -cbindgen = "0.29" diff --git a/xmss/leanmultisig-ffi/build.rs b/xmss/leanmultisig-ffi/build.rs deleted file mode 100644 index 71a2fc8..0000000 --- a/xmss/leanmultisig-ffi/build.rs +++ /dev/null @@ -1,25 +0,0 @@ -use std::env; -use std::path::PathBuf; - -fn main() { - let crate_dir = env::var("CARGO_MANIFEST_DIR").unwrap(); - let output_file = PathBuf::from(&crate_dir) - .join("include") - .join("leanmultisig_ffi.h"); - - // Ensure the include directory exists. 
- std::fs::create_dir_all(PathBuf::from(&crate_dir).join("include")).unwrap(); - - let config = cbindgen::Config::from_file(PathBuf::from(&crate_dir).join("cbindgen.toml")) - .expect("failed to read cbindgen.toml"); - - cbindgen::Builder::new() - .with_crate(&crate_dir) - .with_config(config) - .generate() - .expect("cbindgen failed to generate bindings") - .write_to_file(&output_file); - - println!("cargo:rerun-if-changed=src/lib.rs"); - println!("cargo:rerun-if-changed=cbindgen.toml"); -} diff --git a/xmss/leanmultisig-ffi/cbindgen.toml b/xmss/leanmultisig-ffi/cbindgen.toml deleted file mode 100644 index 80139ef..0000000 --- a/xmss/leanmultisig-ffi/cbindgen.toml +++ /dev/null @@ -1,29 +0,0 @@ -language = "C" -cpp_compat = true - -header = """/* - * leanmultisig_ffi.h - Auto-generated C header for the leanmultisig FFI library. - * - * DO NOT EDIT MANUALLY. This file is generated by cbindgen from src/lib.rs. - * Run `cargo build` to regenerate. - */""" -include_guard = "LEANMULTISIG_FFI_H" -sys_includes = ["stddef.h", "stdint.h", "stdbool.h"] - -after_includes = """ -/* Message hash length expected by aggregate/verify (32-byte attestation root). */ -#define LEANMULTISIG_MESSAGE_HASH_LENGTH 32""" - -documentation = true -documentation_style = "c99" -documentation_length = "full" - -style = "Both" -usize_is_size_t = true - -[export] -include = ["LeanmultisigResult", "LeanmultisigBytes"] - -[enum] -rename_variants = "ScreamingSnakeCase" -prefix_with_name = true diff --git a/xmss/leanmultisig-ffi/include/leanmultisig_ffi.h b/xmss/leanmultisig-ffi/include/leanmultisig_ffi.h deleted file mode 100644 index 5bf6ded..0000000 --- a/xmss/leanmultisig-ffi/include/leanmultisig_ffi.h +++ /dev/null @@ -1,76 +0,0 @@ -/* - * leanmultisig_ffi.h - Auto-generated C header for the leanmultisig FFI library. - * - * DO NOT EDIT MANUALLY. This file is generated by cbindgen from src/lib.rs. - * Run `cargo build` to regenerate. 
- */ - -#ifndef LEANMULTISIG_FFI_H -#define LEANMULTISIG_FFI_H - -#include -#include -#include -#include -#include -#include -#include -#include -/* Message hash length expected by aggregate/verify (32-byte attestation root). */ -#define LEANMULTISIG_MESSAGE_HASH_LENGTH 32 - -typedef enum LeanmultisigResult { - LEANMULTISIG_RESULT_OK = 0, - LEANMULTISIG_RESULT_NULL_POINTER = 1, - LEANMULTISIG_RESULT_INVALID_LENGTH = 2, - LEANMULTISIG_RESULT_LENGTH_MISMATCH = 3, - LEANMULTISIG_RESULT_DESERIALIZATION_FAILED = 4, - LEANMULTISIG_RESULT_AGGREGATION_FAILED = 5, - LEANMULTISIG_RESULT_VERIFICATION_FAILED = 6, -} LeanmultisigResult; - -typedef struct LeanmultisigBytes { - const uint8_t *data; - size_t len; -} LeanmultisigBytes; - -#ifdef __cplusplus -extern "C" { -#endif // __cplusplus - -// Initialize prover-side aggregation setup. Idempotent. -void leanmultisig_setup_prover(void); - -// Initialize verifier-side setup. Idempotent. -void leanmultisig_setup_verifier(void); - -// Aggregate XMSS signatures into a devnet-2 leanMultisig proof. -// -// The caller owns the returned buffer and must free it via `leanmultisig_bytes_free`. -enum LeanmultisigResult leanmultisig_aggregate(const struct LeanmultisigBytes *pubkeys, - size_t num_pubkeys, - const struct LeanmultisigBytes *signatures, - size_t num_signatures, - const uint8_t *message_hash_ptr, - size_t message_hash_len, - uint32_t epoch, - uint8_t **out_data, - size_t *out_len); - -// Verify a devnet-2 leanMultisig proof. -enum LeanmultisigResult leanmultisig_verify_aggregated(const struct LeanmultisigBytes *pubkeys, - size_t num_pubkeys, - const uint8_t *message_hash_ptr, - size_t message_hash_len, - const uint8_t *proof_data, - size_t proof_len, - uint32_t epoch); - -// Free a buffer allocated by `leanmultisig_aggregate`. 
-void leanmultisig_bytes_free(uint8_t *data, size_t len);
-
-#ifdef __cplusplus
-} // extern "C"
-#endif // __cplusplus
-
-#endif /* LEANMULTISIG_FFI_H */
diff --git a/xmss/leanmultisig-ffi/src/lib.rs b/xmss/leanmultisig-ffi/src/lib.rs
deleted file mode 100644
index ed23a56..0000000
--- a/xmss/leanmultisig-ffi/src/lib.rs
+++ /dev/null
@@ -1,261 +0,0 @@
-use std::panic::{AssertUnwindSafe, catch_unwind};
-use std::slice;
-use std::sync::Once;
-use std::thread;
-
-use leansig::serialization::Serializable;
-use leansig::signature::generalized_xmss::instantiations_poseidon_top_level::lifetime_2_to_the_32::hashing_optimized::SIGTopLevelTargetSumLifetime32Dim64Base8 as SigScheme;
-use leansig::signature::SignatureScheme;
-use rec_aggregation::xmss_aggregate::{
-    xmss_aggregate_signatures, xmss_setup_aggregation_program, xmss_verify_aggregated_signatures,
-    Devnet2XmssAggregateSignature,
-};
-use ssz::{Decode, Encode};
-
-type PublicKey = <SigScheme as SignatureScheme>::PublicKey;
-type Signature = <SigScheme as SignatureScheme>::Signature;
-
-const MESSAGE_HASH_LENGTH: usize = 32;
-
-static PROVER_INIT: Once = Once::new();
-static VERIFIER_INIT: Once = Once::new();
-
-#[repr(C)]
-pub enum LeanmultisigResult {
-    Ok = 0,
-    NullPointer = 1,
-    InvalidLength = 2,
-    LengthMismatch = 3,
-    DeserializationFailed = 4,
-    AggregationFailed = 5,
-    VerificationFailed = 6,
-}
-
-#[repr(C)]
-#[derive(Copy, Clone)]
-pub struct LeanmultisigBytes {
-    pub data: *const u8,
-    pub len: usize,
-}
-
-fn setup_with_large_stack() {
-    const STACK_SIZE: usize = 64 * 1024 * 1024;
-    let builder = thread::Builder::new().name("xmss_setup".to_string()).stack_size(STACK_SIZE);
-    let run = || {
-        let _ = catch_unwind(AssertUnwindSafe(|| {
-            xmss_setup_aggregation_program();
-        }));
-    };
-
-    match builder.spawn(run) {
-        Ok(handle) => {
-            let _ = handle.join();
-        }
-        Err(_) => {
-            run();
-        }
-    }
-}
-
-fn setup_prover_once() {
-    PROVER_INIT.call_once(setup_with_large_stack);
-}
-
-fn setup_verifier_once() {
-    VERIFIER_INIT.call_once(setup_with_large_stack);
-}
-
-unsafe
fn parse_message_hash(
-    message_hash_ptr: *const u8,
-    message_hash_len: usize,
-) -> Result<[u8; MESSAGE_HASH_LENGTH], LeanmultisigResult> {
-    if message_hash_ptr.is_null() {
-        return Err(LeanmultisigResult::NullPointer);
-    }
-    if message_hash_len != MESSAGE_HASH_LENGTH {
-        return Err(LeanmultisigResult::InvalidLength);
-    }
-
-    let mut message_hash = [0u8; MESSAGE_HASH_LENGTH];
-    let hash_slice = unsafe { slice::from_raw_parts(message_hash_ptr, message_hash_len) };
-    message_hash.copy_from_slice(hash_slice);
-    Ok(message_hash)
-}
-
-unsafe fn parse_pubkeys(
-    pubkeys: *const LeanmultisigBytes,
-    num_pubkeys: usize,
-) -> Result<Vec<PublicKey>, LeanmultisigResult> {
-    if pubkeys.is_null() {
-        return Err(LeanmultisigResult::NullPointer);
-    }
-
-    let pubkey_views = unsafe { slice::from_raw_parts(pubkeys, num_pubkeys) };
-    let mut decoded_pubkeys = Vec::with_capacity(num_pubkeys);
-
-    for view in pubkey_views {
-        if view.data.is_null() || view.len == 0 {
-            return Err(LeanmultisigResult::InvalidLength);
-        }
-        let bytes = unsafe { slice::from_raw_parts(view.data, view.len) };
-        let decoded =
-            PublicKey::from_bytes(bytes).map_err(|_| LeanmultisigResult::DeserializationFailed)?;
-        decoded_pubkeys.push(decoded);
-    }
-
-    Ok(decoded_pubkeys)
-}
-
-unsafe fn parse_signatures(
-    signatures: *const LeanmultisigBytes,
-    num_signatures: usize,
-) -> Result<Vec<Signature>, LeanmultisigResult> {
-    if signatures.is_null() {
-        return Err(LeanmultisigResult::NullPointer);
-    }
-
-    let signature_views = unsafe { slice::from_raw_parts(signatures, num_signatures) };
-    let mut decoded_signatures = Vec::with_capacity(num_signatures);
-
-    for view in signature_views {
-        if view.data.is_null() || view.len == 0 {
-            return Err(LeanmultisigResult::InvalidLength);
-        }
-        let bytes = unsafe { slice::from_raw_parts(view.data, view.len) };
-        let decoded =
-            Signature::from_bytes(bytes).map_err(|_| LeanmultisigResult::DeserializationFailed)?;
-        decoded_signatures.push(decoded);
-    }
-
-    Ok(decoded_signatures)
-}
-
-/// Initialize
prover-side aggregation setup. Idempotent. -#[unsafe(no_mangle)] -pub extern "C" fn leanmultisig_setup_prover() { - setup_prover_once(); -} - -/// Initialize verifier-side setup. Idempotent. -#[unsafe(no_mangle)] -pub extern "C" fn leanmultisig_setup_verifier() { - setup_verifier_once(); -} - -/// Aggregate XMSS signatures into a devnet-2 leanMultisig proof. -/// -/// The caller owns the returned buffer and must free it via `leanmultisig_bytes_free`. -#[unsafe(no_mangle)] -pub unsafe extern "C" fn leanmultisig_aggregate( - pubkeys: *const LeanmultisigBytes, - num_pubkeys: usize, - signatures: *const LeanmultisigBytes, - num_signatures: usize, - message_hash_ptr: *const u8, - message_hash_len: usize, - epoch: u32, - out_data: *mut *mut u8, - out_len: *mut usize, -) -> LeanmultisigResult { - if out_data.is_null() || out_len.is_null() { - return LeanmultisigResult::NullPointer; - } - - if num_pubkeys == 0 || num_signatures == 0 { - return LeanmultisigResult::InvalidLength; - } - if num_pubkeys != num_signatures { - return LeanmultisigResult::LengthMismatch; - } - - let message_hash = match unsafe { parse_message_hash(message_hash_ptr, message_hash_len) } { - Ok(hash) => hash, - Err(err) => return err, - }; - - let decoded_pubkeys = match unsafe { parse_pubkeys(pubkeys, num_pubkeys) } { - Ok(pks) => pks, - Err(err) => return err, - }; - - let decoded_signatures = match unsafe { parse_signatures(signatures, num_signatures) } { - Ok(sigs) => sigs, - Err(err) => return err, - }; - - setup_prover_once(); - - let aggregated_proof = match catch_unwind(AssertUnwindSafe(|| { - xmss_aggregate_signatures(&decoded_pubkeys, &decoded_signatures, &message_hash, epoch) - })) { - Ok(Ok(sig)) => sig, - Ok(Err(_)) | Err(_) => return LeanmultisigResult::AggregationFailed, - }; - - let proof_bytes = aggregated_proof.as_ssz_bytes(); - let proof_len = proof_bytes.len(); - let proof_ptr = proof_bytes.leak().as_mut_ptr(); - - unsafe { - *out_data = proof_ptr; - *out_len = proof_len; - } - 
LeanmultisigResult::Ok -} - -/// Verify a devnet-2 leanMultisig proof. -#[unsafe(no_mangle)] -pub unsafe extern "C" fn leanmultisig_verify_aggregated( - pubkeys: *const LeanmultisigBytes, - num_pubkeys: usize, - message_hash_ptr: *const u8, - message_hash_len: usize, - proof_data: *const u8, - proof_len: usize, - epoch: u32, -) -> LeanmultisigResult { - if num_pubkeys == 0 { - return LeanmultisigResult::InvalidLength; - } - if proof_data.is_null() || proof_len == 0 { - return LeanmultisigResult::InvalidLength; - } - - let message_hash = match unsafe { parse_message_hash(message_hash_ptr, message_hash_len) } { - Ok(hash) => hash, - Err(err) => return err, - }; - - let decoded_pubkeys = match unsafe { parse_pubkeys(pubkeys, num_pubkeys) } { - Ok(pks) => pks, - Err(err) => return err, - }; - - let proof_slice = unsafe { slice::from_raw_parts(proof_data, proof_len) }; - let aggregated_proof = match Devnet2XmssAggregateSignature::from_ssz_bytes(proof_slice) { - Ok(proof) => proof, - Err(_) => return LeanmultisigResult::DeserializationFailed, - }; - if decoded_pubkeys.len() != aggregated_proof.encoding_randomness.len() { - return LeanmultisigResult::LengthMismatch; - } - - setup_verifier_once(); - - match catch_unwind(AssertUnwindSafe(|| { - xmss_verify_aggregated_signatures(&decoded_pubkeys, &message_hash, &aggregated_proof, epoch) - })) { - Ok(Ok(_)) => LeanmultisigResult::Ok, - Ok(Err(_)) | Err(_) => LeanmultisigResult::VerificationFailed, - } -} - -/// Free a buffer allocated by `leanmultisig_aggregate`. 
-#[unsafe(no_mangle)] -pub unsafe extern "C" fn leanmultisig_bytes_free(data: *mut u8, len: usize) { - if !data.is_null() && len > 0 { - unsafe { - drop(Vec::from_raw_parts(data, len, len)); - } - } -} diff --git a/xmss/leanmultisig/leanmultisig.go b/xmss/leanmultisig/leanmultisig.go deleted file mode 100644 index 74b05e6..0000000 --- a/xmss/leanmultisig/leanmultisig.go +++ /dev/null @@ -1,189 +0,0 @@ -// Package leanmultisig provides Go bindings for devnet-2 recursive XMSS -// signature aggregation via the leanmultisig Rust FFI library. -package leanmultisig - -/* -#cgo CFLAGS: -I${SRCDIR}/../leanmultisig-ffi/include -#cgo LDFLAGS: ${SRCDIR}/../leanmultisig-ffi/target/release/deps/libleanmultisig_ffi.a -lm -ldl -lpthread -#include "leanmultisig_ffi.h" -*/ -import "C" -import ( - "fmt" - "runtime" - "sync" - "unsafe" -) - -// MessageHashLength is the fixed hash size accepted by leanMultisig APIs. -const MessageHashLength = 32 - -// Result codes matching the LeanmultisigResult C enum. -const ( - ResultOK = C.LEANMULTISIG_RESULT_OK - ResultNullPointer = C.LEANMULTISIG_RESULT_NULL_POINTER - ResultInvalidLength = C.LEANMULTISIG_RESULT_INVALID_LENGTH - ResultLengthMismatch = C.LEANMULTISIG_RESULT_LENGTH_MISMATCH - ResultDeserializationFailed = C.LEANMULTISIG_RESULT_DESERIALIZATION_FAILED - ResultAggregationFailed = C.LEANMULTISIG_RESULT_AGGREGATION_FAILED - ResultVerificationFailed = C.LEANMULTISIG_RESULT_VERIFICATION_FAILED -) - -var setupOnce sync.Once - -func setup() { - // The Rust FFI uses two separate Once guards that both call the same - // xmss_setup_aggregation_program() function. If both SetupProver and - // SetupVerifier are called, it can invoke the setup twice and crash. - // Guard it once on the Go side as well. - C.leanmultisig_setup_prover() -} - -// SetupProver initializes prover-side aggregation artifacts. It is idempotent. -func SetupProver() { - setupOnce.Do(setup) -} - -// SetupVerifier initializes verifier-side aggregation artifacts. 
It is idempotent. -func SetupVerifier() { - setupOnce.Do(setup) -} - -// Aggregate aggregates individual XMSS signatures into a devnet-2 proof blob. -func Aggregate(pubkeys, signatures [][]byte, messageHash [MessageHashLength]byte, epoch uint32) ([]byte, error) { - if len(pubkeys) == 0 || len(signatures) == 0 { - return nil, fmt.Errorf("pubkeys and signatures must be non-empty") - } - if len(pubkeys) != len(signatures) { - return nil, fmt.Errorf("pubkeys/signatures length mismatch: %d/%d", len(pubkeys), len(signatures)) - } - - cPubkeys, numPubkeys, freePubkeys, err := makeCBytesViews(pubkeys) - if err != nil { - return nil, err - } - defer freePubkeys() - - cSignatures, numSignatures, freeSignatures, err := makeCBytesViews(signatures) - if err != nil { - return nil, err - } - defer freeSignatures() - - var outData *C.uint8_t - var outLen C.size_t - result := C.leanmultisig_aggregate( - cPubkeys, - numPubkeys, - cSignatures, - numSignatures, - (*C.uint8_t)(unsafe.Pointer(&messageHash[0])), - C.size_t(MessageHashLength), - C.uint32_t(epoch), - &outData, - &outLen, - ) - runtime.KeepAlive(messageHash) - if result != ResultOK { - return nil, resultError("leanmultisig_aggregate", result) - } - if outData == nil || outLen == 0 { - return nil, fmt.Errorf("leanmultisig_aggregate returned empty proof") - } - - proof := C.GoBytes(unsafe.Pointer(outData), C.int(outLen)) - C.leanmultisig_bytes_free(outData, outLen) - return proof, nil -} - -// VerifyAggregated verifies a devnet-2 aggregated proof against public keys. 
-func VerifyAggregated(pubkeys [][]byte, messageHash [MessageHashLength]byte, proofData []byte, epoch uint32) error { - if len(pubkeys) == 0 { - return fmt.Errorf("pubkeys must be non-empty") - } - if len(proofData) == 0 { - return fmt.Errorf("proof data must be non-empty") - } - - cPubkeys, numPubkeys, freePubkeys, err := makeCBytesViews(pubkeys) - if err != nil { - return err - } - defer freePubkeys() - - result := C.leanmultisig_verify_aggregated( - cPubkeys, - numPubkeys, - (*C.uint8_t)(unsafe.Pointer(&messageHash[0])), - C.size_t(MessageHashLength), - (*C.uint8_t)(unsafe.Pointer(&proofData[0])), - C.size_t(len(proofData)), - C.uint32_t(epoch), - ) - runtime.KeepAlive(messageHash) - runtime.KeepAlive(proofData) - if result != ResultOK { - return resultError("leanmultisig_verify_aggregated", result) - } - return nil -} - -func makeCBytesViews(chunks [][]byte) (*C.LeanmultisigBytes, C.size_t, func(), error) { - if len(chunks) == 0 { - return nil, 0, nil, fmt.Errorf("empty byte chunks") - } - - viewSize := unsafe.Sizeof(C.LeanmultisigBytes{}) - viewsPtr := C.malloc(C.size_t(len(chunks)) * C.size_t(viewSize)) - if viewsPtr == nil { - return nil, 0, nil, fmt.Errorf("allocate C LeanmultisigBytes array") - } - views := unsafe.Slice((*C.LeanmultisigBytes)(viewsPtr), len(chunks)) - - allocated := make([]unsafe.Pointer, len(chunks)) - for i := range chunks { - if len(chunks[i]) == 0 { - for j := 0; j < i; j++ { - C.free(allocated[j]) - } - C.free(viewsPtr) - return nil, 0, nil, fmt.Errorf("empty byte chunk at index %d", i) - } - data := C.CBytes(chunks[i]) - allocated[i] = data - views[i] = C.LeanmultisigBytes{ - data: (*C.uint8_t)(data), - len: C.size_t(len(chunks[i])), - } - } - - freeFn := func() { - for _, data := range allocated { - if data != nil { - C.free(data) - } - } - C.free(viewsPtr) - } - - return (*C.LeanmultisigBytes)(viewsPtr), C.size_t(len(chunks)), freeFn, nil -} - -func resultError(op string, result C.enum_LeanmultisigResult) error { - switch result { 
- case ResultNullPointer: - return fmt.Errorf("%s failed: null pointer", op) - case ResultInvalidLength: - return fmt.Errorf("%s failed: invalid length", op) - case ResultLengthMismatch: - return fmt.Errorf("%s failed: length mismatch", op) - case ResultDeserializationFailed: - return fmt.Errorf("%s failed: deserialization failed", op) - case ResultAggregationFailed: - return fmt.Errorf("%s failed: aggregation failed", op) - case ResultVerificationFailed: - return fmt.Errorf("%s failed: verification failed", op) - default: - return fmt.Errorf("%s failed with code %d", op, result) - } -} diff --git a/xmss/leanmultisig/leanmultisig_test.go b/xmss/leanmultisig/leanmultisig_test.go deleted file mode 100644 index 8144fc8..0000000 --- a/xmss/leanmultisig/leanmultisig_test.go +++ /dev/null @@ -1,156 +0,0 @@ -package leanmultisig_test - -import ( - "fmt" - "os" - "testing" - - "github.com/geanlabs/gean/xmss/leanmultisig" - "github.com/geanlabs/gean/xmss/leansig" -) - -const testActivationEpoch = 0 -const testNumActiveEpochs = 8 - -type multisigFixture struct { - pubkeys [][]byte - sigs [][]byte - message [leanmultisig.MessageHashLength]byte - epoch uint32 -} - -var sharedFixture multisigFixture - -func TestMain(m *testing.M) { - var err error - sharedFixture, err = createMultisigFixture() - if err != nil { - fmt.Fprintf(os.Stderr, "TestMain: create multisig fixture: %v\n", err) - os.Exit(1) - } - os.Exit(m.Run()) -} - -func createMultisigFixture() (multisigFixture, error) { - var out multisigFixture - - kp1, err := leansig.GenerateKeypair(101, testActivationEpoch, testNumActiveEpochs) - if err != nil { - return out, fmt.Errorf("generate keypair 1: %w", err) - } - defer kp1.Free() - - kp2, err := leansig.GenerateKeypair(202, testActivationEpoch, testNumActiveEpochs) - if err != nil { - return out, fmt.Errorf("generate keypair 2: %w", err) - } - defer kp2.Free() - - var message [leanmultisig.MessageHashLength]byte - for i := range message { - message[i] = byte(i + 1) - } - 
epoch := uint32(0) - - pk1, err := kp1.PublicKeyBytes() - if err != nil { - return out, fmt.Errorf("serialize pubkey 1: %w", err) - } - pk2, err := kp2.PublicKeyBytes() - if err != nil { - return out, fmt.Errorf("serialize pubkey 2: %w", err) - } - - sig1, err := kp1.Sign(epoch, message) - if err != nil { - return out, fmt.Errorf("sign with keypair 1: %w", err) - } - sig2, err := kp2.Sign(epoch, message) - if err != nil { - return out, fmt.Errorf("sign with keypair 2: %w", err) - } - - out = multisigFixture{ - pubkeys: [][]byte{pk1, pk2}, - sigs: [][]byte{sig1, sig2}, - message: message, - epoch: epoch, - } - return out, nil -} - -func TestAggregateAndVerifyRoundTrip(t *testing.T) { - fx := sharedFixture - - leanmultisig.SetupProver() - proof, err := leanmultisig.Aggregate(fx.pubkeys, fx.sigs, fx.message, fx.epoch) - if err != nil { - t.Fatalf("aggregate: %v", err) - } - if len(proof) == 0 { - t.Fatal("aggregate returned empty proof") - } - - leanmultisig.SetupVerifier() - if err := leanmultisig.VerifyAggregated(fx.pubkeys, fx.message, proof, fx.epoch); err != nil { - t.Fatalf("verify aggregated: %v", err) - } -} - -func TestVerifyRejectsWrongMessage(t *testing.T) { - fx := sharedFixture - - leanmultisig.SetupProver() - proof, err := leanmultisig.Aggregate(fx.pubkeys, fx.sigs, fx.message, fx.epoch) - if err != nil { - t.Fatalf("aggregate: %v", err) - } - - wrongMessage := fx.message - wrongMessage[0] ^= 0xFF - - leanmultisig.SetupVerifier() - if err := leanmultisig.VerifyAggregated(fx.pubkeys, wrongMessage, proof, fx.epoch); err == nil { - t.Fatal("expected verification failure with wrong message") - } -} - -func TestAggregateRejectsLengthMismatch(t *testing.T) { - fx := sharedFixture - - _, err := leanmultisig.Aggregate(fx.pubkeys[:1], fx.sigs, fx.message, fx.epoch) - if err == nil { - t.Fatal("expected aggregate to fail on pubkey/signature length mismatch") - } -} - -func TestAggregateAndVerify400Signatures(t *testing.T) { - if os.Getenv("GEAN_RUN_400_SIG_TEST") 
!= "1" { - t.Skip("set GEAN_RUN_400_SIG_TEST=1 to run 400-signature aggregation test") - } - - const totalSignatures = 400 - base := sharedFixture - - pubkeys := make([][]byte, totalSignatures) - sigs := make([][]byte, totalSignatures) - for i := 0; i < totalSignatures; i++ { - src := i % len(base.pubkeys) - pubkeys[i] = base.pubkeys[src] - sigs[i] = base.sigs[src] - } - - leanmultisig.SetupProver() - proof, err := leanmultisig.Aggregate(pubkeys, sigs, base.message, base.epoch) - if err != nil { - t.Fatalf("aggregate %d signatures: %v", totalSignatures, err) - } - if len(proof) == 0 { - t.Fatalf("aggregate %d signatures returned empty proof", totalSignatures) - } - - leanmultisig.SetupVerifier() - if err := leanmultisig.VerifyAggregated(pubkeys, base.message, proof, base.epoch); err != nil { - t.Fatalf("verify %d signatures aggregated proof: %v", totalSignatures, err) - } -} diff --git a/xmss/leansig-ffi/.gitignore b/xmss/leansig-ffi/.gitignore deleted file mode 100644 index a6f89c2..0000000 --- a/xmss/leansig-ffi/.gitignore +++ /dev/null @@ -1 +0,0 @@ -/target/ \ No newline at end of file diff --git a/xmss/leansig-ffi/Cargo.lock b/xmss/leansig-ffi/Cargo.lock deleted file mode 100644 index 4d54a47..0000000 --- a/xmss/leansig-ffi/Cargo.lock +++ /dev/null @@ -1,2356 +0,0 @@ -# This file is automatically @generated by Cargo. -# It is not intended for manual editing. 
-version = 4 - -[[package]] -name = "alloy-primitives" -version = "1.5.6" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "66b1483f8c2562bf35f0270b697d5b5fe8170464e935bd855a4c5eaf6f89b354" -dependencies = [ - "alloy-rlp", - "bytes", - "cfg-if", - "const-hex", - "derive_more", - "foldhash 0.2.0", - "hashbrown 0.16.1", - "indexmap", - "itoa", - "k256", - "keccak-asm", - "paste", - "proptest", - "rand 0.9.2", - "rapidhash", - "ruint", - "rustc-hash", - "serde", - "sha3", -] - -[[package]] -name = "alloy-rlp" -version = "0.3.13" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "e93e50f64a77ad9c5470bf2ad0ca02f228da70c792a8f06634801e202579f35e" -dependencies = [ - "arrayvec", - "bytes", -] - -[[package]] -name = "anstream" -version = "0.6.21" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "43d5b281e737544384e969a5ccad3f1cdd24b48086a0fc1b2a5262a26b8f4f4a" -dependencies = [ - "anstyle", - "anstyle-parse", - "anstyle-query", - "anstyle-wincon", - "colorchoice", - "is_terminal_polyfill", - "utf8parse", -] - -[[package]] -name = "anstyle" -version = "1.0.13" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "5192cca8006f1fd4f7237516f40fa183bb07f8fbdfedaa0036de5ea9b0b45e78" - -[[package]] -name = "anstyle-parse" -version = "0.2.7" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "4e7644824f0aa2c7b9384579234ef10eb7efb6a0deb83f9630a49594dd9c15c2" -dependencies = [ - "utf8parse", -] - -[[package]] -name = "anstyle-query" -version = "1.1.5" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "40c48f72fd53cd289104fc64099abca73db4166ad86ea0b4341abe65af83dadc" -dependencies = [ - "windows-sys", -] - -[[package]] -name = "anstyle-wincon" -version = "3.0.11" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "291e6a250ff86cd4a820112fb8898808a366d8f9f58ce16d1f538353ad55747d" -dependencies = 
[ - "anstyle", - "once_cell_polyfill", - "windows-sys", -] - -[[package]] -name = "anyhow" -version = "1.0.101" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "5f0e0fee31ef5ed1ba1316088939cea399010ed7731dba877ed44aeb407a75ea" - -[[package]] -name = "ark-ff" -version = "0.3.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "6b3235cc41ee7a12aaaf2c575a2ad7b46713a8a50bda2fc3b003a04845c05dd6" -dependencies = [ - "ark-ff-asm 0.3.0", - "ark-ff-macros 0.3.0", - "ark-serialize 0.3.0", - "ark-std 0.3.0", - "derivative", - "num-bigint", - "num-traits", - "paste", - "rustc_version 0.3.3", - "zeroize", -] - -[[package]] -name = "ark-ff" -version = "0.4.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ec847af850f44ad29048935519032c33da8aa03340876d351dfab5660d2966ba" -dependencies = [ - "ark-ff-asm 0.4.2", - "ark-ff-macros 0.4.2", - "ark-serialize 0.4.2", - "ark-std 0.4.0", - "derivative", - "digest 0.10.7", - "itertools 0.10.5", - "num-bigint", - "num-traits", - "paste", - "rustc_version 0.4.1", - "zeroize", -] - -[[package]] -name = "ark-ff" -version = "0.5.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a177aba0ed1e0fbb62aa9f6d0502e9b46dad8c2eab04c14258a1212d2557ea70" -dependencies = [ - "ark-ff-asm 0.5.0", - "ark-ff-macros 0.5.0", - "ark-serialize 0.5.0", - "ark-std 0.5.0", - "arrayvec", - "digest 0.10.7", - "educe", - "itertools 0.13.0", - "num-bigint", - "num-traits", - "paste", - "zeroize", -] - -[[package]] -name = "ark-ff-asm" -version = "0.3.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "db02d390bf6643fb404d3d22d31aee1c4bc4459600aef9113833d17e786c6e44" -dependencies = [ - "quote", - "syn 1.0.109", -] - -[[package]] -name = "ark-ff-asm" -version = "0.4.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "3ed4aa4fe255d0bc6d79373f7e31d2ea147bcf486cba1be5ba7ea85abdb92348" 
-dependencies = [ - "quote", - "syn 1.0.109", -] - -[[package]] -name = "ark-ff-asm" -version = "0.5.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "62945a2f7e6de02a31fe400aa489f0e0f5b2502e69f95f853adb82a96c7a6b60" -dependencies = [ - "quote", - "syn 2.0.116", -] - -[[package]] -name = "ark-ff-macros" -version = "0.3.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "db2fd794a08ccb318058009eefdf15bcaaaaf6f8161eb3345f907222bac38b20" -dependencies = [ - "num-bigint", - "num-traits", - "quote", - "syn 1.0.109", -] - -[[package]] -name = "ark-ff-macros" -version = "0.4.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "7abe79b0e4288889c4574159ab790824d0033b9fdcb2a112a3182fac2e514565" -dependencies = [ - "num-bigint", - "num-traits", - "proc-macro2", - "quote", - "syn 1.0.109", -] - -[[package]] -name = "ark-ff-macros" -version = "0.5.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "09be120733ee33f7693ceaa202ca41accd5653b779563608f1234f78ae07c4b3" -dependencies = [ - "num-bigint", - "num-traits", - "proc-macro2", - "quote", - "syn 2.0.116", -] - -[[package]] -name = "ark-serialize" -version = "0.3.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "1d6c2b318ee6e10f8c2853e73a83adc0ccb88995aa978d8a3408d492ab2ee671" -dependencies = [ - "ark-std 0.3.0", - "digest 0.9.0", -] - -[[package]] -name = "ark-serialize" -version = "0.4.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "adb7b85a02b83d2f22f89bd5cac66c9c89474240cb6207cb1efc16d098e822a5" -dependencies = [ - "ark-std 0.4.0", - "digest 0.10.7", - "num-bigint", -] - -[[package]] -name = "ark-serialize" -version = "0.5.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "3f4d068aaf107ebcd7dfb52bc748f8030e0fc930ac8e360146ca54c1203088f7" -dependencies = [ - "ark-std 0.5.0", - "arrayvec", - "digest 0.10.7", - 
"num-bigint", -] - -[[package]] -name = "ark-std" -version = "0.3.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "1df2c09229cbc5a028b1d70e00fdb2acee28b1055dfb5ca73eea49c5a25c4e7c" -dependencies = [ - "num-traits", - "rand 0.8.5", -] - -[[package]] -name = "ark-std" -version = "0.4.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "94893f1e0c6eeab764ade8dc4c0db24caf4fe7cbbaafc0eba0a9030f447b5185" -dependencies = [ - "num-traits", - "rand 0.8.5", -] - -[[package]] -name = "ark-std" -version = "0.5.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "246a225cc6131e9ee4f24619af0f19d67761fff15d7ccc22e42b80846e69449a" -dependencies = [ - "num-traits", - "rand 0.8.5", -] - -[[package]] -name = "arrayvec" -version = "0.7.6" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "7c02d123df017efcdfbd739ef81735b36c5ba83ec3c59c80a9d7ecc718f92e50" - -[[package]] -name = "auto_impl" -version = "1.3.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ffdcb70bdbc4d478427380519163274ac86e52916e10f0a8889adf0f96d3fee7" -dependencies = [ - "proc-macro2", - "quote", - "syn 2.0.116", -] - -[[package]] -name = "autocfg" -version = "1.5.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "c08606f8c3cbf4ce6ec8e28fb0014a2c086708fe954eaa885384a6165172e7e8" - -[[package]] -name = "base16ct" -version = "0.2.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "4c7f02d4ea65f2c1853089ffd8d2787bdbc63de2f0d29dedbcf8ccdfa0ccd4cf" - -[[package]] -name = "base64ct" -version = "1.8.3" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "2af50177e190e07a26ab74f8b1efbfe2ef87da2116221318cb1c2e82baf7de06" - -[[package]] -name = "bit-set" -version = "0.8.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = 
"08807e080ed7f9d5433fa9b275196cfc35414f66a0c79d864dc51a0d825231a3" -dependencies = [ - "bit-vec", -] - -[[package]] -name = "bit-vec" -version = "0.8.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "5e764a1d40d510daf35e07be9eb06e75770908c27d411ee6c92109c9840eaaf7" - -[[package]] -name = "bitflags" -version = "2.11.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "843867be96c8daad0d758b57df9392b6d8d271134fce549de6ce169ff98a92af" - -[[package]] -name = "bitvec" -version = "1.0.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "1bc2832c24239b0141d5674bb9174f9d68a8b5b3f2753311927c172ca46f7e9c" -dependencies = [ - "funty", - "radium", - "tap", - "wyz", -] - -[[package]] -name = "block-buffer" -version = "0.10.4" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "3078c7629b62d3f0439517fa394996acacc5cbc91c5a20d8c658e77abd503a71" -dependencies = [ - "generic-array", -] - -[[package]] -name = "byte-slice-cast" -version = "1.2.3" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "7575182f7272186991736b70173b0ea045398f984bf5ebbb3804736ce1330c9d" - -[[package]] -name = "byteorder" -version = "1.5.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "1fd0f2584146f6f2ef48085050886acf353beff7305ebd1ae69500e27c67f64b" - -[[package]] -name = "bytes" -version = "1.11.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "1e748733b7cbc798e1434b6ac524f0c1ff2ab456fe201501e6497c8417a4fc33" -dependencies = [ - "serde", -] - -[[package]] -name = "cbindgen" -version = "0.29.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "befbfd072a8e81c02f8c507aefce431fe5e7d051f83d48a23ffc9b9fe5a11799" -dependencies = [ - "clap", - "heck", - "indexmap", - "log", - "proc-macro2", - "quote", - "serde", - "serde_json", - "syn 2.0.116", - "tempfile", - "toml", -] - 
-[[package]] -name = "cc" -version = "1.2.56" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "aebf35691d1bfb0ac386a69bac2fde4dd276fb618cf8bf4f5318fe285e821bb2" -dependencies = [ - "find-msvc-tools", - "shlex", -] - -[[package]] -name = "cfg-if" -version = "1.0.4" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "9330f8b2ff13f34540b44e946ef35111825727b38d33286ef986142615121801" - -[[package]] -name = "clap" -version = "4.5.59" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "c5caf74d17c3aec5495110c34cc3f78644bfa89af6c8993ed4de2790e49b6499" -dependencies = [ - "clap_builder", -] - -[[package]] -name = "clap_builder" -version = "4.5.59" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "370daa45065b80218950227371916a1633217ae42b2715b2287b606dcd618e24" -dependencies = [ - "anstream", - "anstyle", - "clap_lex", - "strsim", -] - -[[package]] -name = "clap_lex" -version = "1.0.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "3a822ea5bc7590f9d40f1ba12c0dc3c2760f3482c6984db1573ad11031420831" - -[[package]] -name = "colorchoice" -version = "1.0.4" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "b05b61dc5112cbb17e4b6cd61790d9845d13888356391624cbe7e41efeac1e75" - -[[package]] -name = "const-hex" -version = "1.17.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "3bb320cac8a0750d7f25280aa97b09c26edfe161164238ecbbb31092b079e735" -dependencies = [ - "cfg-if", - "cpufeatures", - "proptest", - "serde_core", -] - -[[package]] -name = "const-oid" -version = "0.9.6" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "c2459377285ad874054d797f3ccebf984978aa39129f6eafde5cdc8315b612f8" - -[[package]] -name = "const_format" -version = "0.2.35" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = 
"7faa7469a93a566e9ccc1c73fe783b4a65c274c5ace346038dca9c39fe0030ad" -dependencies = [ - "const_format_proc_macros", -] - -[[package]] -name = "const_format_proc_macros" -version = "0.2.34" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "1d57c2eccfb16dbac1f4e61e206105db5820c9d26c3c472bc17c774259ef7744" -dependencies = [ - "proc-macro2", - "quote", - "unicode-xid", -] - -[[package]] -name = "convert_case" -version = "0.10.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "633458d4ef8c78b72454de2d54fd6ab2e60f9e02be22f3c6104cdc8a4e0fceb9" -dependencies = [ - "unicode-segmentation", -] - -[[package]] -name = "cpufeatures" -version = "0.2.17" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "59ed5838eebb26a2bb2e58f6d5b5316989ae9d08bab10e0e6d103e656d1b0280" -dependencies = [ - "libc", -] - -[[package]] -name = "crossbeam-deque" -version = "0.8.6" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "9dd111b7b7f7d55b72c0a6ae361660ee5853c9af73f70c3c2ef6858b950e2e51" -dependencies = [ - "crossbeam-epoch", - "crossbeam-utils", -] - -[[package]] -name = "crossbeam-epoch" -version = "0.9.18" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "5b82ac4a3c2ca9c3460964f020e1402edd5753411d7737aa39c3714ad1b5420e" -dependencies = [ - "crossbeam-utils", -] - -[[package]] -name = "crossbeam-utils" -version = "0.8.21" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "d0a5c400df2834b80a4c3327b3aad3a4c4cd4de0629063962b03235697506a28" - -[[package]] -name = "crunchy" -version = "0.2.4" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "460fbee9c2c2f33933d720630a6a0bac33ba7053db5344fac858d4b8952d77d5" - -[[package]] -name = "crypto-bigint" -version = "0.5.5" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = 
"0dc92fb57ca44df6db8059111ab3af99a63d5d0f8375d9972e319a379c6bab76" -dependencies = [ - "generic-array", - "rand_core 0.6.4", - "subtle", - "zeroize", -] - -[[package]] -name = "crypto-common" -version = "0.1.7" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "78c8292055d1c1df0cce5d180393dc8cce0abec0a7102adb6c7b1eef6016d60a" -dependencies = [ - "generic-array", - "typenum", -] - -[[package]] -name = "dashmap" -version = "6.1.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "5041cc499144891f3790297212f32a74fb938e5136a14943f338ef9e0ae276cf" -dependencies = [ - "cfg-if", - "crossbeam-utils", - "hashbrown 0.14.5", - "lock_api", - "once_cell", - "parking_lot_core", -] - -[[package]] -name = "der" -version = "0.7.10" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "e7c1832837b905bbfb5101e07cc24c8deddf52f93225eee6ead5f4d63d53ddcb" -dependencies = [ - "const-oid", - "zeroize", -] - -[[package]] -name = "derivative" -version = "2.2.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "fcc3dd5e9e9c0b295d6e1e4d811fb6f157d5ffd784b8d202fc62eac8035a770b" -dependencies = [ - "proc-macro2", - "quote", - "syn 1.0.109", -] - -[[package]] -name = "derive_more" -version = "2.1.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "d751e9e49156b02b44f9c1815bcb94b984cdcc4396ecc32521c739452808b134" -dependencies = [ - "derive_more-impl", -] - -[[package]] -name = "derive_more-impl" -version = "2.1.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "799a97264921d8623a957f6c3b9011f3b5492f557bbb7a5a19b7fa6d06ba8dcb" -dependencies = [ - "convert_case", - "proc-macro2", - "quote", - "rustc_version 0.4.1", - "syn 2.0.116", - "unicode-xid", -] - -[[package]] -name = "digest" -version = "0.9.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = 
"d3dd60d1080a57a05ab032377049e0591415d2b31afd7028356dbf3cc6dcb066" -dependencies = [ - "generic-array", -] - -[[package]] -name = "digest" -version = "0.10.7" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "9ed9a281f7bc9b7576e61468ba615a66a5c8cfdff42420a70aa82701a3b1e292" -dependencies = [ - "block-buffer", - "const-oid", - "crypto-common", - "subtle", -] - -[[package]] -name = "ecdsa" -version = "0.16.9" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ee27f32b5c5292967d2d4a9d7f1e0b0aed2c15daded5a60300e4abb9d8020bca" -dependencies = [ - "der", - "digest 0.10.7", - "elliptic-curve", - "rfc6979", - "signature", - "spki", -] - -[[package]] -name = "educe" -version = "0.6.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "1d7bc049e1bd8cdeb31b68bbd586a9464ecf9f3944af3958a7a9d0f8b9799417" -dependencies = [ - "enum-ordinalize", - "proc-macro2", - "quote", - "syn 2.0.116", -] - -[[package]] -name = "either" -version = "1.15.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "48c757948c5ede0e46177b7add2e67155f70e33c07fea8284df6576da70b3719" - -[[package]] -name = "elliptic-curve" -version = "0.13.8" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "b5e6043086bf7973472e0c7dff2142ea0b680d30e18d9cc40f267efbf222bd47" -dependencies = [ - "base16ct", - "crypto-bigint", - "digest 0.10.7", - "ff", - "generic-array", - "group", - "pkcs8", - "rand_core 0.6.4", - "sec1", - "subtle", - "zeroize", -] - -[[package]] -name = "enum-ordinalize" -version = "4.3.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "4a1091a7bb1f8f2c4b28f1fe2cef4980ca2d410a3d727d67ecc3178c9b0800f0" -dependencies = [ - "enum-ordinalize-derive", -] - -[[package]] -name = "enum-ordinalize-derive" -version = "4.3.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = 
"8ca9601fb2d62598ee17836250842873a413586e5d7ed88b356e38ddbb0ec631" -dependencies = [ - "proc-macro2", - "quote", - "syn 2.0.116", -] - -[[package]] -name = "equivalent" -version = "1.0.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "877a4ace8713b0bcf2a4e7eec82529c029f1d0619886d18145fea96c3ffe5c0f" - -[[package]] -name = "errno" -version = "0.3.14" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "39cab71617ae0d63f51a36d69f866391735b51691dbda63cf6f96d042b63efeb" -dependencies = [ - "libc", - "windows-sys", -] - -[[package]] -name = "ethereum_serde_utils" -version = "0.8.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "3dc1355dbb41fbbd34ec28d4fb2a57d9a70c67ac3c19f6a5ca4d4a176b9e997a" -dependencies = [ - "alloy-primitives", - "hex", - "serde", - "serde_derive", - "serde_json", -] - -[[package]] -name = "ethereum_ssz" -version = "0.10.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "2128a84f7a3850d54ee343334e3392cca61f9f6aa9441eec481b9394b43c238b" -dependencies = [ - "alloy-primitives", - "ethereum_serde_utils", - "itertools 0.14.0", - "serde", - "serde_derive", - "smallvec", - "typenum", -] - -[[package]] -name = "fastrand" -version = "2.3.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "37909eebbb50d72f9059c3b6d82c0463f2ff062c9e95845c43a6c9c0355411be" - -[[package]] -name = "fastrlp" -version = "0.3.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "139834ddba373bbdd213dffe02c8d110508dcf1726c2be27e8d1f7d7e1856418" -dependencies = [ - "arrayvec", - "auto_impl", - "bytes", -] - -[[package]] -name = "fastrlp" -version = "0.4.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ce8dba4714ef14b8274c371879b175aa55b16b30f269663f19d576f380018dc4" -dependencies = [ - "arrayvec", - "auto_impl", - "bytes", -] - -[[package]] -name = "ff" -version = "0.13.1" 
-source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "c0b50bfb653653f9ca9095b427bed08ab8d75a137839d9ad64eb11810d5b6393" -dependencies = [ - "rand_core 0.6.4", - "subtle", -] - -[[package]] -name = "find-msvc-tools" -version = "0.1.9" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "5baebc0774151f905a1a2cc41989300b1e6fbb29aff0ceffa1064fdd3088d582" - -[[package]] -name = "fixed-hash" -version = "0.8.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "835c052cb0c08c1acf6ffd71c022172e18723949c8282f2b9f27efbc51e64534" -dependencies = [ - "byteorder", - "rand 0.8.5", - "rustc-hex", - "static_assertions", -] - -[[package]] -name = "fnv" -version = "1.0.7" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "3f9eec918d3f24069decb9af1554cad7c880e2da24a9afd88aca000531ab82c1" - -[[package]] -name = "foldhash" -version = "0.1.5" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "d9c4f5dac5e15c24eb999c26181a6ca40b39fe946cbe4c263c7209467bc83af2" - -[[package]] -name = "foldhash" -version = "0.2.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "77ce24cb58228fbb8aa041425bb1050850ac19177686ea6e0f41a70416f56fdb" - -[[package]] -name = "funty" -version = "2.0.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "e6d5a32815ae3f33302d95fdcb2ce17862f8c65363dcfd29360480ba1001fc9c" - -[[package]] -name = "generic-array" -version = "0.14.7" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "85649ca51fd72272d7821adaf274ad91c288277713d9c18820d8499a7ff69e9a" -dependencies = [ - "typenum", - "version_check", - "zeroize", -] - -[[package]] -name = "getrandom" -version = "0.2.17" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ff2abc00be7fca6ebc474524697ae276ad847ad0a6b3faa4bcb027e9a4614ad0" -dependencies = [ - "cfg-if", - "libc", - "wasi", 
-] - -[[package]] -name = "getrandom" -version = "0.3.4" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "899def5c37c4fd7b2664648c28120ecec138e4d395b459e5ca34f9cce2dd77fd" -dependencies = [ - "cfg-if", - "libc", - "r-efi", - "wasip2", -] - -[[package]] -name = "getrandom" -version = "0.4.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "139ef39800118c7683f2fd3c98c1b23c09ae076556b435f8e9064ae108aaeeec" -dependencies = [ - "cfg-if", - "libc", - "r-efi", - "wasip2", - "wasip3", -] - -[[package]] -name = "group" -version = "0.13.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "f0f9ef7462f7c099f518d754361858f86d8a07af53ba9af0fe635bbccb151a63" -dependencies = [ - "ff", - "rand_core 0.6.4", - "subtle", -] - -[[package]] -name = "hashbrown" -version = "0.14.5" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "e5274423e17b7c9fc20b6e7e208532f9b19825d82dfd615708b70edd83df41f1" - -[[package]] -name = "hashbrown" -version = "0.15.5" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "9229cfe53dfd69f0609a49f65461bd93001ea1ef889cd5529dd176593f5338a1" -dependencies = [ - "foldhash 0.1.5", -] - -[[package]] -name = "hashbrown" -version = "0.16.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "841d1cc9bed7f9236f321df977030373f4a4163ae1a7dbfe1a51a2c1a51d9100" -dependencies = [ - "foldhash 0.2.0", - "serde", - "serde_core", -] - -[[package]] -name = "heck" -version = "0.5.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "2304e00983f87ffb38b55b444b5e3b60a884b5d30c0fca7d82fe33449bbe55ea" - -[[package]] -name = "hex" -version = "0.4.3" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "7f24254aa9a54b5c858eaee2f5bccdb46aaf0e486a595ed5fd8f86ba55232a70" - -[[package]] -name = "hmac" -version = "0.12.1" -source = 
"registry+https://github.com/rust-lang/crates.io-index" -checksum = "6c49c37c09c17a53d937dfbb742eb3a961d65a994e6bcdcf37e7399d0cc8ab5e" -dependencies = [ - "digest 0.10.7", -] - -[[package]] -name = "id-arena" -version = "2.3.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "3d3067d79b975e8844ca9eb072e16b31c3c1c36928edf9c6789548c524d0d954" - -[[package]] -name = "impl-codec" -version = "0.6.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ba6a270039626615617f3f36d15fc827041df3b78c439da2cadfa47455a77f2f" -dependencies = [ - "parity-scale-codec", -] - -[[package]] -name = "impl-trait-for-tuples" -version = "0.2.3" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a0eb5a3343abf848c0984fe4604b2b105da9539376e24fc0a3b0007411ae4fd9" -dependencies = [ - "proc-macro2", - "quote", - "syn 2.0.116", -] - -[[package]] -name = "indexmap" -version = "2.13.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "7714e70437a7dc3ac8eb7e6f8df75fd8eb422675fc7678aff7364301092b1017" -dependencies = [ - "equivalent", - "hashbrown 0.16.1", - "serde", - "serde_core", -] - -[[package]] -name = "is_terminal_polyfill" -version = "1.70.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a6cb138bb79a146c1bd460005623e142ef0181e3d0219cb493e02f7d08a35695" - -[[package]] -name = "itertools" -version = "0.10.5" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "b0fd2260e829bddf4cb6ea802289de2f86d6a7a690192fbe91b3f46e0f2c8473" -dependencies = [ - "either", -] - -[[package]] -name = "itertools" -version = "0.13.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "413ee7dfc52ee1a4949ceeb7dbc8a33f2d6c088194d9f922fb8318faf1f01186" -dependencies = [ - "either", -] - -[[package]] -name = "itertools" -version = "0.14.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = 
"2b192c782037fadd9cfa75548310488aabdbf3d2da73885b31bd0abd03351285" -dependencies = [ - "either", -] - -[[package]] -name = "itoa" -version = "1.0.17" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "92ecc6618181def0457392ccd0ee51198e065e016d1d527a7ac1b6dc7c1f09d2" - -[[package]] -name = "k256" -version = "0.13.4" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "f6e3919bbaa2945715f0bb6d3934a173d1e9a59ac23767fbaaef277265a7411b" -dependencies = [ - "cfg-if", - "ecdsa", - "elliptic-curve", - "once_cell", - "sha2", -] - -[[package]] -name = "keccak" -version = "0.1.6" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "cb26cec98cce3a3d96cbb7bced3c4b16e3d13f27ec56dbd62cbc8f39cfb9d653" -dependencies = [ - "cpufeatures", -] - -[[package]] -name = "keccak-asm" -version = "0.1.5" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "b646a74e746cd25045aa0fd42f4f7f78aa6d119380182c7e63a5593c4ab8df6f" -dependencies = [ - "digest 0.10.7", - "sha3-asm", -] - -[[package]] -name = "leansig" -version = "0.1.0" -source = "git+https://github.com/leanEthereum/leanSig?rev=73bedc26ed961b110df7ac2e234dc11361a4bf25#73bedc26ed961b110df7ac2e234dc11361a4bf25" -dependencies = [ - "dashmap", - "ethereum_ssz", - "num-bigint", - "num-traits", - "p3-baby-bear", - "p3-field", - "p3-koala-bear", - "p3-symmetric", - "rand 0.9.2", - "rayon", - "serde", - "sha3", - "thiserror", -] - -[[package]] -name = "leansig-ffi" -version = "0.1.0" -dependencies = [ - "cbindgen", - "ethereum_ssz", - "leansig", - "rand 0.9.2", -] - -[[package]] -name = "leb128fmt" -version = "0.1.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "09edd9e8b54e49e587e4f6295a7d29c3ea94d469cb40ab8ca70b288248a81db2" - -[[package]] -name = "libc" -version = "0.2.182" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = 
"6800badb6cb2082ffd7b6a67e6125bb39f18782f793520caee8cb8846be06112" - -[[package]] -name = "libm" -version = "0.2.16" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "b6d2cec3eae94f9f509c767b45932f1ada8350c4bdb85af2fcab4a3c14807981" - -[[package]] -name = "linux-raw-sys" -version = "0.11.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "df1d3c3b53da64cf5760482273a98e575c651a67eec7f77df96b5b642de8f039" - -[[package]] -name = "lock_api" -version = "0.4.14" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "224399e74b87b5f3557511d98dff8b14089b3dadafcab6bb93eab67d3aace965" -dependencies = [ - "scopeguard", -] - -[[package]] -name = "log" -version = "0.4.29" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "5e5032e24019045c762d3c0f28f5b6b8bbf38563a65908389bf7978758920897" - -[[package]] -name = "memchr" -version = "2.8.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "f8ca58f447f06ed17d5fc4043ce1b10dd205e060fb3ce5b979b8ed8e59ff3f79" - -[[package]] -name = "num-bigint" -version = "0.4.6" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a5e44f723f1133c9deac646763579fdb3ac745e418f2a7af9cd0c431da1f20b9" -dependencies = [ - "num-integer", - "num-traits", -] - -[[package]] -name = "num-integer" -version = "0.1.46" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "7969661fd2958a5cb096e56c8e1ad0444ac2bbcd0061bd28660485a44879858f" -dependencies = [ - "num-traits", -] - -[[package]] -name = "num-traits" -version = "0.2.19" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "071dfc062690e90b734c0b2273ce72ad0ffa95f0c74596bc250dcfd960262841" -dependencies = [ - "autocfg", - "libm", -] - -[[package]] -name = "once_cell" -version = "1.21.3" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = 
"42f5e15c9953c5e4ccceeb2e7382a716482c34515315f7b03532b8b4e8393d2d" - -[[package]] -name = "once_cell_polyfill" -version = "1.70.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "384b8ab6d37215f3c5301a95a4accb5d64aa607f1fcb26a11b5303878451b4fe" - -[[package]] -name = "p3-baby-bear" -version = "0.4.1" -source = "git+https://github.com/Plonky3/Plonky3.git?rev=d421e32#d421e32d3821174ae1f7e528d4bb92b7b18ab295" -dependencies = [ - "p3-challenger", - "p3-field", - "p3-mds", - "p3-monty-31", - "p3-poseidon2", - "p3-symmetric", - "rand 0.9.2", -] - -[[package]] -name = "p3-challenger" -version = "0.4.1" -source = "git+https://github.com/Plonky3/Plonky3.git?rev=d421e32#d421e32d3821174ae1f7e528d4bb92b7b18ab295" -dependencies = [ - "p3-field", - "p3-maybe-rayon", - "p3-monty-31", - "p3-symmetric", - "p3-util", - "tracing", -] - -[[package]] -name = "p3-dft" -version = "0.4.1" -source = "git+https://github.com/Plonky3/Plonky3.git?rev=d421e32#d421e32d3821174ae1f7e528d4bb92b7b18ab295" -dependencies = [ - "itertools 0.14.0", - "p3-field", - "p3-matrix", - "p3-maybe-rayon", - "p3-util", - "spin", - "tracing", -] - -[[package]] -name = "p3-field" -version = "0.4.1" -source = "git+https://github.com/Plonky3/Plonky3.git?rev=d421e32#d421e32d3821174ae1f7e528d4bb92b7b18ab295" -dependencies = [ - "itertools 0.14.0", - "num-bigint", - "p3-maybe-rayon", - "p3-util", - "paste", - "rand 0.9.2", - "serde", - "tracing", -] - -[[package]] -name = "p3-koala-bear" -version = "0.4.1" -source = "git+https://github.com/Plonky3/Plonky3.git?rev=d421e32#d421e32d3821174ae1f7e528d4bb92b7b18ab295" -dependencies = [ - "p3-challenger", - "p3-field", - "p3-monty-31", - "p3-poseidon2", - "p3-symmetric", - "rand 0.9.2", -] - -[[package]] -name = "p3-matrix" -version = "0.4.1" -source = "git+https://github.com/Plonky3/Plonky3.git?rev=d421e32#d421e32d3821174ae1f7e528d4bb92b7b18ab295" -dependencies = [ - "itertools 0.14.0", - "p3-field", - "p3-maybe-rayon", - "p3-util", - "rand 
0.9.2", - "serde", - "tracing", - "transpose", -] - -[[package]] -name = "p3-maybe-rayon" -version = "0.4.1" -source = "git+https://github.com/Plonky3/Plonky3.git?rev=d421e32#d421e32d3821174ae1f7e528d4bb92b7b18ab295" - -[[package]] -name = "p3-mds" -version = "0.4.1" -source = "git+https://github.com/Plonky3/Plonky3.git?rev=d421e32#d421e32d3821174ae1f7e528d4bb92b7b18ab295" -dependencies = [ - "p3-dft", - "p3-field", - "p3-symmetric", - "p3-util", - "rand 0.9.2", -] - -[[package]] -name = "p3-monty-31" -version = "0.4.1" -source = "git+https://github.com/Plonky3/Plonky3.git?rev=d421e32#d421e32d3821174ae1f7e528d4bb92b7b18ab295" -dependencies = [ - "itertools 0.14.0", - "num-bigint", - "p3-dft", - "p3-field", - "p3-matrix", - "p3-maybe-rayon", - "p3-mds", - "p3-poseidon2", - "p3-symmetric", - "p3-util", - "paste", - "rand 0.9.2", - "serde", - "spin", - "tracing", - "transpose", -] - -[[package]] -name = "p3-poseidon2" -version = "0.4.1" -source = "git+https://github.com/Plonky3/Plonky3.git?rev=d421e32#d421e32d3821174ae1f7e528d4bb92b7b18ab295" -dependencies = [ - "p3-field", - "p3-mds", - "p3-symmetric", - "p3-util", - "rand 0.9.2", -] - -[[package]] -name = "p3-symmetric" -version = "0.4.1" -source = "git+https://github.com/Plonky3/Plonky3.git?rev=d421e32#d421e32d3821174ae1f7e528d4bb92b7b18ab295" -dependencies = [ - "itertools 0.14.0", - "p3-field", - "serde", -] - -[[package]] -name = "p3-util" -version = "0.4.1" -source = "git+https://github.com/Plonky3/Plonky3.git?rev=d421e32#d421e32d3821174ae1f7e528d4bb92b7b18ab295" -dependencies = [ - "serde", -] - -[[package]] -name = "parity-scale-codec" -version = "3.7.5" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "799781ae679d79a948e13d4824a40970bfa500058d245760dd857301059810fa" -dependencies = [ - "arrayvec", - "bitvec", - "byte-slice-cast", - "const_format", - "impl-trait-for-tuples", - "parity-scale-codec-derive", - "rustversion", - "serde", -] - -[[package]] -name = 
"parity-scale-codec-derive" -version = "3.7.5" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "34b4653168b563151153c9e4c08ebed57fb8262bebfa79711552fa983c623e7a" -dependencies = [ - "proc-macro-crate", - "proc-macro2", - "quote", - "syn 2.0.116", -] - -[[package]] -name = "parking_lot_core" -version = "0.9.12" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "2621685985a2ebf1c516881c026032ac7deafcda1a2c9b7850dc81e3dfcb64c1" -dependencies = [ - "cfg-if", - "libc", - "redox_syscall", - "smallvec", - "windows-link", -] - -[[package]] -name = "paste" -version = "1.0.15" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "57c0d7b74b563b49d38dae00a0c37d4d6de9b432382b2892f0574ddcae73fd0a" - -[[package]] -name = "pest" -version = "2.8.6" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "e0848c601009d37dfa3430c4666e147e49cdcf1b92ecd3e63657d8a5f19da662" -dependencies = [ - "memchr", - "ucd-trie", -] - -[[package]] -name = "pin-project-lite" -version = "0.2.16" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "3b3cff922bd51709b605d9ead9aa71031d81447142d828eb4a6eba76fe619f9b" - -[[package]] -name = "pkcs8" -version = "0.10.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "f950b2377845cebe5cf8b5165cb3cc1a5e0fa5cfa3e1f7f55707d8fd82e0a7b7" -dependencies = [ - "der", - "spki", -] - -[[package]] -name = "ppv-lite86" -version = "0.2.21" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "85eae3c4ed2f50dcfe72643da4befc30deadb458a9b590d720cde2f2b1e97da9" -dependencies = [ - "zerocopy", -] - -[[package]] -name = "prettyplease" -version = "0.2.37" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "479ca8adacdd7ce8f1fb39ce9ecccbfe93a3f1344b3d0d97f20bc0196208f62b" -dependencies = [ - "proc-macro2", - "syn 2.0.116", -] - -[[package]] -name = "primitive-types" 
-version = "0.12.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "0b34d9fd68ae0b74a41b21c03c2f62847aa0ffea044eee893b4c140b37e244e2" -dependencies = [ - "fixed-hash", - "impl-codec", - "uint", -] - -[[package]] -name = "proc-macro-crate" -version = "3.4.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "219cb19e96be00ab2e37d6e299658a0cfa83e52429179969b0f0121b4ac46983" -dependencies = [ - "toml_edit", -] - -[[package]] -name = "proc-macro2" -version = "1.0.106" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "8fd00f0bb2e90d81d1044c2b32617f68fcb9fa3bb7640c23e9c748e53fb30934" -dependencies = [ - "unicode-ident", -] - -[[package]] -name = "proptest" -version = "1.10.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "37566cb3fdacef14c0737f9546df7cfeadbfbc9fef10991038bf5015d0c80532" -dependencies = [ - "bit-set", - "bit-vec", - "bitflags", - "num-traits", - "rand 0.9.2", - "rand_chacha 0.9.0", - "rand_xorshift", - "regex-syntax", - "rusty-fork", - "tempfile", - "unarray", -] - -[[package]] -name = "quick-error" -version = "1.2.3" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a1d01941d82fa2ab50be1e79e6714289dd7cde78eba4c074bc5a4374f650dfe0" - -[[package]] -name = "quote" -version = "1.0.44" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "21b2ebcf727b7760c461f091f9f0f539b77b8e87f2fd88131e7f1b433b3cece4" -dependencies = [ - "proc-macro2", -] - -[[package]] -name = "r-efi" -version = "5.3.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "69cdb34c158ceb288df11e18b4bd39de994f6657d83847bdffdbd7f346754b0f" - -[[package]] -name = "radium" -version = "0.7.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "dc33ff2d4973d518d823d61aa239014831e521c75da58e3df4840d3f47749d09" - -[[package]] -name = "rand" -version = "0.8.5" -source = 
"registry+https://github.com/rust-lang/crates.io-index" -checksum = "34af8d1a0e25924bc5b7c43c079c942339d8f0a8b57c39049bef581b46327404" -dependencies = [ - "libc", - "rand_chacha 0.3.1", - "rand_core 0.6.4", -] - -[[package]] -name = "rand" -version = "0.9.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "6db2770f06117d490610c7488547d543617b21bfa07796d7a12f6f1bd53850d1" -dependencies = [ - "rand_chacha 0.9.0", - "rand_core 0.9.5", - "serde", -] - -[[package]] -name = "rand_chacha" -version = "0.3.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "e6c10a63a0fa32252be49d21e7709d4d4baf8d231c2dbce1eaa8141b9b127d88" -dependencies = [ - "ppv-lite86", - "rand_core 0.6.4", -] - -[[package]] -name = "rand_chacha" -version = "0.9.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "d3022b5f1df60f26e1ffddd6c66e8aa15de382ae63b3a0c1bfc0e4d3e3f325cb" -dependencies = [ - "ppv-lite86", - "rand_core 0.9.5", -] - -[[package]] -name = "rand_core" -version = "0.6.4" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ec0be4795e2f6a28069bec0b5ff3e2ac9bafc99e6a9a7dc3547996c5c816922c" -dependencies = [ - "getrandom 0.2.17", -] - -[[package]] -name = "rand_core" -version = "0.9.5" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "76afc826de14238e6e8c374ddcc1fa19e374fd8dd986b0d2af0d02377261d83c" -dependencies = [ - "getrandom 0.3.4", - "serde", -] - -[[package]] -name = "rand_xorshift" -version = "0.4.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "513962919efc330f829edb2535844d1b912b0fbe2ca165d613e4e8788bb05a5a" -dependencies = [ - "rand_core 0.9.5", -] - -[[package]] -name = "rapidhash" -version = "4.3.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "84816e4c99c467e92cf984ee6328caa976dfecd33a673544489d79ca2caaefe5" -dependencies = [ - "rustversion", -] - -[[package]] -name = 
"rayon" -version = "1.11.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "368f01d005bf8fd9b1206fb6fa653e6c4a81ceb1466406b81792d87c5677a58f" -dependencies = [ - "either", - "rayon-core", -] - -[[package]] -name = "rayon-core" -version = "1.13.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "22e18b0f0062d30d4230b2e85ff77fdfe4326feb054b9783a3460d8435c8ab91" -dependencies = [ - "crossbeam-deque", - "crossbeam-utils", -] - -[[package]] -name = "redox_syscall" -version = "0.5.18" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ed2bf2547551a7053d6fdfafda3f938979645c44812fbfcda098faae3f1a362d" -dependencies = [ - "bitflags", -] - -[[package]] -name = "regex-syntax" -version = "0.8.9" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a96887878f22d7bad8a3b6dc5b7440e0ada9a245242924394987b21cf2210a4c" - -[[package]] -name = "rfc6979" -version = "0.4.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "f8dd2a808d456c4a54e300a23e9f5a67e122c3024119acbfd73e3bf664491cb2" -dependencies = [ - "hmac", - "subtle", -] - -[[package]] -name = "rlp" -version = "0.5.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "bb919243f34364b6bd2fc10ef797edbfa75f33c252e7998527479c6d6b47e1ec" -dependencies = [ - "bytes", - "rustc-hex", -] - -[[package]] -name = "ruint" -version = "1.17.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "c141e807189ad38a07276942c6623032d3753c8859c146104ac2e4d68865945a" -dependencies = [ - "alloy-rlp", - "ark-ff 0.3.0", - "ark-ff 0.4.2", - "ark-ff 0.5.0", - "bytes", - "fastrlp 0.3.1", - "fastrlp 0.4.0", - "num-bigint", - "num-integer", - "num-traits", - "parity-scale-codec", - "primitive-types", - "proptest", - "rand 0.8.5", - "rand 0.9.2", - "rlp", - "ruint-macro", - "serde_core", - "valuable", - "zeroize", -] - -[[package]] -name = "ruint-macro" -version = 
"1.2.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "48fd7bd8a6377e15ad9d42a8ec25371b94ddc67abe7c8b9127bec79bebaaae18" - -[[package]] -name = "rustc-hash" -version = "2.1.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "357703d41365b4b27c590e3ed91eabb1b663f07c4c084095e60cbed4362dff0d" - -[[package]] -name = "rustc-hex" -version = "2.1.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "3e75f6a532d0fd9f7f13144f392b6ad56a32696bfcd9c78f797f16bbb6f072d6" - -[[package]] -name = "rustc_version" -version = "0.3.3" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "f0dfe2087c51c460008730de8b57e6a320782fbfb312e1f4d520e6c6fae155ee" -dependencies = [ - "semver 0.11.0", -] - -[[package]] -name = "rustc_version" -version = "0.4.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "cfcb3a22ef46e85b45de6ee7e79d063319ebb6594faafcf1c225ea92ab6e9b92" -dependencies = [ - "semver 1.0.27", -] - -[[package]] -name = "rustix" -version = "1.1.3" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "146c9e247ccc180c1f61615433868c99f3de3ae256a30a43b49f67c2d9171f34" -dependencies = [ - "bitflags", - "errno", - "libc", - "linux-raw-sys", - "windows-sys", -] - -[[package]] -name = "rustversion" -version = "1.0.22" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "b39cdef0fa800fc44525c84ccb54a029961a8215f9619753635a9c0d2538d46d" - -[[package]] -name = "rusty-fork" -version = "0.3.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "cc6bf79ff24e648f6da1f8d1f011e9cac26491b619e6b9280f2b47f1774e6ee2" -dependencies = [ - "fnv", - "quick-error", - "tempfile", - "wait-timeout", -] - -[[package]] -name = "scopeguard" -version = "1.2.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = 
"94143f37725109f92c262ed2cf5e59bce7498c01bcc1502d7b9afe439a4e9f49" - -[[package]] -name = "sec1" -version = "0.7.3" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "d3e97a565f76233a6003f9f5c54be1d9c5bdfa3eccfb189469f11ec4901c47dc" -dependencies = [ - "base16ct", - "der", - "generic-array", - "pkcs8", - "subtle", - "zeroize", -] - -[[package]] -name = "semver" -version = "0.11.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "f301af10236f6df4160f7c3f04eec6dbc70ace82d23326abad5edee88801c6b6" -dependencies = [ - "semver-parser", -] - -[[package]] -name = "semver" -version = "1.0.27" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "d767eb0aabc880b29956c35734170f26ed551a859dbd361d140cdbeca61ab1e2" - -[[package]] -name = "semver-parser" -version = "0.10.3" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "9900206b54a3527fdc7b8a938bffd94a568bac4f4aa8113b209df75a09c0dec2" -dependencies = [ - "pest", -] - -[[package]] -name = "serde" -version = "1.0.228" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "9a8e94ea7f378bd32cbbd37198a4a91436180c5bb472411e48b5ec2e2124ae9e" -dependencies = [ - "serde_core", - "serde_derive", -] - -[[package]] -name = "serde_core" -version = "1.0.228" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "41d385c7d4ca58e59fc732af25c3983b67ac852c1a25000afe1175de458b67ad" -dependencies = [ - "serde_derive", -] - -[[package]] -name = "serde_derive" -version = "1.0.228" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "d540f220d3187173da220f885ab66608367b6574e925011a9353e4badda91d79" -dependencies = [ - "proc-macro2", - "quote", - "syn 2.0.116", -] - -[[package]] -name = "serde_json" -version = "1.0.149" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "83fc039473c5595ace860d8c4fafa220ff474b3fc6bfdb4293327f1a37e94d86" 
-dependencies = [ - "itoa", - "memchr", - "serde", - "serde_core", - "zmij", -] - -[[package]] -name = "serde_spanned" -version = "1.0.4" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "f8bbf91e5a4d6315eee45e704372590b30e260ee83af6639d64557f51b067776" -dependencies = [ - "serde_core", -] - -[[package]] -name = "sha2" -version = "0.10.9" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a7507d819769d01a365ab707794a4084392c824f54a7a6a7862f8c3d0892b283" -dependencies = [ - "cfg-if", - "cpufeatures", - "digest 0.10.7", -] - -[[package]] -name = "sha3" -version = "0.10.8" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "75872d278a8f37ef87fa0ddbda7802605cb18344497949862c0d4dcb291eba60" -dependencies = [ - "digest 0.10.7", - "keccak", -] - -[[package]] -name = "sha3-asm" -version = "0.1.5" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "b31139435f327c93c6038ed350ae4588e2c70a13d50599509fee6349967ba35a" -dependencies = [ - "cc", - "cfg-if", -] - -[[package]] -name = "shlex" -version = "1.3.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "0fda2ff0d084019ba4d7c6f371c95d8fd75ce3524c3cb8fb653a3023f6323e64" - -[[package]] -name = "signature" -version = "2.2.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "77549399552de45a898a580c1b41d445bf730df867cc44e6c0233bbc4b8329de" -dependencies = [ - "digest 0.10.7", - "rand_core 0.6.4", -] - -[[package]] -name = "smallvec" -version = "1.15.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "67b1b7a3b5fe4f1376887184045fcf45c69e92af734b7aaddc05fb777b6fbd03" - -[[package]] -name = "spin" -version = "0.10.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "d5fe4ccb98d9c292d56fec89a5e07da7fc4cf0dc11e156b41793132775d3e591" -dependencies = [ - "lock_api", -] - -[[package]] -name = "spki" -version = 
"0.7.3" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "d91ed6c858b01f942cd56b37a94b3e0a1798290327d1236e4d9cf4eaca44d29d" -dependencies = [ - "base64ct", - "der", -] - -[[package]] -name = "static_assertions" -version = "1.1.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a2eb9349b6444b326872e140eb1cf5e7c522154d69e7a0ffb0fb81c06b37543f" - -[[package]] -name = "strength_reduce" -version = "0.2.4" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "fe895eb47f22e2ddd4dabc02bce419d2e643c8e3b585c78158b349195bc24d82" - -[[package]] -name = "strsim" -version = "0.11.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "7da8b5736845d9f2fcb837ea5d9e2628564b3b043a70948a3f0b778838c5fb4f" - -[[package]] -name = "subtle" -version = "2.6.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "13c2bddecc57b384dee18652358fb23172facb8a2c51ccc10d74c157bdea3292" - -[[package]] -name = "syn" -version = "1.0.109" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "72b64191b275b66ffe2469e8af2c1cfe3bafa67b529ead792a6d0160888b4237" -dependencies = [ - "proc-macro2", - "quote", - "unicode-ident", -] - -[[package]] -name = "syn" -version = "2.0.116" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "3df424c70518695237746f84cede799c9c58fcb37450d7b23716568cc8bc69cb" -dependencies = [ - "proc-macro2", - "quote", - "unicode-ident", -] - -[[package]] -name = "tap" -version = "1.0.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "55937e1799185b12863d447f42597ed69d9928686b8d88a1df17376a097d8369" - -[[package]] -name = "tempfile" -version = "3.25.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "0136791f7c95b1f6dd99f9cc786b91bb81c3800b639b3478e561ddb7be95e5f1" -dependencies = [ - "fastrand", - "getrandom 0.4.1", - "once_cell", - "rustix", 
- "windows-sys", -] - -[[package]] -name = "thiserror" -version = "2.0.18" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "4288b5bcbc7920c07a1149a35cf9590a2aa808e0bc1eafaade0b80947865fbc4" -dependencies = [ - "thiserror-impl", -] - -[[package]] -name = "thiserror-impl" -version = "2.0.18" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ebc4ee7f67670e9b64d05fa4253e753e016c6c95ff35b89b7941d6b856dec1d5" -dependencies = [ - "proc-macro2", - "quote", - "syn 2.0.116", -] - -[[package]] -name = "toml" -version = "0.9.12+spec-1.1.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "cf92845e79fc2e2def6a5d828f0801e29a2f8acc037becc5ab08595c7d5e9863" -dependencies = [ - "indexmap", - "serde_core", - "serde_spanned", - "toml_datetime", - "toml_parser", - "toml_writer", - "winnow", -] - -[[package]] -name = "toml_datetime" -version = "0.7.5+spec-1.1.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "92e1cfed4a3038bc5a127e35a2d360f145e1f4b971b551a2ba5fd7aedf7e1347" -dependencies = [ - "serde_core", -] - -[[package]] -name = "toml_edit" -version = "0.23.10+spec-1.0.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "84c8b9f757e028cee9fa244aea147aab2a9ec09d5325a9b01e0a49730c2b5269" -dependencies = [ - "indexmap", - "toml_datetime", - "toml_parser", - "winnow", -] - -[[package]] -name = "toml_parser" -version = "1.0.8+spec-1.1.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "0742ff5ff03ea7e67c8ae6c93cac239e0d9784833362da3f9a9c1da8dfefcbdc" -dependencies = [ - "winnow", -] - -[[package]] -name = "toml_writer" -version = "1.0.6+spec-1.1.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ab16f14aed21ee8bfd8ec22513f7287cd4a91aa92e44edfe2c17ddd004e92607" - -[[package]] -name = "tracing" -version = "0.1.44" -source = "registry+https://github.com/rust-lang/crates.io-index" 
-checksum = "63e71662fa4b2a2c3a26f570f037eb95bb1f85397f3cd8076caed2f026a6d100" -dependencies = [ - "pin-project-lite", - "tracing-attributes", - "tracing-core", -] - -[[package]] -name = "tracing-attributes" -version = "0.1.31" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "7490cfa5ec963746568740651ac6781f701c9c5ea257c58e057f3ba8cf69e8da" -dependencies = [ - "proc-macro2", - "quote", - "syn 2.0.116", -] - -[[package]] -name = "tracing-core" -version = "0.1.36" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "db97caf9d906fbde555dd62fa95ddba9eecfd14cb388e4f491a66d74cd5fb79a" - -[[package]] -name = "transpose" -version = "0.2.3" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "1ad61aed86bc3faea4300c7aee358b4c6d0c8d6ccc36524c96e4c92ccf26e77e" -dependencies = [ - "num-integer", - "strength_reduce", -] - -[[package]] -name = "typenum" -version = "1.19.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "562d481066bde0658276a35467c4af00bdc6ee726305698a55b86e61d7ad82bb" - -[[package]] -name = "ucd-trie" -version = "0.1.7" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "2896d95c02a80c6d6a5d6e953d479f5ddf2dfdb6a244441010e373ac0fb88971" - -[[package]] -name = "uint" -version = "0.9.5" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "76f64bba2c53b04fcab63c01a7d7427eadc821e3bc48c34dc9ba29c501164b52" -dependencies = [ - "byteorder", - "crunchy", - "hex", - "static_assertions", -] - -[[package]] -name = "unarray" -version = "0.1.4" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "eaea85b334db583fe3274d12b4cd1880032beab409c0d774be044d4480ab9a94" - -[[package]] -name = "unicode-ident" -version = "1.0.24" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "e6e4313cd5fcd3dad5cafa179702e2b244f760991f45397d14d4ebf38247da75" - -[[package]] -name = 
"unicode-segmentation" -version = "1.12.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "f6ccf251212114b54433ec949fd6a7841275f9ada20dddd2f29e9ceea4501493" - -[[package]] -name = "unicode-xid" -version = "0.2.6" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ebc1c04c71510c7f702b52b7c350734c9ff1295c464a03335b00bb84fc54f853" - -[[package]] -name = "utf8parse" -version = "0.2.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "06abde3611657adf66d383f00b093d7faecc7fa57071cce2578660c9f1010821" - -[[package]] -name = "valuable" -version = "0.1.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ba73ea9cf16a25df0c8caa16c51acb937d5712a8429db78a3ee29d5dcacd3a65" - -[[package]] -name = "version_check" -version = "0.9.5" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "0b928f33d975fc6ad9f86c8f283853ad26bdd5b10b7f1542aa2fa15e2289105a" - -[[package]] -name = "wait-timeout" -version = "0.2.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "09ac3b126d3914f9849036f826e054cbabdc8519970b8998ddaf3b5bd3c65f11" -dependencies = [ - "libc", -] - -[[package]] -name = "wasi" -version = "0.11.1+wasi-snapshot-preview1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ccf3ec651a847eb01de73ccad15eb7d99f80485de043efb2f370cd654f4ea44b" - -[[package]] -name = "wasip2" -version = "1.0.2+wasi-0.2.9" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "9517f9239f02c069db75e65f174b3da828fe5f5b945c4dd26bd25d89c03ebcf5" -dependencies = [ - "wit-bindgen", -] - -[[package]] -name = "wasip3" -version = "0.4.0+wasi-0.3.0-rc-2026-01-06" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "5428f8bf88ea5ddc08faddef2ac4a67e390b88186c703ce6dbd955e1c145aca5" -dependencies = [ - "wit-bindgen", -] - -[[package]] -name = "wasm-encoder" -version = 
"0.244.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "990065f2fe63003fe337b932cfb5e3b80e0b4d0f5ff650e6985b1048f62c8319" -dependencies = [ - "leb128fmt", - "wasmparser", -] - -[[package]] -name = "wasm-metadata" -version = "0.244.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "bb0e353e6a2fbdc176932bbaab493762eb1255a7900fe0fea1a2f96c296cc909" -dependencies = [ - "anyhow", - "indexmap", - "wasm-encoder", - "wasmparser", -] - -[[package]] -name = "wasmparser" -version = "0.244.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "47b807c72e1bac69382b3a6fb3dbe8ea4c0ed87ff5629b8685ae6b9a611028fe" -dependencies = [ - "bitflags", - "hashbrown 0.15.5", - "indexmap", - "semver 1.0.27", -] - -[[package]] -name = "windows-link" -version = "0.2.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "f0805222e57f7521d6a62e36fa9163bc891acd422f971defe97d64e70d0a4fe5" - -[[package]] -name = "windows-sys" -version = "0.61.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ae137229bcbd6cdf0f7b80a31df61766145077ddf49416a728b02cb3921ff3fc" -dependencies = [ - "windows-link", -] - -[[package]] -name = "winnow" -version = "0.7.14" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "5a5364e9d77fcdeeaa6062ced926ee3381faa2ee02d3eb83a5c27a8825540829" -dependencies = [ - "memchr", -] - -[[package]] -name = "wit-bindgen" -version = "0.51.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "d7249219f66ced02969388cf2bb044a09756a083d0fab1e566056b04d9fbcaa5" -dependencies = [ - "wit-bindgen-rust-macro", -] - -[[package]] -name = "wit-bindgen-core" -version = "0.51.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ea61de684c3ea68cb082b7a88508a8b27fcc8b797d738bfc99a82facf1d752dc" -dependencies = [ - "anyhow", - "heck", - "wit-parser", -] - -[[package]] -name = 
"wit-bindgen-rust" -version = "0.51.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "b7c566e0f4b284dd6561c786d9cb0142da491f46a9fbed79ea69cdad5db17f21" -dependencies = [ - "anyhow", - "heck", - "indexmap", - "prettyplease", - "syn 2.0.116", - "wasm-metadata", - "wit-bindgen-core", - "wit-component", -] - -[[package]] -name = "wit-bindgen-rust-macro" -version = "0.51.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "0c0f9bfd77e6a48eccf51359e3ae77140a7f50b1e2ebfe62422d8afdaffab17a" -dependencies = [ - "anyhow", - "prettyplease", - "proc-macro2", - "quote", - "syn 2.0.116", - "wit-bindgen-core", - "wit-bindgen-rust", -] - -[[package]] -name = "wit-component" -version = "0.244.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "9d66ea20e9553b30172b5e831994e35fbde2d165325bec84fc43dbf6f4eb9cb2" -dependencies = [ - "anyhow", - "bitflags", - "indexmap", - "log", - "serde", - "serde_derive", - "serde_json", - "wasm-encoder", - "wasm-metadata", - "wasmparser", - "wit-parser", -] - -[[package]] -name = "wit-parser" -version = "0.244.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ecc8ac4bc1dc3381b7f59c34f00b67e18f910c2c0f50015669dde7def656a736" -dependencies = [ - "anyhow", - "id-arena", - "indexmap", - "log", - "semver 1.0.27", - "serde", - "serde_derive", - "serde_json", - "unicode-xid", - "wasmparser", -] - -[[package]] -name = "wyz" -version = "0.5.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "05f360fc0b24296329c78fda852a1e9ae82de9cf7b27dae4b7f62f118f77b9ed" -dependencies = [ - "tap", -] - -[[package]] -name = "zerocopy" -version = "0.8.39" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "db6d35d663eadb6c932438e763b262fe1a70987f9ae936e60158176d710cae4a" -dependencies = [ - "zerocopy-derive", -] - -[[package]] -name = "zerocopy-derive" -version = "0.8.39" -source = 
"registry+https://github.com/rust-lang/crates.io-index" -checksum = "4122cd3169e94605190e77839c9a40d40ed048d305bfdc146e7df40ab0f3e517" -dependencies = [ - "proc-macro2", - "quote", - "syn 2.0.116", -] - -[[package]] -name = "zeroize" -version = "1.8.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "b97154e67e32c85465826e8bcc1c59429aaaf107c1e4a9e53c8d8ccd5eff88d0" -dependencies = [ - "zeroize_derive", -] - -[[package]] -name = "zeroize_derive" -version = "1.4.3" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "85a5b4158499876c763cb03bc4e49185d3cccbabb15b33c627f7884f43db852e" -dependencies = [ - "proc-macro2", - "quote", - "syn 2.0.116", -] - -[[package]] -name = "zmij" -version = "1.0.21" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "b8848ee67ecc8aedbaf3e4122217aff892639231befc6a1b58d29fff4c2cabaa" diff --git a/xmss/leansig-ffi/Cargo.toml b/xmss/leansig-ffi/Cargo.toml deleted file mode 100644 index 1bd1a7a..0000000 --- a/xmss/leansig-ffi/Cargo.toml +++ /dev/null @@ -1,17 +0,0 @@ -[package] -name = "leansig-ffi" -version = "0.1.0" -edition = "2024" -rust-version = "1.87" - -[lib] -name = "leansig_ffi" -crate-type = ["cdylib", "staticlib"] - -[dependencies] -leansig = { git = "https://github.com/leanEthereum/leanSig", rev = "73bedc26ed961b110df7ac2e234dc11361a4bf25" } -rand = "0.9" -ssz = { package = "ethereum_ssz", version = "0.10.0" } - -[build-dependencies] -cbindgen = "0.29" diff --git a/xmss/leansig-ffi/build.rs b/xmss/leansig-ffi/build.rs deleted file mode 100644 index 4f72305..0000000 --- a/xmss/leansig-ffi/build.rs +++ /dev/null @@ -1,26 +0,0 @@ -use std::env; -use std::path::PathBuf; - -fn main() { - let crate_dir = env::var("CARGO_MANIFEST_DIR").unwrap(); - let output_file = PathBuf::from(&crate_dir) - .join("include") - .join("leansig_ffi.h"); - - // Ensure the include directory exists. 
- std::fs::create_dir_all(PathBuf::from(&crate_dir).join("include")).unwrap(); - - let config = cbindgen::Config::from_file(PathBuf::from(&crate_dir).join("cbindgen.toml")) - .expect("failed to read cbindgen.toml"); - - cbindgen::Builder::new() - .with_crate(&crate_dir) - .with_config(config) - .generate() - .expect("cbindgen failed to generate bindings") - .write_to_file(&output_file); - - // Only re-run if the source or config changes. - println!("cargo:rerun-if-changed=src/lib.rs"); - println!("cargo:rerun-if-changed=cbindgen.toml"); -} diff --git a/xmss/leansig-ffi/cbindgen.toml b/xmss/leansig-ffi/cbindgen.toml deleted file mode 100644 index ee7fed8..0000000 --- a/xmss/leansig-ffi/cbindgen.toml +++ /dev/null @@ -1,43 +0,0 @@ -# cbindgen configuration for leansig-ffi -# Generates include/leansig_ffi.h from src/lib.rs - -language = "C" -cpp_compat = true - -# Header guard and preamble -header = """/* - * leansig_ffi.h - Auto-generated C header for the leansig FFI library. - * - * DO NOT EDIT MANUALLY. This file is generated by cbindgen from src/lib.rs. - * Run `cargo build` to regenerate. - * - * This header provides a C-compatible interface to the leansig XMSS - * post-quantum signature scheme (devnet-1 instantiation). - * - * Memory management: Every allocated object has a corresponding _free function. - * Byte buffers returned by serialize/sign must be freed with - * leansig_bytes_free. - */""" -include_guard = "LEANSIG_FFI_H" -sys_includes = ["stddef.h", "stdint.h"] - -# After includes, emit the message length constant -after_includes = """ -/* Message length expected by sign/verify (XMSS signs 32-byte messages). 
*/ -#define LEANSIG_MESSAGE_LENGTH 32""" - -# Documentation -documentation = true -documentation_style = "c99" -documentation_length = "full" - -# Style -style = "Both" -usize_is_size_t = true - -[export] -include = ["LeansigResult"] - -[enum] -rename_variants = "ScreamingSnakeCase" -prefix_with_name = true diff --git a/xmss/leansig-ffi/include/leansig_ffi.h b/xmss/leansig-ffi/include/leansig_ffi.h deleted file mode 100644 index bab6e50..0000000 --- a/xmss/leansig-ffi/include/leansig_ffi.h +++ /dev/null @@ -1,178 +0,0 @@ -/* - * leansig_ffi.h - Auto-generated C header for the leansig FFI library. - * - * DO NOT EDIT MANUALLY. This file is generated by cbindgen from src/lib.rs. - * Run `cargo build` to regenerate. - * - * This header provides a C-compatible interface to the leansig XMSS - * post-quantum signature scheme (devnet-1 instantiation). - * - * Memory management: Every allocated object has a corresponding _free function. - * Byte buffers returned by serialize/sign must be freed with - * leansig_bytes_free. - */ - -#ifndef LEANSIG_FFI_H -#define LEANSIG_FFI_H - -#include -#include -#include -#include -#include -#include -#include -/* Message length expected by sign/verify (XMSS signs 32-byte messages). */ -#define LEANSIG_MESSAGE_LENGTH 32 - -// Result codes returned by FFI functions. -typedef enum LeansigResult { - // Operation succeeded. - LEANSIG_RESULT_OK = 0, - // Null pointer argument. - LEANSIG_RESULT_NULL_POINTER = 1, - // Invalid buffer length. - LEANSIG_RESULT_INVALID_LENGTH = 2, - // Signing failed (encoding attempts exceeded). - LEANSIG_RESULT_SIGNING_FAILED = 3, - // Deserialization (from_bytes) failed. - LEANSIG_RESULT_DESERIALIZATION_FAILED = 4, - // Signature verification failed. - LEANSIG_RESULT_VERIFICATION_FAILED = 5, - // Epoch outside prepared interval. - LEANSIG_RESULT_EPOCH_NOT_PREPARED = 6, -} LeansigResult; - -// Opaque keypair holding both public and secret keys. 
-typedef struct LeansigKeypair LeansigKeypair; - -#ifdef __cplusplus -extern "C" { -#endif // __cplusplus - -// Generate a new XMSS keypair. -// -// # Arguments -// * `seed` - Random seed for the RNG (will be used to seed a SmallRng). -// * `activation_epoch` - Starting epoch for which the key is active. -// * `num_active_epochs` - Number of consecutive active epochs. -// * `out_keypair` - Pointer to receive the opaque keypair handle. -// -// # Returns -// `LeansigResult::Ok` on success. -// -// # Note -// Key generation is performed on a dedicated thread with a large stack -// (64 MB) to accommodate the deep recursion required by XMSS Merkle tree -// construction with LOG_LIFETIME=32. -enum LeansigResult leansig_keypair_generate(uint64_t seed, - uint64_t activation_epoch, - uint64_t num_active_epochs, - struct LeansigKeypair **out_keypair); - -// Restore a keypair from serialized public and secret key bytes. -// -// # Arguments -// * `pk_bytes` - Pointer to the serialized public key bytes. -// * `pk_len` - Length of the public key bytes. -// * `sk_bytes` - Pointer to the serialized secret key bytes. -// * `sk_len` - Length of the secret key bytes. -// * `out_keypair` - Pointer to receive the opaque keypair handle. -// -// # Returns -// `LeansigResult::Ok` on success, or `DeserializationFailed` if bytes are invalid. -enum LeansigResult leansig_keypair_restore(const uint8_t *pk_bytes, - size_t pk_len, - const uint8_t *sk_bytes, - size_t sk_len, - struct LeansigKeypair **out_keypair); - -// Free a keypair allocated by `leansig_keypair_generate`. -void leansig_keypair_free(struct LeansigKeypair *keypair); - -// Get the SSZ-serialized public key from a keypair. -// -// The caller must free the returned buffer with `leansig_bytes_free`. -// -// # Arguments -// * `keypair` - Opaque keypair handle. -// * `out_data` - Pointer to receive the byte buffer. -// * `out_len` - Pointer to receive the buffer length. 
-enum LeansigResult leansig_pubkey_serialize(const struct LeansigKeypair *keypair, - uint8_t **out_data, - size_t *out_len); - -// Get the SSZ-serialized secret key from a keypair. -// -// The caller must free the returned buffer with `leansig_bytes_free`. -enum LeansigResult leansig_seckey_serialize(const struct LeansigKeypair *keypair, - uint8_t **out_data, - size_t *out_len); - -// Free a byte buffer returned by any `leansig_*_serialize` function. -void leansig_bytes_free(uint8_t *data, size_t len); - -// Get the start of the activation interval for this secret key. -uint64_t leansig_sk_activation_start(const struct LeansigKeypair *keypair); - -// Get the end (exclusive) of the activation interval for this secret key. -uint64_t leansig_sk_activation_end(const struct LeansigKeypair *keypair); - -// Get the start of the currently prepared interval. -uint64_t leansig_sk_prepared_start(const struct LeansigKeypair *keypair); - -// Get the end (exclusive) of the currently prepared interval. -uint64_t leansig_sk_prepared_end(const struct LeansigKeypair *keypair); - -// Advance the secret key's prepared interval to the next window. -enum LeansigResult leansig_sk_advance_preparation(struct LeansigKeypair *keypair); - -// Sign a 32-byte message at a given epoch. -// -// The caller must free the returned signature buffer with `leansig_bytes_free`. -// -// # Arguments -// * `keypair` - Opaque keypair handle (secret key is used). -// * `epoch` - The epoch to sign at (must be in the prepared interval). -// * `message` - Pointer to 32-byte message. -// * `out_sig_data` - Pointer to receive the SSZ-serialized signature bytes. -// * `out_sig_len` - Pointer to receive the signature length. -enum LeansigResult leansig_sign(const struct LeansigKeypair *keypair, - uint32_t epoch, - const uint8_t *message, - uint8_t **out_sig_data, - size_t *out_sig_len); - -// Verify a signature against a public key, epoch, and message. 
-// -// # Arguments -// * `pk_data` - SSZ-serialized public key bytes. -// * `pk_len` - Length of public key bytes. -// * `epoch` - The epoch the signature was created at. -// * `message` - Pointer to 32-byte message. -// * `sig_data` - SSZ-serialized signature bytes. -// * `sig_len` - Length of signature bytes. -// -// # Returns -// `LeansigResult::Ok` if verification succeeds, `LeansigResult::VerificationFailed` otherwise. -enum LeansigResult leansig_verify(const uint8_t *pk_data, - size_t pk_len, - uint32_t epoch, - const uint8_t *message, - const uint8_t *sig_data, - size_t sig_len); - -// Verify a signature using the public key from a keypair handle. -// -// Convenience wrapper that avoids serialization/deserialization of the public key. -enum LeansigResult leansig_verify_with_keypair(const struct LeansigKeypair *keypair, - uint32_t epoch, - const uint8_t *message, - const uint8_t *sig_data, - size_t sig_len); - -#ifdef __cplusplus -} // extern "C" -#endif // __cplusplus - -#endif /* LEANSIG_FFI_H */ diff --git a/xmss/leansig-ffi/src/lib.rs b/xmss/leansig-ffi/src/lib.rs deleted file mode 100644 index c3b50da..0000000 --- a/xmss/leansig-ffi/src/lib.rs +++ /dev/null @@ -1,420 +0,0 @@ -//! C-compatible FFI wrapper around the leansig XMSS signature scheme. -//! -//! This crate provides a C API for the leansig library's generalized XMSS -//! signature scheme, targeted at the devnet-1 instantiation: -//! `SIGTopLevelTargetSumLifetime32Dim64Base8` (LOG_LIFETIME=32, DIM=64, BASE=8). -//! All types are passed as opaque pointers or SSZ-serialized byte buffers. -//! Memory management follows the "caller frees" pattern: every `_new` or -//! `_generate` function has a corresponding `_free` function. 
-
-use std::slice;
-
-use rand::SeedableRng;
-
-use leansig::serialization::Serializable;
-use leansig::signature::generalized_xmss::instantiations_poseidon_top_level::lifetime_2_to_the_32::hashing_optimized::SIGTopLevelTargetSumLifetime32Dim64Base8 as SigScheme;
-use leansig::signature::{SignatureScheme, SignatureSchemeSecretKey};
-
-// Concrete type aliases for the devnet-1 instantiation.
-type PublicKey = <SigScheme as SignatureScheme>::PublicKey;
-type SecretKey = <SigScheme as SignatureScheme>::SecretKey;
-type Signature = <SigScheme as SignatureScheme>::Signature;
-
-/// Result codes returned by FFI functions.
-#[repr(C)]
-pub enum LeansigResult {
-    /// Operation succeeded.
-    Ok = 0,
-    /// Null pointer argument.
-    NullPointer = 1,
-    /// Invalid buffer length.
-    InvalidLength = 2,
-    /// Signing failed (encoding attempts exceeded).
-    SigningFailed = 3,
-    /// Deserialization (from_bytes) failed.
-    DeserializationFailed = 4,
-    /// Signature verification failed.
-    VerificationFailed = 5,
-    /// Epoch outside prepared interval.
-    EpochNotPrepared = 6,
-}
-
-/// Opaque keypair holding both public and secret keys.
-pub struct LeansigKeypair {
-    pk: PublicKey,
-    sk: SecretKey,
-}
-
-// ---------------------------------------------------------------------------
-// Key generation
-// ---------------------------------------------------------------------------
-
-/// Generate a new XMSS keypair.
-///
-/// # Arguments
-/// * `seed` - Random seed for the RNG (will be used to seed a SmallRng).
-/// * `activation_epoch` - Starting epoch for which the key is active.
-/// * `num_active_epochs` - Number of consecutive active epochs.
-/// * `out_keypair` - Pointer to receive the opaque keypair handle.
-///
-/// # Returns
-/// `LeansigResult::Ok` on success.
-///
-/// # Note
-/// Key generation is performed on a dedicated thread with a large stack
-/// (64 MB) to accommodate the deep recursion required by XMSS Merkle tree
-/// construction with LOG_LIFETIME=32.
-#[unsafe(no_mangle)] -pub unsafe extern "C" fn leansig_keypair_generate( - seed: u64, - activation_epoch: u64, - num_active_epochs: u64, - out_keypair: *mut *mut LeansigKeypair, -) -> LeansigResult { - if out_keypair.is_null() { - return LeansigResult::NullPointer; - } - - // Spawn key_gen on a thread with 64 MB stack to avoid stack overflow - // from deep Merkle tree recursion in the LOG_LIFETIME=32 instantiation. - const STACK_SIZE: usize = 64 * 1024 * 1024; // 64 MB - let handle = std::thread::Builder::new() - .stack_size(STACK_SIZE) - .spawn(move || { - let mut rng = rand::rngs::SmallRng::seed_from_u64(seed); - SigScheme::key_gen( - &mut rng, - activation_epoch as usize, - num_active_epochs as usize, - ) - }); - - match handle { - Ok(join_handle) => match join_handle.join() { - Ok((pk, sk)) => { - let keypair = Box::new(LeansigKeypair { pk, sk }); - unsafe { - *out_keypair = Box::into_raw(keypair); - } - LeansigResult::Ok - } - Err(_) => LeansigResult::SigningFailed, // thread panicked - }, - Err(_) => LeansigResult::SigningFailed, // couldn't spawn thread - } -} - -/// Restore a keypair from serialized public and secret key bytes. -/// -/// # Arguments -/// * `pk_bytes` - Pointer to the serialized public key bytes. -/// * `pk_len` - Length of the public key bytes. -/// * `sk_bytes` - Pointer to the serialized secret key bytes. -/// * `sk_len` - Length of the secret key bytes. -/// * `out_keypair` - Pointer to receive the opaque keypair handle. -/// -/// # Returns -/// `LeansigResult::Ok` on success, or `DeserializationFailed` if bytes are invalid. 
-#[unsafe(no_mangle)] -pub unsafe extern "C" fn leansig_keypair_restore( - pk_bytes: *const u8, - pk_len: usize, - sk_bytes: *const u8, - sk_len: usize, - out_keypair: *mut *mut LeansigKeypair, -) -> LeansigResult { - if pk_bytes.is_null() || sk_bytes.is_null() || out_keypair.is_null() { - return LeansigResult::NullPointer; - } - - let pk_slice = unsafe { slice::from_raw_parts(pk_bytes, pk_len) }; - let sk_slice = unsafe { slice::from_raw_parts(sk_bytes, sk_len) }; - - let pk = match PublicKey::from_bytes(pk_slice) { - Ok(k) => k, - Err(_) => return LeansigResult::DeserializationFailed, - }; - - let sk = match SecretKey::from_bytes(sk_slice) { - Ok(k) => k, - Err(_) => return LeansigResult::DeserializationFailed, - }; - - let keypair = Box::new(LeansigKeypair { pk, sk }); - unsafe { - *out_keypair = Box::into_raw(keypair); - } - LeansigResult::Ok -} - -/// Free a keypair allocated by `leansig_keypair_generate`. -#[unsafe(no_mangle)] -pub unsafe extern "C" fn leansig_keypair_free(keypair: *mut LeansigKeypair) { - if !keypair.is_null() { - unsafe { - drop(Box::from_raw(keypair)); - } - } -} - -// --------------------------------------------------------------------------- -// Public key serialization -// --------------------------------------------------------------------------- - -/// Get the SSZ-serialized public key from a keypair. -/// -/// The caller must free the returned buffer with `leansig_bytes_free`. -/// -/// # Arguments -/// * `keypair` - Opaque keypair handle. -/// * `out_data` - Pointer to receive the byte buffer. -/// * `out_len` - Pointer to receive the buffer length. 
-#[unsafe(no_mangle)] -pub unsafe extern "C" fn leansig_pubkey_serialize( - keypair: *const LeansigKeypair, - out_data: *mut *mut u8, - out_len: *mut usize, -) -> LeansigResult { - if keypair.is_null() || out_data.is_null() || out_len.is_null() { - return LeansigResult::NullPointer; - } - - let keypair = unsafe { &*keypair }; - let bytes = keypair.pk.to_bytes(); - - let len = bytes.len(); - let ptr = bytes.leak().as_mut_ptr(); - - unsafe { - *out_data = ptr; - *out_len = len; - } - LeansigResult::Ok -} - -/// Get the SSZ-serialized secret key from a keypair. -/// -/// The caller must free the returned buffer with `leansig_bytes_free`. -#[unsafe(no_mangle)] -pub unsafe extern "C" fn leansig_seckey_serialize( - keypair: *const LeansigKeypair, - out_data: *mut *mut u8, - out_len: *mut usize, -) -> LeansigResult { - if keypair.is_null() || out_data.is_null() || out_len.is_null() { - return LeansigResult::NullPointer; - } - - let keypair = unsafe { &*keypair }; - let bytes = keypair.sk.to_bytes(); - - let len = bytes.len(); - let ptr = bytes.leak().as_mut_ptr(); - - unsafe { - *out_data = ptr; - *out_len = len; - } - LeansigResult::Ok -} - -/// Free a byte buffer returned by any `leansig_*_serialize` function. -#[unsafe(no_mangle)] -pub unsafe extern "C" fn leansig_bytes_free(data: *mut u8, len: usize) { - if !data.is_null() && len > 0 { - unsafe { - drop(Vec::from_raw_parts(data, len, len)); - } - } -} - -// --------------------------------------------------------------------------- -// Secret key operations -// --------------------------------------------------------------------------- - -/// Get the start of the activation interval for this secret key. 
-#[unsafe(no_mangle)] -pub unsafe extern "C" fn leansig_sk_activation_start(keypair: *const LeansigKeypair) -> u64 { - if keypair.is_null() { - return 0; - } - let keypair = unsafe { &*keypair }; - keypair.sk.get_activation_interval().start -} - -/// Get the end (exclusive) of the activation interval for this secret key. -#[unsafe(no_mangle)] -pub unsafe extern "C" fn leansig_sk_activation_end(keypair: *const LeansigKeypair) -> u64 { - if keypair.is_null() { - return 0; - } - let keypair = unsafe { &*keypair }; - keypair.sk.get_activation_interval().end -} - -/// Get the start of the currently prepared interval. -#[unsafe(no_mangle)] -pub unsafe extern "C" fn leansig_sk_prepared_start(keypair: *const LeansigKeypair) -> u64 { - if keypair.is_null() { - return 0; - } - let keypair = unsafe { &*keypair }; - keypair.sk.get_prepared_interval().start -} - -/// Get the end (exclusive) of the currently prepared interval. -#[unsafe(no_mangle)] -pub unsafe extern "C" fn leansig_sk_prepared_end(keypair: *const LeansigKeypair) -> u64 { - if keypair.is_null() { - return 0; - } - let keypair = unsafe { &*keypair }; - keypair.sk.get_prepared_interval().end -} - -/// Advance the secret key's prepared interval to the next window. -#[unsafe(no_mangle)] -pub unsafe extern "C" fn leansig_sk_advance_preparation( - keypair: *mut LeansigKeypair, -) -> LeansigResult { - if keypair.is_null() { - return LeansigResult::NullPointer; - } - let keypair = unsafe { &mut *keypair }; - keypair.sk.advance_preparation(); - LeansigResult::Ok -} - -// --------------------------------------------------------------------------- -// Signing -// --------------------------------------------------------------------------- - -/// Sign a 32-byte message at a given epoch. -/// -/// The caller must free the returned signature buffer with `leansig_bytes_free`. -/// -/// # Arguments -/// * `keypair` - Opaque keypair handle (secret key is used). 
-/// * `epoch` - The epoch to sign at (must be in the prepared interval). -/// * `message` - Pointer to 32-byte message. -/// * `out_sig_data` - Pointer to receive the SSZ-serialized signature bytes. -/// * `out_sig_len` - Pointer to receive the signature length. -#[unsafe(no_mangle)] -pub unsafe extern "C" fn leansig_sign( - keypair: *const LeansigKeypair, - epoch: u32, - message: *const u8, - out_sig_data: *mut *mut u8, - out_sig_len: *mut usize, -) -> LeansigResult { - if keypair.is_null() || message.is_null() || out_sig_data.is_null() || out_sig_len.is_null() { - return LeansigResult::NullPointer; - } - - let keypair = unsafe { &*keypair }; - let msg: &[u8; 32] = unsafe { &*(message as *const [u8; 32]) }; - - // Check epoch is in prepared interval - if !keypair.sk.get_prepared_interval().contains(&(epoch as u64)) { - return LeansigResult::EpochNotPrepared; - } - - match SigScheme::sign(&keypair.sk, epoch, msg) { - Ok(sig) => { - let bytes = sig.to_bytes(); - let len = bytes.len(); - let ptr = bytes.leak().as_mut_ptr(); - unsafe { - *out_sig_data = ptr; - *out_sig_len = len; - } - LeansigResult::Ok - } - Err(_) => LeansigResult::SigningFailed, - } -} - -// --------------------------------------------------------------------------- -// Verification -// --------------------------------------------------------------------------- - -/// Verify a signature against a public key, epoch, and message. -/// -/// # Arguments -/// * `pk_data` - SSZ-serialized public key bytes. -/// * `pk_len` - Length of public key bytes. -/// * `epoch` - The epoch the signature was created at. -/// * `message` - Pointer to 32-byte message. -/// * `sig_data` - SSZ-serialized signature bytes. -/// * `sig_len` - Length of signature bytes. -/// -/// # Returns -/// `LeansigResult::Ok` if verification succeeds, `LeansigResult::VerificationFailed` otherwise. 
-#[unsafe(no_mangle)] -pub unsafe extern "C" fn leansig_verify( - pk_data: *const u8, - pk_len: usize, - epoch: u32, - message: *const u8, - sig_data: *const u8, - sig_len: usize, -) -> LeansigResult { - if pk_data.is_null() || message.is_null() || sig_data.is_null() { - return LeansigResult::NullPointer; - } - - let pk_bytes = unsafe { slice::from_raw_parts(pk_data, pk_len) }; - let sig_bytes = unsafe { slice::from_raw_parts(sig_data, sig_len) }; - let msg: &[u8; 32] = unsafe { &*(message as *const [u8; 32]) }; - - let pk = match PublicKey::from_bytes(pk_bytes) { - Ok(pk) => pk, - Err(_) => return LeansigResult::DeserializationFailed, - }; - - let sig = match Signature::from_bytes(sig_bytes) { - Ok(sig) => sig, - Err(_) => return LeansigResult::DeserializationFailed, - }; - - if SigScheme::verify(&pk, epoch, msg, &sig) { - LeansigResult::Ok - } else { - LeansigResult::VerificationFailed - } -} - -// --------------------------------------------------------------------------- -// Verify using keypair (convenience for testing) -// --------------------------------------------------------------------------- - -/// Verify a signature using the public key from a keypair handle. -/// -/// Convenience wrapper that avoids serialization/deserialization of the public key. 
-#[unsafe(no_mangle)] -pub unsafe extern "C" fn leansig_verify_with_keypair( - keypair: *const LeansigKeypair, - epoch: u32, - message: *const u8, - sig_data: *const u8, - sig_len: usize, -) -> LeansigResult { - if keypair.is_null() || message.is_null() || sig_data.is_null() { - return LeansigResult::NullPointer; - } - - let keypair = unsafe { &*keypair }; - let sig_bytes = unsafe { slice::from_raw_parts(sig_data, sig_len) }; - let msg: &[u8; 32] = unsafe { &*(message as *const [u8; 32]) }; - - let sig = match Signature::from_bytes(sig_bytes) { - Ok(sig) => sig, - Err(_) => return LeansigResult::DeserializationFailed, - }; - - if SigScheme::verify(&keypair.pk, epoch, msg, &sig) { - LeansigResult::Ok - } else { - LeansigResult::VerificationFailed - } -} diff --git a/xmss/leansig/keystore.go b/xmss/leansig/keystore.go deleted file mode 100644 index 14534c2..0000000 --- a/xmss/leansig/keystore.go +++ /dev/null @@ -1,47 +0,0 @@ -package leansig - -import ( - "fmt" - "os" -) - -// LoadKeypair reads public and secret keys from disk and restores the Keypair handle. -// Both files must exist and contain valid SSZ-serialized key data. -func LoadKeypair(pkPath, skPath string) (*Keypair, error) { - pkBytes, err := os.ReadFile(pkPath) - if err != nil { - return nil, fmt.Errorf("failed to read public key from %s: %w", pkPath, err) - } - - skBytes, err := os.ReadFile(skPath) - if err != nil { - return nil, fmt.Errorf("failed to read secret key from %s: %w", skPath, err) - } - - return RestoreKeypair(pkBytes, skBytes) -} - -// SaveKeypair writes the public and secret keys of a Keypair to disk. -// Public key is written with 0644 permissions, secret key with 0600. 
-func SaveKeypair(kp *Keypair, pkPath, skPath string) error { - pkBytes, err := kp.PublicKeyBytes() - if err != nil { - return fmt.Errorf("failed to serialize public key: %w", err) - } - - skBytes, err := kp.SecretKeyBytes() - if err != nil { - return fmt.Errorf("failed to serialize secret key: %w", err) - } - - if err := os.WriteFile(pkPath, pkBytes, 0644); err != nil { - return fmt.Errorf("failed to write public key to %s: %w", pkPath, err) - } - - // Secret key gets restrictive permissions - if err := os.WriteFile(skPath, skBytes, 0600); err != nil { - return fmt.Errorf("failed to write secret key to %s: %w", skPath, err) - } - - return nil -} diff --git a/xmss/leansig/keystore_test.go b/xmss/leansig/keystore_test.go deleted file mode 100644 index 20f38b5..0000000 --- a/xmss/leansig/keystore_test.go +++ /dev/null @@ -1,75 +0,0 @@ -package leansig_test - -import ( - "crypto/rand" - "path/filepath" - "testing" - - "github.com/geanlabs/gean/xmss/leansig" -) - -const ( - testActivationEpoch = 0 - testNumActiveEpochs = 8 -) - -func TestSaveAndLoadKeypair(t *testing.T) { - t.Logf("Generating keypair (this may take ~30s)...") - kp, err := leansig.GenerateKeypair(999, 0, testNumActiveEpochs) - if err != nil { - t.Fatalf("GenerateKeypair failed: %v", err) - } - defer kp.Free() - - // 2. Sign some message - var msg [leansig.MessageLength]byte - if _, err := rand.Read(msg[:]); err != nil { - t.Fatalf("rand.Read failed: %v", err) - } - - t.Log("Signing message with original keypair...") - epoch := uint32(0) - sigOriginal, err := kp.Sign(epoch, msg) - if err != nil { - t.Fatalf("Sign failed: %v", err) - } - - // 3. Save to temp dir - dir := t.TempDir() - pkPath := filepath.Join(dir, "validator_test.pk") - skPath := filepath.Join(dir, "validator_test.sk") - - t.Logf("Saving keypair to %s", dir) - if err := leansig.SaveKeypair(kp, pkPath, skPath); err != nil { - t.Fatalf("SaveKeypair failed: %v", err) - } - - // 4. 
Load back - t.Log("Loading keypair back from disk...") - kpLoaded, err := leansig.LoadKeypair(pkPath, skPath) - if err != nil { - t.Fatalf("LoadKeypair failed: %v", err) - } - defer kpLoaded.Free() - - // 5. Verify original signature with loaded keypair - t.Log("Verifying original signature with loaded keypair...") - if err := kpLoaded.VerifyWithKeypair(epoch, msg, sigOriginal); err != nil { - t.Errorf("Verify with loaded keypair failed: %v", err) - } - - // 6. Sign with loaded keypair - t.Log("Signing new message with loaded keypair...") - sigNew, err := kpLoaded.Sign(epoch, msg) - if err != nil { - t.Fatalf("Sign with loaded keypair failed: %v", err) - } - - // 7. Verify new signature with original keypair - t.Log("Verifying new signature with original keypair...") - if err := kp.VerifyWithKeypair(epoch, msg, sigNew); err != nil { - t.Errorf("Verify new signature with original keypair failed: %v", err) - } - - t.Log("Key persistence test passed ✓") -} diff --git a/xmss/leansig/leansig.go b/xmss/leansig/leansig.go deleted file mode 100644 index 3bad795..0000000 --- a/xmss/leansig/leansig.go +++ /dev/null @@ -1,250 +0,0 @@ -// Package leansig provides Go bindings for the leansig XMSS post-quantum -// signature scheme via CGo. It wraps the Rust leansig-ffi library which -// targets the devnet-1 instantiation (SIGTopLevelTargetSumLifetime32Dim64Base8). -// -// The library must be built before using this package: -// -// cd xmss/leansig-ffi && cargo build --release -package leansig - -/* -#cgo CFLAGS: -I${SRCDIR}/../leansig-ffi/include -#cgo LDFLAGS: ${SRCDIR}/../leansig-ffi/target/release/deps/libleansig_ffi.a -lm -ldl -lpthread -#include "leansig_ffi.h" -#include -*/ -import "C" -import ( - "fmt" - "runtime" - "unsafe" -) - -// MessageLength is the fixed size of messages that can be signed (32 bytes). -const MessageLength = 32 - -// Result codes matching the LeansigResult C enum. 
-const ( - ResultOK = C.LEANSIG_RESULT_OK - ResultNullPointer = C.LEANSIG_RESULT_NULL_POINTER - ResultInvalidLength = C.LEANSIG_RESULT_INVALID_LENGTH - ResultSigningFailed = C.LEANSIG_RESULT_SIGNING_FAILED - ResultDeserializationFailed = C.LEANSIG_RESULT_DESERIALIZATION_FAILED - ResultVerificationFailed = C.LEANSIG_RESULT_VERIFICATION_FAILED - ResultEpochNotPrepared = C.LEANSIG_RESULT_EPOCH_NOT_PREPARED -) - -// Keypair wraps an opaque leansig keypair handle. -type Keypair struct { - ptr *C.LeansigKeypair -} - -// GenerateKeypair creates a new XMSS keypair. -// -// Parameters: -// - seed: random seed for key generation. -// - activationEpoch: starting epoch for which the key is active. -// - numActiveEpochs: number of consecutive epochs the key is active for. -func GenerateKeypair(seed uint64, activationEpoch uint64, numActiveEpochs uint64) (*Keypair, error) { - var ptr *C.LeansigKeypair - result := C.leansig_keypair_generate( - C.uint64_t(seed), - C.uint64_t(activationEpoch), - C.uint64_t(numActiveEpochs), - &ptr, - ) - if result != ResultOK { - return nil, fmt.Errorf("leansig_keypair_generate failed with code %d", result) - } - return &Keypair{ptr: ptr}, nil -} - -// RestoreKeypair reconstructs a Keypair from serialized public and secret keys. -// This is used for loading keys from disk. 
-func RestoreKeypair(pkBytes []byte, skBytes []byte) (*Keypair, error) { - if len(pkBytes) == 0 { - return nil, fmt.Errorf("public key bytes are empty") - } - if len(skBytes) == 0 { - return nil, fmt.Errorf("secret key bytes are empty") - } - - var kpPtr *C.LeansigKeypair - pkPtr := (*C.uint8_t)(unsafe.Pointer(&pkBytes[0])) - pkLen := C.size_t(len(pkBytes)) - skPtr := (*C.uint8_t)(unsafe.Pointer(&skBytes[0])) - skLen := C.size_t(len(skBytes)) - - result := C.leansig_keypair_restore(pkPtr, pkLen, skPtr, skLen, &kpPtr) - runtime.KeepAlive(pkBytes) - runtime.KeepAlive(skBytes) - if result != ResultOK { - return nil, fmt.Errorf("leansig_keypair_restore failed with code %d", result) - } - - return &Keypair{ptr: kpPtr}, nil -} - -// Free releases the memory associated with this keypair. -// The keypair must not be used after calling Free. -func (kp *Keypair) Free() { - if kp.ptr != nil { - C.leansig_keypair_free(kp.ptr) - kp.ptr = nil - } -} - -// PublicKeyBytes returns the SSZ-serialized public key. -func (kp *Keypair) PublicKeyBytes() ([]byte, error) { - if kp.ptr == nil { - return nil, fmt.Errorf("keypair is nil") - } - var data *C.uint8_t - var dataLen C.size_t - result := C.leansig_pubkey_serialize(kp.ptr, &data, &dataLen) - if result != ResultOK { - return nil, fmt.Errorf("leansig_pubkey_serialize failed with code %d", result) - } - // Copy the data to a Go-managed slice - goBytes := C.GoBytes(unsafe.Pointer(data), C.int(dataLen)) - C.leansig_bytes_free(data, dataLen) - return goBytes, nil -} - -// SecretKeyBytes returns the SSZ-serialized secret key. 
-func (kp *Keypair) SecretKeyBytes() ([]byte, error) { - if kp.ptr == nil { - return nil, fmt.Errorf("keypair is nil") - } - var data *C.uint8_t - var dataLen C.size_t - result := C.leansig_seckey_serialize(kp.ptr, &data, &dataLen) - if result != ResultOK { - return nil, fmt.Errorf("leansig_seckey_serialize failed with code %d", result) - } - goBytes := C.GoBytes(unsafe.Pointer(data), C.int(dataLen)) - C.leansig_bytes_free(data, dataLen) - return goBytes, nil -} - -// ActivationStart returns the start of the activation interval. -func (kp *Keypair) ActivationStart() uint64 { - if kp.ptr == nil { - return 0 - } - return uint64(C.leansig_sk_activation_start(kp.ptr)) -} - -// ActivationEnd returns the end (exclusive) of the activation interval. -func (kp *Keypair) ActivationEnd() uint64 { - if kp.ptr == nil { - return 0 - } - return uint64(C.leansig_sk_activation_end(kp.ptr)) -} - -// PreparedStart returns the start of the currently prepared signing window. -func (kp *Keypair) PreparedStart() uint64 { - if kp.ptr == nil { - return 0 - } - return uint64(C.leansig_sk_prepared_start(kp.ptr)) -} - -// PreparedEnd returns the end (exclusive) of the currently prepared signing window. -func (kp *Keypair) PreparedEnd() uint64 { - if kp.ptr == nil { - return 0 - } - return uint64(C.leansig_sk_prepared_end(kp.ptr)) -} - -// AdvancePreparation advances the secret key's prepared interval to the next window. -func (kp *Keypair) AdvancePreparation() error { - if kp.ptr == nil { - return fmt.Errorf("keypair is nil") - } - result := C.leansig_sk_advance_preparation(kp.ptr) - if result != ResultOK { - return fmt.Errorf("leansig_sk_advance_preparation failed with code %d", result) - } - return nil -} - -// Sign produces an XMSS signature for a 32-byte message at the given epoch. -// The epoch must be within the key's prepared interval. -// Returns the SSZ-serialized signature bytes. 
-func (kp *Keypair) Sign(epoch uint32, message [MessageLength]byte) ([]byte, error) { - if kp.ptr == nil { - return nil, fmt.Errorf("keypair is nil") - } - var sigData *C.uint8_t - var sigLen C.size_t - result := C.leansig_sign( - kp.ptr, - C.uint32_t(epoch), - (*C.uint8_t)(unsafe.Pointer(&message[0])), - &sigData, - &sigLen, - ) - runtime.KeepAlive(message) - if result != ResultOK { - return nil, fmt.Errorf("leansig_sign failed with code %d", result) - } - goBytes := C.GoBytes(unsafe.Pointer(sigData), C.int(sigLen)) - C.leansig_bytes_free(sigData, sigLen) - return goBytes, nil -} - -// Verify checks an XMSS signature against a serialized public key, epoch, and message. -// Returns nil if the signature is valid, an error otherwise. -func Verify(pubkeyBytes []byte, epoch uint32, message [MessageLength]byte, sigBytes []byte) error { - if len(pubkeyBytes) == 0 || len(sigBytes) == 0 { - return fmt.Errorf("empty pubkey or signature bytes") - } - result := C.leansig_verify( - (*C.uint8_t)(unsafe.Pointer(&pubkeyBytes[0])), - C.size_t(len(pubkeyBytes)), - C.uint32_t(epoch), - (*C.uint8_t)(unsafe.Pointer(&message[0])), - (*C.uint8_t)(unsafe.Pointer(&sigBytes[0])), - C.size_t(len(sigBytes)), - ) - runtime.KeepAlive(pubkeyBytes) - runtime.KeepAlive(message) - runtime.KeepAlive(sigBytes) - if result == ResultOK { - return nil - } - if result == ResultVerificationFailed { - return fmt.Errorf("signature verification failed") - } - return fmt.Errorf("leansig_verify failed with code %d", result) -} - -// VerifyWithKeypair checks an XMSS signature using the public key from a keypair. -// Convenience wrapper that avoids public key serialization/deserialization. 
-func (kp *Keypair) VerifyWithKeypair(epoch uint32, message [MessageLength]byte, sigBytes []byte) error { - if kp.ptr == nil { - return fmt.Errorf("keypair is nil") - } - if len(sigBytes) == 0 { - return fmt.Errorf("empty signature bytes") - } - result := C.leansig_verify_with_keypair( - kp.ptr, - C.uint32_t(epoch), - (*C.uint8_t)(unsafe.Pointer(&message[0])), - (*C.uint8_t)(unsafe.Pointer(&sigBytes[0])), - C.size_t(len(sigBytes)), - ) - runtime.KeepAlive(message) - runtime.KeepAlive(sigBytes) - if result == ResultOK { - return nil - } - if result == ResultVerificationFailed { - return fmt.Errorf("signature verification failed") - } - return fmt.Errorf("leansig_verify_with_keypair failed with code %d", result) -} diff --git a/xmss/leansig/leansig_test.go b/xmss/leansig/leansig_test.go deleted file mode 100644 index 4118768..0000000 --- a/xmss/leansig/leansig_test.go +++ /dev/null @@ -1,171 +0,0 @@ -package leansig_test - -import ( - "crypto/rand" - "fmt" - "os" - "testing" - - "github.com/geanlabs/gean/xmss/leansig" -) - -// Devnet-1 parameters for SIGTopLevelTargetSumLifetime32Dim64Base8: -// LOG_LIFETIME=32, sqrt(LIFETIME)=65536, min active range = 2*65536 = 131072 -// Devnet-1 spec uses activation_time = 2^3 = 8 -const testLsigActivationEpoch = 0 -const testLsigNumActiveEpochs = 262144 // 2^18, matching devnet-1 spec - -// Shared keypair generated once in TestMain to avoid redundant ~80s keygen per test. -var sharedKP *leansig.Keypair - -func TestMain(m *testing.M) { - var err error - sharedKP, err = leansig.GenerateKeypair(42, testLsigActivationEpoch, testLsigNumActiveEpochs) - if err != nil { - fmt.Fprintf(os.Stderr, "TestMain: GenerateKeypair failed: %v\n", err) - os.Exit(1) - } - code := m.Run() - sharedKP.Free() - os.Exit(code) -} - -// TestKeyGeneration verifies that keypair generation succeeds and returns -// valid activation and prepared intervals. 
-func TestKeyGeneration(t *testing.T) { - if sharedKP.ActivationEnd() <= sharedKP.ActivationStart() { - t.Errorf("activation interval is empty or invalid") - } - if sharedKP.PreparedEnd() <= sharedKP.PreparedStart() { - t.Errorf("prepared interval is empty or invalid") - } - t.Logf("Activation interval: [%d, %d)", sharedKP.ActivationStart(), sharedKP.ActivationEnd()) - t.Logf("Prepared interval: [%d, %d)", sharedKP.PreparedStart(), sharedKP.PreparedEnd()) -} - -func TestKeySerializationRoundtrip(t *testing.T) { - pkBytes, err := sharedKP.PublicKeyBytes() - if err != nil { - t.Fatalf("PublicKeyBytes failed: %v", err) - } - if len(pkBytes) == 0 { - t.Fatal("public key bytes are empty") - } - t.Logf("Public key size: %d bytes", len(pkBytes)) - - skBytes, err := sharedKP.SecretKeyBytes() - if err != nil { - t.Fatalf("SecretKeyBytes failed: %v", err) - } - if len(skBytes) == 0 { - t.Fatal("secret key bytes are empty") - } - t.Logf("Secret key size: %d bytes", len(skBytes)) -} - -func TestSignAndVerifyWithKeypair(t *testing.T) { - epoch := uint32(0) - var msg [leansig.MessageLength]byte - if _, err := rand.Read(msg[:]); err != nil { - t.Fatalf("rand.Read failed: %v", err) - } - - sig, err := sharedKP.Sign(epoch, msg) - if err != nil { - t.Fatalf("Sign failed: %v", err) - } - t.Logf("Signature size: %d bytes", len(sig)) - - err = sharedKP.VerifyWithKeypair(epoch, msg, sig) - if err != nil { - t.Fatalf("VerifyWithKeypair failed: %v", err) - } -} - -func TestSignAndVerifyWithSerializedPubkey(t *testing.T) { - pkBytes, err := sharedKP.PublicKeyBytes() - if err != nil { - t.Fatalf("PublicKeyBytes failed: %v", err) - } - - epoch := uint32(0) - var msg [leansig.MessageLength]byte - copy(msg[:], []byte("test message for devnet-1 xmss")) - - sig, err := sharedKP.Sign(epoch, msg) - if err != nil { - t.Fatalf("Sign failed: %v", err) - } - - err = leansig.Verify(pkBytes, epoch, msg, sig) - if err != nil { - t.Fatalf("Verify failed: %v", err) - } -} - -func 
TestVerifyRejectsWrongMessage(t *testing.T) { - epoch := uint32(0) - var msg [leansig.MessageLength]byte - copy(msg[:], []byte("correct message")) - - sig, err := sharedKP.Sign(epoch, msg) - if err != nil { - t.Fatalf("Sign failed: %v", err) - } - - var wrongMsg [leansig.MessageLength]byte - copy(wrongMsg[:], []byte("wrong message!!")) - - err = sharedKP.VerifyWithKeypair(epoch, wrongMsg, sig) - if err == nil { - t.Fatal("Expected verification to fail with wrong message, but it succeeded") - } -} - -func TestVerifyRejectsWrongEpoch(t *testing.T) { - epoch := uint32(0) - var msg [leansig.MessageLength]byte - copy(msg[:], []byte("epoch test message")) - - sig, err := sharedKP.Sign(epoch, msg) - if err != nil { - t.Fatalf("Sign failed: %v", err) - } - - err = sharedKP.VerifyWithKeypair(epoch+1, msg, sig) - if err == nil { - t.Fatal("Expected verification to fail with wrong epoch, but it succeeded") - } -} - -func TestAdvancePreparation(t *testing.T) { - // We need > 131072 epochs to trigger window advancement. - // 200000 epochs roughly covers 1.5 windows. 
- const largeNumEpochs = 200000 - t.Logf("Generating large keypair for advance test (%d epochs)...", largeNumEpochs) - kp, err := leansig.GenerateKeypair(42, testLsigActivationEpoch, largeNumEpochs) - if err != nil { - t.Fatalf("GenerateKeypair failed: %v", err) - } - defer kp.Free() - - startBefore := kp.PreparedStart() - endBefore := kp.PreparedEnd() - t.Logf("Before advance: [%d, %d)", startBefore, endBefore) - - err = kp.AdvancePreparation() - if err != nil { - t.Fatalf("AdvancePreparation failed: %v", err) - } - - startAfter := kp.PreparedStart() - endAfter := kp.PreparedEnd() - t.Logf("After advance: [%d, %d)", startAfter, endAfter) - - if startAfter <= startBefore { - t.Errorf("prepared start did not advance: before=%d after=%d", startBefore, startAfter) - } - if endAfter <= endBefore { - t.Errorf("prepared end did not advance: before=%d after=%d", endBefore, endAfter) - } -} diff --git a/xmss/multi_aggregate_test.go b/xmss/multi_aggregate_test.go new file mode 100644 index 0000000..d7891cf --- /dev/null +++ b/xmss/multi_aggregate_test.go @@ -0,0 +1,63 @@ +package xmss + +import ( + "testing" +) + +// TestMultipleAggregationsSequential tests calling aggregate multiple times +// with different data — simulates multiple interval 2 ticks. 
+func TestMultipleAggregationsSequential(t *testing.T) { + kp1, _ := GenerateKeyPair("multi-agg-0", 0, 1<<18) + defer kp1.Close() + kp2, _ := GenerateKeyPair("multi-agg-1", 0, 1<<18) + defer kp2.Close() + + pk1, _ := kp1.PublicKeyBytes() + pk2, _ := kp2.PublicKeyBytes() + + EnsureProverReady() + EnsureVerifierReady() + + for slot := uint32(0); slot < 5; slot++ { + var message [32]byte + message[0] = byte(slot) + message[1] = 0xab + + sig1, err := kp1.Sign(slot, message) + if err != nil { + t.Fatalf("slot %d sign kp1: %v", slot, err) + } + sig2, err := kp2.Sign(slot, message) + if err != nil { + t.Fatalf("slot %d sign kp2: %v", slot, err) + } + + // Parse from SSZ bytes (simulating P2P round-trip) + csig1, _ := ParseSignature(sig1[:]) + defer FreeSignature(csig1) + csig2, _ := ParseSignature(sig2[:]) + defer FreeSignature(csig2) + + cpk1, _ := ParsePublicKey(pk1) + defer FreePublicKey(cpk1) + cpk2, _ := ParsePublicKey(pk2) + defer FreePublicKey(cpk2) + + proof, err := AggregateSignatures( + []CPubKey{cpk1, cpk2}, + []CSig{csig1, csig2}, + message, + slot, + ) + if err != nil { + t.Fatalf("slot %d aggregate FAILED: %v", slot, err) + } + + err = VerifyAggregatedSignature(proof, []CPubKey{cpk1, cpk2}, message, slot) + if err != nil { + t.Fatalf("slot %d verify FAILED: %v", slot, err) + } + + t.Logf("slot %d: aggregate + verify OK (proof=%d bytes)", slot, len(proof)) + } +} diff --git a/xmss/proof_pool.go b/xmss/proof_pool.go new file mode 100644 index 0000000..973cb3b --- /dev/null +++ b/xmss/proof_pool.go @@ -0,0 +1,21 @@ +package xmss + +import "sync" + +// proofBufPool reuses 1 MiB proof serialization buffers to reduce GC pressure. +var proofBufPool = sync.Pool{ + New: func() any { + buf := make([]byte, MaxProofSize) + return &buf + }, +} + +// getProofBuf returns a 1 MiB buffer from the pool. +func getProofBuf() *[]byte { + return proofBufPool.Get().(*[]byte) +} + +// putProofBuf returns a buffer to the pool. 
+func putProofBuf(buf *[]byte) { + proofBufPool.Put(buf) +} diff --git a/xmss/pubkey_cache.go b/xmss/pubkey_cache.go new file mode 100644 index 0000000..bc29669 --- /dev/null +++ b/xmss/pubkey_cache.go @@ -0,0 +1,61 @@ +package xmss + +import ( + "sync" + + "github.com/geanlabs/gean/types" +) + +// PubKeyCache caches parsed C PublicKey handles to avoid repeated FFI calls. +// Keyed by the raw 52-byte pubkey bytes. Thread-safe via mutex. +// +// The cache owns all handles and frees them on Close(). +// Callers must NOT call FreePublicKey on cached handles. +type PubKeyCache struct { + mu sync.Mutex + cache map[[types.PubkeySize]byte]CPubKey +} + +// NewPubKeyCache creates an empty pubkey cache. +func NewPubKeyCache() *PubKeyCache { + return &PubKeyCache{ + cache: make(map[[types.PubkeySize]byte]CPubKey), + } +} + +// Get returns a cached pubkey handle, parsing and caching it on first access. +// The returned handle is owned by the cache — do NOT free it. +func (c *PubKeyCache) Get(pubkeyBytes [types.PubkeySize]byte) (CPubKey, error) { + c.mu.Lock() + defer c.mu.Unlock() + + if pk, ok := c.cache[pubkeyBytes]; ok { + return pk, nil + } + + pk, err := ParsePublicKey(pubkeyBytes) + if err != nil { + return nil, err + } + + c.cache[pubkeyBytes] = pk + return pk, nil +} + +// Len returns the number of cached pubkeys. +func (c *PubKeyCache) Len() int { + c.mu.Lock() + defer c.mu.Unlock() + return len(c.cache) +} + +// Close frees all cached pubkey handles. 
+func (c *PubKeyCache) Close() { + c.mu.Lock() + defer c.mu.Unlock() + + for key, pk := range c.cache { + FreePublicKey(pk) + delete(c.cache, key) + } +} diff --git a/xmss/pubkey_cache_test.go b/xmss/pubkey_cache_test.go new file mode 100644 index 0000000..9d5d4fa --- /dev/null +++ b/xmss/pubkey_cache_test.go @@ -0,0 +1,82 @@ +package xmss + +import ( + "testing" + + "github.com/geanlabs/gean/types" +) + +func TestPubKeyCacheGetAndReuse(t *testing.T) { + cache := NewPubKeyCache() + defer cache.Close() + + // Use a real-looking pubkey (52 bytes). + var pubkey [types.PubkeySize]byte + pubkey[0] = 0x01 + pubkey[51] = 0xFF + + // First call parses (FFI). + pk1, err := cache.Get(pubkey) + if err != nil { + t.Fatalf("first Get: %v", err) + } + if pk1 == nil { + t.Fatal("first Get returned nil") + } + + // Second call returns cached (no FFI). + pk2, err := cache.Get(pubkey) + if err != nil { + t.Fatalf("second Get: %v", err) + } + if pk1 != pk2 { + t.Fatal("expected same pointer from cache") + } + + if cache.Len() != 1 { + t.Fatalf("expected 1 cached entry, got %d", cache.Len()) + } +} + +func TestPubKeyCacheMultipleKeys(t *testing.T) { + cache := NewPubKeyCache() + defer cache.Close() + + var pk1, pk2 [types.PubkeySize]byte + pk1[0] = 0x01 + pk2[0] = 0x02 + + h1, err := cache.Get(pk1) + if err != nil { + t.Fatalf("get pk1: %v", err) + } + h2, err := cache.Get(pk2) + if err != nil { + t.Fatalf("get pk2: %v", err) + } + + if h1 == h2 { + t.Fatal("different keys should produce different handles") + } + if cache.Len() != 2 { + t.Fatalf("expected 2 cached entries, got %d", cache.Len()) + } +} + +func TestPubKeyCacheClose(t *testing.T) { + cache := NewPubKeyCache() + + var pubkey [types.PubkeySize]byte + pubkey[0] = 0xAA + + _, err := cache.Get(pubkey) + if err != nil { + t.Fatalf("get: %v", err) + } + + cache.Close() + + if cache.Len() != 0 { + t.Fatalf("expected 0 entries after Close, got %d", cache.Len()) + } +} diff --git a/xmss/roundtrip_test.go b/xmss/roundtrip_test.go 
new file mode 100644 index 0000000..8a157c9 --- /dev/null +++ b/xmss/roundtrip_test.go @@ -0,0 +1,62 @@ +package xmss + +import ( + "testing" +) + +// TestSignatureSSZRoundtripThenAggregate tests the exact path that fails in production: +// Generate key → sign → serialize to [3112]byte → deserialize back → aggregate. +// This simulates what happens when a proposer attestation arrives via P2P. +func TestSignatureSSZRoundtripThenAggregate(t *testing.T) { + kp, err := GenerateKeyPair("roundtrip-test-0", 0, 1<<18) + if err != nil { + t.Fatalf("keygen: %v", err) + } + defer kp.Close() + + var message [32]byte + message[0] = 0xab + + // Sign → get [3112]byte (SSZ serialized via hashsig_signature_to_bytes) + sigBytes, err := kp.Sign(0, message) + if err != nil { + t.Fatalf("sign: %v", err) + } + + // Parse pubkey + pkBytes, _ := kp.PublicKeyBytes() + cpk, err := ParsePublicKey(pkBytes) + if err != nil { + t.Fatalf("parse pk: %v", err) + } + defer FreePublicKey(cpk) + + // NOW: simulate SSZ round-trip by parsing from the serialized bytes. + // This is what happens when processProposerAttestation receives a block from P2P. + csig, err := ParseSignature(sigBytes[:]) + if err != nil { + t.Fatalf("parse sig from SSZ bytes: %v", err) + } + defer FreeSignature(csig) + + // Aggregate with the round-tripped signature. 
+ EnsureProverReady() + proofBytes, err := AggregateSignatures( + []CPubKey{cpk}, + []CSig{csig}, + message, + 0, + ) + if err != nil { + t.Fatalf("aggregate with SSZ-round-tripped sig FAILED: %v", err) + } + t.Logf("aggregate with SSZ-round-tripped sig succeeded: proof=%d bytes", len(proofBytes)) + + // Verify + EnsureVerifierReady() + err = VerifyAggregatedSignature(proofBytes, []CPubKey{cpk}, message, 0) + if err != nil { + t.Fatalf("verify failed: %v", err) + } + t.Log("verification succeeded") +} diff --git a/xmss/rust/.gitignore b/xmss/rust/.gitignore new file mode 100644 index 0000000..2f7896d --- /dev/null +++ b/xmss/rust/.gitignore @@ -0,0 +1 @@ +target/ diff --git a/xmss/leanmultisig-ffi/Cargo.lock b/xmss/rust/Cargo.lock similarity index 79% rename from xmss/leanmultisig-ffi/Cargo.lock rename to xmss/rust/Cargo.lock index 90d7a5c..0bd09c0 100644 --- a/xmss/leanmultisig-ffi/Cargo.lock +++ b/xmss/rust/Cargo.lock @@ -17,7 +17,7 @@ version = "0.1.0" source = "git+https://github.com/leanEthereum/leanMultisig.git?rev=e4474138487eeb1ed7c2e1013674fe80ac9f3165#e4474138487eeb1ed7c2e1013674fe80ac9f3165" dependencies = [ "multilinear-toolkit", - "p3-util 0.3.0", + "p3-util 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", "tracing", "utils", ] @@ -27,7 +27,7 @@ name = "air" version = "0.3.0" source = "git+https://github.com/leanEthereum/multilinear-toolkit.git?branch=lean-vm-simple#e06cba2e214879c00c7fbc0e5b12908ddfcba588" dependencies = [ - "p3-field 0.3.0", + "p3-field 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", ] [[package]] @@ -76,56 +76,6 @@ dependencies = [ "winapi", ] -[[package]] -name = "anstream" -version = "0.6.21" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "43d5b281e737544384e969a5ccad3f1cdd24b48086a0fc1b2a5262a26b8f4f4a" -dependencies = [ - "anstyle", - "anstyle-parse", - "anstyle-query", - "anstyle-wincon", - "colorchoice", - "is_terminal_polyfill", - 
"utf8parse", -] - -[[package]] -name = "anstyle" -version = "1.0.13" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "5192cca8006f1fd4f7237516f40fa183bb07f8fbdfedaa0036de5ea9b0b45e78" - -[[package]] -name = "anstyle-parse" -version = "0.2.7" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "4e7644824f0aa2c7b9384579234ef10eb7efb6a0deb83f9630a49594dd9c15c2" -dependencies = [ - "utf8parse", -] - -[[package]] -name = "anstyle-query" -version = "1.1.5" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "40c48f72fd53cd289104fc64099abca73db4166ad86ea0b4341abe65af83dadc" -dependencies = [ - "windows-sys 0.61.2", -] - -[[package]] -name = "anstyle-wincon" -version = "3.0.11" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "291e6a250ff86cd4a820112fb8898808a366d8f9f58ce16d1f538353ad55747d" -dependencies = [ - "anstyle", - "once_cell_polyfill", - "windows-sys 0.61.2", -] - [[package]] name = "ark-ff" version = "0.3.0" @@ -345,8 +295,8 @@ source = "git+https://github.com/leanEthereum/multilinear-toolkit.git?branch=lea dependencies = [ "fiat-shamir", "itertools 0.14.0", - "p3-field 0.3.0", - "p3-util 0.3.0", + "p3-field 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-util 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", "rand 0.9.2", "rayon", "tracing", @@ -406,6 +356,15 @@ dependencies = [ "wyz", ] +[[package]] +name = "block-buffer" +version = "0.9.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "4152116fd6e9dadb291ae18fc1ec3575ed6d84c29642d97890f4b4a3417297e4" +dependencies = [ + "generic-array", +] + [[package]] name = "block-buffer" version = "0.10.4" @@ -429,32 +388,13 @@ checksum = "1fd0f2584146f6f2ef48085050886acf353beff7305ebd1ae69500e27c67f64b" [[package]] name = "bytes" -version = "1.11.0" +version = "1.11.1" source = 
"registry+https://github.com/rust-lang/crates.io-index" -checksum = "b35204fbdc0b3f4446b89fc1ac2cf84a8a68971995d0bf2e925ec7cd960f9cb3" +checksum = "1e748733b7cbc798e1434b6ac524f0c1ff2ab456fe201501e6497c8417a4fc33" dependencies = [ "serde", ] -[[package]] -name = "cbindgen" -version = "0.29.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "befbfd072a8e81c02f8c507aefce431fe5e7d051f83d48a23ffc9b9fe5a11799" -dependencies = [ - "clap", - "heck", - "indexmap", - "log", - "proc-macro2", - "quote", - "serde", - "serde_json", - "syn 2.0.111", - "tempfile", - "toml", -] - [[package]] name = "cc" version = "1.2.49" @@ -471,46 +411,13 @@ version = "1.0.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9330f8b2ff13f34540b44e946ef35111825727b38d33286ef986142615121801" -[[package]] -name = "clap" -version = "4.5.53" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "c9e340e012a1bf4935f5282ed1436d1489548e8f72308207ea5df0e23d2d03f8" -dependencies = [ - "clap_builder", -] - -[[package]] -name = "clap_builder" -version = "4.5.53" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "d76b5d13eaa18c901fd2f7fca939fefe3a0727a953561fefdf3b2922b8569d00" -dependencies = [ - "anstream", - "anstyle", - "clap_lex", - "strsim", -] - -[[package]] -name = "clap_lex" -version = "0.7.6" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a1d728cc89cf3aee9ff92b05e62b19ee65a02b5702cff7d5a377e32c6ae29d8d" - -[[package]] -name = "colorchoice" -version = "1.0.4" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "b05b61dc5112cbb17e4b6cd61790d9845d13888356391624cbe7e41efeac1e75" - [[package]] name = "colored" version = "3.0.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "fde0e0ec90c9dfb3b4b1a0891a7dcd0e2bffde2f7efed5fe7c9bb00e5bfb915e" dependencies = [ - "windows-sys 0.59.0", + "windows-sys", ] [[package]] 
@@ -558,7 +465,7 @@ source = "git+https://github.com/leanEthereum/multilinear-toolkit.git?branch=lea dependencies = [ "air 0.3.0", "fiat-shamir", - "p3-field 0.3.0", + "p3-field 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", ] [[package]] @@ -632,6 +539,41 @@ dependencies = [ "typenum", ] +[[package]] +name = "darling" +version = "0.20.11" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "fc7f46116c46ff9ab3eb1597a45688b6715c6e628b5c133e288e709a29bcb4ee" +dependencies = [ + "darling_core", + "darling_macro", +] + +[[package]] +name = "darling_core" +version = "0.20.11" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "0d00b9596d185e565c2207a0b01f8bd1a135483d02d9b7b0a54b11da8d53412e" +dependencies = [ + "fnv", + "ident_case", + "proc-macro2", + "quote", + "strsim", + "syn 2.0.111", +] + +[[package]] +name = "darling_macro" +version = "0.20.11" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "fc34b93ccb385b40dc71c6fceac4b2ad23662c7eeb248cf10d529b7e055b6ead" +dependencies = [ + "darling_core", + "quote", + "syn 2.0.111", +] + [[package]] name = "dashmap" version = "6.1.0" @@ -705,7 +647,7 @@ version = "0.10.7" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9ed9a281f7bc9b7576e61468ba615a66a5c8cfdff42420a70aa82701a3b1e292" dependencies = [ - "block-buffer", + "block-buffer 0.10.4", "const-oid", "crypto-common", "subtle", @@ -795,7 +737,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "39cab71617ae0d63f51a36d69f866391735b51691dbda63cf6f96d042b63efeb" dependencies = [ "libc", - "windows-sys 0.61.2", + "windows-sys", ] [[package]] @@ -826,6 +768,18 @@ dependencies = [ "typenum", ] +[[package]] +name = "ethereum_ssz_derive" +version = "0.10.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "78d247bc40823c365a62e572441a8f8b12df03f171713f06bc76180fcd56ab71" 
+dependencies = [ + "darling", + "proc-macro2", + "quote", + "syn 2.0.111", +] + [[package]] name = "fastrand" version = "2.3.0" @@ -870,8 +824,8 @@ version = "0.1.0" source = "git+https://github.com/leanEthereum/fiat-shamir.git?branch=lean-vm-simple#9d4dc22f06cfa65f15bf5f1b07912a64c7feff0f" dependencies = [ "p3-challenger 0.3.0", - "p3-field 0.3.0", - "p3-koala-bear 0.3.0", + "p3-field 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-koala-bear 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", "serde", ] @@ -974,10 +928,18 @@ dependencies = [ ] [[package]] -name = "heck" -version = "0.5.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "2304e00983f87ffb38b55b444b5e3b60a884b5d30c0fca7d82fe33449bbe55ea" +name = "hashsig-glue" +version = "0.1.0" +dependencies = [ + "ethereum_ssz", + "leansig 0.1.0 (git+https://github.com/leanEthereum/leanSig?rev=f10dcbefac2502d356d93f686e8b4ecd8dc8840a)", + "rand 0.9.2", + "rand_chacha 0.9.0", + "serde", + "serde_json", + "sha2 0.9.9", + "thiserror", +] [[package]] name = "hex" @@ -994,6 +956,12 @@ dependencies = [ "digest 0.10.7", ] +[[package]] +name = "ident_case" +version = "1.0.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b9e0384b61958566e926dc50660321d12159025e767c18e043daf26b70104c39" + [[package]] name = "impl-codec" version = "0.6.0" @@ -1026,12 +994,6 @@ dependencies = [ "serde_core", ] -[[package]] -name = "is_terminal_polyfill" -version = "1.70.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a6cb138bb79a146c1bd460005623e142ef0181e3d0219cb493e02f7d08a35695" - [[package]] name = "itertools" version = "0.10.5" @@ -1075,7 +1037,7 @@ dependencies = [ "ecdsa", "elliptic-curve", "once_cell", - "sha2", + "sha2 0.10.9", ] [[package]] @@ -1113,10 +1075,10 @@ dependencies = [ "lookup", "multilinear-toolkit", "p3-challenger 0.3.0", - "p3-koala-bear 0.3.0", - 
"p3-poseidon2 0.3.0", - "p3-symmetric 0.3.0", - "p3-util 0.3.0", + "p3-koala-bear 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-poseidon2 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-symmetric 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-util 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", "pest", "pest_derive", "rand 0.9.2", @@ -1138,10 +1100,10 @@ dependencies = [ "lookup", "multilinear-toolkit", "p3-challenger 0.3.0", - "p3-koala-bear 0.3.0", - "p3-poseidon2 0.3.0", - "p3-symmetric 0.3.0", - "p3-util 0.3.0", + "p3-koala-bear 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-poseidon2 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-symmetric 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-util 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", "pest", "pest_derive", "rand 0.9.2", @@ -1165,10 +1127,10 @@ dependencies = [ "multilinear-toolkit", "num_enum", "p3-challenger 0.3.0", - "p3-koala-bear 0.3.0", - "p3-poseidon2 0.3.0", - "p3-symmetric 0.3.0", - "p3-util 0.3.0", + "p3-koala-bear 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-poseidon2 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-symmetric 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-util 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", "pest", "pest_derive", "rand 0.9.2", @@ -1180,28 +1142,39 @@ dependencies = [ ] [[package]] -name = "leanmultisig-ffi" +name = "leansig" version = "0.1.0" +source = "git+https://github.com/leanEthereum/leansig.git?rev=73bedc26ed961b110df7ac2e234dc11361a4bf25#73bedc26ed961b110df7ac2e234dc11361a4bf25" dependencies = [ - "cbindgen", + 
"dashmap", "ethereum_ssz", - "leansig", - "rec_aggregation", + "num-bigint", + "num-traits", + "p3-baby-bear 0.4.1", + "p3-field 0.4.1", + "p3-koala-bear 0.4.1", + "p3-symmetric 0.4.1", + "rand 0.9.2", + "rayon", + "serde", + "sha3", + "thiserror", ] [[package]] name = "leansig" version = "0.1.0" -source = "git+https://github.com/leanEthereum/leanSig?rev=73bedc26ed961b110df7ac2e234dc11361a4bf25#73bedc26ed961b110df7ac2e234dc11361a4bf25" +source = "git+https://github.com/leanEthereum/leanSig?rev=f10dcbefac2502d356d93f686e8b4ecd8dc8840a#f10dcbefac2502d356d93f686e8b4ecd8dc8840a" dependencies = [ "dashmap", "ethereum_ssz", + "ethereum_ssz_derive", "num-bigint", "num-traits", - "p3-baby-bear 0.4.1", - "p3-field 0.4.1", - "p3-koala-bear 0.4.1", - "p3-symmetric 0.4.1", + "p3-baby-bear 0.3.0 (git+https://github.com/Plonky3/Plonky3.git?rev=a33a312)", + "p3-field 0.3.0 (git+https://github.com/Plonky3/Plonky3.git?rev=a33a312)", + "p3-koala-bear 0.3.0 (git+https://github.com/Plonky3/Plonky3.git?rev=a33a312)", + "p3-symmetric 0.3.0 (git+https://github.com/Plonky3/Plonky3.git?rev=a33a312)", "rand 0.9.2", "rayon", "serde", @@ -1249,8 +1222,8 @@ source = "git+https://github.com/leanEthereum/leanMultisig.git?rev=e4474138487ee dependencies = [ "multilinear-toolkit", "p3-challenger 0.3.0", - "p3-koala-bear 0.3.0", - "p3-util 0.3.0", + "p3-koala-bear 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-util 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", "rand 0.9.2", "tracing", "utils", @@ -1281,20 +1254,31 @@ dependencies = [ "backend", "constraints-folder", "fiat-shamir", - "p3-field 0.3.0", - "p3-util 0.3.0", + "p3-field 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-util 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", "rayon", "sumcheck", "tracing", ] +[[package]] +name = "multisig-glue" +version = "0.1.0" +dependencies = [ + "ethereum_ssz", + 
"leansig 0.1.0 (git+https://github.com/leanEthereum/leansig.git?rev=73bedc26ed961b110df7ac2e234dc11361a4bf25)", + "p3-koala-bear 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "rec_aggregation", + "whir-p3", +] + [[package]] name = "nu-ansi-term" version = "0.50.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "7957b9740744892f114936ab4a57b3f487491bbeafaf8083688b16841a4240e5" dependencies = [ - "windows-sys 0.61.2", + "windows-sys", ] [[package]] @@ -1355,21 +1339,34 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "42f5e15c9953c5e4ccceeb2e7382a716482c34515315f7b03532b8b4e8393d2d" [[package]] -name = "once_cell_polyfill" -version = "1.70.2" +name = "opaque-debug" +version = "0.3.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "384b8ab6d37215f3c5301a95a4accb5d64aa607f1fcb26a11b5303878451b4fe" +checksum = "c08d65885ee38876c4f86fa503fb49d7b507c2b62552df7c70b2fce627e06381" [[package]] name = "p3-baby-bear" version = "0.3.0" source = "git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple#4897086b6f460b969dc0baad5c4dff91a4eb1d67" dependencies = [ - "p3-field 0.3.0", - "p3-mds 0.3.0", - "p3-monty-31 0.3.0", - "p3-poseidon2 0.3.0", - "p3-symmetric 0.3.0", + "p3-field 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-mds 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-monty-31 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-poseidon2 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-symmetric 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "rand 0.9.2", +] + +[[package]] +name = "p3-baby-bear" +version = "0.3.0" +source = "git+https://github.com/Plonky3/Plonky3.git?rev=a33a312#a33a31274a5e78bb5fbe3f82ffd2c294e17fa830" +dependencies = [ + "p3-field 0.3.0 
(git+https://github.com/Plonky3/Plonky3.git?rev=a33a312)", + "p3-mds 0.3.0 (git+https://github.com/Plonky3/Plonky3.git?rev=a33a312)", + "p3-monty-31 0.3.0 (git+https://github.com/Plonky3/Plonky3.git?rev=a33a312)", + "p3-poseidon2 0.3.0 (git+https://github.com/Plonky3/Plonky3.git?rev=a33a312)", + "p3-symmetric 0.3.0 (git+https://github.com/Plonky3/Plonky3.git?rev=a33a312)", "rand 0.9.2", ] @@ -1392,10 +1389,10 @@ name = "p3-challenger" version = "0.3.0" source = "git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple#4897086b6f460b969dc0baad5c4dff91a4eb1d67" dependencies = [ - "p3-field 0.3.0", - "p3-maybe-rayon 0.3.0", - "p3-symmetric 0.3.0", - "p3-util 0.3.0", + "p3-field 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-maybe-rayon 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-symmetric 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-util 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", "tracing", ] @@ -1419,10 +1416,10 @@ source = "git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple# dependencies = [ "itertools 0.14.0", "p3-challenger 0.3.0", - "p3-dft 0.3.0", - "p3-field 0.3.0", - "p3-matrix 0.3.0", - "p3-util 0.3.0", + "p3-dft 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-field 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-matrix 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-util 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", "serde", ] @@ -1432,10 +1429,24 @@ version = "0.3.0" source = "git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple#4897086b6f460b969dc0baad5c4dff91a4eb1d67" dependencies = [ "itertools 0.14.0", - "p3-field 0.3.0", - "p3-matrix 0.3.0", - "p3-maybe-rayon 0.3.0", - "p3-util 0.3.0", + "p3-field 0.3.0 
(git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-matrix 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-maybe-rayon 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-util 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "tracing", +] + +[[package]] +name = "p3-dft" +version = "0.3.0" +source = "git+https://github.com/Plonky3/Plonky3.git?rev=a33a312#a33a31274a5e78bb5fbe3f82ffd2c294e17fa830" +dependencies = [ + "itertools 0.14.0", + "p3-field 0.3.0 (git+https://github.com/Plonky3/Plonky3.git?rev=a33a312)", + "p3-matrix 0.3.0 (git+https://github.com/Plonky3/Plonky3.git?rev=a33a312)", + "p3-maybe-rayon 0.3.0 (git+https://github.com/Plonky3/Plonky3.git?rev=a33a312)", + "p3-util 0.3.0 (git+https://github.com/Plonky3/Plonky3.git?rev=a33a312)", + "spin", "tracing", ] @@ -1460,8 +1471,23 @@ source = "git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple# dependencies = [ "itertools 0.14.0", "num-bigint", - "p3-maybe-rayon 0.3.0", - "p3-util 0.3.0", + "p3-maybe-rayon 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-util 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "paste", + "rand 0.9.2", + "serde", + "tracing", +] + +[[package]] +name = "p3-field" +version = "0.3.0" +source = "git+https://github.com/Plonky3/Plonky3.git?rev=a33a312#a33a31274a5e78bb5fbe3f82ffd2c294e17fa830" +dependencies = [ + "itertools 0.14.0", + "num-bigint", + "p3-maybe-rayon 0.3.0 (git+https://github.com/Plonky3/Plonky3.git?rev=a33a312)", + "p3-util 0.3.0 (git+https://github.com/Plonky3/Plonky3.git?rev=a33a312)", "paste", "rand 0.9.2", "serde", @@ -1488,10 +1514,10 @@ name = "p3-interpolation" version = "0.3.0" source = "git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple#4897086b6f460b969dc0baad5c4dff91a4eb1d67" dependencies = [ - "p3-field 0.3.0", - "p3-matrix 
0.3.0", - "p3-maybe-rayon 0.3.0", - "p3-util 0.3.0", + "p3-field 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-matrix 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-maybe-rayon 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-util 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", ] [[package]] @@ -1501,15 +1527,27 @@ source = "git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple# dependencies = [ "itertools 0.14.0", "num-bigint", - "p3-field 0.3.0", - "p3-monty-31 0.3.0", - "p3-poseidon2 0.3.0", - "p3-symmetric 0.3.0", - "p3-util 0.3.0", + "p3-field 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-monty-31 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-poseidon2 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-symmetric 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-util 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", "rand 0.9.2", "serde", ] +[[package]] +name = "p3-koala-bear" +version = "0.3.0" +source = "git+https://github.com/Plonky3/Plonky3.git?rev=a33a312#a33a31274a5e78bb5fbe3f82ffd2c294e17fa830" +dependencies = [ + "p3-field 0.3.0 (git+https://github.com/Plonky3/Plonky3.git?rev=a33a312)", + "p3-monty-31 0.3.0 (git+https://github.com/Plonky3/Plonky3.git?rev=a33a312)", + "p3-poseidon2 0.3.0 (git+https://github.com/Plonky3/Plonky3.git?rev=a33a312)", + "p3-symmetric 0.3.0 (git+https://github.com/Plonky3/Plonky3.git?rev=a33a312)", + "rand 0.9.2", +] + [[package]] name = "p3-koala-bear" version = "0.4.1" @@ -1529,9 +1567,24 @@ version = "0.3.0" source = "git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple#4897086b6f460b969dc0baad5c4dff91a4eb1d67" dependencies = [ "itertools 0.14.0", - "p3-field 0.3.0", - 
"p3-maybe-rayon 0.3.0", - "p3-util 0.3.0", + "p3-field 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-maybe-rayon 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-util 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "rand 0.9.2", + "serde", + "tracing", + "transpose", +] + +[[package]] +name = "p3-matrix" +version = "0.3.0" +source = "git+https://github.com/Plonky3/Plonky3.git?rev=a33a312#a33a31274a5e78bb5fbe3f82ffd2c294e17fa830" +dependencies = [ + "itertools 0.14.0", + "p3-field 0.3.0 (git+https://github.com/Plonky3/Plonky3.git?rev=a33a312)", + "p3-maybe-rayon 0.3.0 (git+https://github.com/Plonky3/Plonky3.git?rev=a33a312)", + "p3-util 0.3.0 (git+https://github.com/Plonky3/Plonky3.git?rev=a33a312)", "rand 0.9.2", "serde", "tracing", @@ -1561,6 +1614,11 @@ dependencies = [ "rayon", ] +[[package]] +name = "p3-maybe-rayon" +version = "0.3.0" +source = "git+https://github.com/Plonky3/Plonky3.git?rev=a33a312#a33a31274a5e78bb5fbe3f82ffd2c294e17fa830" + [[package]] name = "p3-maybe-rayon" version = "0.4.1" @@ -1571,10 +1629,22 @@ name = "p3-mds" version = "0.3.0" source = "git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple#4897086b6f460b969dc0baad5c4dff91a4eb1d67" dependencies = [ - "p3-dft 0.3.0", - "p3-field 0.3.0", - "p3-symmetric 0.3.0", - "p3-util 0.3.0", + "p3-dft 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-field 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-symmetric 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-util 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "rand 0.9.2", +] + +[[package]] +name = "p3-mds" +version = "0.3.0" +source = "git+https://github.com/Plonky3/Plonky3.git?rev=a33a312#a33a31274a5e78bb5fbe3f82ffd2c294e17fa830" +dependencies = [ + "p3-dft 0.3.0 
(git+https://github.com/Plonky3/Plonky3.git?rev=a33a312)", + "p3-field 0.3.0 (git+https://github.com/Plonky3/Plonky3.git?rev=a33a312)", + "p3-symmetric 0.3.0 (git+https://github.com/Plonky3/Plonky3.git?rev=a33a312)", + "p3-util 0.3.0 (git+https://github.com/Plonky3/Plonky3.git?rev=a33a312)", "rand 0.9.2", ] @@ -1597,11 +1667,11 @@ source = "git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple# dependencies = [ "itertools 0.14.0", "p3-commit", - "p3-field 0.3.0", - "p3-matrix 0.3.0", - "p3-maybe-rayon 0.3.0", - "p3-symmetric 0.3.0", - "p3-util 0.3.0", + "p3-field 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-matrix 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-maybe-rayon 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-symmetric 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-util 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", "rand 0.9.2", "serde", "tracing", @@ -1614,17 +1684,40 @@ source = "git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple# dependencies = [ "itertools 0.14.0", "num-bigint", - "p3-dft 0.3.0", - "p3-field 0.3.0", - "p3-matrix 0.3.0", - "p3-maybe-rayon 0.3.0", - "p3-mds 0.3.0", - "p3-poseidon2 0.3.0", - "p3-symmetric 0.3.0", - "p3-util 0.3.0", + "p3-dft 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-field 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-matrix 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-maybe-rayon 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-mds 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-poseidon2 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-symmetric 0.3.0 
(git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-util 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "paste", + "rand 0.9.2", + "serde", + "tracing", + "transpose", +] + +[[package]] +name = "p3-monty-31" +version = "0.3.0" +source = "git+https://github.com/Plonky3/Plonky3.git?rev=a33a312#a33a31274a5e78bb5fbe3f82ffd2c294e17fa830" +dependencies = [ + "itertools 0.14.0", + "num-bigint", + "p3-dft 0.3.0 (git+https://github.com/Plonky3/Plonky3.git?rev=a33a312)", + "p3-field 0.3.0 (git+https://github.com/Plonky3/Plonky3.git?rev=a33a312)", + "p3-matrix 0.3.0 (git+https://github.com/Plonky3/Plonky3.git?rev=a33a312)", + "p3-maybe-rayon 0.3.0 (git+https://github.com/Plonky3/Plonky3.git?rev=a33a312)", + "p3-mds 0.3.0 (git+https://github.com/Plonky3/Plonky3.git?rev=a33a312)", + "p3-poseidon2 0.3.0 (git+https://github.com/Plonky3/Plonky3.git?rev=a33a312)", + "p3-symmetric 0.3.0 (git+https://github.com/Plonky3/Plonky3.git?rev=a33a312)", + "p3-util 0.3.0 (git+https://github.com/Plonky3/Plonky3.git?rev=a33a312)", "paste", "rand 0.9.2", "serde", + "spin", "tracing", "transpose", ] @@ -1657,10 +1750,22 @@ name = "p3-poseidon2" version = "0.3.0" source = "git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple#4897086b6f460b969dc0baad5c4dff91a4eb1d67" dependencies = [ - "p3-field 0.3.0", - "p3-mds 0.3.0", - "p3-symmetric 0.3.0", - "p3-util 0.3.0", + "p3-field 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-mds 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-symmetric 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-util 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "rand 0.9.2", +] + +[[package]] +name = "p3-poseidon2" +version = "0.3.0" +source = "git+https://github.com/Plonky3/Plonky3.git?rev=a33a312#a33a31274a5e78bb5fbe3f82ffd2c294e17fa830" +dependencies 
= [ + "p3-field 0.3.0 (git+https://github.com/Plonky3/Plonky3.git?rev=a33a312)", + "p3-mds 0.3.0 (git+https://github.com/Plonky3/Plonky3.git?rev=a33a312)", + "p3-symmetric 0.3.0 (git+https://github.com/Plonky3/Plonky3.git?rev=a33a312)", + "p3-util 0.3.0 (git+https://github.com/Plonky3/Plonky3.git?rev=a33a312)", "rand 0.9.2", ] @@ -1682,7 +1787,17 @@ version = "0.3.0" source = "git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple#4897086b6f460b969dc0baad5c4dff91a4eb1d67" dependencies = [ "itertools 0.14.0", - "p3-field 0.3.0", + "p3-field 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "serde", +] + +[[package]] +name = "p3-symmetric" +version = "0.3.0" +source = "git+https://github.com/Plonky3/Plonky3.git?rev=a33a312#a33a31274a5e78bb5fbe3f82ffd2c294e17fa830" +dependencies = [ + "itertools 0.14.0", + "p3-field 0.3.0 (git+https://github.com/Plonky3/Plonky3.git?rev=a33a312)", "serde", ] @@ -1705,6 +1820,14 @@ dependencies = [ "serde", ] +[[package]] +name = "p3-util" +version = "0.3.0" +source = "git+https://github.com/Plonky3/Plonky3.git?rev=a33a312#a33a31274a5e78bb5fbe3f82ffd2c294e17fa830" +dependencies = [ + "serde", +] + [[package]] name = "p3-util" version = "0.4.1" @@ -1800,7 +1923,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "bf1d70880e76bdc13ba52eafa6239ce793d85c8e43896507e43dd8984ff05b82" dependencies = [ "pest", - "sha2", + "sha2 0.10.9", ] [[package]] @@ -2005,14 +2128,14 @@ dependencies = [ "lean_compiler", "lean_prover", "lean_vm", - "leansig", + "leansig 0.1.0 (git+https://github.com/leanEthereum/leansig.git?rev=73bedc26ed961b110df7ac2e234dc11361a4bf25)", "lookup", "multilinear-toolkit", "p3-challenger 0.3.0", - "p3-koala-bear 0.3.0", - "p3-poseidon2 0.3.0", - "p3-symmetric 0.3.0", - "p3-util 0.3.0", + "p3-koala-bear 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-poseidon2 0.3.0 
(git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-symmetric 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-util 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", "rand 0.9.2", "serde", "serde_json", @@ -2070,9 +2193,9 @@ dependencies = [ [[package]] name = "ruint" -version = "1.17.0" +version = "1.17.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a68df0380e5c9d20ce49534f292a36a7514ae21350726efe1865bdb1fa91d278" +checksum = "c141e807189ad38a07276942c6623032d3753c8859c146104ac2e4d68865945a" dependencies = [ "alloy-rlp", "ark-ff 0.3.0", @@ -2142,7 +2265,7 @@ dependencies = [ "errno", "libc", "linux-raw-sys", - "windows-sys 0.61.2", + "windows-sys", ] [[package]] @@ -2257,12 +2380,16 @@ dependencies = [ ] [[package]] -name = "serde_spanned" -version = "1.0.4" +name = "sha2" +version = "0.9.9" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "f8bbf91e5a4d6315eee45e704372590b30e260ee83af6639d64557f51b067776" +checksum = "4d58a1e1bf39749807d89cf2d98ac2dfa0ff1cb3faa38fbb64dd88ac8013d800" dependencies = [ - "serde_core", + "block-buffer 0.9.0", + "cfg-if", + "cpufeatures", + "digest 0.9.0", + "opaque-debug", ] [[package]] @@ -2372,7 +2499,7 @@ dependencies = [ "derive_more", "lookup", "multilinear-toolkit", - "p3-util 0.3.0", + "p3-util 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", "tracing", "utils", "whir-p3", @@ -2393,8 +2520,8 @@ dependencies = [ "backend", "constraints-folder", "fiat-shamir", - "p3-field 0.3.0", - "p3-util 0.3.0", + "p3-field 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-util 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", "rayon", ] @@ -2436,7 +2563,7 @@ dependencies = [ "getrandom 0.3.4", "once_cell", "rustix", - "windows-sys 0.61.2", + "windows-sys", ] [[package]] @@ -2477,26 +2604,11 @@ 
dependencies = [ "crunchy", ] -[[package]] -name = "toml" -version = "0.9.12+spec-1.1.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "cf92845e79fc2e2def6a5d828f0801e29a2f8acc037becc5ab08595c7d5e9863" -dependencies = [ - "indexmap", - "serde_core", - "serde_spanned", - "toml_datetime", - "toml_parser", - "toml_writer", - "winnow", -] - [[package]] name = "toml_datetime" -version = "0.7.5+spec-1.1.0" +version = "0.7.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "92e1cfed4a3038bc5a127e35a2d360f145e1f4b971b551a2ba5fd7aedf7e1347" +checksum = "f2cdb639ebbc97961c51720f858597f7f24c4fc295327923af55b74c3c724533" dependencies = [ "serde_core", ] @@ -2515,19 +2627,13 @@ dependencies = [ [[package]] name = "toml_parser" -version = "1.0.9+spec-1.1.0" +version = "1.0.4" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "702d4415e08923e7e1ef96cd5727c0dfed80b4d2fa25db9647fe5eb6f7c5a4c4" +checksum = "c0cbe268d35bdb4bb5a56a2de88d0ad0eb70af5384a99d648cd4b3d04039800e" dependencies = [ "winnow", ] -[[package]] -name = "toml_writer" -version = "1.0.6+spec-1.1.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ab16f14aed21ee8bfd8ec22513f7287cd4a91aa92e44edfe2c17ddd004e92607" - [[package]] name = "tracing" version = "0.1.43" @@ -2672,12 +2778,6 @@ version = "0.2.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ebc1c04c71510c7f702b52b7c350734c9ff1295c464a03335b00bb84fc54f853" -[[package]] -name = "utf8parse" -version = "0.2.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "06abde3611657adf66d383f00b093d7faecc7fa57071cce2578660c9f1010821" - [[package]] name = "utils" version = "0.1.0" @@ -2685,10 +2785,10 @@ source = "git+https://github.com/leanEthereum/leanMultisig.git?rev=e4474138487ee dependencies = [ "multilinear-toolkit", "p3-challenger 0.3.0", - "p3-koala-bear 0.3.0", - "p3-poseidon2 0.3.0", - 
"p3-symmetric 0.3.0", - "p3-util 0.3.0", + "p3-koala-bear 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-poseidon2 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-symmetric 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-util 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", "tracing", "tracing-forest 0.3.0", "tracing-subscriber", @@ -2737,18 +2837,18 @@ source = "git+https://github.com/TomWambsgans/whir-p3?branch=lean-vm-simple#f74b dependencies = [ "itertools 0.14.0", "multilinear-toolkit", - "p3-baby-bear 0.3.0", + "p3-baby-bear 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", "p3-challenger 0.3.0", "p3-commit", - "p3-dft 0.3.0", - "p3-field 0.3.0", + "p3-dft 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-field 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", "p3-interpolation", - "p3-koala-bear 0.3.0", - "p3-matrix 0.3.0", - "p3-maybe-rayon 0.3.0", + "p3-koala-bear 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-matrix 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-maybe-rayon 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", "p3-merkle-tree", - "p3-symmetric 0.3.0", - "p3-util 0.3.0", + "p3-symmetric 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-util 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", "rand 0.9.2", "rayon", "thiserror", @@ -2794,15 +2894,6 @@ dependencies = [ "windows-targets", ] -[[package]] -name = "windows-sys" -version = "0.61.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ae137229bcbd6cdf0f7b80a31df61766145077ddf49416a728b02cb3921ff3fc" -dependencies = [ - "windows-link", -] - [[package]] 
name = "windows-targets" version = "0.52.6" @@ -2894,11 +2985,11 @@ dependencies = [ "lookup", "multilinear-toolkit", "p3-challenger 0.3.0", - "p3-koala-bear 0.3.0", - "p3-monty-31 0.3.0", - "p3-poseidon2 0.3.0", - "p3-symmetric 0.3.0", - "p3-util 0.3.0", + "p3-koala-bear 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-monty-31 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-poseidon2 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-symmetric 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", + "p3-util 0.3.0 (git+https://github.com/TomWambsgans/Plonky3.git?branch=lean-vm-simple)", "pest", "pest_derive", "rand 0.9.2", diff --git a/xmss/rust/Cargo.toml b/xmss/rust/Cargo.toml new file mode 100644 index 0000000..5994f19 --- /dev/null +++ b/xmss/rust/Cargo.toml @@ -0,0 +1,11 @@ +[workspace] +resolver = "2" +members = [ + "hashsig-glue", + "multisig-glue", +] + +[profile.release] +lto = false +strip = "symbols" +codegen-units = 16 diff --git a/xmss/rust/hashsig-glue/Cargo.toml b/xmss/rust/hashsig-glue/Cargo.toml new file mode 100644 index 0000000..8cc235c --- /dev/null +++ b/xmss/rust/hashsig-glue/Cargo.toml @@ -0,0 +1,18 @@ +[package] +name = "hashsig-glue" +version = "0.1.0" +edition = "2021" + +[dependencies] +sha2 = "0.9" +leansig = { git = "https://github.com/leanEthereum/leanSig", rev = "f10dcbefac2502d356d93f686e8b4ecd8dc8840a" } +rand = "0.9.2" +rand_chacha = "0.9.0" +thiserror = "2.0.17" +ssz = { package = "ethereum_ssz", version = "0.10" } +serde = { version = "1.0", features = ["derive"] } +serde_json = "1.0" + +[lib] +crate-type = ["staticlib"] +name = "hashsig_glue" diff --git a/xmss/rust/hashsig-glue/src/lib.rs b/xmss/rust/hashsig-glue/src/lib.rs new file mode 100644 index 0000000..d26eb0d --- /dev/null +++ b/xmss/rust/hashsig-glue/src/lib.rs @@ -0,0 +1,378 @@ +use leansig::{signature::SignatureScheme, 
MESSAGE_LENGTH}; +use rand::Rng; +use rand::SeedableRng; +use rand_chacha::ChaCha20Rng; +use sha2::{Digest, Sha256}; +use std::ffi::CStr; +use std::os::raw::c_char; +use std::ptr; +use std::slice; + +pub type HashSigScheme = + leansig::signature::generalized_xmss::instantiations_poseidon_top_level::lifetime_2_to_the_32::hashing_optimized::SIGTopLevelTargetSumLifetime32Dim64Base8; +pub type HashSigPrivateKey = ::SecretKey; +pub type HashSigPublicKey = ::PublicKey; +pub type HashSigSignature = ::Signature; + +#[repr(C)] +pub struct PrivateKey { + inner: HashSigPrivateKey, +} + +#[repr(C)] +pub struct PublicKey { + pub inner: HashSigPublicKey, +} + +#[repr(C)] +pub struct Signature { + pub inner: HashSigSignature, +} + +#[repr(C)] +pub struct KeyPair { + pub public_key: PublicKey, + pub private_key: PrivateKey, +} + +impl PrivateKey { + pub fn new(inner: HashSigPrivateKey) -> Self { + Self { inner } + } + + pub fn generate( + rng: &mut R, + activation_epoch: usize, + num_active_epochs: usize, + ) -> (PublicKey, Self) { + let (public_key, private_key) = + ::key_gen(rng, activation_epoch, num_active_epochs); + (PublicKey::new(public_key), Self::new(private_key)) + } + + pub fn sign( + &self, + message: &[u8; MESSAGE_LENGTH], + epoch: u32, + ) -> Result { + Ok(Signature::new(::sign( + &self.inner, + epoch, + message, + )?)) + } +} + +impl PublicKey { + pub fn new(inner: HashSigPublicKey) -> Self { + Self { inner } + } +} + +impl Signature { + pub fn new(inner: HashSigSignature) -> Self { + Self { inner } + } + + pub fn verify( + &self, + message: &[u8; MESSAGE_LENGTH], + public_key: &PublicKey, + epoch: u32, + ) -> bool { + ::verify(&public_key.inner, epoch, message, &self.inner) + } +} + +// --- FFI Functions --- + +#[no_mangle] +pub unsafe extern "C" fn hashsig_keypair_generate( + seed_phrase: *const c_char, + activation_epoch: usize, + num_active_epochs: usize, +) -> *mut KeyPair { + let seed_phrase = unsafe { CStr::from_ptr(seed_phrase).to_string_lossy().into_owned() 
}; + let mut hasher = Sha256::new(); + hasher.update(seed_phrase.as_bytes()); + let seed = hasher.finalize().into(); + + let (public_key, private_key) = PrivateKey::generate( + &mut ::from_seed(seed), + activation_epoch, + num_active_epochs, + ); + + Box::into_raw(Box::new(KeyPair { + public_key, + private_key, + })) +} + +#[no_mangle] +pub unsafe extern "C" fn hashsig_keypair_from_ssz( + private_key_ptr: *const u8, + private_key_len: usize, + public_key_ptr: *const u8, + public_key_len: usize, +) -> *mut KeyPair { + if private_key_ptr.is_null() || public_key_ptr.is_null() { + return ptr::null_mut(); + } + unsafe { + let sk_slice = slice::from_raw_parts(private_key_ptr, private_key_len); + let pk_slice = slice::from_raw_parts(public_key_ptr, public_key_len); + + let private_key: HashSigPrivateKey = match HashSigPrivateKey::from_ssz_bytes(sk_slice) { + Ok(key) => key, + Err(_) => return ptr::null_mut(), + }; + let public_key: HashSigPublicKey = match HashSigPublicKey::from_ssz_bytes(pk_slice) { + Ok(key) => key, + Err(_) => return ptr::null_mut(), + }; + + Box::into_raw(Box::new(KeyPair { + public_key: PublicKey::new(public_key), + private_key: PrivateKey::new(private_key), + })) + } +} + +#[no_mangle] +pub unsafe extern "C" fn hashsig_keypair_free(keypair: *mut KeyPair) { + if !keypair.is_null() { + unsafe { + let _ = Box::from_raw(keypair); + } + } +} + +#[no_mangle] +pub unsafe extern "C" fn hashsig_keypair_get_public_key( + keypair: *const KeyPair, +) -> *const PublicKey { + if keypair.is_null() { + return ptr::null(); + } + &(*keypair).public_key +} + +#[no_mangle] +pub unsafe extern "C" fn hashsig_keypair_get_private_key( + keypair: *const KeyPair, +) -> *const PrivateKey { + if keypair.is_null() { + return ptr::null(); + } + &(*keypair).private_key +} + +#[no_mangle] +pub unsafe extern "C" fn hashsig_public_key_from_ssz( + public_key_ptr: *const u8, + public_key_len: usize, +) -> *mut PublicKey { + if public_key_ptr.is_null() { + return ptr::null_mut(); + } + 
unsafe { + let pk_slice = slice::from_raw_parts(public_key_ptr, public_key_len); + let public_key: HashSigPublicKey = match HashSigPublicKey::from_ssz_bytes(pk_slice) { + Ok(key) => key, + Err(_) => return ptr::null_mut(), + }; + Box::into_raw(Box::new(PublicKey::new(public_key))) + } +} + +#[no_mangle] +pub unsafe extern "C" fn hashsig_public_key_free(public_key: *mut PublicKey) { + if !public_key.is_null() { + unsafe { + let _ = Box::from_raw(public_key); + } + } +} + +#[no_mangle] +pub unsafe extern "C" fn hashsig_sign( + private_key: *const PrivateKey, + message_ptr: *const u8, + epoch: u32, +) -> *mut Signature { + if private_key.is_null() || message_ptr.is_null() { + return ptr::null_mut(); + } + unsafe { + let private_key_ref = &*private_key; + let message_slice = slice::from_raw_parts(message_ptr, MESSAGE_LENGTH); + let message_array: &[u8; MESSAGE_LENGTH] = match message_slice.try_into() { + Ok(arr) => arr, + Err(_) => return ptr::null_mut(), + }; + match private_key_ref.sign(message_array, epoch) { + Ok(sig) => Box::into_raw(Box::new(sig)), + Err(_) => ptr::null_mut(), + } + } +} + +#[no_mangle] +pub unsafe extern "C" fn hashsig_signature_free(signature: *mut Signature) { + if !signature.is_null() { + unsafe { + let _ = Box::from_raw(signature); + } + } +} + +#[no_mangle] +pub unsafe extern "C" fn hashsig_signature_from_ssz( + signature_ptr: *const u8, + signature_len: usize, +) -> *mut Signature { + if signature_ptr.is_null() || signature_len == 0 { + return ptr::null_mut(); + } + unsafe { + let sig_slice = slice::from_raw_parts(signature_ptr, signature_len); + let signature: HashSigSignature = match HashSigSignature::from_ssz_bytes(sig_slice) { + Ok(sig) => sig, + Err(_) => return ptr::null_mut(), + }; + Box::into_raw(Box::new(Signature { inner: signature })) + } +} + +#[no_mangle] +pub unsafe extern "C" fn hashsig_verify( + public_key: *const PublicKey, + message_ptr: *const u8, + epoch: u32, + signature: *const Signature, +) -> i32 { + if 
public_key.is_null() || message_ptr.is_null() || signature.is_null() { + return -1; + } + unsafe { + let public_key_ref = &*public_key; + let signature_ref = &*signature; + let message_slice = slice::from_raw_parts(message_ptr, MESSAGE_LENGTH); + let message_array: &[u8; MESSAGE_LENGTH] = match message_slice.try_into() { + Ok(arr) => arr, + Err(_) => return -1, + }; + if signature_ref.verify(message_array, public_key_ref, epoch) { + 1 + } else { + 0 + } + } +} + +#[no_mangle] +pub extern "C" fn hashsig_message_length() -> usize { + MESSAGE_LENGTH +} + +use ssz::{Decode, Encode}; + +#[no_mangle] +pub unsafe extern "C" fn hashsig_signature_to_bytes( + signature: *const Signature, + buffer: *mut u8, + buffer_len: usize, +) -> usize { + if signature.is_null() || buffer.is_null() { + return 0; + } + unsafe { + let sig_ref = &*signature; + let ssz_bytes = sig_ref.inner.as_ssz_bytes(); + if ssz_bytes.len() > buffer_len { + return 0; + } + let output_slice = slice::from_raw_parts_mut(buffer, buffer_len); + output_slice[..ssz_bytes.len()].copy_from_slice(&ssz_bytes); + ssz_bytes.len() + } +} + +#[no_mangle] +pub unsafe extern "C" fn hashsig_public_key_to_bytes( + public_key: *const PublicKey, + buffer: *mut u8, + buffer_len: usize, +) -> usize { + if public_key.is_null() || buffer.is_null() { + return 0; + } + unsafe { + let public_key_ref = &*public_key; + let ssz_bytes = public_key_ref.inner.as_ssz_bytes(); + if ssz_bytes.len() > buffer_len { + return 0; + } + let output_slice = slice::from_raw_parts_mut(buffer, buffer_len); + output_slice[..ssz_bytes.len()].copy_from_slice(&ssz_bytes); + ssz_bytes.len() + } +} + +#[no_mangle] +pub unsafe extern "C" fn hashsig_private_key_to_bytes( + private_key: *const PrivateKey, + buffer: *mut u8, + buffer_len: usize, +) -> usize { + if private_key.is_null() || buffer.is_null() { + return 0; + } + unsafe { + let private_key_ref = &*private_key; + let ssz_bytes = private_key_ref.inner.as_ssz_bytes(); + if ssz_bytes.len() > buffer_len { 
+ return 0; + } + let output_slice = slice::from_raw_parts_mut(buffer, buffer_len); + output_slice[..ssz_bytes.len()].copy_from_slice(&ssz_bytes); + ssz_bytes.len() + } +} + +#[no_mangle] +pub unsafe extern "C" fn hashsig_verify_ssz( + pubkey_bytes: *const u8, + pubkey_len: usize, + message: *const u8, + epoch: u32, + signature_bytes: *const u8, + signature_len: usize, +) -> i32 { + if pubkey_bytes.is_null() || message.is_null() || signature_bytes.is_null() { + return -1; + } + unsafe { + let pk_data = slice::from_raw_parts(pubkey_bytes, pubkey_len); + let sig_data = slice::from_raw_parts(signature_bytes, signature_len); + let msg_data = slice::from_raw_parts(message, MESSAGE_LENGTH); + let message_array: &[u8; MESSAGE_LENGTH] = match msg_data.try_into() { + Ok(arr) => arr, + Err(_) => return -1, + }; + let pk: HashSigPublicKey = match HashSigPublicKey::from_ssz_bytes(pk_data) { + Ok(pk) => pk, + Err(_) => return -1, + }; + let sig: HashSigSignature = match HashSigSignature::from_ssz_bytes(sig_data) { + Ok(sig) => sig, + Err(_) => return -1, + }; + if ::verify(&pk, epoch, message_array, &sig) { + 1 + } else { + 0 + } + } +} diff --git a/xmss/rust/multisig-glue/Cargo.toml b/xmss/rust/multisig-glue/Cargo.toml new file mode 100644 index 0000000..be90be8 --- /dev/null +++ b/xmss/rust/multisig-glue/Cargo.toml @@ -0,0 +1,15 @@ +[package] +name = "multisig-glue" +version = "0.1.0" +edition = "2021" + +[dependencies] +rec_aggregation = { git = "https://github.com/leanEthereum/leanMultisig.git", rev = "e4474138487eeb1ed7c2e1013674fe80ac9f3165" } +leansig = { git = "https://github.com/leanEthereum/leansig.git", rev = "73bedc26ed961b110df7ac2e234dc11361a4bf25" } +whir-p3 = { git = "https://github.com/TomWambsgans/whir-p3", branch = "lean-vm-simple" } +p3-koala-bear = { git = "https://github.com/TomWambsgans/Plonky3.git", branch = "lean-vm-simple" } +ssz = { package = "ethereum_ssz", version = "0.10" } + +[lib] +crate-type = ["staticlib"] +name = "multisig_glue" diff --git 
a/xmss/rust/multisig-glue/src/lib.rs b/xmss/rust/multisig-glue/src/lib.rs new file mode 100644 index 0000000..8c4d861 --- /dev/null +++ b/xmss/rust/multisig-glue/src/lib.rs @@ -0,0 +1,184 @@ +use rec_aggregation::xmss_aggregate::{ + config::{LeanSigPubKey, LeanSigSignature}, + xmss_aggregate_signatures, xmss_setup_aggregation_program, xmss_verify_aggregated_signatures, + Devnet2XmssAggregateSignature, +}; +use ssz::{Decode, Encode}; +use std::slice; +use std::sync::Once; + +use leansig::signature::generalized_xmss::instantiations_poseidon_top_level::lifetime_2_to_the_32::hashing_optimized::SIGTopLevelTargetSumLifetime32Dim64Base8; +use leansig::signature::SignatureScheme; + +type HashSigScheme = SIGTopLevelTargetSumLifetime32Dim64Base8; +type HashSigPublicKey = ::PublicKey; +type HashSigSignature = ::Signature; + +static PROVER_INIT: Once = Once::new(); +static VERIFIER_INIT: Once = Once::new(); + +// Must match hashsig-glue's struct layout exactly. +#[repr(C)] +pub struct PublicKey { + pub inner: HashSigPublicKey, +} + +#[repr(C)] +pub struct Signature { + pub inner: HashSigSignature, +} + +pub fn to_ssz_bytes(agg_sig: &Devnet2XmssAggregateSignature) -> Vec { + agg_sig.as_ssz_bytes() +} + +pub fn from_ssz_bytes(bytes: &[u8]) -> Result { + Devnet2XmssAggregateSignature::from_ssz_bytes(bytes) +} + +#[no_mangle] +pub extern "C" fn xmss_setup_prover() { + PROVER_INIT.call_once(|| { + xmss_setup_aggregation_program(); + whir_p3::precompute_dft_twiddles::(1 << 24); + }); +} + +#[no_mangle] +pub extern "C" fn xmss_setup_verifier() { + VERIFIER_INIT.call_once(|| { + xmss_setup_aggregation_program(); + }); +} + +#[no_mangle] +pub unsafe extern "C" fn xmss_aggregate( + public_keys: *const *const PublicKey, + num_keys: usize, + signatures: *const *const Signature, + num_sigs: usize, + message_hash_ptr: *const u8, + epoch: u32, +) -> *const Devnet2XmssAggregateSignature { + if public_keys.is_null() || signatures.is_null() || message_hash_ptr.is_null() { + return 
std::ptr::null(); + } + if num_keys != num_sigs { + return std::ptr::null(); + } + + let message_hash_slice = slice::from_raw_parts(message_hash_ptr, 32); + let message_hash: &[u8; 32] = match message_hash_slice.try_into() { + Ok(arr) => arr, + Err(_) => return std::ptr::null_mut(), + }; + + let pub_key_ptrs = slice::from_raw_parts(public_keys, num_keys); + let mut pub_keys: Vec = Vec::with_capacity(num_keys); + for &pk_ptr in pub_key_ptrs { + if pk_ptr.is_null() { + return std::ptr::null(); + } + pub_keys.push((*pk_ptr).inner.clone()); + } + + let sig_ptrs = slice::from_raw_parts(signatures, num_sigs); + let mut lean_signatures: Vec = Vec::with_capacity(num_sigs); + for &sig_ptr in sig_ptrs { + if sig_ptr.is_null() { + return std::ptr::null(); + } + lean_signatures.push((*sig_ptr).inner.clone()); + } + + // Run inline with catch_unwind to prevent CGo crash from Rust panics. + // Panics can occur when SSZ-round-tripped signatures have corrupted rho fields. + match std::panic::catch_unwind(std::panic::AssertUnwindSafe(|| { + xmss_aggregate_signatures(&pub_keys, &lean_signatures, message_hash, epoch) + })) { + Ok(Ok(sig)) => Box::into_raw(Box::new(sig)), + Ok(Err(_)) => std::ptr::null(), + Err(_) => std::ptr::null(), // panic caught + } +} + +#[no_mangle] +pub unsafe extern "C" fn xmss_verify_aggregated( + public_keys: *const *const PublicKey, + num_keys: usize, + message_hash_ptr: *const u8, + agg_sig: *const Devnet2XmssAggregateSignature, + epoch: u32, +) -> bool { + if public_keys.is_null() || message_hash_ptr.is_null() || agg_sig.is_null() { + return false; + } + + let message_hash_slice = slice::from_raw_parts(message_hash_ptr, 32); + let message_hash: &[u8; 32] = match message_hash_slice.try_into() { + Ok(arr) => arr, + Err(_) => return false, + }; + + let pub_key_ptrs = slice::from_raw_parts(public_keys, num_keys); + let mut pub_keys: Vec = Vec::with_capacity(num_keys); + for &pk_ptr in pub_key_ptrs { + if pk_ptr.is_null() { + return false; + } + 
pub_keys.push((*pk_ptr).inner.clone()); + } + + let agg_sig_ref = &*agg_sig; + let message_owned = *message_hash; + let epoch_owned = epoch; + // Wrap in catch_unwind for CGo safety. + std::panic::catch_unwind(std::panic::AssertUnwindSafe(|| { + xmss_verify_aggregated_signatures(&pub_keys, &message_owned, agg_sig_ref, epoch_owned) + .is_ok() + })) + .unwrap_or_default() +} + +#[no_mangle] +pub unsafe extern "C" fn xmss_free_aggregate_signature( + agg_sig: *mut Devnet2XmssAggregateSignature, +) { + if !agg_sig.is_null() { + drop(Box::from_raw(agg_sig)); + } +} + +#[no_mangle] +pub unsafe extern "C" fn xmss_aggregate_signature_to_bytes( + agg_sig: *const Devnet2XmssAggregateSignature, + buffer: *mut u8, + buffer_len: usize, +) -> usize { + if agg_sig.is_null() || buffer.is_null() { + return 0; + } + let agg_sig_ref = &*agg_sig; + let ssz_bytes = to_ssz_bytes(agg_sig_ref); + if ssz_bytes.len() > buffer_len { + return 0; + } + let output_slice = slice::from_raw_parts_mut(buffer, buffer_len); + output_slice[..ssz_bytes.len()].copy_from_slice(&ssz_bytes); + ssz_bytes.len() +} + +#[no_mangle] +pub unsafe extern "C" fn xmss_aggregate_signature_from_bytes( + bytes: *const u8, + bytes_len: usize, +) -> *mut Devnet2XmssAggregateSignature { + if bytes.is_null() || bytes_len == 0 { + return std::ptr::null_mut(); + } + let input_slice = slice::from_raw_parts(bytes, bytes_len); + match from_ssz_bytes(input_slice) { + Ok(agg_sig) => Box::into_raw(Box::new(agg_sig)), + Err(_) => std::ptr::null_mut(), + } +} diff --git a/xmss/rust/rust-toolchain.toml b/xmss/rust/rust-toolchain.toml new file mode 100644 index 0000000..ff100ed --- /dev/null +++ b/xmss/rust/rust-toolchain.toml @@ -0,0 +1,2 @@ +[toolchain] +channel = "1.90.0"