feat: featureExport / featureConsensusEntropy #693

Draft
sublimator wants to merge 254 commits into dev from feature-export-rng

sublimator (Collaborator) commented Feb 21, 2026


Last updated: 2026-03-24 | branch: feature-export-rng | commit: 9562b45 (chore: remove stale Peer-level RNG forwarders)


featureExport + featureConsensusEntropy: Cross-Chain Exports + Decentralized Secure Randomness

This PR introduces two features under separate amendment flags:

  • featureExport — Cross-chain transaction export: hooks or users create transactions signed by the network's validators for use on external chains
  • featureConsensusEntropy — Consensus-derived randomness: deterministic, manipulation-resistant entropy available to hooks via dice() and random()

Both share UNL infrastructure (quorum calculation, UNLReport trust model) but are independently amendment-gated.


Amendment Relationship

The two amendments are independent but complementary:

| Configuration | Export Behavior | RNG Behavior |
| --- | --- | --- |
| Neither | Disabled | Disabled |
| CE only | Disabled | Full commit/reveal/entropy pipeline |
| Export only | Works with 100% quorum (unanimity) | Disabled |
| CE + Export | Works with 80% quorum + SHAMap convergence | Full pipeline + export sig convergence in parallel |

Why Export benefits from CE: Without CE, export uses ephemeral signature collection. Different validators may see different sig counts at ledger close. With 80% quorum this could cause validators to flip between success and retry across ledger boundaries — acceptable but not ideal. Unanimity avoids this churn entirely, at the cost of any missing validator blocking the export.

With CE enabled, Export piggybacks on CE's ExtendedPosition serialization, sub-state machine, and SHAMap fetch/merge infrastructure to converge on a shared sig set before closing the ledger. All validators agree on exactly which exports have quorum, enabling the standard 80% threshold.

Neither blocks the other: CE's RNG sub-states and Export's sig convergence run in parallel during the establish phase. Both check their own gates independently and fall back gracefully if they can't converge.


UNL Source and Fallback Behavior

Both features use UNLReport ActiveValidators (UNLReport.sfActiveValidators) as the canonical validator set when available.

For liveness on early ledgers and test/dev environments, both use a fallback model:

  • If UNLReport is unavailable or empty, use the node's local validator configuration as a temporary proxy.
  • Export applies this report-first/fallback model consistently for signing eligibility, inbound signature acceptance, quorum sizing, and final verification.
  • RNG applies the same report-first/fallback model when caching membership for commit/reveal participation.

Part 1: Cross-Chain Transaction Export

What This Feature Does

This feature enables cross-chain transaction export — allowing a hook or user on Xahau to create a transaction that Xahau's validators collectively sign. The resulting multisigned transaction is a normal, valid transaction on XRPL — no protocol changes or cooperation from XRPL are required.

How it works from XRPL's perspective

An account on XRPL is set up with a SignerList pointing to Xahau validator keys. When Xahau's validators sign an exported transaction, they're effectively acting as members of that SignerList. Once enough signatures are collected, the transaction can be submitted to XRPL as a standard multi-signed transaction. XRPL doesn't know or care that the signatures came from Xahau.

This design requires zero XRPL-side changes, no XLS specification, and no amendment on XRPL.

How it works from Xahau's perspective

A hook calls xport() or a user submits a ttEXPORT transaction (type 91). The transaction enters the open ledger (tesSUCCESS provisional), and validators attach their multisign signatures to consensus proposals. When enough validators have signed (quorum met), the export succeeds with sfExportResult in the transaction metadata — containing the fully multisigned transaction as sfExportedTxn (readable JSON, ready for raw submission to XRPL). If quorum isn't reached before LastLedgerSequence, the export expires with tecEXPORT_EXPIRED.

In standalone mode (unit tests / dev), Export::doApply signs directly with the node's own validator keys — no consensus proposals needed.

Exported transactions must use TicketSequence (with Sequence=0) because a bounced transaction on the destination chain would jam sequential sequence numbers. A NetworkID guard rejects exports targeting the local network or from unconfigured nodes.
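
As a rough illustration, the two guards might look like this in the transactor's up-front checks. This is a hedged sketch: sfSequence, sfTicketSequence, sfNetworkID, and Config::NETWORK_ID are the codebase's own, but the exact placement and error codes are assumptions, not this PR's literal code.

// Hedged sketch of the two guards described above; error codes and
// placement are illustrative only.
// innerTx is the inner cross-chain transaction extracted from sfExportedTxn.
if (innerTx.getFieldU32(sfSequence) != 0 ||
    !innerTx.isFieldPresent(sfTicketSequence))
    return temMALFORMED;  // assumed code: must use a Ticket with Sequence=0,
                          // since a bounce would jam sequential sequences

if (innerTx.getFieldU32(sfNetworkID) == ctx.app.config().NETWORK_ID)
    return temMALFORMED;  // assumed code: refuse exports targeting the
                          // local network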


Design Evolution

The export signing mechanism went through three iterations:

V1 — On-ledger signature accumulation (built, abandoned): Each validator submitted a signing transaction that modified a shared ledger entry. Every modification produced Previous/Final metadata containing the entire growing signer array — O(n²) metadata explosion with 35 validators.

V2 — Validation-based ephemeral sigs (built, replaced): Moved signatures out of the ledger into memory. An ltEXPORTED_TXN entry stored the pending export, validators signed at validation time and broadcast via TMValidation, and the TxQ injected a ttEXPORT_FINALIZE pseudo-tx on quorum. This solved the metadata bloat but required multiple ledger closes and a complex lifecycle (separate ledger entry → signature collection → pseudo-tx injection → cleanup).

V3 — Retriable proposal-based sigs (current): A single ttEXPORT transaction enters the open ledger, validators attach signatures to consensus proposals, and the transactor either succeeds (quorum) or retries (terRETRY_EXPORT). No intermediate ledger objects. Result in metadata. Same-ledger finalization.

How It Works

Instead of V1's on-ledger accumulation or V2's multi-ledger pseudo-tx lifecycle, exports use a retriable transaction pattern with proposal-based signature collection:

  1. User or hook submits ttEXPORT → enters the open ledger with tesSUCCESS (provisional, consumes sequence + fee)
  2. Validators sign via proposals → each validator extracts the inner tx (sfExportedTxn), computes its multisign signature (buildMultiSigningData + sign), and attaches txHash(32) + pubkey(33) + signature(~72) to TMProposeSet.exportSignatures (deduplicated per round via markSent())
  3. Peers harvest sigs → trusted proposals are harvested, signatures stored in ExportSigCollector (validated against UNL)
  4. Closed ledger: quorum check → Export transactor checks sig count against threshold
    • Quorum met → builds the fully multisigned tx (Signers array, empty SigningPubKey), stores it as sfExportedTxn inside sfExportResult metadata, creates shadow ticket keyed by the signed tx hash (getHash(HashPrefix::transactionID) — includes Signers)
    • Not enough sigs, before LLS → terRETRY_EXPORT (retained for next ledger, no fee/sequence consumed)
    • Not enough sigs, on LLS ledger → tecEXPORT_EXPIRED (sequence consumed, clean failure — prevents tefMAX_LEDGER silently dropping the tx on the next ledger)

Only the final result touches the ledger. No intermediate signature accumulation. The multisigned blob in metadata is ready for raw submission to XRPL. Typical latency: same ledger (99%+ on healthy network).


Tiered Quorum: Safety Without Complexity

Export quorum adapts based on whether ConsensusEntropy is also enabled:

| Configuration | Quorum | Mechanism | Rationale |
| --- | --- | --- | --- |
| Export only | 100% (unanimity) | Ephemeral ExportSigCollector | Without CE's convergence infra, 80% could cause validators to flip between success/retry across ledger boundaries. Unanimity avoids this churn. |
| Export + CE | 80% (calculateQuorumThreshold) | SHAMap-based sig set convergence | CE provides ExtendedPosition, sub-state machine, and SHAMap fetch/merge for deterministic agreement |
| Standalone | Auto-sign | Export::doApply signs directly with the node's validator keys | Unit tests / dev — no consensus needed |
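
For intuition, the 80% threshold can be read as a ceiling computation over the UNL size. calculateQuorumThreshold's body is not shown in this PR text, so the round-up behavior below is an assumption (it is consistent with the "3 active UNL validators and quorum=3" example in the sub-state comments later on):

// Hedged sketch of calculateQuorumThreshold: ceil(unlSize * 0.8) in
// integer arithmetic. The real body is not reproduced in this
// description, so rounding up is an assumption.
#include <cstddef>

std::size_t
calculateQuorumThreshold(std::size_t unlSize)
{
    return (unlSize * 4 + 4) / 5;  // 35 -> 28, 5 -> 4, 3 -> 3
}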

SHAMap Convergence (with CE)

When CE is enabled, export sigs converge using the same infrastructure as RNG commit/reveal:

  • exportSigSetHash added to ExtendedPosition (flag 0x10)
  • Export sig SHAMap built from collected sigs, keyed by sha512Half(txHash, validatorPK)
  • Convergence gate runs in parallel with RNG sub-states (independent, neither blocks the other)
  • If sigs can't converge before timeout, exports retry next round (graceful fallback)
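
The leaf key construction from the second bullet, restated as code (sha512Half, uint256, and PublicKey are the codebase's own; the helper wrapper itself is illustrative):

// Leaf key for one (export txn, validator) signature entry: the tx
// hash and the validator's public key are hashed together, giving
// each validator's sig its own content-addressed SHAMap slot.
static uint256
exportSigLeafKey(uint256 const& txHash, PublicKey const& validatorPK)
{
    return sha512Half(txHash, validatorPK);
}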

Signature Collection

Proposal Attachment

Each validator scans the open ledger for ttEXPORT txns, extracts the inner tx (sfExportedTxn), computes its multisign signature via buildMultiSigningData(), and attaches txHash + pubkey + signature to the proposal. Gated on featureExport, deduplicated per round via markSent().

Proposal attachment code (ConsensusExtensions.cpp)

📍 src/xrpld/app/consensus/ConsensusExtensions.cpp:1555-1666

1555 void
1556 ConsensusExtensions::decorateMessage(
1557     protocol::TMProposeSet& prop,
1558     RCLCxPeerPos::Proposal const& proposal,
1559     Buffer const& proposalSig)
1560 {
1561     auto const& valKeys = app_.getValidatorKeys();
1562 
1563     // Self-seed our own reveal so we count toward reveal quorum
1564     // (harvestRngData only sees peer proposals, not our own).
1565     if (proposal.position().myReveal)
1566     {
1567         pendingReveals_[valKeys.nodeID] = *proposal.position().myReveal;
1568         nodeIdToKey_.insert_or_assign(valKeys.nodeID, valKeys.keys->publicKey);
1569         JLOG(j_.trace()) << "RNG: self-seeded reveal for " << valKeys.nodeID;
1570     }
1571 
1572     // Store our own proposal proof for embedding in SHAMap entries.
1573     // commitProofs_ gets seq=0 only (deterministic commitSet).
1574     // proposalProofs_ gets the latest with a reveal (for entropySet).
1575     if (proposal.position().myCommitment || proposal.position().myReveal)
1576     {
1577         auto makeProof = [&]() {
1578             ProposalProof proof;
1579             proof.proposeSeq = proposal.proposeSeq();
1580             proof.closeTime = static_cast<std::uint32_t>(
1581                 proposal.closeTime().time_since_epoch().count());
1582             proof.prevLedger = proposal.prevLedger();
1583             Serializer s;
1584             proposal.position().add(s);
1585             proof.positionData = std::move(s);
1586             proof.signature = Buffer(proposalSig.data(), proposalSig.size());
1587             return proof;
1588         };
1589 
1590         if (proposal.position().myCommitment && proposal.proposeSeq() == 0)
1591             commitProofs_.emplace(valKeys.nodeID, makeProof());
1592 
1593         if (proposal.position().myReveal)
1594             proposalProofs_[valKeys.nodeID] = makeProof();
1595     }
1596 
1597     // Attach export signatures for any ttEXPORT txns in the open ledger.
1598     // Gated on featureExport amendment.
1599     // XAHAUD_NO_EXPORT_SIG=1 disables sig attachment (for testing sub-quorum).
1600     if (auto const* noSig = std::getenv("XAHAUD_NO_EXPORT_SIG");
1601         noSig && std::string(noSig) == "1")
1602     {
1603         JLOG(j_.debug()) << "Export: XAHAUD_NO_EXPORT_SIG=1, skipping sigs";
1604         return;
1605     }
1606 
1607     auto const openLedger = app_.openLedger().current();
1608     if (!openLedger || !openLedger->rules().enabled(featureExport))
1609         return;
1610 
1611     auto const& valPK = valKeys.keys->publicKey;
1612     auto const& valSK = valKeys.keys->secretKey;
1613     auto const signerAcctID = calcAccountID(valPK);
1614 
1615     for (auto const& [stx, meta] : openLedger->txs)
1616     {
1617         if (!stx || stx->getTxnType() != ttEXPORT)
1618             continue;
1619 
1620         auto const txHash = stx->getTransactionID();
1621 
1622         // Only attach our sig on the first proposal this round.
1623         if (!exportSigCollector_.markSent(txHash))
1624             continue;
1625 
1626         //@@start export-compute-proposal-sig
1627         Buffer sigBuf;
1628         if (stx->isFieldPresent(sfExportedTxn))
1629         {
1630             auto const& exportedObj = const_cast<STTx&>(*stx)
1631                                           .peekAtField(sfExportedTxn)
1632                                           .downcast<STObject>();
1633 
1634             Serializer innerSer;
1635             exportedObj.add(innerSer);
1636             SerialIter sit(innerSer.slice());
1637 
1638             try
1639             {
1640                 STTx innerTx(std::ref(sit));
1641                 auto sigData = buildMultiSigningData(innerTx, signerAcctID);
1642                 sigBuf = sign(valPK, valSK, sigData.slice());
1643             }
1644             catch (std::exception const& e)
1645             {
1646                 JLOG(j_.warn()) << "Export: failed to sign inner tx " << txHash
1647                                 << ": " << e.what();
1648             }
1649         }
1650         //@@end export-compute-proposal-sig
1651 
1652         //@@start export-attach-wire-sigs
1653         Serializer s;
1654         s.addBitString(txHash);
1655         s.addRaw(valPK.slice());
1656         if (sigBuf.size() > 0)
1657             s.addRaw(Slice(sigBuf.data(), sigBuf.size()));
1658         prop.add_exportsignatures(s.peekData().data(), s.peekData().size());
1659         //@@end export-attach-wire-sigs
1660 
1661         exportSigCollector_.addSignature(txHash, valPK, sigBuf);
1662 
1663         JLOG(j_.debug()) << "Export: attached sig for " << txHash
1664                          << " to proposal (sigLen=" << sigBuf.size() << ")";
1665     }
1666 }

Sig Harvesting

Trusted proposals are harvested in PeerImp.cpp. Each entry's pubkey is validated against the trusted validator set before acceptance. The wire format is variable-length: txHash(32) + pubkey(33) + signature(~72). Signatures are stored in ExportSigCollector alongside pubkeys.
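
A hedged sketch of decoding one exportSignatures entry per that wire layout. SerialIter, Slice, uint256, PublicKey, and Buffer are the codebase's own types; the helper itself is illustrative, not the PR's harvesting code.

#include <optional>
#include <tuple>
// ripple types assumed in scope: Slice, SerialIter, uint256, PublicKey, Buffer

// Decode one entry: txHash(32) + pubkey(33) + signature (whatever
// remains; empty when the validator attached a pubkey-only
// attestation). The caller still validates the pubkey against the
// trusted validator set before accepting.
std::optional<std::tuple<uint256, PublicKey, Buffer>>
parseExportSigEntry(Slice wire)
{
    if (wire.size() < 32 + 33)
        return std::nullopt;

    SerialIter sit(wire);
    uint256 const txHash = sit.get256();
    PublicKey const pk(sit.getSlice(33));

    Buffer sig;
    if (auto const left = sit.getBytesLeft(); left > 0)
    {
        auto const s = sit.getSlice(left);
        sig = Buffer(s.data(), s.size());
    }
    return std::make_tuple(txHash, pk, std::move(sig));
}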

ExportSigCollector

Thread-safe collector (ExportSigCollector.h) with stale cleanup (256-ledger timeout). Stores pubkeys for quorum counting and signature buffers for assembling the multisigned tx. Key methods:

  • addSignature(txHash, pubkey) — pubkey-only (quorum counting)
  • addSignature(txHash, pubkey, signature) — with multisign signature
  • snapshotWithSigs() — returns all collected signatures for blob assembly
  • markSent(txHash) — deduplicates per consensus round
  • cleanupStale(ledgerSeq) — removes entries older than 256 ledgers

Export Transactor

The transactor has three outcomes on the closed ledger:

  1. Standalone: skips quorum check, signs directly with the node's validator keys
  2. Quorum met: assembles the Signers array, builds the multisigned blob, computes the signed tx hash, creates the shadow ticket, writes everything to sfExportResult metadata
  3. Quorum not met: terRETRY_EXPORT (before LLS) or tecEXPORT_EXPIRED (on the LLS ledger — last chance, sequence consumed cleanly)

Key detail: the shadow ticket stores getHash(HashPrefix::transactionID) of the multisigned blob (which includes the Signers array), not the unsigned inner tx hash. This ensures the hash matches what ends up in the XPOP when the tx executes on XRPL.

Export::doApply (full source)

📍 src/xrpld/app/tx/detail/Export.cpp:85-310

  85 TER
  86 Export::doApply()
  87 {
  88     auto const account = ctx_.tx.getAccountID(sfAccount);
  89 
  90     // --- Shadow ticket cancel path (mutually exclusive with export) ---
  91     if (ctx_.tx.isFieldPresent(sfCancelTicketSequence))
  92     {
  93         auto const ticketSeq = ctx_.tx.getFieldU32(sfCancelTicketSequence);
  94         return ExportLedgerOps::cancelShadowTicket(
  95             view(), account, ticketSeq, j_);
  96     }
  97 
  98     // --- Export path ---
  99 
 100     auto const txId = ctx_.tx.getTransactionID();
 101     auto const currentSeq = view().info().seq;
 102 
 103     // Open ledger: return tesSUCCESS to consume sequence + fee and
 104     // get the transaction relayed/broadcast to all validators.
 105     if (view().open())
 106     {
 107         JLOG(j_.info()) << "Export: open ledger at " << currentSeq
 108                         << " -> tesSUCCESS (provisional)";
 109         return tesSUCCESS;
 110     }
 111 
 112     // Closed ledger: check if we have enough validator signatures.
 113     // UNL size from UNLReport ActiveValidators, fallback to local trusted keys.
 114     std::size_t unlSize = 0;
 115     {
 116         auto const unlReport = view().read(keylet::UNLReport());
 117         if (unlReport && unlReport->isFieldPresent(sfActiveValidators))
 118             unlSize = unlReport->getFieldArray(sfActiveValidators).size();
 119         else
 120             unlSize = ctx_.app.validators().getTrustedMasterKeys().size();
 121     }
 122     // Standalone mode: no consensus running, so we skip the quorum
 123     // check and sign directly with our validator keys in the blob
 124     // assembly step below.
 125     //
 126     // Network mode:
 127     //   With CE: 80% quorum (SHAMap convergence ensures agreement).
 128     //   Without CE: unanimity (avoids non-deterministic disagreement).
 129     if (!ctx_.app.config().standalone())
 130     {
 131         std::size_t threshold;
 132         if (unlSize == 0)
 133             threshold = 1;
 134         else if (view().rules().enabled(featureConsensusEntropy))
 135             threshold = calculateQuorumThreshold(unlSize);
 136         else
 137             threshold = unlSize;
 138         auto const sigCount = ctx_.app.getConsensusExtensions()
 139                                   .exportSigCollector()
 140                                   .signatureCount(txId);
 141 
 142         if (sigCount < threshold)
 143         {
 144             // LLS semantics for retriable exports:
 145             //
 146             // Transactor::preclaim rejects with tefMAX_LEDGER when
 147             // seq > LLS, so this tx can never run past ledger LLS.
 148             // Within that window the export has three possible outcomes
 149             // each ledger:
 150             //
 151             //   ledger < LLS:  tesSUCCESS (quorum) or terRETRY_EXPORT
 152             //   ledger == LLS: tesSUCCESS (quorum) or tecEXPORT_EXPIRED
 153             //   ledger > LLS:  tefMAX_LEDGER (never reaches doApply)
 154             //
 155             // The >= check here only fires in the no-quorum branch, so
 156             // if quorum IS met on the LLS ledger it still succeeds.
 157             // tecEXPORT_EXPIRED consumes the sequence cleanly rather
 158             // than letting tefMAX_LEDGER silently drop the tx.
 159             if (ctx_.tx.isFieldPresent(sfLastLedgerSequence))
 160             {
 161                 auto const lls = ctx_.tx.getFieldU32(sfLastLedgerSequence);
 162                 if (currentSeq >= lls)
 163                 {
 164                     ctx_.app.getConsensusExtensions()
 165                         .exportSigCollector()
 166                         .clear(txId);
 167                     JLOG(j_.info()) << "Export: LLS expired at ledger "
 168                                     << currentSeq << " sigs=" << sigCount << "/"
 169                                     << threshold << " -> tecEXPORT_EXPIRED";
 170                     return tecEXPORT_EXPIRED;
 171                 }
 172             }
 173 
 174             JLOG(j_.info())
 175                 << "Export: not enough sigs at ledger " << currentSeq
 176                 << " sigs=" << sigCount << " threshold=" << threshold
 177                 << " unlSize=" << unlSize << " -> terRETRY_EXPORT";
 178             return terRETRY_EXPORT;
 179         }
 180     }
 181 
 182     // Build the multisigned transaction blob FIRST, then use its
 183     // hash for the shadow ticket.  getTransactionID() includes ALL
 184     // fields (including Signers), so the shadow ticket must store
 185     // the hash of the final signed blob — not the unsigned inner tx.
 186 
 187     auto const& exportedObj =
 188         ctx_.tx.peekAtField(sfExportedTxn).downcast<STObject>();
 189 
 190     Serializer innerSer;
 191     exportedObj.add(innerSer);
 192     SerialIter sit(innerSer.slice());
 193 
 194     STTx innerTx(std::ref(sit));
 195 
 196     STArray signers(sfSigners);
 197 
 198     if (ctx_.app.config().standalone())
 199     {
 200         // Standalone mode: no consensus proposals, so we sign
 201         // the inner tx directly with our own validator keys.
 202         auto const& valKeys = ctx_.app.getValidatorKeys();
 203         if (valKeys.keys)
 204         {
 205             auto const& pk = valKeys.keys->publicKey;
 206             auto const& sk = valKeys.keys->secretKey;
 207             auto const signerAcctID = calcAccountID(pk);
 208 
 209             auto const sigData = buildMultiSigningData(innerTx, signerAcctID);
 210             auto const sig = ripple::sign(pk, sk, sigData.slice());
 211 
 212             STObject signer(sfSigner);
 213             signer.setAccountID(sfAccount, signerAcctID);
 214             signer.setFieldVL(sfSigningPubKey, pk.slice());
 215             signer.setFieldVL(sfTxnSignature, sig);
 216             signers.push_back(std::move(signer));
 217         }
 218     }
 219     else
 220     {
 221         // Network mode: collect real signatures from peers
 222         // via ExportSigCollector (populated from proposals).
 223         auto const allSigs = ctx_.app.getConsensusExtensions()
 224                                  .exportSigCollector()
 225                                  .snapshotWithSigs();
 226         auto it = allSigs.find(txId);
 227 
 228         if (it != allSigs.end())
 229         {
 230             for (auto const& [valPK, sigBuf] : it->second)
 231             {
 232                 if (sigBuf.size() == 0)
 233                     continue;  // pubkey-only, no real signature
 234 
 235                 STObject signer(sfSigner);
 236                 signer.setAccountID(sfAccount, calcAccountID(valPK));
 237                 signer.setFieldVL(sfSigningPubKey, valPK.slice());
 238                 signer.setFieldVL(
 239                     sfTxnSignature, Slice(sigBuf.data(), sigBuf.size()));
 240                 signers.push_back(std::move(signer));
 241             }
 242         }
 243     }
 244 
 245     // Sort signers by AccountID (required by XRPL multisign).
 246     std::sort(
 247         signers.begin(),
 248         signers.end(),
 249         [](STObject const& a, STObject const& b) {
 250             return a.getAccountID(sfAccount) < b.getAccountID(sfAccount);
 251         });
 252 
 253     // Build the multisigned tx.  Use sfExportedTxn as the field type
 254     // so it nests properly in ExportResult metadata as readable JSON.
 255     STObject multiSigned(sfExportedTxn);
 256     {
 257         // Copy all non-signing fields from innerTx, then we'll add
 258         // signing fields (empty SigningPubKey + Signers) below.
 259         Serializer s;
 260         innerTx.addWithoutSigningFields(s);
 261         SerialIter sit(s.slice());
 262         multiSigned.set(sit);
 263     }
 264 
 265     // Set empty SigningPubKey (indicates multisigned).
 266     multiSigned.setFieldVL(sfSigningPubKey, Slice{});
 267 
 268     if (signers.size() > 0)
 269         multiSigned.setFieldArray(sfSigners, signers);
 270 
 271     // Compute the signed tx hash for the shadow ticket.
 272     // getHash(transactionID) includes ALL fields (Signers etc.),
 273     // matching what STTx::getTransactionID() produces.
 274     auto const signedTxHash = multiSigned.getHash(HashPrefix::transactionID);
 275 
 276     // Create the shadow ticket with the signed tx hash.
 277     {
 278         TER ter = ExportLedgerOps::createShadowTicket(
 279             view(), account, innerTx, signedTxHash, j_);
 280         if (!isTesSuccess(ter))
 281             return ter;
 282     }
 283 
 284     // Write the export result to metadata.  The multisigned tx is
 285     // stored as sfExportedTxn (OBJECT) so it renders as readable
 286     // JSON in metadata, not an opaque hex blob.
 287     STObject exportResult(sfExportResult);
 288     exportResult.setFieldU32(sfLedgerSequence, currentSeq);
 289     exportResult.setFieldH256(sfTransactionHash, txId);
 290     exportResult.set(std::move(multiSigned));
 291 
 292     auto* avi = dynamic_cast<ApplyViewImpl*>(&view());
 293     if (!avi)
 294     {
 295         JLOG(j_.fatal()) << "Export: cannot write ExportResult metadata "
 296                          << "(view is not ApplyViewImpl)";
 297         return tefINTERNAL;
 298     }
 299     avi->setExportResultMetaData(std::move(exportResult));
 300 
 301     // Clean up the collector.
 302     ctx_.app.getConsensusExtensions().exportSigCollector().clear(txId);
 303 
 304     JLOG(j_.info()) << "Export: success at ledger " << currentSeq
 305                     << (ctx_.app.config().standalone() ? " (standalone)"
 306                                                        : " (quorum met)")
 307                     << " -> tesSUCCESS";
 308 
 309     return tesSUCCESS;
 310 }

Shadow Tickets and the Export Round-Trip

Export is a 3-way handshake, not fire-and-forget:

  1. Export (Xahau): User or hook creates the export. On quorum success:
    • The fully multisigned transaction is assembled (Signers array + empty SigningPubKey) and stored as sfExportedTxn inside sfExportResult metadata — readable JSON, ready for raw submission
    • A shadow ticket (ltSHADOW_TICKET) is created, keyed by account + ticket sequence, storing the signed tx hash (getHash(HashPrefix::transactionID) — includes all fields including Signers)
    • The account pays reserve for the shadow ticket
  2. Execute (XRPL): The multisigned blob from step 1 is submitted raw to XRPL. The XRPL account's SignerList points to the Xahau validator keys, so XRPL validates it as a standard multisigned transaction. It executes (or bounces), producing an XPOP.
  3. Callback (XRPL → Xahau): The XPOP is imported back via ttIMPORT. Import checks the shadow ticket exists, verifies the XPOP inner tx hash matches the shadow ticket's stored hash, consumes the shadow ticket (frees reserve), and fires hooks.

Shadow tickets are round-trip completion tokens:

  • Account-owned (like trustlines) — the account can cancel unused tickets via xport_cancel() hook API or sfCancelTicketSequence on ttEXPORT
  • Export and cancel are mutually exclusive on a single ttEXPORT transaction
  • The stored tx hash prevents replaying a different XPOP against the same shadow ticket

When ttIMPORT sees sfTicketSequence on the inner transaction, it takes the export callback path: verify shadow ticket hash match, consume the shadow ticket, fire hooks, done. The export callback path skips the sfOperationLimit and signing key match checks (the shadow ticket already proves the relationship). No B2M balance crediting. When there's no sfTicketSequence, it takes the existing Burn-to-Mint path unchanged.
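
Sketched as a branch (hedged: Import's actual structure is not shown in this description; sfTicketSequence is the real field, while the helper names here are hypothetical):

// Hedged sketch of Import's dispatch described above. Only the rule
// is from the text: sfTicketSequence on the inner tx selects the
// export callback path. applyExportCallback/applyBurnToMint are
// hypothetical names for illustration.
if (innerTx.isFieldPresent(sfTicketSequence))
{
    // Export callback: the shadow ticket already proves the
    // relationship, so sfOperationLimit and signing-key checks are
    // skipped and no B2M balance is credited.
    return applyExportCallback(view(), account, innerTx, xpop, j_);
}
// Otherwise: existing Burn-to-Mint path, unchanged.
return applyBurnToMint(view(), account, innerTx, xpop, j_);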


Hook Integration

Hooks call xport() which internally constructs a ttEXPORT wrapper (with sfEmitDetails) and pushes it onto the emitted txn queue. The wrapper flows through the normal emitted txn path:

  1. xport_reserve(N) — reserves N export slots (also reserves emit slots)
  2. xport(inner_tx_blob) — validates inner tx → constructs ttEXPORT wrapper → emits it
  3. Emitted ttEXPORT enters the open ledger next round → proposal-based sig collection → retriable transactor

The hook receives the inner tx hash (the cross-chain transaction it built), while the ttEXPORT wrapper handles the Xahau-side lifecycle.
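
A hedged hook-side sketch of that flow. The (buffer, length) parameter shapes are assumed from the usual hook-API convention; the PR text only names xport_reserve(N) and xport(inner_tx_blob).

#include <cstdint>

// Assumed signatures, modeled on hook-API conventions; not the PR's
// literal declarations.
extern "C" int64_t xport_reserve(uint32_t count);
extern "C" int64_t xport(uint8_t const* read_ptr, uint32_t read_len);

extern "C" int64_t
hook(uint32_t /*reserved*/)
{
    // Pre-serialized unsigned XRPL txn template (TicketSequence set,
    // Sequence=0), built earlier in the hook; contents elided here.
    uint8_t inner_tx[256] = {0};
    uint32_t const inner_len = sizeof(inner_tx);

    xport_reserve(1);  // reserve one export slot (also reserves an emit slot)

    // Returns the inner tx hash; the emitted ttEXPORT wrapper handles
    // the Xahau-side lifecycle described in the steps above.
    return xport(inner_tx, inner_len);
}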


Export Protocol Additions

| Type | Name | Purpose |
| --- | --- | --- |
| Field | sfExportResult (OBJECT 98) | Export result in metadata (contains sfExportedTxn + sfLedgerSequence + sfTransactionHash) |
| Field | sfCancelTicketSequence (UINT32 101) | Cancel a shadow ticket by sequence |
| Field | sfExportedTxn (OBJECT 90) | Inner cross-chain transaction (on ttEXPORT: unsigned template; in ExportResult: fully multisigned, ready for submission) |
| Ledger Entry | ltSHADOW_TICKET (0x5374) | Round-trip completion token (account-owned) |
| Transaction | ttEXPORT (91) | User-submittable export / shadow ticket cancel |
| TER | terRETRY_EXPORT | Retained in retry set for next ledger |
| TER | tecEXPORT_EXPIRED (200) | LLS expiry, sequence consumed |
| Proto | TMProposeSet.exportSignatures (field 13) | Proposal-piggybacked validator sigs |
| Hook API | xport(), xport_reserve(), xport_cancel() | Export APIs for hooks |

Part 2: Decentralized Secure Randomness

Adding randomness to deterministic consensus sounds simple until you try to do it without breaking safety. This part implements Same-Ledger Usable Randomness: finalizing entropy after user intent is locked, but before normal execution in that same ledger.

🔎 Review Scope

  • Blast radius (high level): consensus proposal encoding, establish sub-state logic, pseudo-tx injection, ledger apply ordering, transactor apply path, and hook-facing entropy APIs.
  • Amendment gating: featureConsensusEntropy is DefaultNo; behavior is inert until enabled by amendment vote.
  • Migration/upgrade: no migration required, no config changes required, and no manual database steps; amendment-gated behavior remains inert until vote-in.
  • Test coverage highlights: ConsensusEntropy_test (Hook dice()/random(), fallback semantics) and ExtendedPosition_test (serialization compatibility and malformed wire cases).

🛠 How It Works (The Final Solution)

The architecture centers on converging on signed input sets rather than voting on a derived output hash. This ensures that every node can independently verify and reconstruct the final result.

1. Transport: Piggybacked Proposals

The ConsensusProposal wire format is extended via ExtendedPosition. Most entropy data (commitments and reveals) flows through existing proposal gossip with low incremental payload overhead on the fast path; the consensus latency cost comes from the added sub-state progression and timeouts.

  • Equality Firewall: ExtendedPosition::operator== only compares the txSetHash. RNG sub-state differences never stall the core consensus on user transactions.

2. Pipelined Sub-states

RNG progression runs inside internal establish sub-states. These are checkpoints within the existing consensus cadence:

  • ConvergingTx: Normal transaction convergence while harvesting entropy commitments.
  • ConvergingCommit: Locking the commitSet once an 80% quorum over UNLReport ActiveValidators (or fallback set if UNLReport is unavailable) is reached.
  • ConvergingReveal: Targets reveals from 100% of known committers, bounded by timeout/fallback paths (including the 1.5s reveal timeout) to preserve liveness.

3. SHAMap Union Convergence

Harvested commitments and reveals are stored in ephemeral, unbacked SHAMaps.

  • Honest validators always agree on inclusion (every valid contribution belongs).
  • Differences are reconciled via Union Merge (monotonic set growth).
  • If a packet is dropped, the node uses the native InboundTransactions pipeline to fetch only the missing leaves from peers.
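
The union-merge semantics, illustrated over a plain std::map (hedged: the real code operates on SHAMaps with fetch/diff/merge, but inclusion is monotonic in the same way):

#include <map>

// Union merge illustrated over std::map: every valid leaf belongs, so
// reconciling two honest views is plain set union keyed by leaf hash.
// Growth is monotonic: merging never removes entries. (Hedged: the
// real implementation merges SHAMaps, not std::map.)
template <class K, class V>
void
unionMerge(std::map<K, V>& ours, std::map<K, V> const& theirs)
{
    for (auto const& [key, leaf] : theirs)
        ours.emplace(key, leaf);  // no-op if we already hold the leaf
}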

4. Synthetic Injection & Same-Ledger Execution

Once reveals are collected, the final entropy is computed deterministically (sha512Half(sorted_reveals)).

  • Synthetic Transaction: Right before buildLCL (Ledger Construction), the node locally synthesizes a ttCONSENSUS_ENTROPY pseudo-transaction.
  • Deterministic Ordering: This transaction is sorted to execute first, ensuring its entropy is available to every Hook and user transaction in the same ledger.
  • Verification: Because the inputs were agreed upon in consensus, nodes synthesize identical transactions. Any local fault is caught by the validation phase (Example 5 intuition + Theorem 8 safety framing).
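
The deterministic fold, restated as code. Hedged: sha512Half(sorted_reveals) is from the text, and the harvest stage stores a nodeId-to-publicKey mapping "for deterministic ordering", so ordering by validator public key is a reasonable reading; the helper name is illustrative.

#include <map>
// ripple types assumed in scope: uint256, PublicKey, Serializer, sha512Half

// Fold the reveals in canonical order into one entropy value.
// std::map already iterates in key order, giving every node the same
// concatenation and therefore the same digest.
uint256
computeEntropy(std::map<PublicKey, uint256> const& revealsByValidator)
{
    Serializer s;
    for (auto const& [pk, reveal] : revealsByValidator)
        s.addBitString(reveal);
    return sha512Half(s.slice());
}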

⚓ Hook API Integration

Provides two new deterministic WebAssembly APIs for Hook developers:

  • dice(sides): Returns a fair integer from 0 to sides-1.
  • random(write_ptr, write_len): Fills a buffer with cryptographically secure consensus-derived randomness.
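
Hook-side usage sketch (hedged: return types and the exact buffer convention are assumptions beyond the parameter names given above):

#include <cstdint>

// Assumed declarations, consistent with the parameter names above.
extern "C" int64_t dice(uint32_t sides);
extern "C" int64_t random(uint8_t* write_ptr, uint32_t write_len);

extern "C" int64_t
hook(uint32_t /*reserved*/)
{
    int64_t const roll = dice(6);  // fair integer in 0..5, identical on
                                   // every node in this ledger

    uint8_t seed[32];
    random(seed, sizeof(seed));    // consensus-derived randomness:
                                   // deterministic per ledger, so every
                                   // validator computes the same result
    return roll;
}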

🛡 Safety & Liveness

  • Safety: RNG machinery resides entirely in the deliberation path. Safety remains anchored to the validation-phase quorum (Chase & MacBrough 2018, §4.1 / Theorem 8).
  • Liveness: Entropy availability degrades deterministically to a zero-entropy path under extreme stress (e.g., impossible quorums or timeouts), ensuring the ledger always closes.

🛠 Infrastructure & Support Logic

Several non-obvious plumbing changes were required to make the RNG pipeline robust and testable:

1. Fast Polling during RNG Transitions

To reduce the latency impact of the extra sub-states, the heartbeat timer accelerates to 250ms (tunable via XAHAU_RNG_POLL_MS) while in the RNG pipeline.
📍 src/xrpld/app/misc/NetworkOPs.cpp:1031-1044

1031     // Use faster polling during RNG sub-state transitions
1032     // to reduce latency of commit-reveal rounds.
1033     // Tunable via XAHAU_RNG_POLL_MS env var (default 250ms).
1034     if (mConsensus.extensionsBusy())
1035     {
1036         static auto const rngPollMs = []() -> std::chrono::milliseconds {
1037             if (auto const* env = std::getenv("XAHAU_RNG_POLL_MS"))
1038                 return std::chrono::milliseconds{std::atoi(env)};
1039             return std::chrono::milliseconds{250};
1040         }();
1041         setHeartbeatTimer(rngPollMs);
1042     }
1043     else
1044         setHeartbeatTimer();

2. Local Testnet Resource Charging

Connections from 127.0.0.1 normally share a single IP resource bucket. This change preserves the port for loopback addresses so that local multi-node testnets don't hit peer resource limits due to the increased RNG set traffic.
📍 include/xrpl/resource/detail/Logic.h:113-117

 113         // Inbound connections from the same IP normally share one
 114         // resource bucket (port stripped) for DoS protection.  For
 115         // loopback addresses, preserve the port so local testnet nodes
 116         // each get their own bucket instead of all sharing one.
 117         auto const key = is_loopback(address) ? address : address.at_port(0);

3. Test Environment Gating

featureConsensusEntropy is excluded from default jtx::Env tests to prevent its automatic pseudo-tx injection from breaking existing test suites that rely on specific transaction counts.
📍 src/test/jtx/Env.h:86-89

  86         // TODO: ConsensusEntropy injects a pseudo-tx every ledger which
  87         // breaks existing test transaction count assumptions. Exclude from
  88         // default test set until dedicated tests are written.
  89         return FeatureBitset(feats) - featureConsensusEntropy;

4. Pseudo-transaction Filtering

Internal metadata (commits/reveals) is stored as pseudo-transactions in ephemeral SHAMaps for transport. This logic ensures they are filtered out and never submitted to the actual transaction processing engine.
📍 src/xrpld/app/ledger/ConsensusTransSetSF.cpp:70-74

  70             // Don't submit pseudo-transactions (consensus entropy, fees,
  71             // amendments, etc.) — they exist as SHAMap entries for
  72             // content-addressed identification but are not real user txns.
  73             if (isPseudoTx(*stx))
  74                 return;

Guided Code Review (Projected Source)

This section follows runtime order so the code reads as a story, not a file dump.

1) Proposal payload: ExtendedPosition carries RNG + Export sidecar fields

ExtendedPosition adds commit/reveal set identities, export sig set hash, and per-validator leaves while keeping tx-set identity explicit.
Non-obvious: operator== compares only txSetHash on purpose. That decouples core tx-set convergence from RNG/Export sub-state drift.
operator== (equality firewall):
📍 src/xrpld/app/consensus/RCLCxPeerPos.h:110-145

 110     bool
 111     operator==(ExtendedPosition const& other) const
 112     {
 113         return txSetHash == other.txSetHash;
 114     }
 115 
 116     bool
 117     operator!=(ExtendedPosition const& other) const
 118     {
 119         return !(*this == other);
 120     }
 121 
 122     // Comparison with uint256 (compares txSetHash only)
 123     bool
 124     operator==(uint256 const& hash) const
 125     {
 126         return txSetHash == hash;
 127     }
 128 
 129     bool
 130     operator!=(uint256 const& hash) const
 131     {
 132         return txSetHash != hash;
 133     }
 134 
 135     friend bool
 136     operator==(uint256 const& hash, ExtendedPosition const& pos)
 137     {
 138         return pos.txSetHash == hash;
 139     }
 140 
 141     friend bool
 142     operator!=(uint256 const& hash, ExtendedPosition const& pos)
 143     {
 144         return pos.txSetHash != hash;
 145     }

add() (signed serialization of all sidecar fields):
📍 src/xrpld/app/consensus/RCLCxPeerPos.h:159-193

 159     void
 160     add(Serializer& s) const
 161     {
 162         s.addBitString(txSetHash);
 163 
 164         // Wire compatibility: if no extensions, emit exactly 32 bytes
 165         // so legacy nodes that expect a plain uint256 work unchanged.
 166         if (!commitSetHash && !entropySetHash && !exportSigSetHash &&
 167             !myCommitment && !myReveal)
 168             return;
 169 
 170         std::uint8_t flags = 0;
 171         if (commitSetHash)
 172             flags |= 0x01;
 173         if (entropySetHash)
 174             flags |= 0x02;
 175         if (myCommitment)
 176             flags |= 0x04;
 177         if (myReveal)
 178             flags |= 0x08;
 179         if (exportSigSetHash)
 180             flags |= 0x10;
 181         s.add8(flags);
 182 
 183         if (commitSetHash)
 184             s.addBitString(*commitSetHash);
 185         if (entropySetHash)
 186             s.addBitString(*entropySetHash);
 187         if (myCommitment)
 188             s.addBitString(*myCommitment);
 189         if (myReveal)
 190             s.addBitString(*myReveal);
 191         if (exportSigSetHash)
 192             s.addBitString(*exportSigSetHash);
 193     }

fromSerialIter() (legacy + extended wire decode):
📍 src/xrpld/app/consensus/RCLCxPeerPos.h:216-261

 216     static std::optional<ExtendedPosition>
 217     fromSerialIter(SerialIter& sit, std::size_t totalSize)
 218     {
 219         if (totalSize < 32)
 220             return std::nullopt;
 221 
 222         ExtendedPosition pos;
 223         pos.txSetHash = sit.get256();
 224 
 225         // Legacy format: exactly 32 bytes
 226         if (totalSize == 32)
 227             return pos;
 228 
 229         // Extended format: flags byte + optional uint256 fields
 230         if (sit.empty())
 231             return pos;
 232 
 233         std::uint8_t flags = sit.get8();
 234 
 235         // Reject unknown flag bits (reduces wire malleability)
 236         if (flags & 0xE0)
 237             return std::nullopt;
 238 
 239         // Validate exact byte count for the flagged fields.
 240         // Each flag bit indicates a 32-byte uint256.
 241         int fieldCount = 0;
 242         for (int i = 0; i < 5; ++i)
 243             if (flags & (1 << i))
 244                 ++fieldCount;
 245 
 246         if (sit.getBytesLeft() != static_cast<std::size_t>(fieldCount * 32))
 247             return std::nullopt;
 248 
 249         if (flags & 0x01)
 250             pos.commitSetHash = sit.get256();
 251         if (flags & 0x02)
 252             pos.entropySetHash = sit.get256();
 253         if (flags & 0x04)
 254             pos.myCommitment = sit.get256();
 255         if (flags & 0x08)
 256             pos.myReveal = sit.get256();
 257         if (flags & 0x10)
 258             pos.exportSigSetHash = sit.get256();
 259 
 260         return pos;
 261     }
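
A round-trip of the two functions above, to make the wire-compat rule concrete (hedged sketch; hash values are placeholders and only the legacy path plus one flagged field are exercised):

#include <cassert>

// No sidecar fields => exactly 32 bytes on the wire, so legacy nodes
// decode the position as a plain uint256.
uint256 const txSetHash{};      // placeholder values for the sketch
uint256 const commitSetHash{};

ExtendedPosition pos;
pos.txSetHash = txSetHash;

Serializer s;
pos.add(s);
assert(s.slice().size() == 32);  // legacy shape

pos.commitSetHash = commitSetHash;
Serializer s2;
pos.add(s2);
assert(s2.slice().size() == 32 + 1 + 32);  // + flags byte 0x01 + field

SerialIter sit(s2.slice());
auto const decoded = ExtendedPosition::fromSerialIter(sit, s2.slice().size());
assert(decoded && decoded->commitSetHash == commitSetHash);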

2) Harvest stage: trust boundary + reveal verification

Incoming RNG data is rejected for senders outside UNLReport ActiveValidators (or fallback set when UNLReport is unavailable), and reveals are accepted only if they match prior commitments.
📍 src/xrpld/app/consensus/ConsensusExtensions.cpp:1155-1264

1155     // Reject data from validators not in the active UNL
1156     if (!isUNLReportMember(nodeId))
1157     {
1158         JLOG(j_.trace()) << "RNG: rejecting data from non-UNL validator "
1159                          << nodeId;
1160         return;
1161     }
1162 
1163     // RuntimeConfig: randomly drop RNG claims for testing
1164     auto& rc = app_.getRuntimeConfig();
1165     if (rc.active())
1166     {
1167         if (auto cfg = rc.getConfig("*"))
1168         {
1169             if (cfg->rngClaimDropPctX100 && *cfg->rngClaimDropPctX100 > 0)
1170             {
1171                 static thread_local std::mt19937 rng{std::random_device{}()};
1172                 if (std::uniform_int_distribution<int>{0, 9999}(rng) <
1173                     *cfg->rngClaimDropPctX100)
1174                 {
1175                     JLOG(j_.warn())
1176                         << "RNG: TESTING dropping claim from " << nodeId;
1177                     return;
1178                 }
1179             }
1180         }
1181     }
1182 
1183     // Store nodeId -> publicKey mapping for deterministic ordering
1184     nodeIdToKey_.insert_or_assign(nodeId, publicKey);
1185 
1186     //@@start rng-harvest-commit
1187     // Harvest commitment if present
1188     if (position.myCommitment)
1189     {
1190         auto [it, inserted] =
1191             pendingCommits_.emplace(nodeId, *position.myCommitment);
1192         if (!inserted && it->second != *position.myCommitment)
1193         {
1194             JLOG(j_.warn())
1195                 << "Validator " << nodeId << " changed commitment from "
1196                 << it->second << " to " << *position.myCommitment;
1197             it->second = *position.myCommitment;
1198 
1199             // commitProofs_ stores seq=0 proofs. If a validator changes its
1200             // commitment later in the round, that old proof no longer matches
1201             // the new digest and must not be embedded into a fetched commitSet.
1202             commitProofs_.erase(nodeId);
1203 
1204             // Any reveal accepted against the prior commitment is now stale.
1205             // Drop it so reveal quorum cannot be satisfied by mismatched data.
1206             if (pendingReveals_.erase(nodeId) > 0)
1207                 proposalProofs_.erase(nodeId);
1208         }
1209         else if (inserted)
1210         {
1211             JLOG(j_.trace()) << "Harvested commitment from " << nodeId << ": "
1212                              << *position.myCommitment;
1213         }
1214     }
1215     //@@end rng-harvest-commit
1216 
1217     //@@start rng-harvest-reveal-verification
1218     // Harvest reveal if present — verify it matches the stored commitment
1219     if (position.myReveal)
1220     {
1221         auto commitIt = pendingCommits_.find(nodeId);
1222         if (commitIt == pendingCommits_.end())
1223         {
1224             // No commitment on record — cannot verify. Ignore to prevent
1225             // grinding attacks where a validator skips the commit phase.
1226             JLOG(j_.warn()) << "RNG: rejecting reveal from " << nodeId
1227                             << " (no commitment on record)";
1228             return;
1229         }
1230 
1231         // Verify Hash(reveal | pubKey | seq) == commitment
1232         auto const prevLgr = app_.getLedgerMaster().getLedgerByHash(prevLedger);
1233         if (!prevLgr)
1234         {
1235             JLOG(j_.warn()) << "RNG: cannot verify reveal from " << nodeId
1236                             << " (prevLedger not available)";
1237             return;
1238         }
1239 
1240         auto const seq = prevLgr->info().seq + 1;
1241         auto const calculated = sha512Half(*position.myReveal, publicKey, seq);
1242 
1243         if (calculated != commitIt->second)
1244         {
1245             JLOG(j_.warn()) << "RNG: fraudulent reveal from " << nodeId
1246                             << " (does not match commitment)";
1247             return;
1248         }
1249 
1250         auto [it, inserted] =
1251             pendingReveals_.emplace(nodeId, *position.myReveal);
1252         if (!inserted && it->second != *position.myReveal)
1253         {
1254             JLOG(j_.warn()) << "Validator " << nodeId << " changed reveal from "
1255                             << it->second << " to " << *position.myReveal;
1256             it->second = *position.myReveal;
1257         }
1258         else if (inserted)
1259         {
1260             JLOG(j_.trace()) << "Harvested reveal from " << nodeId << ": "
1261                              << *position.myReveal;
1262         }
1263     }
1264     //@@end rng-harvest-reveal-verification
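
For reference, the commit-side counterpart implied by that check. Hedged sketch: the secret-generation call and surrounding variables (myPublicKey, buildSeq) are illustrative; the digest layout mirrors the harvest-side verification sha512Half(reveal, publicKey, seq) above.

// At round open the validator draws a fresh secret and commits to
// sha512Half(secret, publicKey, seq); revealing the secret later lets
// peers re-derive and match the commitment.
uint256 secret;
beast::rngfill(secret.data(), secret.size(), crypto_prng());
uint256 const commitment = sha512Half(secret, myPublicKey, buildSeq);
// position.myCommitment = commitment;  // rides out with the seq=0 proposal
// position.myReveal     = secret;      // broadcast in the reveal phase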

3) Quorum basis: expected proposers first, UNLReport/fallback set

📍 src/xrpld/app/consensus/ConsensusExtensions.cpp:57-67

  57 std::size_t
  58 ConsensusExtensions::quorumThreshold() const
  59 {
  60     // Non-zero entropy is only allowed once a fixed 80% quorum of the active
  61     // UNL snapshot has committed. Recent proposers are useful for liveness
  62     // heuristics, but they do not lower this floor.
  63     auto const base = unlReportNodeIds_.size();
  64     if (base == 0)
  65         return 1;  // safety: need at least one commit
  66     return calculateQuorumThreshold(base);
  67 }

4) State-machine checkpoints: ConvergingTx -> ConvergingCommit -> ConvergingReveal

📍 src/xrpld/consensus/ConsensusExtensionsTick.h:22-670

  22     // --- RNG Sub-state Checkpoints ---
  23     // These sub-states use union convergence (not avalanche).
  24     // Commits and reveals arrive piggybacked on proposals, so by the time
  25     // we reach these checkpoints most data is already collected. The
  26     // SHAMap fetch/diff/merge in onAcquiredSidecarSet is a safety net
  27     // for stragglers, not a voting mechanism.
  28     //
  29     // Why 80% for commits but 100% for reveals?
  30     //
  31     // COMMITS: quorum is based on the active UNL, but we don't know
  32     // which UNL members are actually online until they propose — and
  33     // commitments ride on those same proposals.  Chicken-and-egg: we
  34     // learn who's active by receiving their commits.  80% of the UNL
  35     // says "we've heard from enough validators, let's go."  The
  36     // impossible-quorum early-exit handles the case where too few
  37     // participants exist to ever reach 80%.
  38     //
  39     // REVEALS: the commit set is now locked and we know *exactly* who
  40     // committed.  Every committer broadcasts their reveal immediately.
  41     // So we wait for ALL of them, with rngREVEAL_TIMEOUT (measured
  42     // from ConvergingReveal entry) as the safety valve for nodes that
  43     // crash between commit and reveal.
  44 
  45     bool const isRngEnabled = ext.rngEnabled();
  46 
  47     JLOG(ext.j_.trace()) << "RNGGATE: phaseEstablish prevSeq="
  48                          << (static_cast<std::uint32_t>(ctx.buildSeq) - 1)
  49                          << " ext.rngEnabled=" << (isRngEnabled ? "yes" : "no")
  50                          << " estState=" << static_cast<int>(ext.estState_)
  51                          << " phase=establish"
  52                          << " mode=" << to_string(ctx.mode)
  53                          << " roundMs=" << ctx.roundTime.count();
  54 
  55     if (isRngEnabled)
  56     {
  57         auto const buildSeq = ctx.buildSeq;
  58         auto const estStateName = [&]() -> char const* {
  59             switch (ext.estState_)
  60             {
  61                 case EstablishState::ConvergingTx:
  62                     return "ConvergingTx";
  63                 case EstablishState::ConvergingCommit:
  64                     return "ConvergingCommit";
  65                 case EstablishState::ConvergingReveal:
  66                     return "ConvergingReveal";
  67             }
  68             return "Unknown";
  69         };
  70         auto logRngDiag = [&](char const* reason) {
  71             auto const ourPos = ctx.getPosition();
  72             auto const participants = ctx.peerPositions.size() + 1;
  73             JLOG(ext.j_.debug())
  74                 << "STALLDIAG: " << reason << " state=" << estStateName()
  75                 << " phase=establish"
  76                 << " mode=" << to_string(ctx.mode)
  77                 << " roundMs=" << ctx.roundTime.count()
  78                 << " convergePct=" << ctx.convergePercent
  79                 << " participants=" << participants
  80                 << " peerPositions=" << ctx.peerPositions.size()
  81                 << " prevProposers=" << ctx.prevProposers
  82                 << " explicitFinalSent="
  83                 << (ext.explicitFinalProposalSent_ ? "yes" : "no")
  84                 << " closeTimeConsensus="
  85                 << (ctx.haveCloseTimeConsensus ? "yes" : "no")
  86                 << " txSet=" << ourPos;
  87 
  88             JLOG(ext.j_.debug())
  89                 << "STALLDIAG: sidecar"
  90                 << " commitSetHash="
  91                 << (ourPos.commitSetHash ? to_string(*ourPos.commitSetHash)
  92                                          : std::string{"none"})
  93                 << " entropySetHash="
  94                 << (ourPos.entropySetHash ? to_string(*ourPos.entropySetHash)
  95                                           : std::string{"none"})
  96                 << " myCommitment=" << (ourPos.myCommitment ? "yes" : "no")
  97                 << " myReveal=" << (ourPos.myReveal ? "yes" : "no");
  98 
  99             auto const commits = ext.pendingCommitCount();
 100             auto const quorum = ext.quorumThreshold();
 101             auto const commitQuorum = ext.hasQuorumOfCommits();
 102             auto const minReveals = ext.hasMinimumReveals();
 103             auto const anyReveals = ext.hasAnyReveals();
 104             auto const reveals = ext.pendingRevealCount();
 105             auto const likelyParticipants = ext.expectedProposerCount();
 106 
 107             JLOG(ext.j_.debug())
 108                 << "STALLDIAG: rng-counters"
 109                 << " commits=" << commits << " quorum=" << quorum
 110                 << " commitQuorum=" << (commitQuorum ? "yes" : "no")
 111                 << " reveals=" << std::to_string(reveals)
 112                 << " minReveals=" << (minReveals ? "yes" : "no")
 113                 << " anyReveals=" << (anyReveals ? "yes" : "no")
 114                 << " likelyParticipants=" << std::to_string(likelyParticipants);
 115         };
 116         auto publishEntropySet = [&]() {
 117             auto entropySetHash = ext.buildEntropySet(buildSeq);
 118             auto newPos = ctx.getPosition();
 119             if (newPos.entropySetHash &&
 120                 *newPos.entropySetHash == entropySetHash)
 121             {
 122                 JLOG(ext.j_.debug())
 123                     << "RNG: entropySet already published hash="
 124                     << entropySetHash;
 125                 return;
 126             }
 127 
 128             newPos.entropySetHash = entropySetHash;
 129 
 130             ctx.updatePosition(newPos);
 131 
 132             // Publish entropySetHash before accepting so lagging peers
 133             // can fetch/merge reveal sets in ConvergingReveal.
 134             //
 135             // This can look redundant in healthy rounds because txSetHash
 136             // may be unchanged versus the prior proposal (for example,
 137             // seq=2 and seq=3 showing the same tx summary in monitors). We
 138             // still publish to create an additional delivery window for
 139             // entropySetHash and to trigger fetch/merge on peers that
 140             // missed earlier packets.
 141             if (ctx.mode == ConsensusMode::proposing)
 142                 ctx.propose();
 143 
 144             JLOG(ext.j_.debug()) << "RNG: built entropySet";
 145         };
 146 
 147         JLOG(ext.j_.trace()) << "RNG: phaseEstablish estState="
 148                              << static_cast<int>(ext.estState_);
 149 
 150         // Bootstrap fast-path: if the previous round didn't have
 151         // enough proposers for RNG to have succeeded, the network
 152         // is still converging.  Skip the entire commit/reveal
 153         // pipeline — it can only produce zero entropy anyway, but
 154         // each substate transition and timeout (PIPELINE_TIMEOUT,
 155         // REVEAL_TIMEOUT, conflict-wait) adds seconds of latency
 156         // per round that compound across staggered startup.
 157         //
 158         // Once prevProposers reaches quorum the pipeline engages
 159         // normally with all its coordination delays intact.
 160         bool rngBootstrapSkip = false;
 161         {
 162             auto const threshold = ext.quorumThreshold();
 163             if (ctx.prevProposers < threshold)
 164             {
 165                 JLOG(ext.j_.debug())
 166                     << "RNG: bootstrap skip (prevProposers="
 167                     << ctx.prevProposers << " < threshold=" << threshold << ")";
 168                 rngBootstrapSkip = true;
 169             }
 170         }
 171 
 172         if (!rngBootstrapSkip && ext.estState_ == EstablishState::ConvergingTx)
 173         {
 174             // Commit quorum is fixed to 80% of the active UNL snapshot for
 175             // the round. We move immediately once that floor is met;
 176             // recent-proposer tracking is only for deciding whether more
 177             // waiting is worthwhile.
 178             if (ext.hasQuorumOfCommits())
 179             {
 180                 auto commitSetHash = ext.buildCommitSet(buildSeq);
 181 
 182                 // Keep the same entropy secret from onClose() — do NOT
 183                 // regenerate.  The commitment in the commitSet was built
 184                 // from that original secret; regenerating would make the
 185                 // later reveal fail verification.
 186                 auto newPos = ctx.getPosition();
 187                 newPos.commitSetHash = commitSetHash;
 188 
 189                 ctx.updatePosition(newPos);
 190 
 191                 if (ctx.mode == ConsensusMode::proposing)
 192                     ctx.propose();
 193 
 194                 ext.estState_ = EstablishState::ConvergingCommit;
 195                 ext.commitHashConflictStart_ = {};
 196                 JLOG(ext.j_.debug()) << "RNG: transitioned to ConvergingCommit"
 197                                      << " commitSet=" << commitSetHash;
 198                 return {};  // Wait for next tick
 199             }
 200 
 201             // Don't let the round close while waiting for commit quorum.
 202             // Without this gate, execution falls through to the normal
 203             // consensus close logic and nodes inject partial/zero entropy
 204             // while others are still collecting — causing ledger
 205             // mismatches.
 206             //
 207             // However, if we've already converged on the txSet (which we
 208             // have — haveConsensus() passed above) and there aren't enough
 209             // currently participating validators to ever reach the fixed
 210             // UNL quorum, skip immediately. With 3 active UNL validators
 211             // and quorum=3, losing one node means 2/3 commits forever —
 212             // waiting 3s per round just delays recovery.
 213             //
 214             // NOTE: Late-joining nodes (e.g. restarting after a crash)
 215             // cannot help here.  They enter the round as proposing=false
 216             // and onClose() skips commitment generation for non-proposers.
 217             // It takes at least one full round of observing before
 218             // consensus promotes them to proposing.
 219             {
 220                 // participants = peers + ourselves
 221                 auto const participants = ctx.peerPositions.size() + 1;
 222                 auto const threshold = ext.quorumThreshold();
 223                 bool const impossible = participants < threshold;
 224 
 225                 if (impossible)
 226                 {
 227                     JLOG(ext.j_.debug())
 228                         << "RNG: skipping commit wait (participants="
 229                         << participants << " < threshold=" << threshold << ")";
 230                     logRngDiag("rng-commit-wait-impossible-quorum");
 231                     // Fall through to close with zero entropy
 232                 }
 233                 else
 234                 {
 235                     bool timeout =
 236                         ctx.roundTime > ctx.parms.rngPIPELINE_TIMEOUT;
 237                     if (!timeout)
 238                     {
 239                         logRngDiag("rng-commit-wait");
 240                         return {};  // Wait for more commits
 241                     }
 242 
 243                     // Timeout waiting for additional likely participants.
 244                     // If we already have the fixed UNL quorum, proceed
 245                     // with what we have — the SHAMap merge handles any
 246                     // remaining straggler fuzz for this transition round.
 247                     auto const commits = ext.pendingCommitCount();
 248                     auto const quorum = ext.quorumThreshold();
 249                     if (commits >= quorum)
 250                     {
 251                         JLOG(ext.j_.info())
 252                             << "RNG: commit timeout but have quorum ("
 253                             << commits << "/" << quorum
 254                             << "), proceeding with partial set";
 255                         // Jump to the same path as ext.hasQuorumOfCommits
 256                         auto commitSetHash = ext.buildCommitSet(buildSeq);
 257                         auto newPos = ctx.getPosition();
 258                         newPos.commitSetHash = commitSetHash;
 259                         ctx.updatePosition(newPos);
 260                         if (ctx.mode == ConsensusMode::proposing)
 261                             ctx.propose();
 262                         ext.estState_ = EstablishState::ConvergingCommit;
 263                         ext.commitHashConflictStart_ = {};
 264                         JLOG(ext.j_.debug())
 265                             << "RNG: transitioned to ConvergingCommit"
 266                             << " commitSet=" << commitSetHash
 267                             << " (timeout fallback)";
 268                         return {};
 269                     }
 270                     logRngDiag("rng-commit-timeout-below-quorum");
 271                     // Truly below quorum: fall through to zero entropy
 272                 }
 273             }
 274         }
 275         else if (
 276             !rngBootstrapSkip &&
 277             ext.estState_ == EstablishState::ConvergingCommit)
 278         {
 279             // If commit hashes diverge, we may not receive any additional
 280             // tx-converged proposals in this state (peers can move to the
 281             // next ledger quickly, causing prevLedger rejects). In that
 282             // case, hashes observed during ConvergingTx would never be
 283             // fetched because fetch is intentionally deferred there.
 284             //
 285             // Sweep currently tx-converged peer positions each tick so
 286             // deferred commitSet hashes still get fetched/merged even
 287             // without new accepted proposals in ConvergingCommit.
 288             {
 289                 auto const ourPos = ctx.getPosition();
 290                 for (auto const& [nodeId, peerPos] : ctx.peerPositions)
 291                 {
 292                     auto const& peerPosition = peerPos.proposal().position();
 293                     if (!(peerPosition == ourPos))
 294                         continue;
 295                     ext.fetchRngSetIfNeeded(peerPosition.commitSetHash);
 296                 }
 297             }
 298 
 299             // Fast path: if no commit-set conflicts are observed, do
 300             // exactly what we did before (immediate reveal transition).
 301             //
 302             // Safety path: haveConsensus() only compares tx-set hash, not
 303             // RNG sidecar fields. So commitSetHash disagreements can exist
 304             // transiently even while tx consensus is true. We only add
 305             // delay when we *actually* observe conflicting non-empty
 306             // commitSetHash values among tx-converged positions.
 307 
 308             // --- hasConflictingCommitSetHashes logic (inlined) ---
 309             auto hasConflictingCommitSetHashes = [&]() -> bool {
 310                 auto const ourPos = ctx.getPosition();
 311                 std::optional<uint256> observed;
 312 
 313                 auto note = [&](auto const& pos) -> bool {
 314                     if (!pos.commitSetHash)
 315                         return false;
 316                     if (!observed)
 317                     {
 318                         observed = *pos.commitSetHash;
 319                         return false;
 320                     }
 321                     return *observed != *pos.commitSetHash;
 322                 };
 323 
 324                 if (note(ourPos))
 325                     return true;
 326 
 327                 for (auto const& [nodeId, peerPos] : ctx.peerPositions)
 328                 {
 329                     auto const& peerPosition = peerPos.proposal().position();
 330                     if (!(peerPosition == ourPos))
 331                         continue;
 332                     if (note(peerPosition))
 333                         return true;
 334                 }
 335                 return false;
 336             };
 337 
 338             if (hasConflictingCommitSetHashes())
 339             {
 340                 // Fetch/merge may have added missing commits since we last
 341                 // published our commitSetHash. Rebuild and re-publish so
 342                 // peers can converge on one deterministic hash instead of
 343                 // timing out.
 344                 auto pos = ctx.getPosition();
 345                 auto const previousHash = pos.commitSetHash;
 346                 auto const refreshedHash = ext.buildCommitSet(buildSeq);
 347                 if (!previousHash || *previousHash != refreshedHash)
 348                 {
 349                     pos.commitSetHash = refreshedHash;
 350                     ctx.updatePosition(pos);
 351 
 352                     if (ctx.mode == ConsensusMode::proposing)
 353                         ctx.propose();
 354 
 355                     JLOG(ext.j_.debug())
 356                         << "RNG: refreshed commitSetHash after merge to "
 357                         << refreshedHash;
 358                 }
 359 
 360                 // Re-check after refreshing our own hash.
 361                 if (hasConflictingCommitSetHashes())
 362                 {
 363                     auto const nowSteady = ctx.nowSteady;
 364                     if (ext.commitHashConflictStart_ ==
 365                         std::chrono::steady_clock::time_point{})
 366                     {
 367                         // First observed conflict: start a bounded grace
 368                         // window so benign ordering/fetch races can settle.
 369                         ext.commitHashConflictStart_ = nowSteady;
 370                         JLOG(ext.j_.warn())
 371                             << "RNG: conflicting commitSetHash detected; "
 372                                "waiting briefly for convergence/fetch";
 373                         logRngDiag("rng-commit-conflict-start");
 374                         return {};
 375                     }
 376 
 377                     auto const conflictElapsed =
 378                         nowSteady - ext.commitHashConflictStart_;
 379                     if (conflictElapsed <= ctx.parms.rngREVEAL_TIMEOUT)
 380                     {
 381                         // We are still inside the grace window, so keep
 382                         // waiting. This preserves the fast path when peers
 383                         // converge after a short delay.
 384                         JLOG(ext.j_.debug())
 385                             << "RNG: commitSetHash still conflicting after "
 386                             << std::chrono::duration_cast<
 387                                    std::chrono::milliseconds>(conflictElapsed)
 388                                    .count()
 389                             << "ms; staying in ConvergingCommit";
 390                         logRngDiag("rng-commit-conflict-wait");
 391                         return {};
 392                     }
 393 
 394                     // If conflict persists past a bounded wait, force
 395                     // deterministic fallback for this round.
 396                     ext.setEntropyFailed();
 397                     ext.estState_ = EstablishState::ConvergingReveal;
 398                     // Backdate ext.revealPhaseStart_ so the ConvergingReveal
 399                     // timeout path fires immediately next tick.
 400                     ext.revealPhaseStart_ = nowSteady -
 401                         ctx.parms.rngREVEAL_TIMEOUT -
 402                         std::chrono::milliseconds{1};
 403                     ext.commitHashConflictStart_ = {};
 404                     JLOG(ext.j_.warn())
 405                         << "RNG: commitSetHash conflict persisted; forcing "
 406                            "zero-entropy fallback";
 407                     logRngDiag("rng-commit-conflict-timeout-fallback");
 408                     return {};
 409                 }
 410             }
 411 
 412             ext.commitHashConflictStart_ = {};
 413 
 414             //@@start rng-reveal-transition
 415             auto newPos = ctx.getPosition();
 416             newPos.myReveal = ext.getEntropySecret();
 417 
 418             ctx.updatePosition(newPos);
 419 
 420             if (ctx.mode == ConsensusMode::proposing)
 421                 ctx.propose();
 422 
 423             ext.estState_ = EstablishState::ConvergingReveal;
 424             //@@end rng-reveal-transition
 425             ext.revealPhaseStart_ = ctx.nowSteady;
 426             JLOG(ext.j_.debug()) << "RNG: transitioned to ConvergingReveal"
 427                                  << " reveal=" << ext.getEntropySecret();
 428 
 429             // Fast path:
 430             // If all required reveals are already present at transition
 431             // time, publish entropySet immediately and finish in this timer
 432             // pass. This is state-based (reveal completeness), not tied to
 433             // any particular proposal sequence number.
 434             if (ext.hasMinimumReveals())
 435             {
 436                 publishEntropySet();
 437                 JLOG(ext.j_.debug())
 438                     << "RNG: fast-path published entropySet in same tick";
 439             }
 440             else
 441             {
 442                 logRngDiag("rng-reveal-wait-after-transition");
 443                 return {};  // Wait for next tick
 444             }
 445         }
 446         else if (
 447             !rngBootstrapSkip &&
 448             ext.estState_ == EstablishState::ConvergingReveal)
 449         {
 450             // Wait for ALL committers to reveal (not just 80%).
 451             // Timeout measured from ConvergingReveal entry, not round
 452             // start.
 453             auto const elapsed = ctx.nowSteady - ext.revealPhaseStart_;
 454             bool timeout = elapsed > ctx.parms.rngREVEAL_TIMEOUT;
 455             bool ready = false;
 456             bool const revealConsensus =
 457                 ctx.haveConsensus() && ext.hasMinimumReveals();
 458 
 459             if (revealConsensus || timeout)
 460             {
 461                 JLOG(ext.j_.debug())
 462                     << "STALLDIAG: rng-reveal-gate-open"
 463                     << " revealConsensus=" << (revealConsensus ? "yes" : "no")
 464                     << " timeout=" << (timeout ? "yes" : "no") << " elapsedMs="
 465                     << std::chrono::duration_cast<std::chrono::milliseconds>(
 466                            elapsed)
 467                            .count();
 468                 if (timeout && !ext.hasAnyReveals())
 469                 {
 470                     ext.setEntropyFailed();
 471                     JLOG(ext.j_.warn()) << "RNG: entropy failed (no reveals)";
 472                     logRngDiag("rng-reveal-timeout-no-reveals");
 473                 }
 474                 else
 475                 {
 476                     publishEntropySet();
 477                     logRngDiag("rng-reveal-published-entropy-set");
 478                 }
 479                 ready = true;
 480             }
 481 
 482             if (!ready)
 483             {
 484                 JLOG(ext.j_.debug())
 485                     << "STALLDIAG: rng-reveal-gate-blocked"
 486                     << " revealConsensus=" << (revealConsensus ? "yes" : "no")
 487                     << " timeout=" << (timeout ? "yes" : "no") << " elapsedMs="
 488                     << std::chrono::duration_cast<std::chrono::milliseconds>(
 489                            elapsed)
 490                            .count();
 491                 logRngDiag("rng-reveal-wait");
 492                 return {};
 493             }
 494 
 495             // Optional explicit final proposal (seq=4 style):
 496             // publish a synthetic tx-set hash that includes the
 497             // consensus-entropy pseudo-tx just before accept.
 498             //
 499             // IMPORTANT DESIGN NOTE (read before editing this block):
 500             //
 501             // This path is intentionally OPTIONAL and default-off. It
 502             // exists for diagnostics/perf experiments (for example, making
 503             // monitor visibility of the final pseudo-tx set more direct),
 504             // NOT as a required step for consensus correctness.
 505             //
 506             // Why so conservative?
 507             // - The main consensus engine still keys agreement on
 508             //   tx-set hash.
 509             // - Updating our tx-set hash here creates a "late identity
 510             //   change" in establish.
 511             // - Under lossy/reordered networks, peers can be
 512             //   slightly out of phase: some nodes may have switched
 513             //   to the synthetic hash while others are still on the
 514             //   base hash.
 515             // - That can fragment agreement during a critical window (two
 516             //   hashes in flight for one ledger), increase proposal
 517             //   chatter, and trigger sync churn.
 518             //
 519             // Therefore this logic must remain best-effort only:
 520             // - Never required for liveness/safety.
 521             // - No extra wait tick is introduced.
 522             // - If gates are not met, we skip and continue to
 523             //   accept via the normal implicit path (accept-time
 524             //   pseudo-tx injection).
 525             //
 526             // TBD (2026-03-03): We did not find a robust timing model that
 527             // folds this into a guaranteed-safe explicit final proposal
 528             // across lossy/reordered links without increasing churn. Keep
 529             // this path as opt-in for future evaluation.
 530             {
 531                 bool fullParticipantCoverage = false;
 532                 bool entropyAligned = false;
 533                 {
 534                     // Guard against "early switch" churn:
 535                     // require at least as many participants as the previous
 536                     // round before attempting the explicit-final mutation.
 537                     //
 538                     // This is a heuristic to reduce risk, not a proof of
 539                     // safety. We still keep the feature
 540                     // optional/default-off.
 541                     auto const participants = ctx.peerPositions.size() + 1;
 542                     auto const expectedParticipants = ctx.prevProposers + 1;
 543                     fullParticipantCoverage =
 544                         participants >= expectedParticipants;
 545                     // Require a majority aligned on entropySetHash before
 546                     // mutating tx-set hash. If this threshold is loosened,
 547                     // the probability of hash fragmentation rises quickly.
 548                     auto const requiredEntropyAligned =
 549                         (expectedParticipants / 2) + 1;
 550                     auto const ourPos = ctx.getPosition();
 551                     if (ourPos.entropySetHash)
 552                     {
 553                         auto const expectedEntropy = *ourPos.entropySetHash;
 554                         std::size_t alignedPeers = 0;
 555                         bool conflict = false;
 556                         for (auto const& [_, peerPos] : ctx.peerPositions)
 557                         {
 558                             auto const& peerPosition =
 559                                 peerPos.proposal().position();
 560                             if (!peerPosition.entropySetHash)
 561                                 continue;
 562                             if (*peerPosition.entropySetHash == expectedEntropy)
 563                             {
 564                                 ++alignedPeers;
 565                                 continue;
 566                             }
 567                             conflict = true;
 568                             break;
 569                         }
 570 
 571                         auto const alignedParticipants = alignedPeers + 1;
 572                         entropyAligned = !conflict &&
 573                             alignedParticipants >= requiredEntropyAligned;
 574                         if (!entropyAligned)
 575                         {
 576                             JLOG(ext.j_.debug())
 577                                 << "RNG: explicit-final entropy alignment "
 578                                    "insufficient"
 579                                 << " alignedParticipants="
 580                                 << alignedParticipants
 581                                 << " required=" << requiredEntropyAligned
 582                                 << " conflict=" << (conflict ? "yes" : "no");
 583                         }
 584                     }
 585                     else
 586                     {
 587                         JLOG(ext.j_.debug())
 588                             << "RNG: explicit-final waiting on local "
 589                                "entropySetHash";
 590                     }
 591                 }
 592 
 593                 if (ctx.mode == ConsensusMode::proposing &&
 594                     !ext.explicitFinalProposalSent_ &&
 595                     ext.hasQuorumOfCommits() && revealConsensus &&
 596                     fullParticipantCoverage && entropyAligned &&
 597                     ext.shouldSendExplicitFinalProposal())
 598                 {
 599                     // One-shot per round. This avoids repeated mutations/
 600                     // broadcasts from timer ticks, which can amplify
 601                     // network chatter in the exact conditions
 602                     // (loss/reordering) where this path is already fragile.
 603                     auto const synthSet = ext.buildExplicitFinalProposalTxSet(
 604                         ctx.getTxns(), buildSeq);
 605                     ext.explicitFinalProposalSent_ = true;
 606 
 607                     if (synthSet)
 608                     {
 609                         auto const synthHash = synthSet->id();
 610                         auto currentPos = ctx.getPosition();
 611                         auto newPos = currentPos;
 612                         newPos.updateTxSet(synthHash);
 613 
 614                         if (!(newPos == currentPos))
 615                         {
 616                             // WARNING:
 617                             // This changes proposal tx-set identity late in
 618                             // establish. Keep this path tightly gated and
 619                             // optional. The canonical ledger path remains
 620                             // the implicit accept-time injection logic.
 621 
 622                             // Maintain the invariant that our active
 623                             // position's tx-set hash is present in
 624                             // acquired_, otherwise gotTxSet can assert if
 625                             // this set arrives back from the network.
 626                             ctx.cacheAndShareTxSet(*synthSet);
 627                             JLOG(ext.j_.debug())
 628                                 << "RNG: cached explicit-final txSet="
 629                                 << synthHash;
 630                             ctx.updatePosition(newPos);
 631                             ctx.propose();
 632                             JLOG(ext.j_.debug())
 633                                 << "RNG: explicit final proposal txSet="
 634                                 << synthHash;
 635                             logRngDiag("rng-explicit-final-proposed");
 636                         }
 637                     }
 638                 }
 639                 else
 640                 {
 641                     char const* reason = "disabled";
 642                     if (ctx.mode != ConsensusMode::proposing)
 643                         reason = "not-proposing";
 644                     else if (ext.explicitFinalProposalSent_)
 645                         reason = "already-sent";
 646                     else if (!ext.hasQuorumOfCommits())
 647                         reason = "no-commit-quorum";
 648                     else if (!revealConsensus)
 649                         reason = "reveal-timeout";
 650                     else if (!fullParticipantCoverage)
 651                         reason = "participant-gap";
 652                     else if (!entropyAligned)
 653                         reason = "entropy-not-aligned";
 654                     JLOG(ext.j_.debug())
 655                         << "STALLDIAG: rng-explicit-final-skipped"
 656                         << " reason=" << reason
 657                         << " mode=" << to_string(ctx.mode) << " sent="
 658                         << (ext.explicitFinalProposalSent_ ? "yes" : "no");
 659                 }
 660             }
 661         }
 662     }
 663     else
 664     {
 665         JLOG(ext.j_.debug())
 666             << "RNGGATE: skipping RNG substates"
 667             << " prevSeq=" << (static_cast<std::uint32_t>(ctx.buildSeq) - 1)
 668             << " phase=establish"
 669             << " mode=" << to_string(ctx.mode);
 670     }

5) Export sig convergence gate (parallel with RNG)

📍 src/xrpld/consensus/ConsensusExtensionsTick.h:674-745

 674     // Export sig convergence gate: runs after RNG sub-states, only when
 675     // both CE and Export are enabled. Builds/publishes exportSigSetHash
 676     // and waits for peer agreement before accepting.
 677     if constexpr (requires { ctx.getPosition().exportSigSetHash; })
 678     {
 679         // Only run when CE is active (provides ExtendedPosition infra)
 680         // and there are export sigs to converge.
 681         if (isRngEnabled)
 682         {
 683             if (ext.hasPendingExportSigs())
 684             {
 685                 //@@start export-publish-sigset-hash
 686                 auto const buildSeqExport = ctx.buildSeq;
 687                 auto const exportHash = ext.buildExportSigSet(buildSeqExport);
 688 
 689                 auto currentPos = ctx.getPosition();
 690                 if (!currentPos.exportSigSetHash ||
 691                     *currentPos.exportSigSetHash != exportHash)
 692                 {
 693                     currentPos.exportSigSetHash = exportHash;
 694                     ctx.updatePosition(currentPos);
 695 
 696                     if (ctx.mode == ConsensusMode::proposing)
 697                         ctx.propose();
 698 
 699                     JLOG(ext.j_.debug())
 700                         << "Export: published exportSigSetHash=" << exportHash;
 701                 }
 702                 //@@end export-publish-sigset-hash
 703 
 704                 //@@start export-sigset-conflict-wait
 705                 // Check peer agreement on exportSigSetHash.
 706                 // If any tx-converged peer has a different non-empty hash,
 707                 // wait briefly for fetch/merge to resolve it.
 708                 {
 709                     bool conflict = false;
 710                     for (auto const& [_, peerPos] : ctx.peerPositions)
 711                     {
 712                         auto const& pp = peerPos.proposal().position();
 713                         if (!pp.exportSigSetHash)
 714                             continue;
 715                         if (*pp.exportSigSetHash != exportHash)
 716                         {
 717                             conflict = true;
 718 
 719                             // Trigger fetch for the differing set
 720                             ext.fetchRngSetIfNeeded(pp.exportSigSetHash);
 721                             break;
 722                         }
 723                     }
 724 
 725                     if (conflict)
 726                     {
 727                         // Don't block indefinitely — use the same pipeline
 728                         // timeout as RNG.
 729                         bool const timeout =
 730                             ctx.roundTime > ctx.parms.rngPIPELINE_TIMEOUT;
 731                         if (!timeout)
 732                         {
 733                             JLOG(ext.j_.debug())
 734                                 << "Export: exportSigSetHash conflict, waiting";
 735                             return {};
 736                         }
 737                         JLOG(ext.j_.info())
 738                             << "Export: exportSigSetHash conflict timed out, "
 739                                "proceeding (exports will retry next round)";
 740                     }
 741                 }
 742                 //@@end export-sigset-conflict-wait
 743             }
 744         }
 745     }

6) SHAMap construction: commit/reveal sets with proof blobs

📍 src/xrpld/app/consensus/ConsensusExtensions.cpp:315-371

 315     // Track the active RNG round explicitly. Nodes in observing/switching
 316     // mode can have a closed ledger index behind the consensus round while
 317     // still needing to fetch/merge that round's RNG sets.
 318     rngRoundSeq_ = seq;
 319 
 320     auto map =
 321         std::make_shared<SHAMap>(SHAMapType::TRANSACTION, app_.getNodeFamily());
 322     map->setUnbacked();
 323 
 324     // NOTE: avoid structured bindings in for-loops containing lambdas —
 325     // clang-14 (CI) rejects capturing them (P2036R3 not implemented).
 326     for (auto const& entry : pendingCommits_)
 327     {
 328         auto const& nid = entry.first;
 329         auto const& commit = entry.second;
 330 
 331         if (!isUNLReportMember(nid))
 332             continue;
 333 
 334         auto kit = nodeIdToKey_.find(nid);
 335         if (kit == nodeIdToKey_.end())
 336             continue;
 337 
 338         // Encode the NodeID into sfAccount so onAcquiredSidecarSet can
 339         // recover it without recomputing (master vs signing key issue).
 340         AccountID acctId;
 341         std::memcpy(acctId.data(), nid.data(), acctId.size());
 342 
 343         STTx tx(ttCONSENSUS_ENTROPY, [&](auto& obj) {
 344             obj.setFieldU32(sfFlags, tfEntropyCommit);
 345             obj.setFieldU32(sfLedgerSequence, seq);
 346             obj.setAccountID(sfAccount, acctId);
 347             obj.setFieldU32(sfSequence, 0);
 348             obj.setFieldAmount(sfFee, STAmount{});
 349             obj.setFieldH256(sfDigest, commit);
 350             obj.setFieldVL(sfSigningPubKey, kit->second.slice());
 351             auto proofIt = commitProofs_.find(nid);
 352             if (proofIt != commitProofs_.end())
 353                 obj.setFieldVL(sfBlob, serializeProof(proofIt->second));
 354         });
 355 
 356         Serializer s(2048);
 357         tx.add(s);
 358         map->addItem(
 359             SHAMapNodeType::tnTRANSACTION_NM,
 360             make_shamapitem(tx.getTransactionID(), s.slice()));
 361     }
 362 
 363     map = map->snapShot(false);
 364     commitSetMap_ = map;
 365 
 366     auto const hash = map->getHash().as_uint256();
 367     app_.getInboundTransactions().giveSet(hash, map, false);
 368 
 369     JLOG(j_.debug()) << "RNG: built commitSet SHAMap hash=" << hash
 370                      << " entries=" << pendingCommits_.size();
 371     return hash;

📍 src/xrpld/app/consensus/ConsensusExtensions.cpp:379-434

 379     rngRoundSeq_ = seq;
 380 
 381     auto map =
 382         std::make_shared<SHAMap>(SHAMapType::TRANSACTION, app_.getNodeFamily());
 383     map->setUnbacked();
 384 
 385     // NOTE: avoid structured bindings — clang-14 can't capture them (P2036R3).
 386     for (auto const& entry : pendingReveals_)
 387     {
 388         auto const& nid = entry.first;
 389         auto const& reveal = entry.second;
 390 
 391         if (!isUNLReportMember(nid))
 392             continue;
 393 
 394         auto kit = nodeIdToKey_.find(nid);
 395         if (kit == nodeIdToKey_.end())
 396             continue;
 397 
 398         AccountID acctId;
 399         std::memcpy(acctId.data(), nid.data(), acctId.size());
 400 
 401         STTx tx(ttCONSENSUS_ENTROPY, [&](auto& obj) {
 402             obj.setFieldU32(sfFlags, tfEntropyReveal);
 403             obj.setFieldU32(sfLedgerSequence, seq);
 404             obj.setAccountID(sfAccount, acctId);
 405             obj.setFieldU32(sfSequence, 0);
 406             obj.setFieldAmount(sfFee, STAmount{});
 407             obj.setFieldH256(sfDigest, reveal);
 408             obj.setFieldVL(sfSigningPubKey, kit->second.slice());
 409             // Intentionally omit sfBlob for reveal-set entries.
 410             //
 411             // Reveal proofs are timing-dependent (seq/closeTime/signature can
 412             // differ while the reveal digest is identical), which makes the
 413             // entropy-set hash non-deterministic across nodes under packet
 414             // loss/reordering.  We only need deterministic reveal material
 415             // (validator identity + digest) for fetch/merge and entropy
 416             // calculation.
 417         });
 418 
 419         Serializer s(2048);
 420         tx.add(s);
 421         map->addItem(
 422             SHAMapNodeType::tnTRANSACTION_NM,
 423             make_shamapitem(tx.getTransactionID(), s.slice()));
 424     }
 425 
 426     map = map->snapShot(false);
 427     entropySetMap_ = map;
 428 
 429     auto const hash = map->getHash().as_uint256();
 430     app_.getInboundTransactions().giveSet(hash, map, false);
 431 
 432     JLOG(j_.debug()) << "RNG: built entropySet SHAMap hash=" << hash
 433                      << " entries=" << pendingReveals_.size();
 434     return hash;

7) Injection stage (A): final entropy selection with deterministic fallback

📍 src/xrpld/app/consensus/ConsensusExtensions.cpp:1021-1077

1021     // Calculate entropy from collected reveals
1022     if (app_.config().standalone())
1023     {
1024         // Standalone mode: generate synthetic deterministic entropy
1025         // so that Hook APIs (dice/random) work for testing.
1026         finalEntropy = sha512Half(std::string("standalone-entropy"), seq);
1027         hasEntropy = true;
1028         JLOG(j_.info()) << "RNG: Standalone synthetic entropy " << finalEntropy
1029                         << " for ledger " << seq;
1030     }
1031     else if (shouldZeroEntropy())
1032     {
1033         // Liveness fallback: inject zero entropy.
1034         // Hooks MUST check for zero to know entropy is unavailable.
1035         // shouldZeroEntropy() covers: pipeline failure, no reveals,
1036         // or sub-quorum reveals (too easily influenced by a minority).
1037         finalEntropy.zero();
1038         hasEntropy = true;
1039         JLOG(j_.warn()) << "RNG: Injecting ZERO entropy (fallback) for ledger "
1040                         << seq << " (reveals=" << pendingReveals_.size()
1041                         << " threshold=" << quorumThreshold() << ")";
1042     }
1043     else
1044     {
1045         // Sort reveals deterministically by Validator Public Key
1046         std::vector<std::pair<PublicKey, uint256>> sorted;
1047         sorted.reserve(pendingReveals_.size());
1048 
1049         for (auto const& [nodeId, reveal] : pendingReveals_)
1050         {
1051             auto it = nodeIdToKey_.find(nodeId);
1052             if (it != nodeIdToKey_.end())
1053                 sorted.emplace_back(it->second, reveal);
1054         }
1055 
1056         if (!sorted.empty())
1057         {
1058             std::sort(
1059                 sorted.begin(), sorted.end(), [](auto const& a, auto const& b) {
1060                     return a.first.slice() < b.first.slice();
1061                 });
1062 
1063             // Mix all reveals into final entropy
1064             Serializer s;
1065             for (auto const& [key, reveal] : sorted)
1066             {
1067                 s.addVL(key.slice());
1068                 s.addBitString(reveal);
1069             }
1070             finalEntropy = sha512Half(s.slice());
1071             hasEntropy = true;
1072 
1073             JLOG(j_.info()) << "RNG: Injecting entropy " << finalEntropy
1074                             << " from " << sorted.size() << " reveals"
1075                             << " for ledger " << seq;
1076         }
1077     }

8) Injection stage (B): build and enqueue ttCONSENSUS_ENTROPY

📍 src/xrpld/app/consensus/ConsensusExtensions.cpp:1081-1131

1081     // Synthesize and inject the pseudo-transaction
1082     if (hasEntropy)
1083     {
1084         // Design note: this is the canonical/implicit path that materializes
1085         // the synthetic entropy-bearing tx-set in production.
1086         //
1087         // Why here (onAccept/buildLCL) instead of mutating proposals earlier?
1088         // - Consensus agreement is keyed by proposal txSetHash during
1089         //   establish. Late mutation of txSetHash in establish can fragment
1090         //   votes under loss/reordering (base hash vs synthetic hash).
1091         // - Injecting at accept preserves robust convergence semantics: peers
1092         //   agree on the base transaction set first, then deterministically
1093         //   derive/apply the entropy pseudo-tx for ledger construction.
1094         //
1095         // Explicit-final (seq=4 synthetic proposal) remains an optional
1096         // experiment for observability/perf testing and is default-off.
1097         // TBD (2026-03-03): revisit only with stronger evidence that explicit
1098         // publication can be made stable under tx-bearing, lossy networks.
1099 
1100         //@@start rng-inject-pseudotx-core
1101         // Account Zero convention for pseudo-transactions (same as ttFEE, etc)
1102         auto const entropyCount = static_cast<std::uint16_t>(
1103             app_.config().standalone()
1104                 ? 20  // synthetic: high enough for Hook APIs (need >= 5)
1105                 : (shouldZeroEntropy() ? 0 : pendingReveals_.size()));
1106         STTx tx(ttCONSENSUS_ENTROPY, [&](auto& obj) {
1107             obj.setFieldU32(sfLedgerSequence, seq);
1108             obj.setAccountID(sfAccount, AccountID{});
1109             obj.setFieldU32(sfSequence, 0);
1110             obj.setFieldAmount(sfFee, STAmount{});
1111             obj.setFieldH256(sfDigest, finalEntropy);
1112             obj.setFieldU16(sfEntropyCount, entropyCount);
1113         });
1114 
1115         auto const txID = tx.getTransactionID();
1116         auto alreadyPresent = std::any_of(
1117             retriableTxs.begin(), retriableTxs.end(), [&](auto const& entry) {
1118                 return entry.first.getTXID() == txID;
1119             });
1120         if (alreadyPresent)
1121         {
1122             JLOG(j_.debug())
1123                 << "RNG: entropy pseudo-tx already present, skip duplicate "
1124                 << txID;
1125         }
1126         else
1127         {
1128             retriableTxs.insert(std::make_shared<STTx>(std::move(tx)));
1129         }
1130         //@@end rng-inject-pseudotx-core
1131     }

9) Build stage: entropy pseudo-tx executes before normal transactions

📍 src/xrpld/app/ledger/detail/BuildLedger.cpp:111-148

 111     // CRITICAL: Apply consensus entropy pseudo-tx FIRST before any other
 112     // transactions. This ensures hooks can read entropy during this ledger.
 113     for (auto it = txns.begin(); it != txns.end(); /* manual */)
 114     {
 115         if (it->second->getTxnType() != ttCONSENSUS_ENTROPY)
 116         {
 117             ++it;
 118             continue;
 119         }
 120 
 121         auto const txid = it->first.getTXID();
 122         JLOG(j.debug()) << "Applying entropy tx FIRST: " << txid;
 123 
 124         try
 125         {
 126             auto const result =
 127                 applyTransaction(app, view, *it->second, true, tapNONE, j);
 128 
 129             if (result == ApplyTransactionResult::Success)
 130             {
 131                 ++count;
 132                 JLOG(j.debug()) << "Entropy tx applied successfully";
 133             }
 134             else
 135             {
 136                 failed.insert(txid);
 137                 JLOG(j.warn()) << "Entropy tx failed to apply";
 138             }
 139         }
 140         catch (std::exception const& ex)
 141         {
 142             JLOG(j.warn()) << "Entropy tx throws: " << ex.what();
 143             failed.insert(txid);
 144         }
 145 
 146         it = txns.erase(it);
 147         break;  // Only one entropy tx per ledger
 148     }

10) Apply stage: write consensus entropy into the singleton ledger object

📍 src/xrpld/app/tx/detail/Change.cpp:248-264

 248     auto sle = view().peek(keylet::consensusEntropy());
 249     bool const created = !sle;
 250 
 251     if (created)
 252         sle = std::make_shared<SLE>(keylet::consensusEntropy());
 253 
 254     sle->setFieldH256(sfDigest, entropy);
 255     sle->setFieldU16(sfEntropyCount, ctx_.tx.getFieldU16(sfEntropyCount));
 256     sle->setFieldU32(sfLedgerSequence, view().info().seq);
 257     // Note: sfPreviousTxnID and sfPreviousTxnLgrSeq are set automatically
 258     // by ApplyStateTable::threadItem() because isThreadedType() returns true
 259     // for ledger entries that have sfPreviousTxnID in their format.
 260 
 261     if (created)
 262         view().insert(sle);
 263     else
 264         view().update(sle);

11) Wire anchor: proposal message carrying extended payload bytes

📍 include/xrpl/proto/ripple.proto:153-175

 153 message TMProposeSet
 154 {
 155     required uint32 proposeSeq          = 1;
 156     required bytes currentTxHash        = 2;    // the hash of the ledger we are proposing
 157     required bytes nodePubKey           = 3;
 158     required uint32 closeTime           = 4;
 159     required bytes signature            = 5;    // signature of above fields
 160     required bytes previousledger       = 6;
 161     repeated bytes addedTransactions    = 10;   // not required if number is large
 162     repeated bytes removedTransactions  = 11;   // not required if number is large
 163 
 164     // node vouches signature is correct
 165     optional bool checkedSignature      = 7     [deprecated=true];
 166 
 167     // Number of hops traveled
 168     optional uint32 hops                = 12    [deprecated=true];
 169 
 170     // Export signatures for pending exports seen in the proposal set.
 171     // Each entry is: txnHash (32 bytes) + validator pubkey (33 bytes).
 172     // Validators attach these so export quorum can be reached within
 173     // the same consensus round.
 174     repeated bytes exportSignatures     = 13;
 175 }
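
For readers wiring up a consumer: per the comment above, each exportSignatures entry should be a fixed 65-byte blob (32-byte txnHash followed by a 33-byte compressed validator pubkey). A minimal receiver-side sketch under that assumption; ExportSigEntry and parseExportSigEntry are illustrative names, not the PR's actual helpers, and real verification lives in ExportSignatureCollector:

    #include <array>
    #include <cstdint>
    #include <cstring>
    #include <optional>
    #include <string>

    // Hypothetical decoded form; the PR's real path uses uint256/PublicKey.
    struct ExportSigEntry
    {
        std::array<std::uint8_t, 32> txnHash;  // export txn being vouched for
        std::array<std::uint8_t, 33> pubKey;   // compressed validator key
    };

    // Split one repeated-bytes entry per the layout documented above.
    std::optional<ExportSigEntry> parseExportSigEntry(std::string const& blob)
    {
        if (blob.size() != 32 + 33)
            return std::nullopt;  // malformed peer data: drop, don't throw

        ExportSigEntry e;
        std::memcpy(e.txnHash.data(), blob.data(), 32);
        std::memcpy(e.pubKey.data(), blob.data() + 32, 33);
        return e;
    }

Returning std::nullopt on a malformed entry matches the general posture elsewhere in this PR: silently skip bad peer data rather than throw mid-consensus.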

[Architectural Retrospective]

The Road to Consensus-Native Randomness: A Retrospective

A narrative history of how the RNG architecture evolved from early featureRNG experiments into the final featureConsensusEntropy design.

Adding randomness to deterministic consensus sounds simple until you try to do it without breaking safety.

Consensus requires determinism: every honest node must compute the same state transition.
Randomness requires unpredictability: nobody should know the final value early enough to game it.

The requirement that made this hard was not just "randomness," but same-ledger usable randomness: finalize entropy after user intent is locked, but before normal execution in that same ledger.

That path was not linear.

Part I: What the First Branch Taught Us (featRNG)

The initial branch was aggressively practical: reuse existing transaction paths, avoid deep consensus surgery, and move fast.

Experiment 1: ttRNG looked straightforward, then failed quickly

The earliest model used a single transaction path (ttRNG) with validator-generated entropy.

It failed for a concrete reason: entropy bytes entered open-ledger transaction flow too early.
That made the randomness path mempool-observable and timing-sensitive, so sophisticated actors could condition behavior around visible entropy before the round was fully sealed.

Very quickly, the branch moved toward a dual-model design (ttENTROPY + ttSHUFFLE) to try to close that timing gap.

Experiment 2: dual-model defense (ttENTROPY + ttSHUFFLE)

The next design split responsibilities:

  • ttENTROPY: a UNL Validator Transaction (UVTx) — zero fee, seq=0, signed by the validator's ephemeral key, validated by UNLReport membership — used to submit blinded entropy hashes and later reveal them.
  • ttSHUFFLE: a pseudo-transaction that derived extra entropy from proposal signatures, timed to land after the transaction set was frozen.

Conceptually, this was smart defense-in-depth. Operationally, it hit three structural problems:

  • circular timing dependency around proposal-signature-derived shuffle values,
  • overlap windows between OPEN and ESTABLISH creating nondeterministic inclusion,
  • transaction-volume overhead (~70+ RNG-related tx artifacts per ledger, roughly ~28 GB/year of historical bloat) that became unacceptable at scale.

Experiment 3: mitigation hacks and why they still were not enough

Deterministic self-shuffle and piggyback variants improved specific failure modes. They did not remove the deeper issue: the model remained timing-sensitive and complex under real asynchronous behavior.

This was the "env var city" period (XAHAU_SELF_SHUFFLE, XAHAU_PIGGYBACK_SHUFFLE, and briefly XAHAU_AUTO_ACCEPT_SHUFFLES): useful for exposing failure boundaries, but also a clear signal that the architecture was being patched against the grain of consensus.

Experiment 4: dedicated shuffle phase (Open -> Establish -> Shuffle -> Validate)

The branch then tried full structural separation: a top-level shuffle phase, custom RNG message flow (TMRNGProposal), an RNGService managing commits/reveals in simple std::maps, and a forceRevealPhase() sync point to keep nodes aligned.

This delivered one lasting insight: contributors should be tied to actual recent consensus participants (the seed of later expected-proposer logic).

But the phase itself was abandoned:

  • too much new state-machine surface area (new top-level phase + new protocol message = edge cases at every transition boundary),
  • synchronization fragility (nodes drifted across the phase boundary under real latency, hitting the same timing problems the earlier model had — just at a different seam),
  • no native SHAMap diff/fetch leverage (simple maps meant building custom retry/fetch logic for missed messages).

The conclusion from Part I was precise:
commit/reveal was the right cryptographic primitive, but the transport/convergence model was wrong.

The final commit on the initial branch landed one more practical insight: entropy participation should track actual establish-round participation (establishProposers), not just static UNL membership. That expected-participant logic survived into the final architecture even as the dedicated shuffle phase did not.


Part II: The Trap We Nearly Chose (Scalar Opinion Convergence)

The seductive simplification was to treat entropy like any disputed scalar:
"let nodes publish their computed entropy value and avalanche-converge on the majority."

A lightweight discrete-event simulator (sim/rng_sim.cpp) was built to pressure-test this assumption under realistic latency and packet asymmetry. (This was a quick prototype model, not a faithful rippled consensus simulator — but it was sufficient to expose the core pathology.)

This fails for a reason that became impossible to ignore:

  • entropy output is subset-dependent,
  • subset-dependent outputs do not cluster naturally,
  • and non-clustered outputs give Avalanche no stable flipping signal.

When node A computes from set S_A and node B from S_B, and S_A != S_B, their scalars are unrelated.
At that point, you face a bad fork in design philosophy:

  1. Blind adoption: flip to whatever value seems popular.
  2. Principled reconciliation: fetch missing inputs, verify, recompute deterministically.

The final architecture deliberately chose option 2.
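
A toy illustration of that subset dependence, with std::hash standing in for the real sha512Half mix (nothing here is rippled code):

    #include <cstdint>
    #include <functional>
    #include <iostream>
    #include <string>
    #include <vector>

    // Deterministically mix an ordered set of reveals into one scalar.
    std::uint64_t mix(std::vector<std::string> const& reveals)
    {
        std::string cat;
        for (auto const& r : reveals)
            cat += r;
        return std::hash<std::string>{}(cat);
    }

    int main()
    {
        std::vector<std::string> sA{"r1", "r2", "r3"};  // node A's view
        std::vector<std::string> sB{"r1", "r2"};        // node B missed r3
        // The outputs are unrelated values, not nearby "opinions", so
        // flipping toward a popular scalar has no stable signal.
        std::cout << mix(sA) << " vs " << mix(sB) << '\n';
    }

Two nodes differing by a single reveal produce unrelated scalars; fetching the missing input, verifying it, and recomputing is the only move that actually converges.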


Part III: The Clean-Slate Branch (featureConsensusEntropy)

The new branch started as consensus documentation. That documentation work clarified failure boundaries so sharply that it became a from-scratch implementation effort.

This was not a rename exercise. It was selective reconstruction:

  • keep the proven cryptographic core (commit/reveal),
  • discard brittle runtime shape,
  • port only ideas that survived contact with real network behavior.

In other words: the primitive survived, the convergence model changed.

Hooks-facing RNG APIs such as dice() and random() were among the pieces carried forward and finalized in this architecture.

Breakthrough 1: converge on inputs, not output opinions

The core shift was simple and profound:
do not vote on final entropy values; converge on signed input sets.

Breakthrough 2: proposal-carried leaves + set identities

ExtendedPosition carries:

  • myCommitment
  • myReveal
  • commitSetHash
  • entropySetHash

Fast path: normal proposal traffic carries most of what nodes need.
Safety net: SHAMap-backed set identity enables deterministic reconciliation when packets drop or nodes lag.

Breakthrough 3: equality firewall

ExtendedPosition::operator== compares txSetHash only.

That keeps core Tx-set convergence from being held hostage by RNG sub-state timing differences while still allowing entropy sub-state convergence to proceed and reconcile.
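
A minimal sketch of that firewall, using the field names from this PR but simplified stand-in types (the real ExtendedPosition lives in the consensus extension headers):

    #include <array>
    #include <cstdint>
    #include <optional>

    // Stand-in for rippled's 256-bit hash type.
    using uint256 = std::array<std::uint8_t, 32>;

    // Field names per this PR; types simplified for illustration.
    struct ExtendedPosition
    {
        uint256 txSetHash;                      // the only convergence key
        std::optional<uint256> myCommitment;    // pipelined leaf (commit)
        std::optional<uint256> myReveal;        // pipelined leaf (reveal)
        std::optional<uint256> commitSetHash;   // RNG set identity
        std::optional<uint256> entropySetHash;  // RNG set identity

        // Equality firewall: positions "agree" on tx-set hash alone, so
        // RNG sidecar timing differences cannot stall core consensus.
        friend bool
        operator==(ExtendedPosition const& a, ExtendedPosition const& b)
        {
            return a.txSetHash == b.txSetHash;
        }
    };

Per the commit notes, serialization and signing still cover every field, so the narrow operator== does not permit leaf stripping; it only narrows what counts as positional agreement.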

Breakthrough 4: sub-states, not a top-level RNG phase

Instead of adding another global phase boundary, the design runs RNG progression inside establish sub-states:

  • ConvergingTx
  • ConvergingCommit
  • ConvergingReveal

This preserved the existing consensus cadence while integrating entropy convergence where it belongs.

Breakthrough 5: SHAMap union convergence

Union merge is monotonic: sets grow as verified leaves arrive.
Scalar opinions can oscillate; verified set growth does not.

And SHAMap mechanics keep reconciliation practical:

  • compare roots first,
  • walk only diverging branches,
  • fetch/merge missing leaves instead of replaying full sets.

So overhead is low on the golden path, with bounded recovery cost when reconciliation is needed.
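
A toy model of the union step over plain maps, to show why it is monotonic; the real code compares SHAMap roots first and walks only diverging branches, and verify stands in for the proof-blob check:

    #include <cstdint>
    #include <map>
    #include <string>

    using Key = std::uint64_t;  // stands in for a uint256 leaf key
    using Leaf = std::string;   // stands in for a signed commit/reveal entry

    // Union merge: adopt every verified leaf we are missing. Nothing is
    // removed within a round, so the set only grows; repeated merges in
    // any peer order converge on the same set instead of oscillating
    // the way scalar opinions can.
    void unionMerge(
        std::map<Key, Leaf>& ours,
        std::map<Key, Leaf> const& theirs,
        bool (*verify)(Leaf const&))
    {
        for (auto const& [key, leaf] : theirs)
            if (ours.find(key) == ours.end() && verify(leaf))
                ours.emplace(key, leaf);
    }

Because a leaf, once verified and added, is never dropped within the round, merges commute: any interleaving of peer merges reaches the same final set.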


Part IV: Hardening Moves That Made It Viable

The architecture became robust only after concrete hardening steps, each forced by a specific failure mode observed during testnet runs:

  1. Proposal proof blobs (sfBlob in SHAMap entries): without these, any peer could inject spoofed commit/reveal entries during a Cold Path fetch. Embedding the proposal signature makes every contribution independently verifiable.
  2. Reveal-vs-commit verification (sha512Half(reveal, pubKey, seq) == commitment): without this, a validator could commit to one value and reveal another (grinding attack); see the sketch after this list.
  3. UNL enforcement across harvest/build/merge paths: without filtering by UNL membership, non-trusted nodes could contribute entropy and shift the output.
  4. Deterministic expected-proposer logic: ties commit-set membership to who actually proposed last round (intersected with UNL), preventing timeouts from waiting for offline validators.
  5. Split timeout strategy (3s commit window from round start, 1.5s reveal window from phase entry) with impossible-quorum early exit: without the split, txSet convergence time consumed the reveal budget, leaving no time for reveals to propagate.
  6. Deterministic fallback behavior (partial/zero entropy paths): if 80% quorum is met but not all expected proposers arrived, the round proceeds with a partial commitSet rather than discarding available entropy. If quorum is impossible, zero entropy is injected immediately rather than stalling.
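
The reveal-vs-commit check from item 2, sketched under assumptions: rippled's uint256, PublicKey, and variadic sha512Half, with the argument order taken from the formula above rather than checked against the PR's exact call site:

    bool
    verifyReveal(
        uint256 const& reveal,
        PublicKey const& pubKey,
        std::uint32_t seq,
        uint256 const& commitment)
    {
        // A reveal only counts if it reproduces the earlier commitment,
        // binding the revealed value to this validator and this round.
        // A committed validator cannot grind: the only reveal that
        // verifies is the one fixed at commit time.
        return sha512Half(reveal, pubKey, seq) == commitment;
    }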

Important nuance:

  • The design targets full reveal completion for committed contributors (bounded by the 1.5s reveal timeout — not an infinite wait).
  • Under timeout/failure conditions, deterministic bounded fallback paths exist so liveness is preserved.

Concrete progression: before the reveal-convergence fixes, a 15-node testnet produced 7 distinct entropy values in the same round (nodes collected different 80% subsets of reveals). After these hardening steps, 20-node testnets reported identical commit-set hashes and ~2.2s convergence with bounded recovery under node loss. (That ~2.2s came from aggressively tuned low-ms settings, including XAHAU_RNG_POLL_MS and tight timeout windows; broader production topologies may need larger windows.)


Part V: The Masterstroke (ttCONSENSUS_ENTROPY)

Once nodes converge on the relevant verified input set, final entropy is computed deterministically (sha512Half(sorted_reveals)) and injected as a synthetic pseudo-transaction: ttCONSENSUS_ENTROPY.

This injection happens locally in doAccept(), right before ledger construction. The pseudo-transaction is applied first in BuildLedger.cpp, so every user transaction and Hook executing in that same ledger can consume the entropy via the dice() and random() WebAssembly APIs.

Why no final gossip round on the derived scalar?

Because gossip resolves disagreements.
At this point, the system has already converged on verified inputs; the output function is deterministic. Forcing an extra opinion round adds delay and bandwidth cost without cryptographic benefit.

If a node suffers a local fault and synthesizes the wrong pseudo-tx, its resulting ledger hash will mismatch the network supermajority. Its validations will fail, and it will safely fork off and fetch the correct ledger from peers. Ledger safety is preserved by the validation phase, not the deliberation phase.


Safety and Liveness Framing

A useful framing that survived all iterations:

  • validation-phase quorum remains the safety anchor for ledger agreement,
  • entropy quality/availability is a separate axis that must degrade deterministically under stress.

This matches the formal XRPL LCP framing in Chase & MacBrough (2018): Example 5 captures the key intuition that deliberation outcomes can vary, while fork safety itself is anchored by validation-phase overlap conditions formalized in Theorem 8.

This safety claim is specifically about ledger agreement, not about maximum entropy strength under adversarial withholding.

That distinction prevented a lot of category errors in design discussions.


Closing

The final featureConsensusEntropy architecture is "least-bad" in the engineering sense:

  • more machinery than naive RNG,
  • but each mechanism exists because an observed failure mode forced it.

From ttRNG to dual-tx entropy, to dedicated shuffle phase, to scalar-convergence rejection, the trajectory kept pointing to the same destination:

commit/reveal inputs, SHAMap set identity, union reconciliation, deterministic synthetic injection, and bounded fallback behavior.

RichardAH and others added 30 commits December 19, 2025 13:27
Co-authored-by: tequ <git@tequ.dev>
Port the UNL Validator Transaction (UVTxn) pattern from the RNG feature
to allow validators to submit signed ttEXPORT_SIGN transactions without
requiring a funded account.

Changes:
- Add isUVTx() to identify UVTxn transaction types
- Add inUNLReport() templates to check validator UNLReport membership
- Add getValidationSecretKey() to Application for signing
- Modify Transactor for UVTxn bypasses (fee, seq, signature checks)
- Add makeExportSignTxns() to generate validator signatures
- Hook into RCLConsensus to submit ttEXPORT_SIGN during accept
- Update applySteps.cpp routing for ttEXPORT_SIGN
- Remove direct ttEXPORT_SIGN injection from TxQ::accept

Note: Currently uses Change transactor with UVTx branches.
May refactor to dedicated ExportSign transactor class.
Move ttEXPORT_SIGN handling to dedicated ExportSign transactor class,
following the same pattern as ttENTROPY/Entropy from the RNG feature.
UVTxns (signed validator transactions) should not be mixed with
pseudo-transactions in the Change transactor.

- Create ExportSign.h/cpp with preflight, preclaim, doApply
- Route ttEXPORT_SIGN through ExportSign in applySteps.cpp
- Remove UVTx branches from Change transactor
- Add documentation markers to View.h for inUNLReport functions
- Fix xport hook API whitelist to declare 4 args (I32, I32, I32, I32)
  instead of 2, matching the actual implementation signature
- Fix TxQ.cpp to use emplace_back with STObject for sfExportedTxn
  instead of setFieldVL, since sfExportedTxn is OBJECT type not VL.
  The previous code would throw "Wrong field type" at runtime.
- fix Guard.h: add import_whitelist_2 to signature lookup chain
  (was causing "Function type is inconsistent" errors for xport APIs)
- fix InvariantCheck.cpp: add ltEXPORTED_TXN to valid ledger entry types
  (was causing "invalid ledger entry type added" invariant failures)
- add SetHook.cpp: TODO comment documenting API version confusion

- add Export_test.cpp: comprehensive test suite for export feature
  - testBasicSetup: verify hook installation works
  - testEmitPayment: verify emit() flow works
  - testXportPayment: verify xport() creates ltEXPORTED_TXN
  - includes DebugLogs helper for per-partition log levels
  - parameterized runXportTest helper for future validator tests

Note: validator signing flow (ttEXPORT_SIGN) still needs debugging -
causes internal error on env.close() when validator config enabled.
adds step-by-step trace logging with [EXPORT-TRACE] prefix to track
the complete export transaction lifecycle:
- STEP-1: xport() creates ltEXPORTED_TXN
- STEP-2a: rawTxInsert ttEXPORT_SIGN in callback
- STEP-2b: doApply ttEXPORT_SIGN
- STEP-3a: rawTxInsert ttEXPORT
- STEP-4: doApply ttEXPORT (cleanup)

filter with: grep '\[EXPORT-TRACE\]'
Replace on-ledger ttEXPORT_SIGN transactions with ephemeral signature
collection via TMValidation messages. This eliminates O(n²) metadata
bloat from accumulating signatures on-ledger.

Changes:
- Add ExportSignatureCollector for in-memory signature storage with
  quorum tracking (80% UNL threshold)
- Extend TMValidation protobuf with exportSignatures field
- Sign pending exports during validate() and broadcast via validation
- Extract signatures from received TMValidation in PeerImp
- TxQ checks quorum from memory instead of ledger
- Inject ttEXPORT when quorum reached (can be ledger N+1 or N+2)
- Clean up collector after ttEXPORT processed

Includes [EXPORT-TIMING] debug logging for timing analysis.
Validators now sign ALL pending ltEXPORTED_TXN entries every ledger
(not just those from the current ledger). Signatures are cached in
ExportSignatureCollector and re-broadcast until the export is finalized.

Changes:
- Add hasSignatureFrom() and getSignatureFrom() to collector for
  checking/retrieving cached signatures
- signPendingExports() now iterates ALL pending exports, uses cached
  signature if available, otherwise signs fresh
- Signatures keep broadcasting until ltEXPORTED_TXN is deleted

This ensures:
- Late validators can contribute (sign when they come online)
- Network partitions self-heal (signatures propagate on reconnect)
- Node restarts recover (re-sign from ledger state)

The ltEXPORTED_TXN acts as a "ticket" - signatures only valid while it
exists. No explicit expiry check needed; ledger state is the gatekeeper.
- remove DBG_EXPORT macros and all usages
- remove [EXPORT-TRACE] and [EXPORT-TIMING] debug prefixes
- adjust log levels (verbose logs to trace, summaries to debug)
- upgrade "quorum reached" to info level (important event)
- standardize log prefixes to use "Export:"
- re-enable relay loop in OpenLedger.cpp
- remove reentrant call detection debug code

Add cryptographic verification of export signatures as they arrive:
- stashTxnData() caches serialized txn for verification
- verifyAndAddSignature() verifies against cached data, rejects invalid
- isSignatureVerified() / verifySignature() for Transactor fallback
- Cleanup methods updated to clear verification cache

Also removes leftover debug std::cerr from OpenView, STObject, and tests.
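
Roughly, verify-on-arrival looks like the following; cryptoVerify is a placeholder for the real signature check and the container shapes are illustrative:

```cpp
#include <cstdint>
#include <map>
#include <vector>

using TxHash = std::uint64_t;
using ValidatorKey = std::uint64_t;
using Blob = std::vector<std::uint8_t>;

// Placeholder for the real signature check (ed25519/secp256k1 in rippled).
static bool
cryptoVerify(ValidatorKey, Blob const& data, Blob const& sig)
{
    return !data.empty() && !sig.empty();
}

class Collector
{
    std::map<TxHash, Blob> txnData_;  // serialized txn, stashed at creation
    std::map<TxHash, std::map<ValidatorKey, Blob>> verified_;

public:
    // Cache the serialized transaction so arriving signatures can be
    // checked against the exact bytes that were signed.
    void
    stashTxnData(TxHash h, Blob data)
    {
        txnData_[h] = std::move(data);
    }

    // Verify on arrival; reject signatures for unknown exports or ones
    // failing the crypto check, so only verified material accumulates.
    bool
    verifyAndAddSignature(TxHash h, ValidatorKey v, Blob sig)
    {
        auto const it = txnData_.find(h);
        if (it == txnData_.end() || !cryptoVerify(v, it->second, sig))
            return false;
        verified_[h].emplace(v, std::move(sig));
        return true;
    }

    // Cleanup clears both maps so the verification cache cannot leak.
    void
    cleanup(TxHash h)
    {
        txnData_.erase(h);
        verified_.erase(h);
    }
};
```
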
- Remove makeExportSignTxns() function (signatures now via TMValidation)
- Simplify ExportSign::doApply() to no-op (ttEXPORT_SIGN kept for protocol)
- Remove sfSigners from ltEXPORTED_TXN format (collected in memory now)
- Remove unused OpenView include and forward declaration
- Remove vestigial comment in TxQ about makeExportSignTxns

- Remove accept_wasm and emit_wasm hooks (not export-related)
- Remove testBasicSetup, testEmitPayment, testXportPayment
- Keep only testXportPaymentWithValidator, which tests the export flow

- Delete ExportSign.cpp/h transactor (ttEXPORT_SIGN no longer used)
- Remove isUVTx() function and all UVTx checks from Transactor/TxQ
- Remove ttEXPORT_SIGN from TxFormats enum and format definition
- Remove jss::ExportSign
- Move signPendingExports() to ExportSignatureCollector

Export signatures are now collected ephemerally via TMValidation
messages, not via ttEXPORT_SIGN transactions.

Introduce data structures for consensus-derived randomness using
commit-reveal scheme:

- Add ExtendedPosition struct with consensus targets (txSetHash,
  commitSetHash, entropySetHash) and pipelined leaves (myCommitment,
  myReveal)
- operator== excludes leaves to allow convergence with unique leaves
- add() includes ALL fields to prevent signature stripping attacks
- Add EstablishState enum for sub-phases: ConvergingTx, ConvergingCommit,
  ConvergingReveal
- Update Consensus template to use Adaptor::Position_t
- Add Position_t typedef to RCLConsensus::Adaptor and test CSF Peer

This is the foundational data structure work for the RNG implementation.
The gating logic and entropy computation will follow.
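
The key design point (compare targets, sign everything) fits in a few lines. A sketch with stand-in types; real serialization goes through rippled's Serializer and is elided:

```cpp
#include <cstdint>
#include <optional>

using Hash = std::uint64_t;  // stand-in for uint256

struct ExtendedPosition
{
    // Consensus targets: every honest validator converges on these.
    Hash txSetHash{};
    Hash commitSetHash{};
    Hash entropySetHash{};

    // Pipelined leaves: unique to each validator by construction.
    std::optional<Hash> myCommitment;
    std::optional<Hash> myReveal;

    // Convergence compares targets only, so positions that agree on the
    // sets still compare equal despite carrying distinct leaves.
    friend bool
    operator==(ExtendedPosition const& a, ExtendedPosition const& b)
    {
        return a.txSetHash == b.txSetHash &&
            a.commitSetHash == b.commitSetHash &&
            a.entropySetHash == b.entropySetHash;
    }

    // The signed digest covers ALL fields, leaves included: a relayer
    // that strips or swaps a commitment/reveal invalidates the signature.
    // (Real code feeds a rippled Serializer; elided here.)
    void
    add(/* Serializer& s */) const
    {
        // s.add(txSetHash); s.add(commitSetHash); s.add(entropySetHash);
        // s.add(myCommitment); s.add(myReveal);
    }
};
```
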
- Serialize full ExtendedPosition in share() and propose()
- Deserialize ExtendedPosition in PeerImp using fromSerialIter()
- Add harvestRngData() to collect commits/reveals from peer proposals
- Conditionally call harvest via if constexpr for test compatibility
- Add clearRngState() call in startRoundInternal
- Reset estState_ in closeLedger when entering establish phase
- Implement three-phase RNG checkpoint gating:
  - ConvergingTx: wait for quorum commits, build commitSet
  - ConvergingCommit: reveal entropy, transition immediately
  - ConvergingReveal: wait for reveals or timeout, build entropySet
- Use if constexpr for test framework compatibility
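
A condensed model of the establish-phase sub-states, with the transition rules from the bullets above; the Done value and free-function shape are conveniences of this sketch, and the real transitions run inside the Consensus template:

```cpp
#include <cstddef>

enum class EstablishState {
    ConvergingTx,
    ConvergingCommit,
    ConvergingReveal,
    Done
};

EstablishState
rngTick(
    EstablishState s,
    std::size_t commits,
    std::size_t reveals,
    std::size_t quorum,
    bool revealTimeout)
{
    switch (s)
    {
        case EstablishState::ConvergingTx:
            // Wait for a quorum of commitments, then freeze the commitSet.
            return commits >= quorum ? EstablishState::ConvergingCommit : s;
        case EstablishState::ConvergingCommit:
            // Reveal our own entropy and transition immediately.
            return EstablishState::ConvergingReveal;
        case EstablishState::ConvergingReveal:
            // Wait for reveals; on timeout the entropySet is built from
            // whatever arrived (worst case: deterministic zero entropy).
            return (reveals >= quorum || revealTimeout)
                ? EstablishState::Done
                : s;
        case EstablishState::Done:
            return s;
    }
    return s;
}
```
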
…layer

Add protocol definitions for consensus-derived entropy pseudo-transaction:
- ttCONSENSUS_ENTROPY = 105 transaction type
- ltCONSENSUS_ENTROPY = 0x0058 ledger entry type
- keylet::consensusEntropy() singleton keylet (namespace 'X')
- applyConsensusEntropy() handler in Change.cpp
- Added to isPseudoTx() in STTx.cpp

The entropy value is stored in the sfDigest field of the singleton ledger
object. This provides the protocol foundation for same-ledger entropy
injection; a toy model of the read path follows.
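
Here the keylet is reduced to a constant and the ledger to a map; the real index is a hash over the namespace byte:

```cpp
#include <cstdint>
#include <optional>
#include <unordered_map>

using Key = std::uint32_t;     // stand-in for the 256-bit keylet index
using Digest = std::uint64_t;  // stand-in for the sfDigest value

// Singleton keylets hash a fixed namespace byte ('X' per the commit
// text); a constant is enough for this model.
constexpr Key kConsensusEntropy = 'X';

// Toy ledger view: keylet index -> entropy digest.
using LedgerView = std::unordered_map<Key, Digest>;

// The pseudo-transaction writes sfDigest into the singleton each round,
// so any hook executing in that same ledger can read it back.
std::optional<Digest>
currentEntropy(LedgerView const& view)
{
    if (auto const it = view.find(kConsensusEntropy); it != view.end())
        return it->second;
    return std::nullopt;
}
```
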
This does not introduce a new levelization cycle; the existing xrpld.app <-> xrpld.overlay loop now has equal aggregate include counts after the consensus-extension work. Treat this as essentially the same architectural situation, not a meaningful worsening by itself.

TODO: if we want to fix the boundary properly, extract a small shared consensus-extension wire/interface layer below both app and overlay instead of shaving includes to change the generated ratio.

Count the local proposer when deciding whether the previous round had enough participants for RNG, since prevProposers only tracks peers. This avoids a 4/5 honest quorum being treated as below quorum after one validator diverges.
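
As a sketch (names illustrative, quorum again ceil(80%)):

```cpp
#include <cstddef>

// prevProposers counts peer proposals only, so the local node's own
// proposal has to be added back in before comparing against quorum.
bool
prevRoundHadRngQuorum(
    std::size_t prevPeerProposers,  // peers seen proposing last round
    bool wasProposing,              // whether we proposed last round
    std::size_t unlSize)
{
    std::size_t const participants =
        prevPeerProposers + (wasProposing ? 1 : 0);
    std::size_t const quorum = (unlSize * 4 + 4) / 5;  // ceil(80%)
    return participants >= quorum;
}

// Example: 5 validators, one diverges. We see 3 peer proposals; adding
// ourselves gives 4 >= quorum(5) == 4. Without the self-count the same
// round reads 3/5 and RNG is wrongly suppressed.
```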

Allow an already quorum-aligned entropySetHash to proceed despite below-quorum conflicting hashes, while retaining zero-entropy fallback when no entropy hash reaches quorum alignment. Add CSF coverage for a persistent single bogus entropy hash and for conflicting bogus hashes without quorum.
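
The selection rule reduces to: accept any hash with quorum-aligned support, otherwise zero. A sketch with stand-in types:

```cpp
#include <cstddef>
#include <cstdint>
#include <map>

using Hash = std::uint64_t;

// Accept any entropy set hash backed by a quorum of aligned positions;
// conflicting hashes below quorum cannot block it. If nothing reaches
// quorum alignment, fall back to deterministic zero entropy.
Hash
selectEntropyHash(
    std::map<Hash, std::size_t> const& support,  // hash -> aligned voters
    std::size_t quorum)
{
    for (auto const& [hash, votes] : support)
        if (votes >= quorum)
            return hash;  // a persistent bogus minority is ignored
    return Hash{0};       // zero-entropy fallback
}
```
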
Document the consensus-extension invariants for RNG, sidecars, export sig convergence, validator quorum, zero-entropy fallback, and proposal signing. Link the note from the RCL consensus README so future changes have a durable checklist.

Remove the stale TMValidation exportSignatures field from the draft proto path now that export signatures ride signed proposal sidecars. Document that any future validation-carried ConsensusExtensions data must be covered by the signed validation payload and duplicate/replay identity, not an unsigned wrapper field.

Stamp export signatures learned from proposals, sidecar sets, and candidate tx-set upgrades with a ledger sequence so cleanupStale can age them out. Remove invalid unverified signatures after tx-local verification fails, with a buffer-match check to avoid deleting newer replacements.
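
A sketch of the aging pass, with illustrative container shapes and an assumed maxAge parameter; the buffer-match check is analogous (erase only when the stored bytes equal the bytes that failed verification, so a newer replacement survives):

```cpp
#include <cstdint>
#include <map>
#include <vector>

using TxHash = std::uint64_t;
using LedgerSeq = std::uint32_t;

struct PendingSig
{
    std::vector<std::uint8_t> sig;
    LedgerSeq seenAt;  // stamped when the signature was learned
};

// Age out signatures that lingered unverified past maxAge ledgers.
void
cleanupStale(
    std::map<TxHash, PendingSig>& pending,
    LedgerSeq current,
    LedgerSeq maxAge)
{
    for (auto it = pending.begin(); it != pending.end();)
    {
        if (current - it->second.seenAt > maxAge)
            it = pending.erase(it);
        else
            ++it;
    }
}
```
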
Limit outbound TMProposeSet export signature attachments to ExportLimits::maxPendingExports so honest proposals stay within the same bound enforced by inbound proposal validation. Extra exports remain unsigned for that proposal and rely on the existing retry/expiry path.

Cap pending ttEXPORT work in open/apply ledgers, including hook-emitted exports when TxQ drains the emitted directory into the open ledger. Enforce the same bound for per-account shadow tickets so durable pending imports cannot grow unbounded.

Enforce the pending export cap for hook-emitted ttEXPORT work before commit. Replace the non-present sfEmittedTxn template field when building ltEMITTED_TXN entries so in-flight ledger checks see the emitted wrapper.

Overflowing xport emission now returns tecDIR_FULL and leaves the emitted backlog capped at ExportLimits::maxPendingExports.
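
The gate itself is one comparison against the cap. A sketch: only ExportLimits::maxPendingExports and tecDIR_FULL are named in the commits above; the cap value and result enum here are placeholders:

```cpp
#include <cstddef>

namespace ExportLimits {
constexpr std::size_t maxPendingExports = 32;  // assumed value
}
enum TER { tesSUCCESS, tecDIR_FULL };  // stand-in, not real TER values

// Gate applied before a hook's xport() emission commits: if the pending
// backlog plus this ledger's in-flight emissions is at the cap, refuse.
TER
checkExportCapacity(std::size_t pendingNow, std::size_t emittingNow)
{
    if (pendingNow + emittingNow >= ExportLimits::maxPendingExports)
        return tecDIR_FULL;  // backlog stays capped; retry a later ledger
    return tesSUCCESS;
}
```
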
Keep hook result/state finalization non-fatal while enforcing the hook-export backlog cap through the transaction-level ApplyContext guard. This avoids resetting non-success tec metadata and preserves hook_again weak execution behavior.

Export-only originally used unanimity as a conservative substitute for the CE/RNG sidecar machinery. That made sense before Export had its own signed ExtendedPosition field and exportSigSetHash convergence gate.

Now Export sidecars are signed and converged independently of RNG, so a quorum-aligned exportSigSetHash plus verified active-view signature quorum is deterministic enough for Export-only mode. Keeping unanimity would let one active validator veto an otherwise converged export round.

Update CSF and testnet coverage to treat Export-only the same way: one missing/conflicting signer in a 5-validator network succeeds at 4/5, while below-quorum still retries or expires.
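
Numerically, the mode change is just the threshold; a sketch (the real gate also requires the quorum-aligned exportSigSetHash and verified active-view signatures described above):

```cpp
#include <cstddef>

std::size_t
exportQuorum(std::size_t unlSize, bool unanimityMode)
{
    if (unanimityMode)
        return unlSize;            // old Export-only rule: n of n
    return (unlSize * 4 + 4) / 5;  // ceil(80%): 4 of 5, 5 of 6, ...
}

// With 5 validators: unanimity lets one absent signer veto the round
// (needs 5/5); the converged-sidecar rule succeeds at 4/5, while 3/5
// still retries or expires.
```
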
Clarify that export sidecar publication is local verified material only, and fetched sidecar leaves must be active-view checked, candidate-tx verified, and promoted into ExportSigCollector before closed-ledger apply can use them.

When a candidate set contains ttEXPORT but a node has no local verified export sig material yet, give tx-converged peers one bounded opportunity to advertise an exportSigSetHash before closed-ledger apply.

This is a safety coordination window, not a wait-for-Export-success mechanism. If no advertised sidecar arrives or fetched material cannot be merged by the deadline, Export convergence is marked failed and the transaction retries or expires through normal rules.

Add CSF coverage for a peer that can only succeed by fetching peer-advertised export sidecars, plus a direct ConsensusExtensionsTick test for the pre-advertisement observation window. Document the consensus-extension priority order: safe, fast, works.
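
One tick of that window might be modeled as below; the gate states, tick budget, and function shape are inventions of this sketch, and "merged" means peer-advertised material that was fetched and passed the active-view and candidate-tx checks:

```cpp
#include <cstdint>
#include <optional>

using Hash = std::uint64_t;

enum class ExportGate { Waiting, Converged, Failed };

ExportGate
observeTick(
    bool haveLocalSigMaterial,
    std::optional<Hash> mergedPeerSidecar,
    int& ticksLeft)  // bounded deadline, counts down each tick
{
    if (haveLocalSigMaterial || mergedPeerSidecar)
        return ExportGate::Converged;
    if (--ticksLeft <= 0)
        return ExportGate::Failed;  // tx retries/expires via normal rules
    return ExportGate::Waiting;     // a safety window, not wait-for-success
}
```
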
Require tx-converged peers to advertise sidecar hashes before accepting RNG entropy or export signature success from local quorum alignment.

The RNG reveal fast path now publishes the entropy set and waits for peer observation instead of accepting in the same tick. On timeout, RNG clears the advertised entropy hash and falls back to deterministic zero.

Add unit and CSF regression coverage for asymmetric peer observation.
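
The revised fast path separates publish from accept; a sketch with stand-in types, where nullopt means "still waiting" and the zero hash is the deterministic fallback:

```cpp
#include <cstdint>
#include <optional>

using Hash = std::uint64_t;

struct RngReveal
{
    std::optional<Hash> advertised;

    // Returns the accepted entropy once decided; nullopt while waiting
    // on peer observation of our advertised hash.
    std::optional<Hash>
    accept(Hash localQuorumHash, bool peersObservedUs, bool timedOut)
    {
        if (!advertised)
        {
            advertised = localQuorumHash;  // publish first; never accept
            return std::nullopt;           // in the same tick
        }
        if (peersObservedUs)
            return advertised;  // symmetric observation: accept
        if (timedOut)
        {
            advertised.reset();  // withdraw the advertised hash
            return Hash{0};      // deterministic zero fallback
        }
        return std::nullopt;  // keep waiting
    }
};
```
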
@codecov
codecov Bot commented Apr 30, 2026

Codecov Report

❌ Patch coverage is 42.06576% with 1621 lines in your changes missing coverage. Please review.
✅ Project coverage is 65.56%. Comparing base (a6186d7) to head (445d007).
⚠️ Report is 1 commit behind head on dev.

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| src/xrpld/app/consensus/ConsensusExtensions.cpp | 21.46% | 806 Missing and 61 partials ⚠️ |
| src/xrpld/consensus/Consensus.h | 32.36% | 116 Missing and 24 partials ⚠️ |
| src/xrpld/app/tx/detail/Export.cpp | 51.87% | 64 Missing and 13 partials ⚠️ |
| src/xrpld/app/misc/detail/RuntimeConfig.cpp | 44.03% | 50 Missing and 11 partials ⚠️ |
| src/xrpld/app/misc/ExportSigCollector.h | 55.30% | 46 Missing and 13 partials ⚠️ |
| src/xrpld/app/hook/detail/applyHook.cpp | 50.50% | 29 Missing and 20 partials ⚠️ |
| src/xrpld/app/proof/detail/ProofBuilder.cpp | 59.64% | 35 Missing and 11 partials ⚠️ |
| src/xrpld/app/tx/detail/ExportLedgerOps.h | 63.20% | 19 Missing and 27 partials ⚠️ |
| src/xrpld/app/hook/detail/HookAPI.cpp | 56.43% | 30 Missing and 14 partials ⚠️ |
| src/xrpld/app/tx/detail/Import.cpp | 52.27% | 17 Missing and 25 partials ⚠️ |

... and 26 more
Additional details and impacted files
@@            Coverage Diff             @@
##              dev     #693      +/-   ##
==========================================
- Coverage   66.52%   65.56%   -0.96%     
==========================================
  Files         831      847      +16     
  Lines       78166    81702    +3536     
  Branches    44374    46591    +2217     
==========================================
+ Hits        52000    53569    +1569     
- Misses      17808    19441    +1633     
- Partials     8358     8692     +334     

☔ View full report in Codecov by Sentry.

# Conflicts:
#	hook/sfcodes.h
#	include/xrpl/protocol/Feature.h
#	include/xrpl/protocol/detail/sfields.macro