Conversation
Pull request overview
This PR applies small protocol-safety and robustness fixes in the message handling and routing-table code, and updates the PoW parameters used during local peer ID generation.
Changes:
- Drop inbound messages whose header doesn't match the sender IP (invalid peer IDs).
- Cap how many peers are processed from a single `Nodes` message.
- Increase the PoW difficulty used when generating local `BinaryID`s; plus minor lifetime-elision cleanups and a test adjustment.
Reviewed changes
Copilot reviewed 5 out of 5 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| `src/transport/encoding/raptorq.rs` | Minor lifetime elision cleanup on `ChunkedPayload` impl block. |
| `src/lib.rs` | Raises `K_DIFF_PRODUCED_BIT` from 8 to 20 (affects local ID generation PoW cost). |
| `src/kbucket/bucket.rs` | Lifetime elision cleanup on `NodeInsertOk` impl; adjusts LRU/TTL unit test to pre-generate nodes. |
| `src/handling.rs` | Rejects invalid peer headers by skipping processing; limits peers processed from `Nodes` messages. |
| `CHANGELOG.md` | Documents the fixes/changes under Unreleased. |
```diff
 const K_NONCE_LEN: usize = 4;
 const K_DIFF_MIN_BIT: usize = 8;
-const K_DIFF_PRODUCED_BIT: usize = 8;
+const K_DIFF_PRODUCED_BIT: usize = 20;
```
Raising `K_DIFF_PRODUCED_BIT` to 20 significantly increases the work done in `BinaryID::generate` (expected ~2^20 hash iterations per `PeerNode::generate`). This will substantially slow node startup and is likely to make existing unit tests prohibitively slow (e.g., `src/kbucket/key.rs` has tests that generate hundreds of `PeerNode`s). Note also that if the intent is to raise the PoW required for network participation, this constant alone won't achieve it, because `verify_nonce()` still checks only `K_DIFF_MIN_BIT`. Consider making the produced difficulty configurable and/or lowering it under `cfg(test)`/debug builds to keep test and startup times reasonable.
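A minimal sketch of the suggested mitigation, assuming a layout like this crate's: the constant names mirror the PR, but the `cfg` split and the helper function are illustrative assumptions, not the crate's actual code.

```rust
// Hedged sketch: keep the full produced PoW difficulty in normal builds,
// but drop to a cheap one under cfg(test) so unit tests stay fast.
// Constant names mirror the PR; the cfg gating is an assumption.
#[cfg(not(test))]
const K_DIFF_PRODUCED_BIT: usize = 20;
#[cfg(test)]
const K_DIFF_PRODUCED_BIT: usize = 8;

/// Expected number of hash iterations to find a nonce with `diff_bits`
/// leading zero bits (each extra bit doubles the expected work).
fn expected_iterations(diff_bits: usize) -> u64 {
    1u64 << diff_bits
}

fn main() {
    println!(
        "difficulty {} bits => ~{} expected hash iterations",
        K_DIFF_PRODUCED_BIT,
        expected_iterations(K_DIFF_PRODUCED_BIT)
    );
}
```

This keeps release-build IDs expensive to forge while a `cargo test` run pays roughly 2^8 instead of 2^20 iterations per generated node.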
```rust
// Create all the nodes at the beginning to ensure that PoW is not a
// factor in the test timing.
let root = PeerNode::generate("127.0.0.1:666", 0)?;
let node1 = PeerNode::generate("192.168.1.1:8080", 0)?;
let node1_copy = PeerNode::generate("192.168.1.1:8080", 0)?;
let node1_copy2 = PeerNode::generate("192.168.1.1:8080", 0)?;
let node2 = PeerNode::generate("192.168.1.2:8080", 0)?;

let mut additionals = vec![];
for i in 2..21 {
    additionals.push(PeerNode::generate(
        &format!("192.168.1.{}:8080", i)[..],
        0,
    )?);
}

let pending = PeerNode::generate("192.168.1.21:8080", 0)?;
let pending_2 = PeerNode::generate("192.168.1.21:8080", 0)?;
```
Pre-generating `PeerNode`s here sets `seen_at` at generation time (`Node::new` uses `Instant::now()`), so if `PeerNode::generate` becomes slow (e.g., due to the increased PoW difficulty), nodes may already be close to (or past) `node_ttl` by the time they're inserted. That can change the behavior of this TTL-sensitive test and make it flaky. To keep the PoW work out of the timed portion without aging the nodes, consider precomputing the `BinaryID`s up front and constructing `PeerNode`s via `PeerNode::from_socket` right before insertion, so `seen_at` is set at insertion time.
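The aging concern can be shown in isolation. In this self-contained sketch the `Node` struct and a `sleep` standing in for slow PoW are illustrative assumptions, not the crate's actual types:

```rust
use std::thread;
use std::time::{Duration, Instant};

// Minimal sketch of the concern: `seen_at` is stamped at construction,
// so any slow work between construction and insertion (here a sleep
// standing in for PoW) eats into the TTL budget.
struct Node {
    seen_at: Instant,
}

impl Node {
    fn new() -> Self {
        Node { seen_at: Instant::now() } // stamped at construction time
    }
    fn is_expired(&self, ttl: Duration) -> bool {
        self.seen_at.elapsed() >= ttl
    }
}

fn main() {
    let node_ttl = Duration::from_millis(50);
    let node = Node::new(); // pre-generated up front, like the test does
    thread::sleep(Duration::from_millis(60)); // stand-in for slow PoW
    // By "insertion" time the node already reads as expired.
    println!("{}", node.is_expired(node_ttl)); // prints "true"
}
```

Constructing the node right before insertion instead (after the expensive work) would keep `seen_at` fresh, which is what the `PeerNode::from_socket` suggestion achieves.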
```rust
let messages: Vec<_> = peers
    .iter()
    // filter out my ID to avoid loopback
    .filter(|&n| &n.id != self.my_header.binary_id().as_binary())
    // Limit the number of peers accepted in a single Nodes message to a
    // reasonable bound
    .take(K_K * 2)
    .filter(|&n| {
        let h = self.my_header.binary_id().calculate_distance(&n.id);
```
The `.take(K_K * 2)` is applied before the filter that checks bucket fullness / `has_peer()`. As a result, many of the first `K_K * 2` entries could be rejected and you may end up accepting far fewer peers than the intended bound, while also ignoring later entries that would have passed the filter. If the goal is to cap the number of accepted peers from a `Nodes` message, move the `take` after the acceptance filter; if the goal is to cap the amount of work per message, consider renaming/commenting accordingly to avoid the semantic mismatch with "accepted".
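The ordering difference can be sketched with plain iterators. Here integers stand in for candidate peers and `accept` for the bucket-fullness / `has_peer()` check; both are illustrative assumptions:

```rust
// Sketch of the ordering issue: `.take(n)` placed before an acceptance
// filter bounds the *processed* items, not the *accepted* ones.
fn main() {
    let peers: Vec<u32> = (0..10).collect();
    let accept = |p: &&u32| **p % 2 == 0; // hypothetical acceptance filter
    let bound = 4;

    // take before filter: at most 4 peers inspected, only 2 accepted here.
    let processed_capped: Vec<_> =
        peers.iter().take(bound).filter(accept).collect();
    // take after filter: exactly 4 peers accepted (when enough pass).
    let accepted_capped: Vec<_> =
        peers.iter().filter(accept).take(bound).collect();

    println!("{} {}", processed_capped.len(), accepted_capped.len()); // 2 4
}
```

Note that `take` before the filter still has a use: it bounds the per-message work an attacker can force, which is why the comment wording matters.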