Conversation
Pull request overview
This pull request deprecates the overly broad SyncState RPC endpoint and introduces a new, more focused SyncChainMmr endpoint for syncing chain MMR data, addressing issue #1591.
Changes:
- Deprecated the `SyncState` RPC endpoint in favor of more specialized endpoints
- Added a new `SyncChainMmr` RPC endpoint to sync chain MMR deltas within a specified block range
- Implemented supporting infrastructure including error types, tests, and endpoint limits configuration
Reviewed changes
Copilot reviewed 9 out of 11 changed files in this pull request and generated no comments.
| File | Description |
|---|---|
| proto/proto/rpc.proto | Marked SyncState as deprecated and added SyncChainMmrRequest/SyncChainMmrResponse message definitions |
| proto/proto/internal/store.proto | Deprecated internal SyncState RPC and added SyncChainMmr RPC endpoint |
| crates/store/src/state/sync_state.rs | Implemented sync_chain_mmr method to compute MMR deltas for given block ranges |
| crates/store/src/server/rpc_api.rs | Added RPC handler for SyncChainMmr with parameter validation and error handling |
| crates/store/src/errors.rs | Introduced SyncChainMmrError enum for request validation errors |
| crates/rpc/src/server/api.rs | Added passthrough implementation and endpoint limit configuration for SyncChainMmr |
| crates/rpc/src/tests.rs | Added test for SyncChainMmr endpoint and verified it appears in limits response |
| crates/rpc/Cargo.toml | Enabled rocksdb feature for miden-node-store in dev-dependencies to support integration tests |
| crates/proto/src/generated/store.rs | Auto-generated gRPC client/server code for the new endpoint |
| crates/proto/src/generated/rpc.rs | Auto-generated gRPC client/server code for the new endpoint |
Force-pushed from c83c375 to 3e8fdd2.
Mirko-von-Leipzig left a comment:
Loving the line delta :)
crates/store/src/state/sync_state.rs (outdated):
```rust
// - Mmr::get_delta is inclusive, whereas the sync request block_from is defined to be
//   exclusive, so the from_forest has to be adjusted with a +1.
```
I worry about this definition. Users will (I think) expect the endpoint to return values keyed by the block header number for that block, i.e. to get the MMR as at that block. Unless I'm misunderstanding.
Happy to change; I kept it in line with the previous SyncState.
From a practicality perspective: when calling, I have my chain tip and want everything starting from there, so I think the common call is `local_chain_tip..` as the request. The response I'd expect is `{ (local_chain_tip+1)..=(local_chain_tip+PAGE_SIZE), MMR for that range }`.
The thing is, then you can't sync from genesis? Though I guess you have to fetch the genesis header and build your own MMR from there anyway, which you need to do to compare the hashes in any case.
Fair. @igamigo, is that an issue/a lot of extra work? We can also be redundant and return data including the local_chain_tip (from the client's perspective).
To expand a bit on expected usage:
Assuming there are more entries in the MMR than PAGE_SIZE, the next call would be:
`local_chain_tip+1+PAGE_SIZE..`, yielding `{ (local_chain_tip+1+PAGE_SIZE)..=(local_chain_tip+PAGE_SIZE*2), MMR for that range }`, where `local_chain_tip+PAGE_SIZE` is part of the pagination info returned.
I don't think we can return the response here starting at local_chain_tip, not without making it more inconvenient to apply the changes within the client anyway. Basically, we need to store peaks for a specific block (e.g., for block number 10 we want to keep the peaks of the MMR with 11 leaves). For this, when syncing, we basically apply the MmrDelta (which gets your MMR to block number 9), then apply the individual block header (number 10), and get the peaks to store alongside the block header. If on a subsequent response we got the MmrDelta starting at block number 10 (whereas currently we would receive it starting at block 11), we'd need to "rollback" the MMR and apply the delta accordingly.
TL;DR: I believe the current way is fine, and overlapping the delta with the response/request tips may complicate things.
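The exclusive-`block_from` paging convention agreed above can be sketched as follows. This is an illustrative sketch only: `PAGE_SIZE`, `expected_response_range`, and the concrete numbers are assumptions for demonstration, not the actual node API.

```rust
// Illustrative page size; the real limit lives in the store's RPC handler.
const PAGE_SIZE: u32 = 1000;

/// Given the client's local chain tip and the node's chain tip, compute the
/// inclusive block range the next SyncChainMmr response is expected to cover.
/// The request's `block_from` is exclusive, so the delta starts at
/// `local_chain_tip + 1` (hypothetical helper, not the real API).
fn expected_response_range(local_chain_tip: u32, chain_tip: u32) -> (u32, u32) {
    let from = local_chain_tip + 1;
    let to = chain_tip.min(local_chain_tip + PAGE_SIZE);
    (from, to)
}

fn main() {
    // First page: client at block 5, node chain tip at block 2500.
    assert_eq!(expected_response_range(5, 2500), (6, 1005));
    // The previous response tip becomes the new exclusive `block_from`.
    assert_eq!(expected_response_range(1005, 2500), (1006, 2005));
    // Final, partial page up to the chain tip.
    assert_eq!(expected_response_range(2005, 2500), (2006, 2500));
}
```

This mirrors the flow described above: the client applies the MmrDelta for the page, then the individual block headers, and uses the returned pagination tip as the next request's `block_from`.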
Force-pushed from 7dc8b4e to 76c4ecd.
igamigo left a comment:
LGTM in terms of implementation! Left a couple of questions, but I don't think they are blocking.
crates/store/src/server/rpc_api.rs (outdated):
```rust
    &self,
    request: Request<proto::rpc::SyncChainMmrRequest>,
) -> Result<Response<proto::rpc::SyncChainMmrResponse>, Status> {
    const MAX_BLOCKS: u32 = 1000;
```
Is this limit based on anything? I don't think it meaningfully caps the response size. It would probably mean less overhead in terms of walking the MMR, but I wonder if this is below what would be sensible, since the response size should be logarithmic.
We can always up it to u32::MAX (as limited by the PageInfo response type). The size of an Mmr is a set of roots plus the depth of the tree (so yes, log2(n)) to be transferred, which should be fine for u32::MAX (or beyond, if we change the representation of PageInfo).
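For context, the range check under discussion might look roughly like the following. This is a hedged sketch: only `MAX_BLOCKS = 1000` appears in the quoted handler; the `SyncChainMmrError` variants, field names, and the exclusive-from/inclusive-to convention shown here are illustrative assumptions, not the actual enum in crates/store/src/errors.rs.

```rust
// Limit taken from the quoted handler snippet above.
const MAX_BLOCKS: u32 = 1000;

/// Hypothetical stand-in for the request validation error type.
#[derive(Debug, PartialEq)]
enum SyncChainMmrError {
    /// `block_to` does not come after the (exclusive) `block_from`.
    InvalidRange { block_from: u32, block_to: u32 },
    /// The requested span exceeds the per-request cap.
    RangeTooLarge { requested: u32, max: u32 },
}

/// Validate a SyncChainMmr block range before walking the MMR.
/// `block_from` is exclusive, `block_to` inclusive.
fn validate_range(block_from: u32, block_to: u32) -> Result<(), SyncChainMmrError> {
    if block_to <= block_from {
        return Err(SyncChainMmrError::InvalidRange { block_from, block_to });
    }
    let requested = block_to - block_from;
    if requested > MAX_BLOCKS {
        return Err(SyncChainMmrError::RangeTooLarge { requested, max: MAX_BLOCKS });
    }
    Ok(())
}

fn main() {
    // Exactly MAX_BLOCKS blocks is accepted.
    assert!(validate_range(0, 1000).is_ok());
    // One block over the cap is rejected.
    assert!(matches!(
        validate_range(0, 1001),
        Err(SyncChainMmrError::RangeTooLarge { .. })
    ));
    // An empty or inverted range is rejected.
    assert!(matches!(
        validate_range(10, 10),
        Err(SyncChainMmrError::InvalidRange { .. })
    ));
}
```

Raising `MAX_BLOCKS` would only change the constant here; as noted above, the MmrDelta payload grows logarithmically in the range, so the cap mainly bounds MMR-walking work rather than response size.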
Why
SyncState is too broad and complicated.
What
- Deprecates the SyncState RPC endpoint
- Adds the SyncChainMmr RPC endpoint
Context
Closes #1591