diff --git a/block/block-manager.md b/block/block-manager.md
index b8fceb110..634af8c40 100644
--- a/block/block-manager.md
+++ b/block/block-manager.md
@@ -30,16 +30,16 @@ sequenceDiagram
     Sequencer->>Full Node 2: Gossip Block
     Full Node 1->>Full Node 1: Verify Block
     Full Node 1->>Full Node 2: Gossip Block
-    Full Node 1->>Full Node 1: Mark Block Soft-Confirmed
+    Full Node 1->>Full Node 1: Mark Block Soft Confirmed
     Full Node 2->>Full Node 2: Verify Block
-    Full Node 2->>Full Node 2: Mark Block Soft-Confirmed
+    Full Node 2->>Full Node 2: Mark Block Soft Confirmed
     DA Layer->>Full Node 1: Retrieve Block
-    Full Node 1->>Full Node 1: Mark Block Hard-Confirmed
+    Full Node 1->>Full Node 1: Mark Block DA Included
     DA Layer->>Full Node 2: Retrieve Block
-    Full Node 2->>Full Node 2: Mark Block Hard-Confirmed
+    Full Node 2->>Full Node 2: Mark Block DA Included
 ```

 ## Protocol/Component Description

@@ -90,7 +90,7 @@ The block manager of the sequencer full nodes regularly publishes the produced b

 ### Block Retrieval from DA Network

-The block manager of the full nodes regularly pulls blocks from the DA network at `DABlockTime` intervals and starts off with a DA height read from the last state stored in the local store or `DAStartHeight` configuration parameter, whichever is the latest. The block manager also actively maintains and increments the `daHeight` counter after every DA pull. The pull happens by making the `RetrieveBlocks(daHeight)` request using the Data Availability Light Client (DALC) retriever, which can return either `Success`, `NotFound`, or `Error`. In the event of an error, a retry logic kicks in after a delay of 100 milliseconds delay between every retry and after 10 retries, an error is logged and the `daHeight` counter is not incremented, which basically results in the intentional stalling of the block retrieval logic. In the block `NotFound` scenario, there is no error as it is acceptable to have no rollup block at every DA height. The retrieval successfully increments the `daHeight` counter in this case. Finally, for the `Success` scenario, first, blocks that are successfully retrieved are marked as hard confirmed and are sent to be applied (or state update). A successful state update triggers fresh DA and block store pulls without respecting the `DABlockTime` and `BlockTime` intervals.
+The block manager of the full nodes regularly pulls blocks from the DA network at `DABlockTime` intervals, starting from the DA height read from the last state stored in the local store or from the `DAStartHeight` configuration parameter, whichever is the latest. The block manager also maintains and increments the `daHeight` counter after every DA pull. The pull happens by making the `RetrieveBlocks(daHeight)` request using the Data Availability Light Client (DALC) retriever, which can return either `Success`, `NotFound`, or `Error`. In the event of an error, retry logic kicks in with a 100-millisecond delay between retries; after 10 retries, an error is logged and the `daHeight` counter is not incremented, which intentionally stalls the block retrieval logic. In the `NotFound` scenario, there is no error, as it is acceptable to have no rollup block at every DA height, and the retrieval still increments the `daHeight` counter. Finally, in the `Success` scenario, the retrieved blocks are marked as DA included and are sent to be applied (i.e., a state update).
A successful state update triggers fresh DA and block store pulls without respecting the `DABlockTime` and `BlockTime` intervals.

 ### Block Sync Service

@@ -109,13 +109,13 @@ For non-sequencer full nodes, Blocks gossiped through the P2P network are retrie

 Starting off with a block store height of zero, for every `blockTime` unit of time, a signal is sent to the `blockStoreCh` channel in the block manager and when this signal is received, the `BlockStoreRetrieveLoop` retrieves blocks from the block store. It keeps track of the last retrieved block's height and every time the current block store's height is greater than the last retrieved block's height, it retrieves all blocks from the block store that are between these two heights. For each retrieved block, it sends a new block event to the `blockInCh` channel which is the same channel in which blocks retrieved from the DA layer are sent.

-This block is marked as soft-confirmed by the validating full node until the same block is seen on the DA layer and then marked hard-confirmed.
+This block is marked as soft confirmed by the validating full node until the same block is seen on the DA layer, at which point it is marked DA included.

 Although a sequencer does not need to retrieve blocks from the P2P network, it still runs the `BlockStoreRetrieveLoop`.

-#### About Soft/Hard Confirmations
+#### About Soft Confirmations and DA Inclusions

-The block manager retrieves blocks from both the P2P network and the underlying DA network because the blocks are available in the P2P network faster and DA retrieval is slower (e.g., 1 second vs 15 seconds). The blocks retrieved from the P2P network are only marked as soft confirmed until the DA retrieval succeeds on those blocks and they are marked hard confirmed. The hard confirmations can be considered to have a higher level of finality.
+The block manager retrieves blocks from both the P2P network and the underlying DA network because blocks arrive over the P2P network faster than they can be retrieved from the DA layer (e.g., 1 second vs 15 seconds). Blocks retrieved from the P2P network are only marked as soft confirmed until DA retrieval succeeds for them, at which point they are marked DA included. DA included blocks can be considered to have a higher level of finality.

 ### State Update after Block Retrieval

@@ -145,7 +145,7 @@ The communication between the full node and block manager:

 * The default mode for sequencer nodes is normal (not lazy).
 * The sequencer can produce empty blocks.
 * The block manager uses persistent storage (disk) when the `root_dir` and `db_path` configuration parameters are specified in `config.toml` file under the app directory. If these configuration parameters are not specified, the in-memory storage is used, which will not be persistent if the node stops.
-* The block manager does not re-apply the block again (in other words, create a new updated state and persist it) when a block was initially applied using P2P block sync, but later was hard confirmed by DA retrieval. The block is only set hard confirmed in this case.
+* The block manager does not re-apply a block (in other words, create a new updated state and persist it) when the block was initially applied using P2P block sync and is later seen during DA retrieval. In this case, the block is only marked DA included.
 * The block sync store is created by prefixing `blockSync` on the main data store.
 * The genesis `ChainID` is used to create the `PubSubTopID` in go-header with the string `-block` appended to it.
This append is because the full node also has a P2P header sync running with a different P2P network. Refer to go-header specs for more details. * Block sync over the P2P network works only when a full node is connected to the P2P network by specifying the initial seeds to connect to via `P2PConfig.Seeds` configuration parameter when starting the full node. diff --git a/block/block_cache.go b/block/block_cache.go index 9f305e4ca..62f3f5ed9 100644 --- a/block/block_cache.go +++ b/block/block_cache.go @@ -8,19 +8,19 @@ import ( // BlockCache maintains blocks that are seen and hard confirmed type BlockCache struct { - blocks map[uint64]*types.Block - hashes map[string]bool - hardConfirmations map[string]bool - mtx *sync.RWMutex + blocks map[uint64]*types.Block + hashes map[string]bool + daIncluded map[string]bool + mtx *sync.RWMutex } // NewBlockCache returns a new BlockCache struct func NewBlockCache() *BlockCache { return &BlockCache{ - blocks: make(map[uint64]*types.Block), - hashes: make(map[string]bool), - hardConfirmations: make(map[string]bool), - mtx: new(sync.RWMutex), + blocks: make(map[uint64]*types.Block), + hashes: make(map[string]bool), + daIncluded: make(map[string]bool), + mtx: new(sync.RWMutex), } } @@ -55,14 +55,14 @@ func (bc *BlockCache) setSeen(hash string) { bc.hashes[hash] = true } -func (bc *BlockCache) isHardConfirmed(hash string) bool { +func (bc *BlockCache) isDAIncluded(hash string) bool { bc.mtx.RLock() defer bc.mtx.RUnlock() - return bc.hardConfirmations[hash] + return bc.daIncluded[hash] } -func (bc *BlockCache) setHardConfirmed(hash string) { +func (bc *BlockCache) setDAIncluded(hash string) { bc.mtx.Lock() defer bc.mtx.Unlock() - bc.hardConfirmations[hash] = true + bc.daIncluded[hash] = true } diff --git a/block/block_cache_test.go b/block/block_cache_test.go index 8b078b580..02baf4297 100644 --- a/block/block_cache_test.go +++ b/block/block_cache_test.go @@ -38,8 +38,8 @@ func TestBlockCache(t *testing.T) { bc.setSeen("hash") require.True(t, bc.isSeen("hash"), "isSeen should return true for seen hash") - // Test setHardConfirmed - require.False(t, bc.isHardConfirmed("hash"), "hardConfirmations should be false for unseen hash") - bc.setHardConfirmed("hash") - require.True(t, bc.isHardConfirmed("hash"), "hardConfirmations should be true for seen hash") + // Test setDAIncluded + require.False(t, bc.isDAIncluded("hash"), "DAIncluded should be false for unseen hash") + bc.setDAIncluded("hash") + require.True(t, bc.isDAIncluded("hash"), "DAIncluded should be true for seen hash") } diff --git a/block/manager.go b/block/manager.go index 04ffad68b..8df85cfe1 100644 --- a/block/manager.go +++ b/block/manager.go @@ -207,9 +207,8 @@ func (m *Manager) GetStoreHeight() uint64 { return m.store.Height() } -// GetHardConfirmation returns true if the block is hard confirmed -func (m *Manager) GetHardConfirmation(hash types.Hash) bool { - return m.blockCache.isHardConfirmed(hash.String()) +func (m *Manager) IsDAIncluded(hash types.Hash) bool { + return m.blockCache.isDAIncluded(hash.String()) } // AggregationLoop is responsible for aggregating transactions into rollup-blocks. 
@@ -508,8 +507,8 @@ func (m *Manager) processNextDABlock(ctx context.Context) error { m.logger.Debug("retrieved potential blocks", "n", len(blockResp.Blocks), "daHeight", daHeight) for _, block := range blockResp.Blocks { blockHash := block.Hash().String() - m.blockCache.setHardConfirmed(blockHash) - m.logger.Info("block marked as hard confirmed", "blockHeight", block.Height(), "blockHash", blockHash) + m.blockCache.setDAIncluded(blockHash) + m.logger.Info("block marked as DA included", "blockHeight", block.Height(), "blockHash", blockHash) if !m.blockCache.isSeen(blockHash) { m.blockInCh <- newBlockEvent{block, daHeight} } diff --git a/block/manager_test.go b/block/manager_test.go index 433c7ab62..3ad191d52 100644 --- a/block/manager_test.go +++ b/block/manager_test.go @@ -102,18 +102,17 @@ func getMockDALC(logger log.Logger) da.DataAvailabilityLayerClient { return dalc } -func TestGetHardConfirmation(t *testing.T) { +func TestIsDAIncluded(t *testing.T) { // Create a minimalistic block manager m := &Manager{ blockCache: NewBlockCache(), } hash := types.Hash([]byte("hash")) - // GetHardConfirmation should return false for unseen hash - require.False(t, m.GetHardConfirmation(hash)) + // IsDAIncluded should return false for unseen hash + require.False(t, m.IsDAIncluded(hash)) - // Set the hash as hard confirmed and verify GetHardConfirmation returns - // true - m.blockCache.setHardConfirmed(hash.String()) - require.True(t, m.GetHardConfirmation(hash)) + // Set the hash as DAIncluded and verify IsDAIncluded returns true + m.blockCache.setDAIncluded(hash.String()) + require.True(t, m.IsDAIncluded(hash)) } diff --git a/config/defaults.go b/config/defaults.go index 15d87947c..3fcb4333c 100644 --- a/config/defaults.go +++ b/config/defaults.go @@ -26,7 +26,7 @@ var DefaultNodeConfig = NodeConfig{ DABlockTime: 15 * time.Second, NamespaceID: types.NamespaceID{}, }, - DALayer: "mock", + DALayer: "newda", DAConfig: "", Light: false, HeaderConfig: HeaderConfig{ diff --git a/da/celestia/mock/server.go b/da/celestia/mock/server.go index 711a4a7c2..7890889a5 100644 --- a/da/celestia/mock/server.go +++ b/da/celestia/mock/server.go @@ -12,8 +12,8 @@ import ( mux2 "github.com/gorilla/mux" "github.com/rollkit/celestia-openrpc/types/blob" - "github.com/rollkit/celestia-openrpc/types/header" - mockda "github.com/rollkit/rollkit/da/mock" + "github.com/rollkit/go-da/test" + "github.com/rollkit/rollkit/da/newda" "github.com/rollkit/rollkit/store" "github.com/rollkit/rollkit/third_party/log" "github.com/rollkit/rollkit/types" @@ -53,7 +53,7 @@ type response struct { // Server mocks celestia-node HTTP API. type Server struct { - mock *mockda.DataAvailabilityLayerClient + mock *newda.NewDA blockTime time.Duration server *httptest.Server logger log.Logger @@ -62,7 +62,7 @@ type Server struct { // NewServer creates new instance of Server. 
 func NewServer(blockTime time.Duration, logger log.Logger) *Server {
 	return &Server{
-		mock:      new(mockda.DataAvailabilityLayerClient),
+		mock:      &newda.NewDA{DA: test.NewDummyDA()},
 		blockTime: blockTime,
 		logger:    logger,
 	}
 }
@@ -107,33 +107,6 @@ func (s *Server) rpc(w http.ResponseWriter, r *http.Request) {
 		return
 	}
 	switch req.Method {
-	case "header.GetByHeight":
-		var params []interface{}
-		err := json.Unmarshal(req.Params, &params)
-		if err != nil {
-			s.writeError(w, err)
-			return
-		}
-		if len(params) != 1 {
-			s.writeError(w, errors.New("expected 1 param: height (uint64)"))
-			return
-		}
-		height := uint64(params[0].(float64))
-		dah := s.mock.GetHeaderByHeight(height)
-		resp := &response{
-			Jsonrpc: "2.0",
-			Result: header.ExtendedHeader{
-				DAH: dah,
-			},
-			ID:    req.ID,
-			Error: nil,
-		}
-		bytes, err := json.Marshal(resp)
-		if err != nil {
-			s.writeError(w, err)
-			return
-		}
-		s.writeResponse(w, bytes)
 	case "blob.GetAll":
 		var params []interface{}
 		err := json.Unmarshal(req.Params, &params)
diff --git a/da/grpc/mockserv/mockserv.go b/da/grpc/mockserv/mockserv.go
index aff7b1801..96809935b 100644
--- a/da/grpc/mockserv/mockserv.go
+++ b/da/grpc/mockserv/mockserv.go
@@ -7,8 +7,9 @@ import (
 	ds "github.com/ipfs/go-datastore"
 	"google.golang.org/grpc"
 
+	"github.com/rollkit/go-da/test"
 	grpcda "github.com/rollkit/rollkit/da/grpc"
-	"github.com/rollkit/rollkit/da/mock"
+	"github.com/rollkit/rollkit/da/newda"
 	"github.com/rollkit/rollkit/types"
 	"github.com/rollkit/rollkit/types/pb/dalc"
 	"github.com/rollkit/rollkit/types/pb/rollkit"
@@ -17,7 +18,7 @@ import (
 // GetServer creates and returns gRPC server instance.
 func GetServer(kv ds.Datastore, conf grpcda.Config, mockConfig []byte, logger cmlog.Logger) *grpc.Server {
 	srv := grpc.NewServer()
-	mockImpl := &mockImpl{}
+	mockImpl := &mockImpl{mock: &newda.NewDA{DA: test.NewDummyDA()}}
 	err := mockImpl.mock.Init([8]byte{}, mockConfig, kv, logger)
 	if err != nil {
 		logger.Error("failed to initialize mock DALC", "error", err)
@@ -33,7 +34,7 @@ func GetServer(kv ds.Datastore, conf grpcda.Config, mockConfig []byte, logger cm
 }
 
 type mockImpl struct {
-	mock mock.DataAvailabilityLayerClient
+	mock *newda.NewDA
 }
 
 func (m *mockImpl) SubmitBlocks(ctx context.Context, request *dalc.SubmitBlocksRequest) (*dalc.SubmitBlocksResponse, error) {
diff --git a/da/newda/newda.go b/da/newda/newda.go
new file mode 100644
index 000000000..5ba44b95e
--- /dev/null
+++ b/da/newda/newda.go
@@ -0,0 +1,125 @@
+package newda
+
+import (
+	"context"
+	"encoding/binary"
+	"fmt"
+
+	pb "github.com/rollkit/rollkit/types/pb/rollkit"
+
+	"github.com/gogo/protobuf/proto"
+	ds "github.com/ipfs/go-datastore"
+
+	newda "github.com/rollkit/go-da"
+	"github.com/rollkit/rollkit/da"
+	"github.com/rollkit/rollkit/third_party/log"
+	"github.com/rollkit/rollkit/types"
+)
+
+// NewDA is a DA client that wraps the generic go-da DA interface.
+type NewDA struct {
+	DA     newda.DA
+	logger log.Logger
+}
+
+// Init is called once to allow DA client to read configuration and initialize resources.
+func (n *NewDA) Init(namespaceID types.NamespaceID, config []byte, kvStore ds.Datastore, logger log.Logger) error {
+	n.logger = logger
+	return nil
+}
+
+// Start is a no-op; the wrapped DA implementation is supplied at construction time.
+func (n *NewDA) Start() error {
+	return nil
+}
+
+// Stop is a no-op.
+func (n *NewDA) Stop() error {
+	return nil
+}
+
+// SubmitBlocks submits blocks to DA.
+func (n *NewDA) SubmitBlocks(ctx context.Context, blocks []*types.Block) da.ResultSubmitBlocks { + blobs := make([][]byte, len(blocks)) + for i := range blocks { + blob, err := blocks[i].MarshalBinary() + if err != nil { + return da.ResultSubmitBlocks{ + BaseResult: da.BaseResult{ + Code: da.StatusError, + Message: "failed to serialize block", + }, + } + } + blobs[i] = blob + } + ids, _, err := n.DA.Submit(blobs) + if err != nil { + return da.ResultSubmitBlocks{ + BaseResult: da.BaseResult{ + Code: da.StatusError, + Message: "failed to submit blocks: " + err.Error(), + }, + } + } + + return da.ResultSubmitBlocks{ + BaseResult: da.BaseResult{ + Code: da.StatusSuccess, + DAHeight: binary.LittleEndian.Uint64(ids[0]), + }, + } +} + +// RetrieveBlocks retrieves blocks from DA. +func (n *NewDA) RetrieveBlocks(ctx context.Context, dataLayerHeight uint64) da.ResultRetrieveBlocks { + ids, err := n.DA.GetIDs(dataLayerHeight) + if err != nil { + return da.ResultRetrieveBlocks{ + BaseResult: da.BaseResult{ + Code: da.StatusError, + Message: fmt.Sprintf("failed to get IDs: %s", err.Error()), + DAHeight: dataLayerHeight, + }, + } + } + + blobs, err := n.DA.Get(ids) + if err != nil { + return da.ResultRetrieveBlocks{ + BaseResult: da.BaseResult{ + Code: da.StatusError, + Message: fmt.Sprintf("failed to get blobs: %s", err.Error()), + DAHeight: dataLayerHeight, + }, + } + } + + blocks := make([]*types.Block, len(blobs)) + for i, blob := range blobs { + var block pb.Block + err = proto.Unmarshal(blob, &block) + if err != nil { + n.logger.Error("failed to unmarshal block", "daHeight", dataLayerHeight, "position", i, "error", err) + continue + } + blocks[i] = new(types.Block) + err := blocks[i].FromProto(&block) + if err != nil { + return da.ResultRetrieveBlocks{ + BaseResult: da.BaseResult{ + Code: da.StatusError, + Message: err.Error(), + }, + } + } + } + + return da.ResultRetrieveBlocks{ + BaseResult: da.BaseResult{ + Code: da.StatusSuccess, + DAHeight: dataLayerHeight, + }, + Blocks: blocks, + } +} diff --git a/da/registry/registry.go b/da/registry/registry.go index 4e9a1a92e..10f251103 100644 --- a/da/registry/registry.go +++ b/da/registry/registry.go @@ -3,10 +3,12 @@ package registry import ( "fmt" + "github.com/rollkit/go-da/test" + "github.com/rollkit/rollkit/da/newda" + "github.com/rollkit/rollkit/da" "github.com/rollkit/rollkit/da/celestia" "github.com/rollkit/rollkit/da/grpc" - "github.com/rollkit/rollkit/da/mock" ) // ErrAlreadyRegistered is used when user tries to register DA using a name already used in registry. @@ -20,9 +22,13 @@ func (e *ErrAlreadyRegistered) Error() string { // this is a central registry for all Data Availability Layer Clients var clients = map[string]func() da.DataAvailabilityLayerClient{ - "mock": func() da.DataAvailabilityLayerClient { return &mock.DataAvailabilityLayerClient{} }, "grpc": func() da.DataAvailabilityLayerClient { return &grpc.DataAvailabilityLayerClient{} }, "celestia": func() da.DataAvailabilityLayerClient { return &celestia.DataAvailabilityLayerClient{} }, + "newda": func() da.DataAvailabilityLayerClient { + return &newda.NewDA{ + DA: test.NewDummyDA(), + } + }, } // GetClient returns client identified by name. 
diff --git a/da/registry/registry_test.go b/da/registry/registry_test.go index cb059121d..d6df5baa3 100644 --- a/da/registry/registry_test.go +++ b/da/registry/registry_test.go @@ -5,18 +5,20 @@ import ( "github.com/stretchr/testify/assert" + "github.com/rollkit/go-da/test" "github.com/rollkit/rollkit/da" - "github.com/rollkit/rollkit/da/mock" + "github.com/rollkit/rollkit/da/newda" ) func TestRegistry(t *testing.T) { - expected := []string{"mock", "grpc", "celestia"} + expected := []string{"grpc", "celestia", "newda"} + actual := RegisteredClients() assert.ElementsMatch(t, expected, actual) constructor := func() da.DataAvailabilityLayerClient { - return &mock.DataAvailabilityLayerClient{} // cheating, only for tests :D + return &newda.NewDA{DA: test.NewDummyDA()} // cheating, only for tests :D } err := Register("testDA", constructor) assert.NoError(t, err) diff --git a/da/test/da_test.go b/da/test/da_test.go index 42dceffc6..33ca52756 100644 --- a/da/test/da_test.go +++ b/da/test/da_test.go @@ -22,7 +22,7 @@ import ( cmock "github.com/rollkit/rollkit/da/celestia/mock" grpcda "github.com/rollkit/rollkit/da/grpc" "github.com/rollkit/rollkit/da/grpc/mockserv" - "github.com/rollkit/rollkit/da/mock" + "github.com/rollkit/rollkit/da/newda" "github.com/rollkit/rollkit/da/registry" "github.com/rollkit/rollkit/store" test "github.com/rollkit/rollkit/test/log" @@ -70,7 +70,7 @@ func TestLifecycle(t *testing.T) { func doTestLifecycle(t *testing.T, dalc da.DataAvailabilityLayerClient) { conf := []byte{} - if _, ok := dalc.(*mock.DataAvailabilityLayerClient); ok { + if _, ok := dalc.(*newda.NewDA); ok { conf = []byte(mockDaBlockTime.String()) } if _, ok := dalc.(*celestia.DataAvailabilityLayerClient); ok { @@ -133,7 +133,7 @@ func doTestRetrieve(t *testing.T, dalc da.DataAvailabilityLayerClient) { // mock DALC will advance block height every 100ms conf := []byte{} - if _, ok := dalc.(*mock.DataAvailabilityLayerClient); ok { + if _, ok := dalc.(*newda.NewDA); ok { conf = []byte(mockDaBlockTime.String()) } if _, ok := dalc.(*celestia.DataAvailabilityLayerClient); ok { diff --git a/go.mod b/go.mod index 8f1335dad..bc9d056c5 100644 --- a/go.mod +++ b/go.mod @@ -24,6 +24,7 @@ require ( github.com/multiformats/go-multiaddr v0.12.0 github.com/prometheus/client_golang v1.17.0 github.com/rollkit/celestia-openrpc v0.3.0 + github.com/rollkit/go-da v0.0.0-20231024133951-57bc36006772 github.com/rs/cors v1.10.1 github.com/spf13/cobra v1.8.0 github.com/spf13/viper v1.17.0 diff --git a/go.sum b/go.sum index 56d513421..f17bed94e 100644 --- a/go.sum +++ b/go.sum @@ -1377,6 +1377,8 @@ github.com/rogpeppe/go-internal v1.8.1/go.mod h1:JeRgkft04UBgHMgCIwADu4Pn6Mtm5d4 github.com/rogpeppe/go-internal v1.11.0 h1:cWPaGQEPrBb5/AsnsZesgZZ9yb1OQ+GOISoDNXVBh4M= github.com/rollkit/celestia-openrpc v0.3.0 h1:jMLsdLNQ7T20yiNDlisBhlyurFOpN1gZ6vC068FPrQA= github.com/rollkit/celestia-openrpc v0.3.0/go.mod h1:2ZhU01YF2hsHIROWzxfMZOYM09Kgyy4roH5JWoNJzp0= +github.com/rollkit/go-da v0.0.0-20231024133951-57bc36006772 h1:0qbVvvxy++RIjwoI2GmqgZDNP5yShBMA+swWjKt7mQE= +github.com/rollkit/go-da v0.0.0-20231024133951-57bc36006772/go.mod h1:cy1LA9kCyCJHgszKkTh9hJn816l5Oa87GMA2c1imfqA= github.com/rs/cors v1.7.0/go.mod h1:gFx+x8UowdsKA9AchylcLynDq+nNFfI8FkUZdN/jGCU= github.com/rs/cors v1.8.2/go.mod h1:XyqrcTp5zjWr1wsJ8PIRZssZ8b/WMcMf71DJnit4EMU= github.com/rs/cors v1.10.1 h1:L0uuZVXIKlI1SShY2nhFfo44TYvDPQ1w4oFkUJNfhyo= diff --git a/node/full_client.go b/node/full_client.go index 9ccf8cd6a..5d2ae78be 100644 --- a/node/full_client.go +++ 
b/node/full_client.go @@ -803,6 +803,9 @@ func (c *FullClient) CheckTx(ctx context.Context, tx cmtypes.Tx) (*ctypes.Result // Header returns a cometbft ResultsHeader for the FullClient func (c *FullClient) Header(ctx context.Context, height *int64) (*ctypes.ResultHeader, error) { blockMeta := c.getBlockMeta(*height) + if blockMeta == nil { + return nil, fmt.Errorf("block at height %d not found", *height) + } return &ctypes.ResultHeader{Header: &blockMeta.Header}, nil } diff --git a/node/full_client_test.go b/node/full_client_test.go index aa91acde6..444a39949 100644 --- a/node/full_client_test.go +++ b/node/full_client_test.go @@ -78,7 +78,8 @@ func getRPC(t *testing.T) (*mocks.Application, *FullClient) { key, _, _ := crypto.GenerateEd25519Key(crand.Reader) signingKey, _, _ := crypto.GenerateEd25519Key(crand.Reader) ctx := context.Background() - node, err := newFullNode(ctx, config.NodeConfig{DALayer: "mock"}, key, signingKey, proxy.NewLocalClientCreator(app), &cmtypes.GenesisDoc{ChainID: "test"}, test.NewFileLogger(t)) + + node, err := newFullNode(ctx, config.NodeConfig{DALayer: "newda"}, key, signingKey, proxy.NewLocalClientCreator(app), &cmtypes.GenesisDoc{ChainID: "test"}, test.NewFileLogger(t)) require.NoError(t, err) require.NotNil(t, node) @@ -168,7 +169,7 @@ func TestGenesisChunked(t *testing.T) { mockApp.On(InitChain, mock.Anything).Return(abci.ResponseInitChain{}) privKey, _, _ := crypto.GenerateEd25519Key(crand.Reader) signingKey, _, _ := crypto.GenerateEd25519Key(crand.Reader) - n, _ := newFullNode(context.Background(), config.NodeConfig{DALayer: "mock"}, privKey, signingKey, proxy.NewLocalClientCreator(mockApp), genDoc, test.NewFileLogger(t)) + n, _ := newFullNode(context.Background(), config.NodeConfig{DALayer: "newda"}, privKey, signingKey, proxy.NewLocalClientCreator(mockApp), genDoc, test.NewFileLogger(t)) rpc := NewFullClient(n) @@ -457,7 +458,7 @@ func TestTx(t *testing.T) { ctx, cancel := context.WithCancel(context.Background()) defer cancel() node, err := newFullNode(ctx, config.NodeConfig{ - DALayer: "mock", + DALayer: "newda", Aggregator: true, BlockManagerConfig: config.BlockManagerConfig{ BlockTime: 1 * time.Second, // blocks must be at least 1 sec apart for adjacent headers to get verified correctly @@ -661,10 +662,28 @@ func TestBlockchainInfo(t *testing.T) { if test.err { require.Error(t, err) } else { + require.NoError(t, err) assert.Equal(t, result.LastHeight, heights[9]) assert.Contains(t, result.BlockMetas, test.exp[0]) assert.Contains(t, result.BlockMetas, test.exp[1]) + assert.Equal(t, result.BlockMetas[0].BlockID.Hash, test.exp[1].BlockID.Hash) + assert.Equal(t, result.BlockMetas[len(result.BlockMetas)-1].BlockID.Hash, test.exp[0].BlockID.Hash) + assert.Equal(t, result.BlockMetas[0].Header.Version.Block, test.exp[1].Header.Version.Block) + assert.Equal(t, result.BlockMetas[len(result.BlockMetas)-1].Header.Version.Block, test.exp[0].Header.Version.Block) + assert.Equal(t, result.BlockMetas[0].Header, test.exp[1].Header) + assert.Equal(t, result.BlockMetas[len(result.BlockMetas)-1].Header, test.exp[0].Header) + assert.Equal(t, result.BlockMetas[0].Header.DataHash, test.exp[1].Header.DataHash) + assert.Equal(t, result.BlockMetas[len(result.BlockMetas)-1].Header.DataHash, test.exp[0].Header.DataHash) + assert.Equal(t, result.BlockMetas[0].Header.LastCommitHash, test.exp[1].Header.LastCommitHash) + assert.Equal(t, result.BlockMetas[len(result.BlockMetas)-1].Header.LastCommitHash, test.exp[0].Header.LastCommitHash) + assert.Equal(t, 
result.BlockMetas[0].Header.EvidenceHash, test.exp[1].Header.EvidenceHash) + assert.Equal(t, result.BlockMetas[len(result.BlockMetas)-1].Header.AppHash, test.exp[0].Header.AppHash) + assert.Equal(t, result.BlockMetas[0].Header.AppHash, test.exp[1].Header.AppHash) + assert.Equal(t, result.BlockMetas[len(result.BlockMetas)-1].Header.ConsensusHash, test.exp[0].Header.ConsensusHash) + assert.Equal(t, result.BlockMetas[0].Header.ConsensusHash, test.exp[1].Header.ConsensusHash) + assert.Equal(t, result.BlockMetas[len(result.BlockMetas)-1].Header.ValidatorsHash, test.exp[0].Header.ValidatorsHash) + assert.Equal(t, result.BlockMetas[0].Header.NextValidatorsHash, test.exp[1].Header.NextValidatorsHash) } }) @@ -706,7 +725,7 @@ func createGenesisValidators(t *testing.T, numNodes int, appCreator func(t *test nodes[i], err = newFullNode( ctx, config.NodeConfig{ - DALayer: "mock", + DALayer: "newda", Aggregator: true, BlockManagerConfig: config.BlockManagerConfig{ BlockTime: 1 * time.Second, @@ -868,7 +887,7 @@ func TestMempool2Nodes(t *testing.T) { // make node1 an aggregator, so that node2 can start gracefully node1, err := newFullNode(ctx, config.NodeConfig{ Aggregator: true, - DALayer: "mock", + DALayer: "newda", P2P: config.P2PConfig{ ListenAddress: "/ip4/127.0.0.1/tcp/9001", }, @@ -880,7 +899,7 @@ func TestMempool2Nodes(t *testing.T) { require.NotNil(t, node1) node2, err := newFullNode(ctx, config.NodeConfig{ - DALayer: "mock", + DALayer: "newda", P2P: config.P2PConfig{ ListenAddress: "/ip4/127.0.0.1/tcp/9002", Seeds: "/ip4/127.0.0.1/tcp/9001/p2p/" + id1.Pretty(), @@ -960,7 +979,7 @@ func TestStatus(t *testing.T) { node, err := newFullNode( context.Background(), config.NodeConfig{ - DALayer: "mock", + DALayer: "newda", P2P: config.P2PConfig{ ListenAddress: "/ip4/0.0.0.0/tcp/26656", }, @@ -1072,7 +1091,7 @@ func TestFutureGenesisTime(t *testing.T) { ctx, cancel := context.WithCancel(context.Background()) defer cancel() node, err := newFullNode(ctx, config.NodeConfig{ - DALayer: "mock", + DALayer: "newda", Aggregator: true, BlockManagerConfig: config.BlockManagerConfig{ BlockTime: 200 * time.Millisecond, diff --git a/node/full_node_integration_test.go b/node/full_node_integration_test.go index 6adba8a87..491532d10 100644 --- a/node/full_node_integration_test.go +++ b/node/full_node_integration_test.go @@ -49,7 +49,8 @@ func TestAggregatorMode(t *testing.T) { } ctx, cancel := context.WithCancel(context.Background()) defer cancel() - node, err := newFullNode(ctx, config.NodeConfig{DALayer: "mock", Aggregator: true, BlockManagerConfig: blockManagerConfig}, key, signingKey, proxy.NewLocalClientCreator(app), &cmtypes.GenesisDoc{ChainID: "test", Validators: genesisValidators}, log.TestingLogger()) + + node, err := newFullNode(ctx, config.NodeConfig{DALayer: "newda", Aggregator: true, BlockManagerConfig: blockManagerConfig}, key, signingKey, proxy.NewLocalClientCreator(app), &cmtypes.GenesisDoc{ChainID: "test", Validators: genesisValidators}, log.TestingLogger()) require.NoError(t, err) require.NotNil(t, node) @@ -148,7 +149,7 @@ func TestLazyAggregator(t *testing.T) { ctx, cancel := context.WithCancel(context.Background()) defer cancel() node, err := NewNode(ctx, config.NodeConfig{ - DALayer: "mock", + DALayer: "newda", Aggregator: true, BlockManagerConfig: blockManagerConfig, LazyAggregator: true, @@ -196,7 +197,7 @@ func TestFastDASync(t *testing.T) { bmConfig.DABlockTime = 1 * time.Second // Set BlockTime to 2x DABlockTime to ensure that the aggregator node is // producing DA blocks faster than rollup 
blocks. This is to force the - // block syncing to align with hard confirmations. + // block syncing to align with DA inclusions. bmConfig.BlockTime = 2 * bmConfig.DABlockTime const numberOfBlocksToSyncTill = 5 @@ -262,12 +263,13 @@ func TestFastDASync(t *testing.T) { // Verify the nodes are synced require.NoError(t, verifyNodesSynced(node1, node2, Store)) - // Verify that the block we synced to is hard confirmed. This is to + // Verify that the block we synced to is DA included. This is to // ensure that the test is passing due to the DA syncing, since the P2P - // block sync will sync quickly but the block won't be hard confirmed. + // block sync will sync quickly but the block won't be DA included. block, err := node2.Store.LoadBlock(numberOfBlocksToSyncTill) + require.NoError(t, err) - require.True(t, node2.blockManager.GetHardConfirmation(block.Hash())) + require.True(t, node2.blockManager.IsDAIncluded(block.Hash())) } // TestSingleAggregatorTwoFullNodesBlockSyncSpeed tests the scenario where the chain's block time is much faster than the DA's block time. In this case, the full nodes should be able to use block sync to sync blocks much faster than syncing from the DA layer, and the test should conclude within block time @@ -626,7 +628,7 @@ func createNode(ctx context.Context, n int, aggregator bool, isLight bool, keys ctx, config.NodeConfig{ P2P: p2pConfig, - DALayer: "mock", + DALayer: "newda", Aggregator: aggregator, BlockManagerConfig: bmConfig, Light: isLight, diff --git a/node/node_test.go b/node/node_test.go index 1d45609dd..01a40525e 100644 --- a/node/node_test.go +++ b/node/node_test.go @@ -45,7 +45,7 @@ func initializeAndStartNode(ctx context.Context, t *testing.T, nodeType string) } func newTestNode(ctx context.Context, t *testing.T, nodeType string) (Node, error) { - config := config.NodeConfig{DALayer: "mock"} + config := config.NodeConfig{DALayer: "newda"} switch nodeType { case "light": config.Light = true diff --git a/rpc/json/helpers_test.go b/rpc/json/helpers_test.go index 69ff864ff..739c4d50e 100644 --- a/rpc/json/helpers_test.go +++ b/rpc/json/helpers_test.go @@ -61,7 +61,8 @@ func getRPC(t *testing.T) (*mocks.Application, rpcclient.Client) { genesisValidators := []cmtypes.GenesisValidator{ {Address: pubKey.Address(), PubKey: pubKey, Power: int64(100), Name: "gen #1"}, } - n, err := node.NewNode(context.Background(), config.NodeConfig{Aggregator: true, DALayer: "mock", BlockManagerConfig: config.BlockManagerConfig{BlockTime: 1 * time.Second}, Light: false}, key, signingKey, proxy.NewLocalClientCreator(app), &cmtypes.GenesisDoc{ChainID: "test", Validators: genesisValidators}, log.TestingLogger()) + + n, err := node.NewNode(context.Background(), config.NodeConfig{Aggregator: true, DALayer: "newda", BlockManagerConfig: config.BlockManagerConfig{BlockTime: 1 * time.Second}, Light: false}, key, signingKey, proxy.NewLocalClientCreator(app), &cmtypes.GenesisDoc{ChainID: "test", Validators: genesisValidators}, log.TestingLogger()) require.NoError(t, err) require.NotNil(t, n)
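The documentation hunk above keeps the documented DA retrieval rules: on `Error`, retry every 100 milliseconds up to 10 times, and leave `daHeight` untouched when all retries fail. A minimal, self-contained sketch of that documented behavior (the function name `retrieveWithRetry` and the `fetch` callback are illustrative assumptions, not the actual manager code):

```go
package example

import (
	"context"
	"time"
)

// retrieveWithRetry mirrors the retry behavior documented in block-manager.md:
// up to 10 attempts spaced 100 ms apart. On success the caller increments
// daHeight; if every attempt fails, daHeight stays put, intentionally stalling
// block retrieval. Names here are illustrative, not the actual manager code.
func retrieveWithRetry(ctx context.Context, daHeight uint64, fetch func(uint64) error) error {
	const (
		maxRetries = 10
		retryDelay = 100 * time.Millisecond
	)
	var err error
	for attempt := 0; attempt < maxRetries; attempt++ {
		if err = fetch(daHeight); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(retryDelay):
		}
	}
	return err // the caller logs this and does not increment daHeight
}
```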
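The integration test checks `node2.blockManager.IsDAIncluded(block.Hash())` after syncing. A sketch of how a caller might wait for this stronger, DA-included level of finality using the renamed API; the helper `waitForDAInclusion` and its polling interval are hypothetical, and it assumes the `store.Store.LoadBlock` and `block.Manager.IsDAIncluded` signatures as used in the tests above:

```go
package example

import (
	"context"
	"time"

	"github.com/rollkit/rollkit/block"
	"github.com/rollkit/rollkit/store"
)

// waitForDAInclusion polls until the block at the given height is marked DA
// included, i.e. it has reached the stronger finality level described in
// block-manager.md. This helper is hypothetical and not part of the change.
func waitForDAInclusion(ctx context.Context, m *block.Manager, s store.Store, height uint64) error {
	ticker := time.NewTicker(100 * time.Millisecond)
	defer ticker.Stop()
	for {
		// LoadBlock fails until the block has been synced (via P2P or DA);
		// IsDAIncluded flips to true only once the block is seen on the DA layer.
		if b, err := s.LoadBlock(height); err == nil && m.IsDAIncluded(b.Hash()) {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}
```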
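Since the `mock` client is removed from the registry, out-of-tree code that wants an in-memory DA layer can register its own constructor, mirroring how the updated registry wires `newda` to go-da's `DummyDA`. A sketch under that assumption (the registry name `dummy` is illustrative):

```go
package example

import (
	"github.com/rollkit/go-da/test"
	"github.com/rollkit/rollkit/da"
	"github.com/rollkit/rollkit/da/newda"
	"github.com/rollkit/rollkit/da/registry"
)

// RegisterDummyDA adds one more DA client to the registry, backed by the
// in-memory DummyDA from go-da, using the same wiring as the "newda" entry.
func RegisterDummyDA() error {
	return registry.Register("dummy", func() da.DataAvailabilityLayerClient {
		return &newda.NewDA{DA: test.NewDummyDA()}
	})
}
```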