Extract EVM state #683

Open
wants to merge 15 commits into base: feature/local-tx-reexecution

Conversation

zhangchiqing (Member) commented Nov 28, 2024

This PR implements basic EVM state extraction.

It depends on another PR being merged first.

Summary by CodeRabbit

Release Notes

  • New Features

    • Introduced a new API method, debug_flowHeightByBlock, for retrieving the Flow block height.
    • Enhanced DebugAPI for improved transaction and block tracing capabilities.
    • Added RegisterStorage for managing register values with new snapshot functionality.
    • Implemented BlockExecutor for executing transactions in a blockchain context.
  • Bug Fixes

    • Improved error handling and logging for tracing and event subscription processes.
  • Documentation

    • Updated README to reflect new API methods and configuration flags.
  • Tests

    • Added comprehensive tests for new features, including transaction tracing and state overrides.
    • Enhanced existing tests for better coverage and accuracy.
  • Chores

    • Updated dependencies for improved stability and performance.

coderabbitai bot (Contributor) commented Nov 28, 2024

Walkthrough

This pull request makes significant modifications across the codebase, primarily to the Makefile, README.md, and several API files. Key changes include removing specific mock-generation commands from the Makefile, documenting a new API method in the README.md, and extensive updates to api.go and debug.go to enhance transaction and block tracing. Several files have also been added or removed as part of a restructuring aimed at improving clarity, maintainability, and functionality.

Changes

File Path | Change Summary
Makefile | Removed commands for generating mocks for AccountIndexer and all mocks in services/traces.
README.md | Added new API method debug_flowHeightByBlock and expanded sections on unsupported APIs and configuration flags.
api/api.go | Removed accounts field from BlockChainAPI, refactored block number handling, updated method signatures to use ethTypes package types, and added BlockGasLimit constant.
api/debug.go | Enhanced DebugAPI with new fields and methods for transaction and block tracing, including TraceTransaction, TraceCall, and methods for handling execution context.
api/encode_transaction.go | Removed file containing the encodeTxFromArgs function.
api/pull.go | Updated getTransactions method to use []*ethTypes.Transaction instead of []*Transaction.
api/stream.go | Updated prepareBlockHeader method to return *ethTypes.BlockHeader.
api/utils.go | Introduced utility functions for block resolution and error handling.
api/wallet.go | Updated SignTransaction and SendTransaction methods to use ethTypes.TransactionArgs and ethTypes.SignTransactionResult.
bootstrap/bootstrap.go | Added Registers field to Storages struct and updated several methods to accommodate new storage structure.
cmd/run/cmd.go | Replaced Run function with RunE for better error handling and control flow.
cmd/util/main.go | Introduced command-line utility for exporting EVM state with added error handling.
config/config.go | Removed fields related to transaction tracing and heartbeat intervals.
eth/types/types.go | Changed package name from api to types and updated several structs and methods related to transaction management.
services/evm/executor.go | Introduced BlockExecutor struct for executing transactions with robust error handling.
services/evm/extract.go | Added ExtractEVMState function for managing EVM state extraction.
services/ingestion/engine.go | Updated Engine struct to include new fields and methods for event ingestion.
services/requester/cross-spork_client.go | Enhanced sporkClient with rate limiting and improved error handling for height range queries.
storage/index.go | Removed AccountIndexer interface and its methods.
storage/pebble/register_storage.go | Introduced RegisterStorage type for managing register values.
storage/register_delta.go | Added RegisterDelta struct for managing temporary register state changes.
tests/e2e_web3js_test.go | Added new end-to-end tests for various functionalities.
tests/web3js/debug_traces_test.js | Introduced tests for debugging transaction and call traces.
tests/web3js/eth_get_storage_at_test.js | Added tests for retrieving storage slots from deployed contracts.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant API
    participant Storage
    participant EVM

    User->>API: Request to execute transaction
    API->>EVM: Execute transaction with state overrides
    EVM->>Storage: Store transaction data
    Storage-->>EVM: Confirm storage success
    EVM-->>API: Return transaction receipt
    API-->>User: Return transaction result

Possibly related PRs

Suggested labels

Improvement, EVM, Bugfix, Documentation

Suggested reviewers

  • peterargue
  • m-Peter
  • ramtinms
  • janezpodhostnik

Poem

🐇 In the meadow, changes bloom,
Mocks removed, we clear the room.
API grows with new delight,
Tracing blocks in day and night.
With every hop, our code refines,
In harmony, our project shines! 🌼

Warning

There were issues while running some tools. Please review the errors and either fix the tool’s configuration or disable the tool if it’s a critical failure.

🔧 golangci-lint (1.62.2)

level=warning msg="[runner] Can't run linter goanalysis_metalinter: buildir: failed to load package evm: could not load export data: no export data for "github.com/onflow/flow-evm-gateway/services/evm""
level=error msg="Running error: can't run linter goanalysis_metalinter\nbuildir: failed to load package evm: could not load export data: no export data for "github.com/onflow/flow-evm-gateway/services/evm""


Thank you for using CodeRabbit. We offer it for free to the OSS community and would appreciate your support in helping us grow. If you find it useful, would you consider giving us a shout-out on your favorite social media?

🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Generate unit testing code for this file.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai generate unit testing code for this file.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and generate unit testing code.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

@zhangchiqing zhangchiqing changed the base branch from main to feature/local-tx-reexecution November 28, 2024 00:05
coderabbitai bot (Contributor) left a comment


Caution

Inline review comments failed to post. This is likely due to GitHub's limits when posting large numbers of comments.

🛑 Comments failed to post (49)
services/replayer/blocks_provider.go (1)

66-73: ⚠️ Potential issue

Move the %w verb to the end of the fmt.Errorf format string

The format string currently starts with %w, placing the wrapped error before the contextual values. Since Go 1.13 the %w verb may appear anywhere in the format string, so this does wrap models.ErrInvalidHeight correctly and will not cause a compilation error; the change below is a readability improvement that follows the common convention of adding context first and appending the wrapped error at the end.

Apply this diff to fix the error wrapping:

 return fmt.Errorf(
-    "%w: received new block: %d, non-sequential of latest block: %d",
+    "received new block: %d, non-sequential of latest block: %d: %w",
     models.ErrInvalidHeight,
     block.Height,
     bp.latestBlock.Height,
 )

This keeps models.ErrInvalidHeight wrapped while presenting the contextual values first in the error message.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

	if bp.latestBlock != nil && bp.latestBlock.Height != (block.Height-1) {
		return fmt.Errorf(
			"received new block: %d, non-sequential of latest block: %d: %w",
			models.ErrInvalidHeight,
			block.Height,
			bp.latestBlock.Height,
		)
	}
storage/register_delta.go (1)

57-63: ⚠️ Potential issue

Possible Incorrect Behavior in ValueExists with Empty Values

The ValueExists method checks len(value) > 0 to determine if a value exists. If a key exists with an empty value (zero-length byte slice), this method will return false, which may not be the intended behavior. Consider modifying the implementation to check for the existence of the key regardless of the value's length.

Apply this diff to correct the implementation:

 func (r *RegisterDelta) ValueExists(owner []byte, key []byte) (bool, error) {
-    value, err := r.GetValue(owner, key)
+    _, err := r.GetValue(owner, key)
     if err != nil {
+        if errors.Is(err, flow.ErrNotFound) {
+            return false, nil
+        }
         return false, err
     }
-    return len(value) > 0, nil
+    return true, nil
 }

Committable suggestion skipped: line range outside the PR's diff.

storage/pebble/storage_test.go (3)

24-32: ⚠️ Potential issue

Ensure Batch is Properly Closed to Prevent Resource Leaks

In the store block test, the batch created with db.NewBatch() is not closed after committing. According to Pebble's documentation, batches should be closed after use to release resources.

Apply this diff to close the batch:

 err = batch.Commit(pebble.Sync)
 require.NoError(t, err)
+err = batch.Close()
+require.NoError(t, err)

Alternatively, you can ensure the batch is closed by adding a defer statement:

 batch := db.NewBatch()
+defer func() {
+    require.NoError(t, batch.Close())
+}()
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

		batch := db.NewBatch()

		err := blocks.InitHeights(config.EmulatorInitCadenceHeight, flow.Identifier{0x1}, batch)
		require.NoError(t, err)

		err = blocks.Store(20, flow.Identifier{0x1}, bl, batch)
		require.NoError(t, err)

		err = batch.Commit(pebble.Sync)
		require.NoError(t, err)
		err = batch.Close()
		require.NoError(t, err)

43-50: ⚠️ Potential issue

Ensure Batch is Properly Closed to Prevent Resource Leaks

In the get stored block test, the batch is created and committed but not closed. To prevent resource leaks, remember to close the batch after use.

Apply this diff to close the batch:

 err = batch.Commit(pebble.Sync)
 require.NoError(t, err)
+err = batch.Close()
+require.NoError(t, err)

Or use a defer statement to close the batch:

 batch := db.NewBatch()
+defer func() {
+    require.NoError(t, batch.Close())
+}()
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

		batch := db.NewBatch()
		err := blocks.InitHeights(config.EmulatorInitCadenceHeight, flow.Identifier{0x1}, batch)
		require.NoError(t, err)

		err = blocks.Store(cadenceHeight, cadenceID, bl, batch)
		require.NoError(t, err)

		err = batch.Commit(pebble.Sync)
		require.NoError(t, err)
		err = batch.Close()
		require.NoError(t, err)

76-82: ⚠️ Potential issue

Ensure Batch is Properly Closed to Prevent Resource Leaks

In the get not found block error test, the batch is not closed after committing. Closing the batch is essential to release resources and prevent potential memory leaks.

Apply this diff to close the batch:

 err = batch.Commit(pebble.Sync)
 require.NoError(t, err)
+err = batch.Close()
+require.NoError(t, err)

Alternatively, add a defer statement to close the batch:

 batch := db.NewBatch()
+defer func() {
+    require.NoError(t, batch.Close())
+}()
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

		batch := db.NewBatch()
		err := blocks.InitHeights(config.EmulatorInitCadenceHeight, flow.Identifier{0x1}, batch)
		require.NoError(t, err)
		err = blocks.Store(2, flow.Identifier{0x1}, mocks.NewBlock(1), batch) // init
		require.NoError(t, err)

		err = batch.Commit(pebble.Sync)
		require.NoError(t, err)
		err = batch.Close()
		require.NoError(t, err)
api/utils.go (2)

98-105: 🛠️ Refactor suggestion

Enforce exact hash length in decodeHash

Accepting hashes shorter than 32 bytes by padding them with zeros may lead to unintended behavior. To ensure data integrity, enforce that the input hash has exactly 32 bytes after decoding.

Apply this diff to enforce exact hash length:

     if len(b) > common.HashLength {
         return common.Hash{}, fmt.Errorf(
             "hex string too long, want at most 32 bytes, have %d bytes",
             len(b),
         )
+    } else if len(b) < common.HashLength {
+        return common.Hash{}, fmt.Errorf(
+            "hex string too short, want exactly 32 bytes, have %d bytes",
+            len(b),
+        )
     }
     return common.BytesToHash(b), nil
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

	if len(b) > common.HashLength {
		return common.Hash{}, fmt.Errorf(
			"hex string too long, want at most 32 bytes, have %d bytes",
			len(b),
		)
	} else if len(b) < common.HashLength {
		return common.Hash{}, fmt.Errorf(
			"hex string too short, want exactly 32 bytes, have %d bytes",
			len(b),
		)
	}
	return common.BytesToHash(b), nil
}

63-82: ⚠️ Potential issue

Potential negative height conversion to uint64 in resolveBlockNumber

Converting a negative height value to uint64 may cause unexpected underflow, resulting in extremely large numbers. Ensure that height is non-negative before converting it to uint64.

Apply this diff to enforce non-negative height:

     if height < 0 {
         executed, err := blocksDB.LatestEVMHeight()
         if err != nil {
             return 0, err
         }
         height = int64(executed)
+    }
+
+    if height < 0 {
+        return 0, fmt.Errorf("resolved block height cannot be negative")
     }

     return uint64(height), nil
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

	height := number.Int64()

	// if special values (latest) we return latest executed height
	//
	// all the special values are:
	//	SafeBlockNumber      = BlockNumber(-4)
	//	FinalizedBlockNumber = BlockNumber(-3)
	//	LatestBlockNumber    = BlockNumber(-2)
	//	PendingBlockNumber   = BlockNumber(-1)
	//
	// EVM on Flow does not have these concepts, but the latest block is the closest fit
	if height < 0 {
		executed, err := blocksDB.LatestEVMHeight()
		if err != nil {
			return 0, err
		}
		height = int64(executed)
	}

	if height < 0 {
		return 0, fmt.Errorf("resolved block height cannot be negative")
	}

	return uint64(height), nil
services/replayer/call_tracer_collector.go (3)

76-82: ⚠️ Potential issue

Potential data race on resultsByTxID map access

Access to the resultsByTxID map is not synchronized, which could lead to race conditions if Collect and OnTxEnd methods are called concurrently. Consider adding synchronization, such as a mutex, to protect access to resultsByTxID.

Apply this diff to introduce synchronization:

+ import (
+     "sync"
+ )

+ var (
+     resultsMutex sync.Mutex
+ )

// In Collect method
func (ct *CallTracerCollector) Collect(txID common.Hash) (json.RawMessage, error) {
+   resultsMutex.Lock()
    result, found := ct.resultsByTxID[txID]
    if !found {
+       resultsMutex.Unlock()
        return nil, fmt.Errorf("trace result for tx: %s, not found", txID.String())
    }
    // remove the result
    delete(ct.resultsByTxID, txID)
+   resultsMutex.Unlock()
    return result, nil
}

// In OnTxEnd method
func (wrapped *Tracer) OnTxEnd(receipt *types.Receipt, err error) {
    // ...
+   resultsMutex.Lock()
    ct.resultsByTxID[receipt.TxHash] = res
+   resultsMutex.Unlock()
    // ...
}

Also applies to: 134-136


67-71: ⚠️ Potential issue

Potential data race when reassigning t.tracer in ResetTracer

Reassigning t.tracer in ResetTracer without synchronization might lead to race conditions if t.tracer is accessed concurrently in other methods. Consider using synchronization mechanisms, such as a mutex, to protect access to t.tracer.

Apply this diff to synchronize access to t.tracer:

+ import (
+     "sync"
+ )

+ var (
+     tracerMutex sync.RWMutex
+ )

func (t *CallTracerCollector) ResetTracer() error {
    var err error
+   tracerMutex.Lock()
    t.tracer, err = DefaultCallTracer()
+   tracerMutex.Unlock()
    return err
}

func NewSafeTxTracer(ct *CallTracerCollector) *tracers.Tracer {
    // ...
    wrapped.OnTxStart = func(
        vm *tracing.VMContext,
        tx *types.Transaction,
        from common.Address,
    ) {
        defer func() {
            // existing recover logic
        }()
+       tracerMutex.RLock()
        if ct.tracer.OnTxStart != nil {
            ct.tracer.OnTxStart(vm, tx, from)
        }
+       tracerMutex.RUnlock()
    }
    // Repeat similar locking for other methods accessing ct.tracer
}

Also applies to: 90-92, 110-112, 125-127, 161-163, 176-178, 191-193


130-141: 🛠️ Refactor suggestion

Ensure tracer is reset even if GetResult fails in OnTxEnd

If ct.tracer.GetResult() fails, the tracer is not reset, which might leave it in an inconsistent state. Consider resetting the tracer even when an error occurs to ensure a fresh state for subsequent traces.

Apply this change:

	res, err := ct.tracer.GetResult()
	if err != nil {
		l.Error().Err(err).Msg("failed to produce trace results")
+       // Reset tracer even if GetResult fails
+       if resetErr := ct.ResetTracer(); resetErr != nil {
+           l.Error().Err(resetErr).Msg("failed to reset tracer after GetResult error")
+       }
		return
	}
	ct.resultsByTxID[receipt.TxHash] = res

	// reset tracing to have fresh state
	if err := ct.ResetTracer(); err != nil {
		l.Error().Err(err).Msg("failed to reset tracer")
		return
	}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

		if err != nil {
			l.Error().Err(err).Msg("failed to produce trace results")
			// Reset tracer even if GetResult fails
			if resetErr := ct.ResetTracer(); resetErr != nil {
				l.Error().Err(resetErr).Msg("failed to reset tracer after GetResult error")
			}
			return
		}
		ct.resultsByTxID[receipt.TxHash] = res

		// reset tracing to have fresh state
		if err := ct.ResetTracer(); err != nil {
			l.Error().Err(err).Msg("failed to reset tracer")
			return
		}
	}
services/requester/cross-spork_client.go (2)

60-61: 🛠️ Refactor suggestion

Make the rate limit configurable

There's a TODO comment indicating that the rate limit is hardcoded. To enhance flexibility and adaptability to different environments or load conditions, consider making the rate limit configurable.

Here's how you might modify the code:

func (s *sporkClients) add(logger zerolog.Logger, client access.Client, rateLimit int) error {
	// Existing code...
	*s = append(*s, &sporkClient{
		firstHeight: info.NodeRootBlockHeight,
		lastHeight:  header.Height,
		client:      client,
-		// TODO (JanezP): Make this configurable
-		getEventsForHeightRangeLimiter: ratelimit.New(100, ratelimit.WithoutSlack),
+		getEventsForHeightRangeLimiter: ratelimit.New(rateLimit, ratelimit.WithoutSlack),
	})
	// Existing code...
}

Adjust the calling function to pass the desired rateLimit value based on configuration or environment variables.

Committable suggestion skipped: line range outside the PR's diff.


229-245: ⚠️ Potential issue

Rate limiting is not applied in CrossSporkClient's GetEventsForHeightRange

The GetEventsForHeightRange method in CrossSporkClient calls GetEventsForHeightRange on the access.Client interface, which does not include the rate limiting implemented in sporkClient. This oversight may lead to unthrottled requests and potential rate limit violations.

To ensure rate limiting is properly applied, consider modifying the getClientForHeight method to return a sporkClient rather than an access.Client. This allows access to the rate-limited method. Here's the proposed change:

- func (s *sporkClients) get(height uint64) access.Client {
+ func (s *sporkClients) get(height uint64) *sporkClient {
	for _, spork := range *s {
		if spork.contains(height) {
			return spork
		}
	}
	return nil
}

- func (c *CrossSporkClient) getClientForHeight(height uint64) (access.Client, error) {
+ func (c *CrossSporkClient) getClientForHeight(height uint64) (*sporkClient, error) {
	if !c.IsPastSpork(height) {
+		// Wrap the current client in a sporkClient to apply rate limiting
+		return &sporkClient{
+			firstHeight: c.currentSporkFirstHeight,
+			lastHeight:  math.MaxUint64, // Assuming the current spork extends indefinitely
+			client:      c.Client,
+			getEventsForHeightRangeLimiter: ratelimit.New(100, ratelimit.WithoutSlack), // Make configurable as per previous suggestion
+		}, nil
	}

	client := c.sporkClients.get(height)
	if client == nil {
		return nil, errs.NewHeightOutOfRangeError(height)
	}

	// Logging remains the same
	return client, nil
}

Update the GetEventsForHeightRange method to use the returned sporkClient:

func (c *CrossSporkClient) GetEventsForHeightRange(
	ctx context.Context, eventType string, startHeight uint64, endHeight uint64,
) ([]flow.BlockEvents, error) {
	client, err := c.getClientForHeight(startHeight)
	if err != nil {
		return nil, err
	}
	endClient, err := c.getClientForHeight(endHeight)
	if err != nil {
		return nil, err
	}
	// Compare spork clients directly
	if endClient != client {
		return nil, fmt.Errorf(
			"invalid height range, end height %d is not in the same spork as start height %d",
			endHeight, startHeight,
		)
	}
	return client.GetEventsForHeightRange(ctx, eventType, startHeight, endHeight)
}

This ensures that rate limiting is consistently applied across all spork clients.

Committable suggestion skipped: line range outside the PR's diff.

models/events.go (3)

39-44: ⚠️ Potential issue

Ensure proper initialization and nil checks for new struct fields

With the addition of blockEventPayload and txEventPayloads to the CadenceEvents struct, please ensure these fields are properly initialized and that nil checks are in place where they are used to prevent potential nil pointer dereferences.


118-124: ⚠️ Potential issue

Add nil check for txEventPayload before dereferencing

In decodeCadenceEvents, txEventPayload is dereferenced with *txEventPayload in append(e.txEventPayloads, *txEventPayload) without a nil check. There's a risk of a nil pointer dereference if decodeTransactionEvent returns a nil txEventPayload. Please add a nil check before dereferencing or ensure decodeTransactionEvent always returns a non-nil value when err is nil.
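
A minimal sketch of the guard, using a placeholder payload type since the real decodeTransactionEvent and payload types live in the gateway and flow-go codebases:

package models

import "fmt"

// transactionEventPayload stands in for the real payload type; the point is
// only the nil guard before dereferencing.
type transactionEventPayload struct{}

func appendTxPayload(
	payloads []transactionEventPayload,
	decoded *transactionEventPayload,
) ([]transactionEventPayload, error) {
	// Guard against a nil payload returned alongside a nil error.
	if decoded == nil {
		return nil, fmt.Errorf("decoded transaction event payload is nil")
	}
	return append(payloads, *decoded), nil
}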


175-177: 🛠️ Refactor suggestion

Return empty slice instead of nil in TxEventPayloads

In the TxEventPayloads method, consider returning an empty slice []events.TransactionEventPayload{} instead of nil when there are no transaction events. This prevents potential nil pointer exceptions when iterating over the returned slice.
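
A minimal sketch of the suggested accessor, again with a placeholder type standing in for events.TransactionEventPayload:

package models

// TransactionEventPayload is a placeholder for the real payload type.
type TransactionEventPayload struct{}

type CadenceEvents struct {
	txEventPayloads []TransactionEventPayload
}

// TxEventPayloads returns a non-nil (possibly empty) slice so callers can
// range over the result without a nil check.
func (e *CadenceEvents) TxEventPayloads() []TransactionEventPayload {
	if e.txEventPayloads == nil {
		return []TransactionEventPayload{}
	}
	return e.txEventPayloads
}
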

cmd/run/cmd.go (1)

51-63: ⚠️ Potential issue

Prevent Panic by Ensuring 'ready' Channel Is Closed Only Once

Closing an already closed channel will cause a panic. In the current code, the ready channel is closed both in the deferred call defer close(ready) and within the function passed to bootstrap.Run. This could lead to a panic if both attempts to close ready are executed.

To fix this issue, use a sync.Once to ensure that ready is closed exactly once:

+			var readyOnce sync.Once
-			defer close(ready)
+
			err := bootstrap.Run(
				ctx,
				cfg,
+				func() {
+					readyOnce.Do(func() { close(ready) })
+				},
-				func() {
-					close(ready)
-				},
			)
+			if err != nil && !errors.Is(err, context.Canceled) {
+				readyOnce.Do(func() { close(ready) })
				log.Err(err).Msg("Gateway runtime error")
			}
+			readyOnce.Do(func() { close(ready) })

By using readyOnce, you ensure that ready is closed exactly once, regardless of how the code execution flows. This prevents any potential panic due to closing a closed channel.

Committable suggestion skipped: line range outside the PR's diff.

services/ingestion/engine.go (1)

251-251: ⚠️ Potential issue

Fix assignment mismatch: replayer.ReplayBlock returns 2 values

The function replayer.ReplayBlock returns only 2 values, but the code is attempting to assign 3 variables. This will cause a compile-time error.

Apply this diff to correct the assignment:

-res, _, err := replayer.ReplayBlock(events.TxEventPayloads(), blockEvents)
+res, err := replayer.ReplayBlock(events.TxEventPayloads(), blockEvents)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

	res, err := replayer.ReplayBlock(events.TxEventPayloads(), blockEvents)
🧰 Tools
🪛 GitHub Check: Lint

[failure] 251-251:
assignment mismatch: 3 variables but replayer.ReplayBlock returns 2 values) (typecheck)


[failure] 251-251:
assignment mismatch: 3 variables but replayer.ReplayBlock returns 2 values (typecheck)

api/debug.go (2)

264-268: 🛠️ Refactor suggestion

Default from address in TraceCall should be zero address

In the TraceCall method, the from address defaults to d.config.Coinbase if not provided. To align with Ethereum JSON-RPC standard behavior, consider defaulting to the zero address instead.

Apply this diff to update the default from address:

-	from := d.config.Coinbase
+	from := gethCommon.Address{}
	if args.From != nil {
		from = *args.From
	}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

	// Default address in case user does not provide one
	from := gethCommon.Address{}
	if args.From != nil {
		from = *args.From
	}

440-445: ⚠️ Potential issue

Prevent potential nil pointer dereference in isDefaultCallTracer

In the isDefaultCallTracer function, dereferencing config.Tracer without checking if it's nil may cause a panic. Ensure you check for nil before dereferencing.

Apply this diff to prevent a possible nil pointer dereference:

	if config.Tracer == nil {
		return false
	}

	if *config.Tracer != replayer.TracerName {
		return false
	}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

	if config.Tracer == nil {
		return false
	}

	if *config.Tracer != replayer.TracerName {
		return false
	}

	tracerConfig := json.RawMessage(replayer.TracerConfig)
	return slices.Equal(config.TracerConfig, tracerConfig)
bootstrap/bootstrap.go (2)

500-509: ⚠️ Potential issue

Correct the spelling of evmBlokcHeight to evmBlockHeight

The variable evmBlokcHeight appears to be misspelled. It should be evmBlockHeight for consistency and readability.

Apply this diff to correct the spelling:

-evmBlokcHeight := uint64(0)
+evmBlockHeight := uint64(0)

-cadenceBlock, err := client.GetBlockHeaderByHeight(context.Background(), cadenceHeight)
+cadenceBlock, err := client.GetBlockHeaderByHeight(context.Background(), cadenceHeight)

-snapshot, err := registerStore.GetSnapshotAt(evmBlokcHeight)
+snapshot, err := registerStore.GetSnapshotAt(evmBlockHeight)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

		evmBlockHeight := uint64(0)
		cadenceBlock, err := client.GetBlockHeaderByHeight(context.Background(), cadenceHeight)
		if err != nil {
			return nil, fmt.Errorf("could not fetch provided cadence height, make sure it's correct: %w", err)
		}

		snapshot, err := registerStore.GetSnapshotAt(evmBlockHeight)
		if err != nil {
			return nil, fmt.Errorf("could not get register snapshot at block height %d: %w", 0, err)
		}

390-398: ⚠️ Potential issue

Ensure all storage components are properly closed in StopDB()

Currently, the StopDB method only closes b.storages.Storage. If b.storages.Registers or other storages require closing, ensure they are also properly closed to prevent resource leaks.

Apply this diff to close the Registers storage:

func (b *Bootstrap) StopDB() {
	if b.storages == nil || b.storages.Storage == nil {
		return
	}
	err := b.storages.Storage.Close()
	if err != nil {
		b.logger.Err(err).Msg("PebbleDB graceful shutdown failed")
	}
+	if b.storages.Registers != nil {
+		err := b.storages.Registers.Close()
+		if err != nil {
+			b.logger.Err(err).Msg("Registers storage graceful shutdown failed")
+		}
+	}
}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

func (b *Bootstrap) StopDB() {
	if b.storages == nil || b.storages.Storage == nil {
		return
	}
	err := b.storages.Storage.Close()
	if err != nil {
		b.logger.Err(err).Msg("PebbleDB graceful shutdown failed")
	}
	if b.storages.Registers != nil {
		err := b.storages.Registers.Close()
		if err != nil {
			b.logger.Err(err).Msg("Registers storage graceful shutdown failed")
		}
	}
}
services/ingestion/event_subscriber.go (3)

40-42: ⚠️ Potential issue

Potential data races on recovery and recoveredEvents fields

The recovery flag and recoveredEvents slice are accessed and modified across multiple goroutines without synchronization. This can lead to data races and unpredictable behavior.

To ensure thread safety, consider protecting these fields with a mutex or other synchronization mechanisms. For example:

type RPCEventSubscriber struct {
    // ...

    recoveryMu      sync.Mutex
    recovery        bool
    recoveredEvents []flow.Event
}

// When accessing or modifying `recovery` and `recoveredEvents`, use the mutex:
func (r *RPCEventSubscriber) someMethod() {
    r.recoveryMu.Lock()
    defer r.recoveryMu.Unlock()
    // Access or modify r.recovery and r.recoveredEvents
}

Also applies to: 165-170


282-284: ⚠️ Potential issue

Handle mismatched lengths of transaction and block events

The code assumes that the lengths of the transactions and blocks slices are equal. If this assumption fails due to missing events or data inconsistencies, it could cause the backfill process to fail.

Consider implementing logic to handle cases where the lengths differ. You might align the events based on their block heights or handle missing events gracefully to ensure robustness.
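
One way to make the pairing explicit is to index the transaction events by block height, so a missing entry surfaces as an error instead of an index mismatch. A sketch using the flow-go-sdk BlockEvents type; the function name and surrounding wiring are assumptions:

package ingestion

import (
	"fmt"

	"github.com/onflow/flow-go-sdk"
)

// pairEventsByHeight matches block events with transaction events by block
// height instead of assuming the two slices have equal length and order.
func pairEventsByHeight(blocks, transactions []flow.BlockEvents) (map[uint64][2]flow.BlockEvents, error) {
	txByHeight := make(map[uint64]flow.BlockEvents, len(transactions))
	for _, tx := range transactions {
		txByHeight[tx.Height] = tx
	}

	paired := make(map[uint64][2]flow.BlockEvents, len(blocks))
	for _, b := range blocks {
		tx, ok := txByHeight[b.Height]
		if !ok {
			return nil, fmt.Errorf("no transaction events found for block height %d", b.Height)
		}
		paired[b.Height] = [2]flow.BlockEvents{b, tx}
	}
	return paired, nil
}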


431-439: 🛠️ Refactor suggestion

Assumption of a single recovered event may not hold

In fetchMissingData, the code expects exactly one recovered event per height. However, there might be multiple events or none at all, which could lead to errors.

Modify the logic to handle scenarios where the number of recovered events is not exactly one. Instead of enforcing this assumption, process all retrieved events for the given height appropriately.
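
A sketch of the more tolerant shape, where zero, one, or many recovered events for a height are handled by the same loop; the handler signature is illustrative:

package ingestion

import "github.com/onflow/flow-go-sdk"

// processRecoveredEvents applies handle to every recovered event for a height;
// an empty slice is simply a no-op rather than an error.
func processRecoveredEvents(recovered []flow.Event, handle func(flow.Event) error) error {
	for _, ev := range recovered {
		if err := handle(ev); err != nil {
			return err
		}
	}
	return nil
}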

tests/web3js/debug_traces_test.js (2)

28-31: ⚠️ Potential issue

Declare variables response and res to prevent unintended globals

The variables response and res are used without declaration in multiple places (e.g., lines 28, 47, 85, 137, 151, 190, 237). This can lead to unintended global variables, which is a bad practice and may cause unexpected behavior. Please declare these variables using let or const within their appropriate scopes.

Apply this diff to declare response and res:

+ let response = await helpers.callRPCMethod(
    'debug_traceTransaction',
    [receipt.transactionHash, callTracer]
);

+ let res = await helpers.signAndSend({
    from: conf.eoa.address,
    to: contractAddress,
    data: updateData,
    value: '0',
    gasPrice: conf.minGasPrice,
});

Also applies to: 47-50, 85-88, 137-140, 151-154, 190-193, 237-245


41-42: ⚠️ Potential issue

Fix potential type issues with assert.lengthOf and BigInt literals

The assert.lengthOf function expects the length argument to be a Number, but 9856n and 9806n are BigInt literals. Using BigInt may cause type errors. Consider converting BigInt literals to Number or removing the n suffix.

Apply this diff to fix the issue:

- assert.lengthOf(txTrace.input, 9856n)
+ assert.lengthOf(txTrace.input, 9856)

- assert.lengthOf(txTrace.output, 9806n)
+ assert.lengthOf(txTrace.output, 9806)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

    assert.lengthOf(txTrace.input, 9856)
    assert.lengthOf(txTrace.output, 9806)
services/requester/remote_cadence_arch.go (1)

29-34: ⚠️ Potential issue

Address concurrent access to cachedCalls map

The cachedCalls map is accessed and modified without synchronization. If RemoteCadenceArch instances are used concurrently by multiple goroutines, this could lead to race conditions and undefined behavior. Consider using a thread-safe alternative like sync.Map or adding a mutex to synchronize access to the cachedCalls map.

Apply this refactor to make cachedCalls thread-safe using a mutex:

+import "sync"

 type RemoteCadenceArch struct {
     blockHeight uint64
     client      *CrossSporkClient
     chainID     flow.ChainID
-    cachedCalls map[string]evmTypes.Data
+    cachedCalls map[string]evmTypes.Data
+    cacheMutex  sync.RWMutex
 }

Update the Run method to synchronize access:

 func (rca *RemoteCadenceArch) Run(input []byte) ([]byte, error) {
     key := hex.EncodeToString(crypto.Keccak256(input))

+    rca.cacheMutex.RLock()
     if result, ok := rca.cachedCalls[key]; ok {
+        rca.cacheMutex.RUnlock()
         return result, nil
     }
+    rca.cacheMutex.RUnlock()

     evmResult, err := rca.runCall(input)
     if err != nil {
         return nil, err
     }
     return evmResult.ReturnedData, nil
 }

And in the runCall method when writing to the cache:

 func (rca *RemoteCadenceArch) runCall(input []byte) (*evmTypes.ResultSummary, error) {
     // existing code...

     key := hex.EncodeToString(crypto.Keccak256(input))
+    rca.cacheMutex.Lock()
     rca.cachedCalls[key] = evmResult.ReturnedData
+    rca.cacheMutex.Unlock()

     return evmResult, nil
 }

Committable suggestion skipped: line range outside the PR's diff.

models/events_test.go (1)

345-346: ⚠️ Potential issue

Incorrect function call to NewSingleBlockEvents in TestNewMultiBlockEvents

In the TestNewMultiBlockEvents function, the call to NewSingleBlockEvents is incorrect. It should be NewMultiBlockEvents to match the test's purpose and ensure correct functionality.

Apply this diff to fix the function call:

-		evmEvents := NewSingleBlockEvents(blockEvents)
+		evmEvents := NewMultiBlockEvents(blockEvents)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

		evmEvents := NewMultiBlockEvents(blockEvents)
		require.Error(t, evmEvents.Err)
tests/web3js/debug_util_test.js (1)

5-15: 🛠️ Refactor suggestion

Enhance test robustness with additional assertions and error handling

While the basic happy path is tested, the test could be more comprehensive:

  1. Missing error handling for RPC failures
  2. No validation of the height being a non-negative number
  3. No test cases for invalid block identifiers

Consider enhancing the test with this implementation:

 it('should retrieve flow height', async () => {
+    // Happy path
     let response = await helpers.callRPCMethod(
         'debug_flowHeightByBlock',
         ['latest']
     )
     assert.equal(response.status, 200)
     assert.isDefined(response.body)
+    assert.isDefined(response.body.result)
 
     let height = response.body.result
+    assert.isNumber(height)
+    assert.isAtLeast(height, 0, 'Height should be non-negative')
     assert.equal(height, conf.startBlockHeight)
 })
+
+it('should handle invalid block identifier', async () => {
+    let response = await helpers.callRPCMethod(
+        'debug_flowHeightByBlock',
+        ['invalid_block']
+    )
+    assert.equal(response.status, 400)
+    assert.isDefined(response.body.error)
+})
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

it('should retrieve flow height', async () => {
    // Happy path
    let response = await helpers.callRPCMethod(
        'debug_flowHeightByBlock',
        ['latest']
    )
    assert.equal(response.status, 200)
    assert.isDefined(response.body)
    assert.isDefined(response.body.result)

    let height = response.body.result
    assert.isNumber(height)
    assert.isAtLeast(height, 0, 'Height should be non-negative')
    assert.equal(height, conf.startBlockHeight)
})

it('should handle invalid block identifier', async () => {
    let response = await helpers.callRPCMethod(
        'debug_flowHeightByBlock',
        ['invalid_block']
    )
    assert.equal(response.status, 400)
    assert.isDefined(response.body.error)
})
services/requester/utils.go (2)

11-12: ⚠️ Potential issue

Function should validate input and return error

The function should validate the input script and return an error if it's invalid.

Consider updating the signature and adding validation:

-func replaceAddresses(script []byte, chainID flow.ChainID) []byte {
+func replaceAddresses(script []byte, chainID flow.ChainID) ([]byte, error) {
+	if len(script) == 0 {
+		return nil, fmt.Errorf("empty script")
+	}

Committable suggestion skipped: line range outside the PR's diff.


23-28: 🛠️ Refactor suggestion

Improve string replacement safety and efficiency

The current implementation has several potential issues:

  1. String replacement could match partial words
  2. Multiple string conversions could be inefficient
  3. No validation of successful replacements

Consider using a more robust approach:

-	for _, contract := range contracts {
-		s = strings.ReplaceAll(s,
-			fmt.Sprintf("import %s", contract.Name),
-			fmt.Sprintf("import %s from %s", contract.Name, contract.Address.HexWithPrefix()),
-		)
-	}
+	for _, contract := range contracts {
+		oldImport := fmt.Sprintf("import %s", contract.Name)
+		newImport := fmt.Sprintf("import %s from %s", contract.Name, contract.Address.HexWithPrefix())
+		
+		// Use regex to ensure we match complete import statements
+		pattern := fmt.Sprintf(`\b%s\b`, regexp.QuoteMeta(oldImport))
+		re := regexp.MustCompile(pattern)
+		
+		if !re.MatchString(s) {
+			continue
+		}
+		
+		s = re.ReplaceAllString(s, newImport)
+	}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

	for _, contract := range contracts {
		oldImport := fmt.Sprintf("import %s", contract.Name)
		newImport := fmt.Sprintf("import %s from %s", contract.Name, contract.Address.HexWithPrefix())
		
		// Use regex to ensure we match complete import statements
		pattern := fmt.Sprintf(`\b%s\b`, regexp.QuoteMeta(oldImport))
		re := regexp.MustCompile(pattern)
		
		if !re.MatchString(s) {
			continue
		}
		
		s = re.ReplaceAllString(s, newImport)
	}
services/evm/extract_test.go (2)

16-27: ⚠️ Potential issue

Critical: Test function needs restructuring for better maintainability and portability.

Several issues need to be addressed:

  1. The function doesn't follow Go test naming convention (should be TestStateDiff)
  2. Hardcoded paths (/var/flow52/...) make the test environment-dependent
  3. Magic numbers (17724990, 228901661) lack context
  4. The test appears to be an integration test but isn't marked as such

Consider refactoring like this:

-func StateDiff(t *testing.T) {
+func TestStateDiff(t *testing.T) {
+       t.Skip("Integration test - requires specific environment setup")
+
+       // Test configuration
+       const (
+               testnetHeight = uint64(17724990)
+               checkpointID = "228901661"
+       )
+       
+       dbPath := t.TempDir() // or use environment variable
+       checkpointPath := t.TempDir() // or use environment variable
+
-       state1 := extractEVMState(t, flowGo.Testnet, "/var/flow52/evm/data/db", uint64(17724990))
-       state2 := evmStateFromCheckpointExtract(t, "/var/flow52/evm-state-from-checkpoint-228901661")
+       state1 := extractEVMState(t, flowGo.Testnet, dbPath, testnetHeight)
+       state2 := evmStateFromCheckpointExtract(t, checkpointPath)

Committable suggestion skipped: line range outside the PR's diff.


17-18: 💡 Codebase verification

Test file uses hardcoded paths that should be configurable

The paths /var/flow52/evm/data/db and /var/flow52/evm-state-from-checkpoint-228901661 are hardcoded in a test file. While they don't expose sensitive data (they are only used to load test state data), hardcoding system paths in tests is a bad practice as it:

  • Makes tests non-portable across different environments
  • Could potentially access unintended system locations in different environments

Recommendations:

  • Use test fixtures or temporary directories for test data
  • Make paths configurable through environment variables or test configuration
  • Consider using relative paths within the test directory structure
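
A minimal sketch of the environment-variable approach, assuming illustrative variable names EVM_STATE_DB_DIR and EVM_CHECKPOINT_DIR:

package evm_test

import (
	"os"
	"testing"
)

// dirFromEnv reads a directory path from the environment and skips the test
// when it is not set, keeping the integration test opt-in and portable.
func dirFromEnv(t *testing.T, envVar string) string {
	t.Helper()
	dir := os.Getenv(envVar)
	if dir == "" {
		t.Skipf("%s not set; skipping integration test", envVar)
	}
	return dir
}

func TestStateDiff(t *testing.T) {
	dbDir := dirFromEnv(t, "EVM_STATE_DB_DIR")
	checkpointDir := dirFromEnv(t, "EVM_CHECKPOINT_DIR")

	// extractEVMState and evmStateFromCheckpointExtract would be called here
	// with the configured directories instead of the hardcoded /var/flow52 paths.
	_ = dbDir
	_ = checkpointDir
}
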
🔗 Analysis chain

Verify security implications of filesystem access.

The test reads from system paths that might contain sensitive data. We should verify that these paths don't expose production data or credentials.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check if the hardcoded paths contain sensitive patterns

# Look for potential credential files or sensitive data
rg -g '!{.git,node_modules}' -i '(password|secret|key|credential|token)' "/var/flow52"

# Check for file permissions that might expose sensitive data
ls -la "/var/flow52/evm/data/db" "/var/flow52/evm-state-from-checkpoint-228901661" 2>/dev/null

Length of output: 219


Script:

#!/bin/bash
# Let's analyze the test file and its context to understand the nature of these paths

# First, check the test file content
cat services/evm/extract_test.go

# Look for any configuration or test data files
fd -e json -e yaml -e yml -e toml . services/evm/

# Check if these paths are used elsewhere in the codebase
rg -l "/var/flow52"

Length of output: 1340

storage/pebble/keys.go (1)

63-80: 🛠️ Refactor suggestion

Add safety checks and documentation for the MVCC comparer

The MVCC comparer implementation needs additional safety measures and documentation:

  1. For register keys, there's no validation that the key length is at least 8 bytes before attempting to remove them
  2. The purpose of removing the last 8 bytes for register keys should be documented

Consider applying these improvements:

 func NewMVCCComparer() *pebble.Comparer {
 	comparer := *pebble.DefaultComparer
 	comparer.Split = func(a []byte) int {
 		if len(a) == 0 {
 			return 0
 		}
 		if a[0] == registerKeyMarker {
+			// Register keys are structured as: marker + key + version(8 bytes)
+			// Split at version boundary to enable MVCC operations
+			if len(a) < 9 { // marker(1) + minimum version(8)
+				return len(a)
+			}
 			return len(a) - 8
 		}
 		return len(a)
 	}
 	comparer.Name = "flow.MVCCComparer"
 	return &comparer
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

func NewMVCCComparer() *pebble.Comparer {
	comparer := *pebble.DefaultComparer
	comparer.Split = func(a []byte) int {
		if len(a) == 0 {
			return 0
		}
		if a[0] == registerKeyMarker {
			// Register keys are structured as: marker + key + version(8 bytes)
			// Split at version boundary to enable MVCC operations
			if len(a) < 9 { // marker(1) + minimum version(8)
				return len(a)
			}
			return len(a) - 8
		}
		return len(a)
	}
	comparer.Name = "flow.MVCCComparer"
	return &comparer
}
tests/web3js/eth_rate_limit_test.js (1)

13-29: 🛠️ Refactor suggestion

Consider enhancing test robustness and realism

The current implementation has several areas for improvement:

  1. Sequential requests might not effectively simulate real-world conditions
  2. Error handling could be more thorough
  3. Memory usage might be a concern with increased request count

Consider these improvements:

     // this should be synced with the value on server config
     let requestLimit = 500
     let requestsMade = 0
     let requestsFailed = 0
     let requests = 1000
 
+    // Process requests in smaller batches to manage memory
+    const BATCH_SIZE = 50
+    const processBatch = async (start, size) => {
+      const promises = Array(size).fill().map(async (_, i) => {
+        try {
+          await ws.eth.getBlockNumber()
+          return { success: true }
+        } catch (e) {
+          assert.instanceOf(e, Error)
+          assert.equal(e.innerError.message, 'limit of requests per second reached')
+          return { success: false }
+        }
+      })
+      const results = await Promise.all(promises)
+      requestsMade += results.filter(r => r.success).length
+      requestsFailed += results.filter(r => !r.success).length
+    }
+
+    for (let i = 0; i < requests; i += BATCH_SIZE) {
+      const batchSize = Math.min(BATCH_SIZE, requests - i)
+      await processBatch(i, batchSize)
+    }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

    let requestLimit = 500
    let requestsMade = 0
    let requestsFailed = 0
    let requests = 1000

    // Process requests in smaller batches to manage memory
    const BATCH_SIZE = 50
    const processBatch = async (start, size) => {
      const promises = Array(size).fill().map(async (_, i) => {
        try {
          await ws.eth.getBlockNumber()
          return { success: true }
        } catch (e) {
          assert.instanceOf(e, Error)
          assert.equal(e.innerError.message, 'limit of requests per second reached')
          return { success: false }
        }
      })
      const results = await Promise.all(promises)
      requestsMade += results.filter(r => r.success).length
      requestsFailed += results.filter(r => !r.success).length
    }

    for (let i = 0; i < requests; i += BATCH_SIZE) {
      const batchSize = Math.min(BATCH_SIZE, requests - i)
      await processBatch(i, batchSize)
    }

    assert.equal(requestsMade, requestLimit, 'more requests made than the limit')
    assert.equal(requestsFailed, requests - requestLimit, 'failed requests don\'t match expected value')
cmd/util/main.go (2)

35-35: 🛠️ Refactor suggestion

Consider making chainID configurable

The chainID is hardcoded to Testnet which limits the utility's usage. Consider adding it as a command-line flag to support different networks.

-chainID := flowGo.Testnet
+var network string
+flag.StringVar(&network, "network", "testnet", "Network to use (mainnet, testnet)")
+
+var chainID flowGo.ChainID
+switch network {
+case "mainnet":
+    chainID = flowGo.Mainnet
+case "testnet":
+    chainID = flowGo.Testnet
+default:
+    log.Error().Msgf("unsupported network: %s", network)
+    flag.Usage()
+    os.Exit(1)
+}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

	var network string
	flag.StringVar(&network, "network", "testnet", "Network to use (mainnet, testnet)")
	
	var chainID flowGo.ChainID
	switch network {
	case "mainnet":
		chainID = flowGo.Mainnet
	case "testnet":
		chainID = flowGo.Testnet
	default:
		log.Error().Msgf("unsupported network: %s", network)
		flag.Usage()
		os.Exit(1)
	}

29-33: 🛠️ Refactor suggestion

Add directory validation checks

The code should validate that the output directory exists and is writable, and that the register store directory exists and is readable before proceeding.

 if height == 0 || outputDir == "" || registerStoreDir == "" {
     log.Error().Msg("All flags (height, output, register-store) must be provided")
     flag.Usage()
     os.Exit(1)
 }
+
+// Validate register store directory
+if _, err := os.Stat(registerStoreDir); os.IsNotExist(err) {
+    log.Error().Msgf("Register store directory does not exist: %s", registerStoreDir)
+    os.Exit(1)
+}
+
+// Create output directory if it doesn't exist
+if err := os.MkdirAll(outputDir, 0755); err != nil {
+    log.Error().Err(err).Msgf("Failed to create output directory: %s", outputDir)
+    os.Exit(1)
+}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

	if height == 0 || outputDir == "" || registerStoreDir == "" {
		log.Error().Msg("All flags (height, output, register-store) must be provided")
		flag.Usage()
		os.Exit(1)
	}

	// Validate register store directory
	if _, err := os.Stat(registerStoreDir); os.IsNotExist(err) {
		log.Error().Msgf("Register store directory does not exist: %s", registerStoreDir)
		os.Exit(1)
	}

	// Create output directory if it doesn't exist
	if err := os.MkdirAll(outputDir, 0755); err != nil {
		log.Error().Err(err).Msgf("Failed to create output directory: %s", outputDir)
		os.Exit(1)
	}
storage/mocks/mocks.go (1)

26-26: ⚠️ Potential issue

Fix timestamp implementation

Using time.Now().Second() for block timestamp is problematic because:

  1. It only provides values 0-59, which is not representative of real block timestamps
  2. It can cause test flakiness due to timing dependencies

Consider using a deterministic timestamp based on block height:

-Timestamp:           uint64(time.Now().Second()),
+Timestamp:           uint64(1600000000 + height), // Base timestamp + height for deterministic values

Committable suggestion skipped: line range outside the PR's diff.

tests/web3js/eth_get_storage_at_test.js (1)

39-39: ⚠️ Potential issue

Add missing 'let' or 'const' declaration.

The newValue variable is used without proper declaration.

-newValue = 100
+const newValue = 100
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

    const newValue = 100
tests/web3js/eth_revert_reason_test.js (2)

82-93: 🛠️ Refactor suggestion

Remove duplicated polling logic

This polling implementation is identical to the previous one. Once the helper function suggested above is implemented, this segment can be simplified.

Replace with:

- rcp = null
- while (rcp == null) {
-     rcp = await helpers.callRPCMethod(
-         'eth_getTransactionReceipt',
-         [txHash]
-     )
-     if (rcp.body.result == null) {
-         rcp = null
-     }
- }
+ rcp = await waitForReceipt(txHash)

Committable suggestion skipped: line range outside the PR's diff.


45-56: 🛠️ Refactor suggestion

Add timeout and delay to the polling mechanism

The current polling implementation has several potential issues:

  1. There is no timeout mechanism, so the loop can spin indefinitely if a receipt never appears
  2. Polling continuously without a delay between attempts could overwhelm the node
  3. The same polling logic is duplicated later in the file

Consider refactoring to a helper function with timeout and delay:

- let rcp = null
- while (rcp == null) {
-     rcp = await helpers.callRPCMethod(
-         'eth_getTransactionReceipt',
-         [txHash]
-     )
-     if (rcp.body.result == null) {
-         rcp = null
-     }
- }
+ const waitForReceipt = async (txHash, timeout = 60000) => {
+     const startTime = Date.now()
+     while (Date.now() - startTime < timeout) {
+         const rcp = await helpers.callRPCMethod(
+             'eth_getTransactionReceipt',
+             [txHash]
+         )
+         if (rcp.body.result != null) {
+             return rcp
+         }
+         await new Promise(resolve => setTimeout(resolve, 1000))
+     }
+     throw new Error('Transaction receipt not found: timeout')
+ }
+ 
+ let rcp = await waitForReceipt(txHash)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

    const waitForReceipt = async (txHash, timeout = 60000) => {
        const startTime = Date.now()
        while (Date.now() - startTime < timeout) {
            const rcp = await helpers.callRPCMethod(
                'eth_getTransactionReceipt',
                [txHash]
            )
            if (rcp.body.result != null) {
                return rcp
            }
            await new Promise(resolve => setTimeout(resolve, 1000))
        }
        throw new Error('Transaction receipt not found: timeout')
    }
    
    let rcp = await waitForReceipt(txHash)
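
If the helper is promoted into the shared test helpers module (presumably tests/web3js/helpers.js) so that both polling sites in this file can reuse it, a sketch could look like the following; this is hypothetical placement and assumes helpers.js already defines callRPCMethod and exposes its utilities via module.exports:

    // helpers.js (sketch) — reuses the module's existing callRPCMethod
    async function waitForReceipt(txHash, timeout = 60000, pollInterval = 1000) {
        const startTime = Date.now()
        while (Date.now() - startTime < timeout) {
            const rcp = await callRPCMethod('eth_getTransactionReceipt', [txHash])
            if (rcp.body.result != null) {
                return rcp
            }
            await new Promise(resolve => setTimeout(resolve, pollInterval))
        }
        throw new Error('Transaction receipt not found: timeout')
    }
    module.exports.waitForReceipt = waitForReceipt

Both call sites in this test would then reduce to let rcp = await helpers.waitForReceipt(txHash).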
storage/pebble/register_storage_test.go (1)

69-69: ⚠️ Potential issue

Remove unnecessary error check.

This line contains a redundant error check with no preceding error-generating operation.

-		require.NoError(t, err)
tests/web3js/verify_cadence_arch_calls_test.js (3)

33-50: 🛠️ Refactor suggestion

Reduce code duplication by extracting common test patterns

The test structure for contract interactions is repeated across different methods. Consider extracting this pattern into helper functions.

Here's a suggested helper function:

async function verifyContractCall(contract, methodName, params = [], expectedLength = 66) {
    const data = contract.methods[methodName](...params).encodeABI()
    
    // Estimate gas
    const estimatedGas = await web3.eth.estimateGas({
        from: conf.eoa.address,
        to: contract.options.address,
        data: data,
    })

    // Submit transaction
    const txResult = await helpers.signAndSend({
        from: conf.eoa.address,
        to: contract.options.address,
        data: data,
        value: '0',
        gasPrice: conf.minGasPrice,
        gas: estimatedGas,
    })
    assert.equal(txResult.receipt.status, conf.successStatus)

    // Verify call result
    const callResult = await web3.eth.call({ 
        to: contract.options.address, 
        data: data 
    }, 'latest')
    assert.notEqual(callResult, ZERO_HASH, `${methodName} response should not be zero`)
    assert.lengthOf(callResult, expectedLength, `Invalid ${methodName} response length`)
    
    return callResult
}

Then use it like this:

-let revertibleRandomData = deployed.contract.methods.verifyArchCallToRevertibleRandom().encodeABI()
-res = await helpers.signAndSend({
-    from: conf.eoa.address,
-    to: contractAddress,
-    data: revertibleRandomData,
-    value: '0',
-    gasPrice: conf.minGasPrice,
-})
-assert.equal(res.receipt.status, conf.successStatus)
-
-res = await web3.eth.call({ to: contractAddress, data: revertibleRandomData }, 'latest')
-assert.notEqual(
-    res,
-    '0x0000000000000000000000000000000000000000000000000000000000000000'
-)
-assert.lengthOf(res, 66)
+res = await verifyContractCall(deployed.contract, 'verifyArchCallToRevertibleRandom')

14-32: 🛠️ Refactor suggestion

Add gas estimation and improve test readability

A few suggestions to enhance this test section:

  1. Add gas estimation before transaction submission
  2. Extract magic numbers into named constants

Consider these improvements:

+const RANDOM_SOURCE_HEIGHT = 2
+const EXPECTED_RANDOM_SOURCE_LENGTH = 66
+const ZERO_HASH = '0x' + '0'.repeat(64)

-let getRandomSourceData = deployed.contract.methods.verifyArchCallToRandomSource(2).encodeABI()
+let getRandomSourceData = deployed.contract.methods.verifyArchCallToRandomSource(RANDOM_SOURCE_HEIGHT).encodeABI()

+// Estimate gas before sending transaction
+const estimatedGas = await web3.eth.estimateGas({
+    from: conf.eoa.address,
+    to: contractAddress,
+    data: getRandomSourceData,
+})

 res = await helpers.signAndSend({
     from: conf.eoa.address,
     to: contractAddress,
     data: getRandomSourceData,
     value: '0',
     gasPrice: conf.minGasPrice,
+    gas: estimatedGas,
 })

-assert.notEqual(
-    res,
-    '0x0000000000000000000000000000000000000000000000000000000000000000'
-)
-assert.lengthOf(res, 66)
+assert.notEqual(res, ZERO_HASH, 'Response should not be zero')
+assert.lengthOf(res, EXPECTED_RANDOM_SOURCE_LENGTH, 'Invalid random source length')
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

    const RANDOM_SOURCE_HEIGHT = 2
    const EXPECTED_RANDOM_SOURCE_LENGTH = 66
    const ZERO_HASH = '0x' + '0'.repeat(64)

    // submit a transaction that calls verifyArchCallToRandomSource(uint64 height)
    let getRandomSourceData = deployed.contract.methods.verifyArchCallToRandomSource(RANDOM_SOURCE_HEIGHT).encodeABI()

    // Estimate gas before sending transaction
    const estimatedGas = await web3.eth.estimateGas({
        from: conf.eoa.address,
        to: contractAddress,
        data: getRandomSourceData,
    })

    res = await helpers.signAndSend({
        from: conf.eoa.address,
        to: contractAddress,
        data: getRandomSourceData,
        value: '0',
        gasPrice: conf.minGasPrice,
        gas: estimatedGas,
    })
    assert.equal(res.receipt.status, conf.successStatus)

    // make a contract call for verifyArchCallToRandomSource(uint64 height)
    res = await web3.eth.call({ to: contractAddress, data: getRandomSourceData }, 'latest')
    assert.notEqual(res, ZERO_HASH, 'Response should not be zero')
    assert.lengthOf(res, EXPECTED_RANDOM_SOURCE_LENGTH, 'Invalid random source length')

70-91: ⚠️ Potential issue

Add error handling and document test data

The test uses hardcoded values and lacks error handling for transaction retrieval.

Consider these improvements:

+// Test data for COA ownership verification
+const TEST_HASH = '0x1bacdb569847f31ade07e83d6bb7cefba2b9290b35d5c2964663215e73519cff'
+const TEST_PROOF = 'f853c18088f8d6e0586b0a20c78365766df842b840b90448f4591df2639873be2914c5560149318b7e2fcf160f7bb8ed13cfd97be2f54e6889606f18e50b2c37308386f840e03a9fff915f57b2164cba27f0206a95'

-let tx = await web3.eth.getTransactionFromBlock(conf.startBlockHeight, 1)
+// Retrieve transaction with error handling
+let tx
+try {
+    tx = await web3.eth.getTransactionFromBlock(conf.startBlockHeight, 1)
+    assert.ok(tx, 'Transaction should exist in the block')
+} catch (error) {
+    throw new Error(`Failed to retrieve transaction: ${error.message}`)
+}

 let verifyCOAOwnershipProofData = deployed.contract.methods.verifyArchCallToVerifyCOAOwnershipProof(
     tx.to,
-    '0x1bacdb569847f31ade07e83d6bb7cefba2b9290b35d5c2964663215e73519cff',
-    web3.utils.hexToBytes('f853c18088f8d6e0586b0a20c78365766df842b840b90448f4591df2639873be2914c5560149318b7e2fcf160f7bb8ed13cfd97be2f54e6889606f18e50b2c37308386f840e03a9fff915f57b2164cba27f0206a95')
+    TEST_HASH,
+    web3.utils.hexToBytes(TEST_PROOF)
 ).encodeABI()
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

    // Test data for COA ownership verification
    const TEST_HASH = '0x1bacdb569847f31ade07e83d6bb7cefba2b9290b35d5c2964663215e73519cff'
    const TEST_PROOF = 'f853c18088f8d6e0586b0a20c78365766df842b840b90448f4591df2639873be2914c5560149318b7e2fcf160f7bb8ed13cfd97be2f54e6889606f18e50b2c37308386f840e03a9fff915f57b2164cba27f0206a95'

    // Retrieve transaction with error handling
    let tx
    try {
        tx = await web3.eth.getTransactionFromBlock(conf.startBlockHeight, 1)
        assert.ok(tx, 'Transaction should exist in the block')
    } catch (error) {
        throw new Error(`Failed to retrieve transaction: ${error.message}`)
    }

    let verifyCOAOwnershipProofData = deployed.contract.methods.verifyArchCallToVerifyCOAOwnershipProof(
        tx.to,
        TEST_HASH,
        web3.utils.hexToBytes(TEST_PROOF)
    ).encodeABI()
    res = await helpers.signAndSend({
        from: conf.eoa.address,
        to: contractAddress,
        data: verifyCOAOwnershipProofData,
        value: '0',
        gasPrice: conf.minGasPrice,
    })
    assert.equal(res.receipt.status, conf.successStatus)

    // make a contract call for verifyArchCallToVerifyCOAOwnershipProof(address,bytes32,bytes)
    res = await web3.eth.call({ to: contractAddress, data: verifyCOAOwnershipProofData }, 'latest')
    assert.equal(
        web3.eth.abi.decodeParameter('bool', res),
        false,
    )
storage/register_delta_test.go (2)

215-224: 🛠️ Refactor suggestion

Add database cleanup in runDB function.

The runDB function should ensure proper cleanup of database resources.

Add cleanup:

 func runDB(name string, t *testing.T, f func(t *testing.T, db *pebbleStorage.Storage)) {
 	dir := t.TempDir()
 
 	db, err := pebbleStorage.New(dir, zerolog.New(zerolog.NewTestWriter(t)))
 	require.NoError(t, err)
+	t.Cleanup(func() {
+		require.NoError(t, db.Close())
+	})
 
 	t.Run(name, func(t *testing.T) {
 		f(t, db)
 	})
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

func runDB(name string, t *testing.T, f func(t *testing.T, db *pebbleStorage.Storage)) {
	dir := t.TempDir()

	db, err := pebbleStorage.New(dir, zerolog.New(zerolog.NewTestWriter(t)))
	require.NoError(t, err)
	t.Cleanup(func() {
		require.NoError(t, db.Close())
	})

	t.Run(name, func(t *testing.T) {
		f(t, db)
	})
}

226-244: 🛠️ Refactor suggestion

Improve error handling in commit function.

The commit function has a few areas for improvement:

  1. It hardcodes height 0 in the Store call instead of letting the caller supply it
  2. Error handling is inconsistent between the Store and Commit operations: the Store error is returned, while the Commit error is asserted with require.NoError

Consider this improvement:

 func commit(
 	t *testing.T,
 	db *pebbleStorage.Storage,
 	d *storage.RegisterDelta,
 	r *pebbleStorage.RegisterStorage,
+	height uint64,
 ) error {
 	batch := db.NewBatch()
+	defer batch.Close()
 
-	err := r.Store(d.GetUpdates(), 0, batch)
+	err := r.Store(d.GetUpdates(), height, batch)
 	if err != nil {
 		return err
 	}
 
-	err = batch.Commit(pebble.Sync)
-	require.NoError(t, err)
-	return nil
+	return batch.Commit(pebble.Sync)
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

// commit is an example on how to commit the delta to storage.
func commit(
	t *testing.T,
	db *pebbleStorage.Storage,
	d *storage.RegisterDelta,
	r *pebbleStorage.RegisterStorage,
	height uint64,
) error {
	batch := db.NewBatch()
	defer batch.Close()

	err := r.Store(d.GetUpdates(), height, batch)
	if err != nil {
		return err
	}

	return batch.Commit(pebble.Sync)
}
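
Call sites in this test file would then pass the height explicitly; a hypothetical invocation (assuming the delta and register storage fixtures already set up in these tests) would be:

	// height is now supplied by the caller instead of being hardcoded inside commit
	err := commit(t, db, delta, registers, 1)
	require.NoError(t, err)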
storage/pebble/blocks.go (1)

202-222: 🛠️ Refactor suggestion

Add parameter validation and improve error handling

The InitHeights method has several areas that could be improved:

  1. The batch parameter should be validated to prevent nil pointer dereference
  2. The cadenceHeight and cadenceID parameters should be validated
  3. The error message on line 213 hardcodes the height 0, making its %d placeholder redundant

Consider applying these improvements:

 func (b *Blocks) InitHeights(cadenceHeight uint64, cadenceID flow.Identifier, batch *pebble.Batch) error {
+    if batch == nil {
+        return fmt.Errorf("batch cannot be nil")
+    }
+    if cadenceHeight == 0 {
+        return fmt.Errorf("cadenceHeight cannot be zero")
+    }
+    if cadenceID == flow.EmptyID {
+        return fmt.Errorf("cadenceID cannot be empty")
+    }
     // ... rest of the method ...
-    if err := b.store.set(latestEVMHeightKey, nil, uint64Bytes(0), batch); err != nil {
-        return fmt.Errorf("failed to init latest EVM height at: %d, with: %w", 0, err)
+    if err := b.store.set(latestEVMHeightKey, nil, uint64Bytes(0), batch); err != nil {
+        return fmt.Errorf("failed to init latest EVM height with: %w", err)
     }
     // ... rest of the method ...
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

func (b *Blocks) InitHeights(cadenceHeight uint64, cadenceID flow.Identifier, batch *pebble.Batch) error {
	if batch == nil {
		return fmt.Errorf("batch cannot be nil")
	}
	if cadenceHeight == 0 {
		return fmt.Errorf("cadenceHeight cannot be zero")
	}
	if cadenceID == flow.EmptyID {
		return fmt.Errorf("cadenceID cannot be empty")
	}
	// sanity check, make sure we don't have any heights stored, disable overwriting the database
	_, err := b.LatestEVMHeight()
	if !errors.Is(err, errs.ErrStorageNotInitialized) {
		return fmt.Errorf("can't init the database that already has data stored")
	}

	if err := b.store.set(latestCadenceHeightKey, nil, uint64Bytes(cadenceHeight), batch); err != nil {
		return fmt.Errorf("failed to init latest Cadence height at: %d, with: %w", cadenceHeight, err)
	}

	if err := b.store.set(latestEVMHeightKey, nil, uint64Bytes(0), batch); err != nil {
		return fmt.Errorf("failed to init latest EVM height with: %w", err)
	}

	// we store genesis block because it isn't emitted over the network
	genesisBlock := models.GenesisBlock(b.chainID)
	if err := b.Store(cadenceHeight, cadenceID, genesisBlock, batch); err != nil {
		return fmt.Errorf("failed to store genesis block at Cadence height: %d, with: %w", cadenceHeight, err)
	}
services/replayer/blocks_provider_test.go (1)

58-70: ⚠️ Potential issue

Fix duplicate test name.

The test case name "with new block non-sequential to latest block" is used twice, but the second test (lines 58-70) actually tests sequential blocks. Please update the name to accurately reflect the test scenario.

-	t.Run("with new block non-sequential to latest block", func(t *testing.T) {
+	t.Run("with new block sequential to latest block", func(t *testing.T) {
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

	t.Run("with new block sequential to latest block", func(t *testing.T) {
		_, blocks := setupBlocksDB(t)
		blocksProvider := NewBlocksProvider(blocks, flowGo.Emulator, nil)

		block1 := mocks.NewBlock(10)
		err := blocksProvider.OnBlockReceived(block1)
		require.NoError(t, err)

		block2 := mocks.NewBlock(11)
		err = blocksProvider.OnBlockReceived(block2)
		require.NoError(t, err)
	})
}
