Extract EVM state #683
base: feature/local-tx-reexecution
Conversation
Walkthrough

The changes in this pull request involve significant modifications across various files.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant API
    participant Storage
    participant EVM

    User->>API: Request to execute transaction
    API->>EVM: Execute transaction with state overrides
    EVM->>Storage: Store transaction data
    Storage-->>EVM: Confirm storage success
    EVM-->>API: Return transaction receipt
    API-->>User: Return transaction result
```
Warning

There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 golangci-lint (1.62.2)

```
level=warning msg="[runner] Can't run linter goanalysis_metalinter: buildir: failed to load package evm: could not load export data: no export data for "github.com/onflow/flow-evm-gateway/services/evm""
```
Caution
Inline review comments failed to post. This is likely due to GitHub's limits when posting large numbers of comments.
🛑 Comments failed to post (49)
services/replayer/blocks_provider.go (1)
66-73:
⚠️ Potential issue: Correct the usage of `fmt.Errorf` with `%w` for error wrapping

The current usage of `fmt.Errorf` with the `%w` verb is incorrect: the position of `%w` in the format string must correspond to the position of the wrapped error in the argument list. Here `%w` consumes the first argument while the height values are formatted by `%d`, so the message and arguments are misaligned. Apply this diff to fix the error wrapping:

```diff
 return fmt.Errorf(
-	"%w: received new block: %d, non-sequential of latest block: %d",
+	"received new block: %d, non-sequential of latest block: %d: %w",
-	models.ErrInvalidHeight,
 	block.Height,
 	bp.latestBlock.Height,
+	models.ErrInvalidHeight,
 )
```

This change ensures that `models.ErrInvalidHeight` is properly wrapped, and the error message includes the necessary context.

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```go
if bp.latestBlock != nil && bp.latestBlock.Height != (block.Height-1) {
	return fmt.Errorf(
		"received new block: %d, non-sequential of latest block: %d: %w",
		block.Height,
		bp.latestBlock.Height,
		models.ErrInvalidHeight,
	)
}
```
storage/register_delta.go (1)
57-63:
⚠️ Potential issue: Possible incorrect behavior in `ValueExists` with empty values

The `ValueExists` method checks `len(value) > 0` to determine if a value exists. If a key exists with an empty value (a zero-length byte slice), this method will return `false`, which may not be the intended behavior. Consider modifying the implementation to check for the existence of the key regardless of the value's length. Apply this diff:

```diff
 func (r *RegisterDelta) ValueExists(owner []byte, key []byte) (bool, error) {
-	value, err := r.GetValue(owner, key)
+	_, err := r.GetValue(owner, key)
 	if err != nil {
+		if errors.Is(err, flow.ErrNotFound) {
+			return false, nil
+		}
 		return false, err
 	}

-	return len(value) > 0, nil
+	return true, nil
 }
```

Committable suggestion skipped: line range outside the PR's diff.
storage/pebble/storage_test.go (3)
24-32:
⚠️ Potential issue: Ensure batch is properly closed to prevent resource leaks

In the `store block` test, the batch created with `db.NewBatch()` is not closed after committing. According to Pebble's documentation, batches should be closed after use to release resources. Apply this diff to close the batch:

```diff
 err = batch.Commit(pebble.Sync)
 require.NoError(t, err)
+err = batch.Close()
+require.NoError(t, err)
```

Alternatively, ensure the batch is closed by adding a `defer` statement:

```diff
 batch := db.NewBatch()
+defer func() {
+	require.NoError(t, batch.Close())
+}()
```

📝 Committable suggestion

```go
batch := db.NewBatch()

err := blocks.InitHeights(config.EmulatorInitCadenceHeight, flow.Identifier{0x1}, batch)
require.NoError(t, err)

err = blocks.Store(20, flow.Identifier{0x1}, bl, batch)
require.NoError(t, err)

err = batch.Commit(pebble.Sync)
require.NoError(t, err)

err = batch.Close()
require.NoError(t, err)
```
43-50: ⚠️ Potential issue: Ensure batch is properly closed to prevent resource leaks

In the `get stored block` test, the batch is created and committed but not closed. To prevent resource leaks, remember to close the batch after use. Apply this diff:

```diff
 err = batch.Commit(pebble.Sync)
 require.NoError(t, err)
+err = batch.Close()
+require.NoError(t, err)
```

Or use a `defer` statement to close the batch:

```diff
 batch := db.NewBatch()
+defer func() {
+	require.NoError(t, batch.Close())
+}()
```

📝 Committable suggestion

```go
batch := db.NewBatch()

err := blocks.InitHeights(config.EmulatorInitCadenceHeight, flow.Identifier{0x1}, batch)
require.NoError(t, err)

err = blocks.Store(cadenceHeight, cadenceID, bl, batch)
require.NoError(t, err)

err = batch.Commit(pebble.Sync)
require.NoError(t, err)

err = batch.Close()
require.NoError(t, err)
```
76-82: ⚠️ Potential issue: Ensure batch is properly closed to prevent resource leaks

In the `get not found block error` test, the batch is not closed after committing. Closing the batch is essential to release resources and prevent potential memory leaks. Apply this diff:

```diff
 err = batch.Commit(pebble.Sync)
 require.NoError(t, err)
+err = batch.Close()
+require.NoError(t, err)
```

Alternatively, add a `defer` statement to close the batch:

```diff
 batch := db.NewBatch()
+defer func() {
+	require.NoError(t, batch.Close())
+}()
```

📝 Committable suggestion

```go
batch := db.NewBatch()

err := blocks.InitHeights(config.EmulatorInitCadenceHeight, flow.Identifier{0x1}, batch)
require.NoError(t, err)

err = blocks.Store(2, flow.Identifier{0x1}, mocks.NewBlock(1), batch) // init
require.NoError(t, err)

err = batch.Commit(pebble.Sync)
require.NoError(t, err)

err = batch.Close()
require.NoError(t, err)
```
api/utils.go (2)
98-105: 🛠️ Refactor suggestion: Enforce exact hash length in `decodeHash`

Accepting hashes shorter than 32 bytes by padding them with zeros may lead to unintended behavior. To ensure data integrity, enforce that the input hash has exactly 32 bytes after decoding. Apply this diff:

```diff
 if len(b) > common.HashLength {
 	return common.Hash{}, fmt.Errorf(
 		"hex string too long, want at most 32 bytes, have %d bytes",
 		len(b),
 	)
+} else if len(b) < common.HashLength {
+	return common.Hash{}, fmt.Errorf(
+		"hex string too short, want exactly 32 bytes, have %d bytes",
+		len(b),
+	)
 }

 return common.BytesToHash(b), nil
```

📝 Committable suggestion

```go
if len(b) > common.HashLength {
	return common.Hash{}, fmt.Errorf(
		"hex string too long, want at most 32 bytes, have %d bytes",
		len(b),
	)
} else if len(b) < common.HashLength {
	return common.Hash{}, fmt.Errorf(
		"hex string too short, want exactly 32 bytes, have %d bytes",
		len(b),
	)
}

return common.BytesToHash(b), nil
```
63-82: ⚠️ Potential issue: Potential negative height conversion to `uint64` in `resolveBlockNumber`

Converting a negative `height` value to `uint64` may cause unexpected underflow, resulting in extremely large numbers. Ensure that `height` is non-negative before converting it to `uint64`. Apply this diff:

```diff
 if height < 0 {
 	executed, err := blocksDB.LatestEVMHeight()
 	if err != nil {
 		return 0, err
 	}
 	height = int64(executed)
 }
+
+if height < 0 {
+	return 0, fmt.Errorf("resolved block height cannot be negative")
+}

 return uint64(height), nil
```

📝 Committable suggestion

```go
height := number.Int64()

// if special values (latest) we return latest executed height
//
// all the special values are:
// SafeBlockNumber      = BlockNumber(-4)
// FinalizedBlockNumber = BlockNumber(-3)
// LatestBlockNumber    = BlockNumber(-2)
// PendingBlockNumber   = BlockNumber(-1)
//
// EVM on Flow does not have these concepts, but the latest block is the closest fit
if height < 0 {
	executed, err := blocksDB.LatestEVMHeight()
	if err != nil {
		return 0, err
	}
	height = int64(executed)
}

if height < 0 {
	return 0, fmt.Errorf("resolved block height cannot be negative")
}

return uint64(height), nil
```
services/replayer/call_tracer_collector.go (3)
76-82: ⚠️ Potential issue: Potential data race on `resultsByTxID` map access

Access to the `resultsByTxID` map is not synchronized, which could lead to race conditions if the `Collect` and `OnTxEnd` methods are called concurrently. Consider adding synchronization, such as a mutex, to protect access to `resultsByTxID`. A sketch of the change:

```go
import "sync"

var resultsMutex sync.Mutex

// In the Collect method
func (ct *CallTracerCollector) Collect(txID common.Hash) (json.RawMessage, error) {
	resultsMutex.Lock()
	defer resultsMutex.Unlock()

	result, found := ct.resultsByTxID[txID]
	if !found {
		return nil, fmt.Errorf("trace result for tx: %s, not found", txID.String())
	}

	// remove the result
	delete(ct.resultsByTxID, txID)

	return result, nil
}

// In the OnTxEnd method
resultsMutex.Lock()
ct.resultsByTxID[receipt.TxHash] = res
resultsMutex.Unlock()
```

Also applies to: 134-136
67-71: ⚠️ Potential issue: Potential data race when reassigning `t.tracer` in `ResetTracer`

Reassigning `t.tracer` in `ResetTracer` without synchronization might lead to race conditions if `t.tracer` is accessed concurrently in other methods. Consider using a synchronization mechanism, such as a mutex, to protect access to `t.tracer`:

```go
import "sync"

var tracerMutex sync.RWMutex

func (t *CallTracerCollector) ResetTracer() error {
	tracerMutex.Lock()
	defer tracerMutex.Unlock()

	var err error
	t.tracer, err = DefaultCallTracer()
	return err
}

// In NewSafeTxTracer, guard each read of ct.tracer:
wrapped.OnTxStart = func(
	vm *tracing.VMContext,
	tx *types.Transaction,
	from common.Address,
) {
	defer func() {
		// existing recover logic
	}()

	tracerMutex.RLock()
	defer tracerMutex.RUnlock()
	if ct.tracer.OnTxStart != nil {
		ct.tracer.OnTxStart(vm, tx, from)
	}
}

// Repeat similar locking for the other methods accessing ct.tracer.
```

Also applies to: 90-92, 110-112, 125-127, 161-163, 176-178, 191-193
130-141: 🛠️ Refactor suggestion: Ensure tracer is reset even if `GetResult` fails in `OnTxEnd`

If `ct.tracer.GetResult()` fails, the tracer is not reset, which might leave it in an inconsistent state. Consider resetting the tracer even when an error occurs, to ensure a fresh state for subsequent traces:

```diff
 res, err := ct.tracer.GetResult()
 if err != nil {
 	l.Error().Err(err).Msg("failed to produce trace results")
+	// Reset tracer even if GetResult fails
+	if resetErr := ct.ResetTracer(); resetErr != nil {
+		l.Error().Err(resetErr).Msg("failed to reset tracer after GetResult error")
+	}
 	return
 }
 ct.resultsByTxID[receipt.TxHash] = res

 // reset tracing to have fresh state
 if err := ct.ResetTracer(); err != nil {
 	l.Error().Err(err).Msg("failed to reset tracer")
 	return
 }
```
services/requester/cross-spork_client.go (2)
60-61: 🛠️ Refactor suggestion: Make the rate limit configurable

There's a TODO comment indicating that the rate limit is hardcoded. To enhance flexibility and adaptability to different environments or load conditions, consider making the rate limit configurable:

```diff
 func (s *sporkClients) add(logger zerolog.Logger, client access.Client, rateLimit int) error {
 	// Existing code...

 	*s = append(*s, &sporkClient{
 		firstHeight: info.NodeRootBlockHeight,
 		lastHeight:  header.Height,
 		client:      client,
-		// TODO (JanezP): Make this configurable
-		getEventsForHeightRangeLimiter: ratelimit.New(100, ratelimit.WithoutSlack),
+		getEventsForHeightRangeLimiter: ratelimit.New(rateLimit, ratelimit.WithoutSlack),
 	})

 	// Existing code...
 }
```

Adjust the calling function to pass the desired `rateLimit` value based on configuration or environment variables.

Committable suggestion skipped: line range outside the PR's diff.
229-245: ⚠️ Potential issue: Rate limiting is not applied in `CrossSporkClient`'s `GetEventsForHeightRange`

The `GetEventsForHeightRange` method in `CrossSporkClient` calls `GetEventsForHeightRange` on the `access.Client` interface, which does not include the rate limiting implemented in `sporkClient`. This oversight may lead to unthrottled requests and potential rate limit violations.

To ensure rate limiting is properly applied, consider modifying the `getClientForHeight` method to return a `sporkClient` rather than an `access.Client`, which allows access to the rate-limited method. Here's the proposed change:

```diff
-func (s *sporkClients) get(height uint64) access.Client {
+func (s *sporkClients) get(height uint64) *sporkClient {
 	for _, spork := range *s {
 		if spork.contains(height) {
 			return spork
 		}
 	}
 	return nil
 }

-func (c *CrossSporkClient) getClientForHeight(height uint64) (access.Client, error) {
+func (c *CrossSporkClient) getClientForHeight(height uint64) (*sporkClient, error) {
 	if !c.IsPastSpork(height) {
+		// Wrap the current client in a sporkClient to apply rate limiting
+		return &sporkClient{
+			firstHeight: c.currentSporkFirstHeight,
+			lastHeight:  math.MaxUint64, // Assuming the current spork extends indefinitely
+			client:      c.Client,
+			getEventsForHeightRangeLimiter: ratelimit.New(100, ratelimit.WithoutSlack), // Make configurable as per previous suggestion
+		}, nil
 	}

 	client := c.sporkClients.get(height)
 	if client == nil {
 		return nil, errs.NewHeightOutOfRangeError(height)
 	}

 	// Logging remains the same

 	return client, nil
 }
```

Update the `GetEventsForHeightRange` method to use the returned `sporkClient`:

```go
func (c *CrossSporkClient) GetEventsForHeightRange(
	ctx context.Context,
	eventType string,
	startHeight uint64,
	endHeight uint64,
) ([]flow.BlockEvents, error) {
	client, err := c.getClientForHeight(startHeight)
	if err != nil {
		return nil, err
	}
	endClient, err := c.getClientForHeight(endHeight)
	if err != nil {
		return nil, err
	}
	// Compare spork clients directly
	if endClient != client {
		return nil, fmt.Errorf(
			"invalid height range, end height %d is not in the same spork as start height %d",
			endHeight, startHeight,
		)
	}
	return client.GetEventsForHeightRange(ctx, eventType, startHeight, endHeight)
}
```

This ensures that rate limiting is consistently applied across all spork clients.

Committable suggestion skipped: line range outside the PR's diff.
models/events.go (3)
39-44: ⚠️ Potential issue: Ensure proper initialization and nil checks for new struct fields

With the addition of `blockEventPayload` and `txEventPayloads` to the `CadenceEvents` struct, please ensure these fields are properly initialized and that nil checks are in place where they are used, to prevent potential nil pointer dereferences.
118-124: ⚠️ Potential issue: Add nil check for `txEventPayload` before dereferencing

In `decodeCadenceEvents`, `txEventPayload` is dereferenced with `*txEventPayload` in `append(e.txEventPayloads, *txEventPayload)` without a nil check. There's a risk of a nil pointer dereference if `decodeTransactionEvent` returns a nil `txEventPayload`. Please add a nil check before dereferencing, or ensure `decodeTransactionEvent` always returns a non-nil value when `err` is nil.
175-177: 🛠️ Refactor suggestion: Return empty slice instead of nil in `TxEventPayloads`

In the `TxEventPayloads` method, consider returning an empty slice `[]events.TransactionEventPayload{}` instead of `nil` when there are no transaction events. A nil slice is safe to range over in Go, but it serializes differently (e.g. `null` instead of `[]` in JSON) and can surprise callers that expect a list.
cmd/run/cmd.go (1)

51-63: ⚠️ Potential issue: Prevent panic by ensuring the `ready` channel is closed only once

Closing an already closed channel causes a panic. In the current code, the `ready` channel is closed both in the deferred call `defer close(ready)` and within the function passed to `bootstrap.Run`. This could lead to a panic if both attempts to close `ready` are executed.

To fix this issue, use a `sync.Once` to ensure that `ready` is closed exactly once:

```diff
+var readyOnce sync.Once
-defer close(ready)

 err := bootstrap.Run(
 	ctx,
 	cfg,
 	func() {
-		close(ready)
+		readyOnce.Do(func() { close(ready) })
 	},
 )
 if err != nil && !errors.Is(err, context.Canceled) {
 	log.Err(err).Msg("Gateway runtime error")
 }
+readyOnce.Do(func() { close(ready) })
```

By using `readyOnce`, you ensure that `ready` is closed exactly once regardless of how the code execution flows, preventing any potential panic due to closing a closed channel.

Committable suggestion skipped: line range outside the PR's diff.
services/ingestion/engine.go (1)

251: ⚠️ Potential issue: Fix assignment mismatch: `replayer.ReplayBlock` returns 2 values

The function `replayer.ReplayBlock` returns only 2 values, but the code is attempting to assign 3 variables. This will cause a compile-time error. Apply this diff to correct the assignment:

```diff
-res, _, err := replayer.ReplayBlock(events.TxEventPayloads(), blockEvents)
+res, err := replayer.ReplayBlock(events.TxEventPayloads(), blockEvents)
```

🧰 Tools: 🪛 GitHub Check: Lint

[failure] 251-251: assignment mismatch: 3 variables but replayer.ReplayBlock returns 2 values (typecheck)
api/debug.go (2)

264-268: 🛠️ Refactor suggestion: Default `from` address in `TraceCall` should be the zero address

In the `TraceCall` method, the `from` address defaults to `d.config.Coinbase` if not provided. To align with standard Ethereum JSON-RPC behavior, consider defaulting to the zero address instead:

```diff
-from := d.config.Coinbase
+// Default address in case user does not provide one
+from := gethCommon.Address{}
 if args.From != nil {
 	from = *args.From
 }
```
440-445: ⚠️ Potential issue: Prevent potential nil pointer dereference in `isDefaultCallTracer`

In the `isDefaultCallTracer` function, dereferencing `config.Tracer` without checking whether it is `nil` may cause a panic. Check for `nil` before dereferencing:

```go
if config.Tracer == nil {
	return false
}

if *config.Tracer != replayer.TracerName {
	return false
}

tracerConfig := json.RawMessage(replayer.TracerConfig)
return slices.Equal(config.TracerConfig, tracerConfig)
```
bootstrap/bootstrap.go (2)
500-509: ⚠️ Potential issue: Correct the spelling of `evmBlokcHeight` to `evmBlockHeight`

The variable `evmBlokcHeight` is misspelled. It should be `evmBlockHeight` for consistency and readability:

```diff
-evmBlokcHeight := uint64(0)
+evmBlockHeight := uint64(0)

-snapshot, err := registerStore.GetSnapshotAt(evmBlokcHeight)
+snapshot, err := registerStore.GetSnapshotAt(evmBlockHeight)
```

📝 Committable suggestion

```go
evmBlockHeight := uint64(0)

cadenceBlock, err := client.GetBlockHeaderByHeight(context.Background(), cadenceHeight)
if err != nil {
	return nil, fmt.Errorf("could not fetch provided cadence height, make sure it's correct: %w", err)
}

snapshot, err := registerStore.GetSnapshotAt(evmBlockHeight)
if err != nil {
	return nil, fmt.Errorf("could not get register snapshot at block height %d: %w", 0, err)
}
```
390-398: ⚠️ Potential issue: Ensure all storage components are properly closed in `StopDB()`

Currently, the `StopDB` method only closes `b.storages.Storage`. If `b.storages.Registers` or other storages require closing, ensure they are also properly closed to prevent resource leaks:

```diff
 func (b *Bootstrap) StopDB() {
 	if b.storages == nil || b.storages.Storage == nil {
 		return
 	}
 	err := b.storages.Storage.Close()
 	if err != nil {
 		b.logger.Err(err).Msg("PebbleDB graceful shutdown failed")
 	}
+	if b.storages.Registers != nil {
+		err := b.storages.Registers.Close()
+		if err != nil {
+			b.logger.Err(err).Msg("Registers storage graceful shutdown failed")
+		}
+	}
 }
```
services/ingestion/event_subscriber.go (3)
40-42: ⚠️ Potential issue: Potential data races on `recovery` and `recoveredEvents` fields

The `recovery` flag and `recoveredEvents` slice are accessed and modified across multiple goroutines without synchronization. This can lead to data races and unpredictable behavior. To ensure thread safety, consider protecting these fields with a mutex or another synchronization mechanism. For example:

```go
type RPCEventSubscriber struct {
	// ...
	recoveryMu      sync.Mutex
	recovery        bool
	recoveredEvents []flow.Event
}

// When accessing or modifying `recovery` and `recoveredEvents`, use the mutex:
func (r *RPCEventSubscriber) someMethod() {
	r.recoveryMu.Lock()
	defer r.recoveryMu.Unlock()
	// Access or modify r.recovery and r.recoveredEvents
}
```

Also applies to: 165-170
282-284: ⚠️ Potential issue: Handle mismatched lengths of transaction and block events

The code assumes that the lengths of the `transactions` and `blocks` slices are equal. If this assumption fails due to missing events or data inconsistencies, it could cause the backfill process to fail. Consider implementing logic to handle cases where the lengths differ — for example, aligning the events based on their block heights, or handling missing events gracefully to ensure robustness.
431-439: 🛠️ Refactor suggestion: Assumption of a single recovered event may not hold

In `fetchMissingData`, the code expects exactly one recovered event per height. However, there might be multiple events, or none at all, which could lead to errors. Modify the logic to handle scenarios where the number of recovered events is not exactly one: instead of enforcing this assumption, process all retrieved events for the given height appropriately.
tests/web3js/debug_traces_test.js (2)
28-31: ⚠️ Potential issue: Declare variables `response` and `res` to prevent unintended globals

The variables `response` and `res` are used without declaration in multiple places (e.g. lines 28, 47, 85, 137, 151, 190, 237). This can create unintended global variables, which is bad practice and may cause unexpected behavior. Declare these variables using `let` or `const` within their appropriate scopes:

```diff
+let response = await helpers.callRPCMethod(
     'debug_traceTransaction',
     [receipt.transactionHash, callTracer]
 )

+let res = await helpers.signAndSend({
     from: conf.eoa.address,
     to: contractAddress,
     data: updateData,
     value: '0',
     gasPrice: conf.minGasPrice,
 })
```

Also applies to: 47-50, 85-88, 137-140, 151-154, 190-193, 237-245
41-42: ⚠️ Potential issue: Fix potential type issues with `assert.lengthOf` and BigInt literals

The `assert.lengthOf` function expects the length argument to be a `Number`, but `9856n` and `9806n` are `BigInt` literals, which may cause type errors. Remove the `n` suffix (or convert the `BigInt` values to `Number`):

```diff
-assert.lengthOf(txTrace.input, 9856n)
+assert.lengthOf(txTrace.input, 9856)

-assert.lengthOf(txTrace.output, 9806n)
+assert.lengthOf(txTrace.output, 9806)
```
services/requester/remote_cadence_arch.go (1)
29-34: ⚠️ Potential issue: Address concurrent access to the `cachedCalls` map

The `cachedCalls` map is accessed and modified without synchronization. If `RemoteCadenceArch` instances are used concurrently by multiple goroutines, this could lead to race conditions and undefined behavior. Consider using a thread-safe alternative like `sync.Map`, or add a mutex to synchronize access:

```diff
+import "sync"

 type RemoteCadenceArch struct {
 	blockHeight uint64
 	client      *CrossSporkClient
 	chainID     flow.ChainID
 	cachedCalls map[string]evmTypes.Data
+	cacheMutex  sync.RWMutex
 }
```

Update the `Run` method to synchronize reads:

```diff
 func (rca *RemoteCadenceArch) Run(input []byte) ([]byte, error) {
 	key := hex.EncodeToString(crypto.Keccak256(input))

+	rca.cacheMutex.RLock()
 	if result, ok := rca.cachedCalls[key]; ok {
+		rca.cacheMutex.RUnlock()
 		return result, nil
 	}
+	rca.cacheMutex.RUnlock()

 	evmResult, err := rca.runCall(input)
 	if err != nil {
 		return nil, err
 	}
 	return evmResult.ReturnedData, nil
 }
```

And in the `runCall` method, when writing to the cache:

```diff
 	key := hex.EncodeToString(crypto.Keccak256(input))
+	rca.cacheMutex.Lock()
 	rca.cachedCalls[key] = evmResult.ReturnedData
+	rca.cacheMutex.Unlock()
 	return evmResult, nil
```

Committable suggestion skipped: line range outside the PR's diff.
models/events_test.go (1)
345-346: ⚠️ Potential issue: Incorrect function call to `NewSingleBlockEvents` in `TestNewMultiBlockEvents`

In the `TestNewMultiBlockEvents` function, the call to `NewSingleBlockEvents` is incorrect. It should be `NewMultiBlockEvents` to match the test's purpose and ensure correct functionality:

```diff
-evmEvents := NewSingleBlockEvents(blockEvents)
+evmEvents := NewMultiBlockEvents(blockEvents)
```

📝 Committable suggestion

```go
evmEvents := NewMultiBlockEvents(blockEvents)
require.Error(t, evmEvents.Err)
```
tests/web3js/debug_util_test.js (1)
5-15: 🛠️ Refactor suggestion: Enhance test robustness with additional assertions and error handling

While the basic happy path is tested, the test could be more comprehensive. It is missing error handling for RPC failures, validation that the height is a non-negative number, and test cases for invalid block identifiers. Consider enhancing the test with this implementation:

```js
it('should retrieve flow height', async () => {
    // Happy path
    let response = await helpers.callRPCMethod(
        'debug_flowHeightByBlock',
        ['latest']
    )
    assert.equal(response.status, 200)
    assert.isDefined(response.body)
    assert.isDefined(response.body.result)

    let height = response.body.result
    assert.isNumber(height)
    assert.isAtLeast(height, 0, 'Height should be non-negative')
    assert.equal(height, conf.startBlockHeight)
})

it('should handle invalid block identifier', async () => {
    let response = await helpers.callRPCMethod(
        'debug_flowHeightByBlock',
        ['invalid_block']
    )
    assert.equal(response.status, 400)
    assert.isDefined(response.body.error)
})
```
services/requester/utils.go (2)
11-12:
⚠️ Potential issue
Function should validate input and return error
The function should validate the input script and return an error if it's invalid.
Consider updating the signature and adding validation:
```diff
-func replaceAddresses(script []byte, chainID flow.ChainID) []byte {
+func replaceAddresses(script []byte, chainID flow.ChainID) ([]byte, error) {
+	if len(script) == 0 {
+		return nil, fmt.Errorf("empty script")
+	}
```

Committable suggestion skipped: line range outside the PR's diff.
23-28: 🛠️ Refactor suggestion
Improve string replacement safety and efficiency
The current implementation has several potential issues:
- String replacement could match partial words
- Multiple string conversions could be inefficient
- No validation of successful replacements
Consider using a more robust approach:
```diff
-	for _, contract := range contracts {
-		s = strings.ReplaceAll(s,
-			fmt.Sprintf("import %s", contract.Name),
-			fmt.Sprintf("import %s from %s", contract.Name, contract.Address.HexWithPrefix()),
-		)
-	}
+	for _, contract := range contracts {
+		oldImport := fmt.Sprintf("import %s", contract.Name)
+		newImport := fmt.Sprintf("import %s from %s", contract.Name, contract.Address.HexWithPrefix())
+
+		// Use regex to ensure we match complete import statements
+		pattern := fmt.Sprintf(`\b%s\b`, regexp.QuoteMeta(oldImport))
+		re := regexp.MustCompile(pattern)
+
+		if !re.MatchString(s) {
+			continue
+		}
+
+		s = re.ReplaceAllString(s, newImport)
+	}
```
services/evm/extract_test.go (2)
16-27:
⚠️ Potential issue
Critical: Test function needs restructuring for better maintainability and portability.
Several issues need to be addressed:
- The function doesn't follow Go test naming convention (should be `TestStateDiff`)
- Hardcoded paths (`/var/flow52/...`) make the test environment-dependent
- Magic numbers (17724990, 228901661) lack context
- The test appears to be an integration test but isn't marked as such
Consider refactoring like this:
```diff
-func StateDiff(t *testing.T) {
+func TestStateDiff(t *testing.T) {
+	t.Skip("Integration test - requires specific environment setup")
+
+	// Test configuration
+	const (
+		testnetHeight = uint64(17724990)
+		checkpointID  = "228901661"
+	)
+
+	dbPath := t.TempDir()         // or use environment variable
+	checkpointPath := t.TempDir() // or use environment variable
+
-	state1 := extractEVMState(t, flowGo.Testnet, "/var/flow52/evm/data/db", uint64(17724990))
-	state2 := evmStateFromCheckpointExtract(t, "/var/flow52/evm-state-from-checkpoint-228901661")
+	state1 := extractEVMState(t, flowGo.Testnet, dbPath, testnetHeight)
+	state2 := evmStateFromCheckpointExtract(t, checkpointPath)
```

Committable suggestion skipped: line range outside the PR's diff.
17-18: 💡 Codebase verification
Test file uses hardcoded paths that should be configurable
The paths `/var/flow52/evm/data/db` and `/var/flow52/evm-state-from-checkpoint-228901661` are hardcoded in a test file. While they don't expose sensitive data (they are only used to load test state data), hardcoding system paths in tests is a bad practice as it:
- Makes tests non-portable across different environments
- Could potentially access unintended system locations in different environments
Recommendations:
- Use test fixtures or temporary directories for test data
- Make paths configurable through environment variables or test configuration
- Consider using relative paths within the test directory structure
🔗 Analysis chain
Verify security implications of filesystem access.
The test reads from system paths that might contain sensitive data. We should verify that these paths don't expose production data or credentials.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Check if the hardcoded paths contain sensitive patterns
# Look for potential credential files or sensitive data
rg -g '!{.git,node_modules}' -i '(password|secret|key|credential|token)' "/var/flow52"

# Check for file permissions that might expose sensitive data
ls -la "/var/flow52/evm/data/db" "/var/flow52/evm-state-from-checkpoint-228901661" 2>/dev/null
```

Length of output: 219
Script:
```shell
#!/bin/bash
# Let's analyze the test file and its context to understand the nature of these paths

# First, check the test file content
cat services/evm/extract_test.go

# Look for any configuration or test data files
fd -e json -e yaml -e yml -e toml . services/evm/

# Check if these paths are used elsewhere in the codebase
rg -l "/var/flow52"
```

Length of output: 1340
storage/pebble/keys.go (1)
63-80: 🛠️ Refactor suggestion
Add safety checks and documentation for the MVCC comparer
The MVCC comparer implementation needs additional safety measures and documentation:
- For register keys, there's no validation that the key length is at least 8 bytes before attempting to remove them
- The purpose of removing the last 8 bytes for register keys should be documented
Consider applying these improvements:
```diff
 func NewMVCCComparer() *pebble.Comparer {
 	comparer := *pebble.DefaultComparer
 	comparer.Split = func(a []byte) int {
 		if len(a) == 0 {
 			return 0
 		}
 		if a[0] == registerKeyMarker {
+			// Register keys are structured as: marker + key + version(8 bytes)
+			// Split at version boundary to enable MVCC operations
+			if len(a) < 9 { // marker(1) + minimum version(8)
+				return len(a)
+			}
 			return len(a) - 8
 		}
 		return len(a)
 	}
 	comparer.Name = "flow.MVCCComparer"
 	return &comparer
 }
```
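The marker + key + 8-byte version layout described above can be exercised in isolation. This is a standalone sketch of the split behavior, not the gateway's actual code: the `registerKeyMarker` value and the big-endian version encoding are assumptions made for illustration.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

const registerKeyMarker = byte(0x01) // assumed marker value for illustration

// encodeRegisterKey builds marker + key + 8-byte big-endian version.
func encodeRegisterKey(key []byte, version uint64) []byte {
	out := append([]byte{registerKeyMarker}, key...)
	return binary.BigEndian.AppendUint64(out, version)
}

// split mirrors the comparer.Split logic above: register keys split
// before the trailing 8-byte version, so every version of the same
// register shares one user-key prefix.
func split(a []byte) int {
	if len(a) == 0 {
		return 0
	}
	if a[0] == registerKeyMarker {
		if len(a) < 9 { // marker(1) + minimum version(8)
			return len(a)
		}
		return len(a) - 8
	}
	return len(a)
}

func main() {
	k := encodeRegisterKey([]byte("balance"), 42)
	fmt.Println(len(k), split(k)) // prints 16 8: the prefix excludes the version suffix
}
```

Because the split point excludes the version, Pebble treats all versions of a register as one logical key, which is what makes the MVCC-style lookups possible.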
tests/web3js/eth_rate_limit_test.js (1)
13-29: 🛠️ Refactor suggestion
Consider enhancing test robustness and realism
The current implementation has several areas for improvement:
- Sequential requests might not effectively simulate real-world conditions
- Error handling could be more thorough
- Memory usage might be a concern with increased request count
Consider these improvements:
```diff
 // this should be synced with the value on server config
 let requestLimit = 500
 let requestsMade = 0
 let requestsFailed = 0
 let requests = 1000

+// Process requests in smaller batches to manage memory
+const BATCH_SIZE = 50
+const processBatch = async (start, size) => {
+    const promises = Array(size).fill().map(async (_, i) => {
+        try {
+            await ws.eth.getBlockNumber()
+            return { success: true }
+        } catch (e) {
+            assert.instanceOf(e, Error)
+            assert.equal(e.innerError.message, 'limit of requests per second reached')
+            return { success: false }
+        }
+    })
+    const results = await Promise.all(promises)
+    requestsMade += results.filter(r => r.success).length
+    requestsFailed += results.filter(r => !r.success).length
+}
+
+for (let i = 0; i < requests; i += BATCH_SIZE) {
+    const batchSize = Math.min(BATCH_SIZE, requests - i)
+    await processBatch(i, batchSize)
+}
```

📝 Committable suggestion
```js
let requestLimit = 500
let requestsMade = 0
let requestsFailed = 0
let requests = 1000

// Process requests in smaller batches to manage memory
const BATCH_SIZE = 50
const processBatch = async (start, size) => {
    const promises = Array(size).fill().map(async (_, i) => {
        try {
            await ws.eth.getBlockNumber()
            return { success: true }
        } catch (e) {
            assert.instanceOf(e, Error)
            assert.equal(e.innerError.message, 'limit of requests per second reached')
            return { success: false }
        }
    })
    const results = await Promise.all(promises)
    requestsMade += results.filter(r => r.success).length
    requestsFailed += results.filter(r => !r.success).length
}

for (let i = 0; i < requests; i += BATCH_SIZE) {
    const batchSize = Math.min(BATCH_SIZE, requests - i)
    await processBatch(i, batchSize)
}

assert.equal(requestsMade, requestLimit, 'more requests made than the limit')
assert.equal(requestsFailed, requests - requestLimit, 'failed requests don\'t match expected value')
```
cmd/util/main.go (2)
35-35: 🛠️ Refactor suggestion
Consider making chainID configurable
The chainID is hardcoded to Testnet which limits the utility's usage. Consider adding it as a command-line flag to support different networks.
```diff
-chainID := flowGo.Testnet
+var network string
+flag.StringVar(&network, "network", "testnet", "Network to use (mainnet, testnet)")
+
+var chainID flowGo.ChainID
+switch network {
+case "mainnet":
+	chainID = flowGo.Mainnet
+case "testnet":
+	chainID = flowGo.Testnet
+default:
+	log.Error().Msgf("unsupported network: %s", network)
+	flag.Usage()
+	os.Exit(1)
+}
```
29-33: 🛠️ Refactor suggestion
Add directory validation checks
The code should validate that the output directory exists and is writable, and that the register store directory exists and is readable before proceeding.
```diff
 if height == 0 || outputDir == "" || registerStoreDir == "" {
 	log.Error().Msg("All flags (height, output, register-store) must be provided")
 	flag.Usage()
 	os.Exit(1)
 }
+
+// Validate register store directory
+if _, err := os.Stat(registerStoreDir); os.IsNotExist(err) {
+	log.Error().Msgf("Register store directory does not exist: %s", registerStoreDir)
+	os.Exit(1)
+}
+
+// Create output directory if it doesn't exist
+if err := os.MkdirAll(outputDir, 0755); err != nil {
+	log.Error().Err(err).Msgf("Failed to create output directory: %s", outputDir)
+	os.Exit(1)
+}
```
storage/mocks/mocks.go (1)
26-26:
⚠️ Potential issue
Fix timestamp implementation
Using `time.Now().Second()` for block timestamp is problematic because:
- It only provides values 0-59, which is not representative of real block timestamps
- It can cause test flakiness due to timing dependencies
Consider using a deterministic timestamp based on block height:
```diff
-Timestamp: uint64(time.Now().Second()),
+Timestamp: uint64(1600000000 + height), // Base timestamp + height for deterministic values
```

Committable suggestion skipped: line range outside the PR's diff.
tests/web3js/eth_get_storage_at_test.js (1)
39-39:
⚠️ Potential issue
Add missing 'let' or 'const' declaration.
The `newValue` variable is used without proper declaration.

```diff
-newValue = 100
+const newValue = 100
```
tests/web3js/eth_revert_reason_test.js (2)
82-93: 🛠️ Refactor suggestion
Remove duplicated polling logic
This polling implementation is identical to the previous one. Once the helper function suggested above is implemented, this segment can be simplified.
Replace with:
```diff
-    rcp = null
-    while (rcp == null) {
-        rcp = await helpers.callRPCMethod(
-            'eth_getTransactionReceipt',
-            [txHash]
-        )
-        if (rcp.body.result == null) {
-            rcp = null
-        }
-    }
+    rcp = await waitForReceipt(txHash)
```

Committable suggestion skipped: line range outside the PR's diff.
45-56: 🛠️ Refactor suggestion
Add timeout and delay to the polling mechanism
The current polling implementation has several potential issues:
- No timeout mechanism could lead to infinite loops
- Continuous polling without delay could overwhelm the node
- The polling logic is duplicated later in the file
Consider refactoring to a helper function with timeout and delay:
```diff
-    let rcp = null
-    while (rcp == null) {
-        rcp = await helpers.callRPCMethod(
-            'eth_getTransactionReceipt',
-            [txHash]
-        )
-        if (rcp.body.result == null) {
-            rcp = null
-        }
-    }
+    const waitForReceipt = async (txHash, timeout = 60000) => {
+        const startTime = Date.now()
+        while (Date.now() - startTime < timeout) {
+            const rcp = await helpers.callRPCMethod(
+                'eth_getTransactionReceipt',
+                [txHash]
+            )
+            if (rcp.body.result != null) {
+                return rcp
+            }
+            await new Promise(resolve => setTimeout(resolve, 1000))
+        }
+        throw new Error('Transaction receipt not found: timeout')
+    }
+
+    let rcp = await waitForReceipt(txHash)
```
storage/pebble/register_storage_test.go (1)
69-69:
⚠️ Potential issue
Remove unnecessary error check.
This line contains a redundant error check with no preceding error-generating operation.
```diff
-	require.NoError(t, err)
```
tests/web3js/verify_cadence_arch_calls_test.js (3)
33-50: 🛠️ Refactor suggestion
Reduce code duplication by extracting common test patterns
The test structure for contract interactions is repeated across different methods. Consider extracting this pattern into helper functions.
Here's a suggested helper function:
```js
async function verifyContractCall(contract, methodName, params = [], expectedLength = 66) {
    const data = contract.methods[methodName](...params).encodeABI()

    // Estimate gas
    const estimatedGas = await web3.eth.estimateGas({
        from: conf.eoa.address,
        to: contract.options.address,
        data: data,
    })

    // Submit transaction
    const txResult = await helpers.signAndSend({
        from: conf.eoa.address,
        to: contract.options.address,
        data: data,
        value: '0',
        gasPrice: conf.minGasPrice,
        gas: estimatedGas,
    })
    assert.equal(txResult.receipt.status, conf.successStatus)

    // Verify call result
    const callResult = await web3.eth.call({ to: contract.options.address, data: data }, 'latest')
    assert.notEqual(callResult, ZERO_HASH, `${methodName} response should not be zero`)
    assert.lengthOf(callResult, expectedLength, `Invalid ${methodName} response length`)

    return callResult
}
```

Then use it like this:
```diff
-let revertibleRandomData = deployed.contract.methods.verifyArchCallToRevertibleRandom().encodeABI()
-res = await helpers.signAndSend({
-    from: conf.eoa.address,
-    to: contractAddress,
-    data: revertibleRandomData,
-    value: '0',
-    gasPrice: conf.minGasPrice,
-})
-assert.equal(res.receipt.status, conf.successStatus)
-
-res = await web3.eth.call({ to: contractAddress, data: revertibleRandomData }, 'latest')
-assert.notEqual(
-    res,
-    '0x0000000000000000000000000000000000000000000000000000000000000000'
-)
-assert.lengthOf(res, 66)
+res = await verifyContractCall(deployed.contract, 'verifyArchCallToRevertibleRandom')
```
14-32: 🛠️ Refactor suggestion
Add gas estimation and improve test readability
A few suggestions to enhance this test section:
- Add gas estimation before transaction submission
- Extract magic numbers into named constants
Consider these improvements:
```diff
+const RANDOM_SOURCE_HEIGHT = 2
+const EXPECTED_RANDOM_SOURCE_LENGTH = 66
+const ZERO_HASH = '0x' + '0'.repeat(64)

-let getRandomSourceData = deployed.contract.methods.verifyArchCallToRandomSource(2).encodeABI()
+let getRandomSourceData = deployed.contract.methods.verifyArchCallToRandomSource(RANDOM_SOURCE_HEIGHT).encodeABI()
+// Estimate gas before sending transaction
+const estimatedGas = await web3.eth.estimateGas({
+    from: conf.eoa.address,
+    to: contractAddress,
+    data: getRandomSourceData,
+})
 res = await helpers.signAndSend({
     from: conf.eoa.address,
     to: contractAddress,
     data: getRandomSourceData,
     value: '0',
     gasPrice: conf.minGasPrice,
+    gas: estimatedGas,
 })
-assert.notEqual(
-    res,
-    '0x0000000000000000000000000000000000000000000000000000000000000000'
-)
-assert.lengthOf(res, 66)
+assert.notEqual(res, ZERO_HASH, 'Response should not be zero')
+assert.lengthOf(res, EXPECTED_RANDOM_SOURCE_LENGTH, 'Invalid random source length')
```
70-91:
⚠️ Potential issue
Add error handling and document test data
The test uses hardcoded values and lacks error handling for transaction retrieval.
Consider these improvements:
```diff
+// Test data for COA ownership verification
+const TEST_HASH = '0x1bacdb569847f31ade07e83d6bb7cefba2b9290b35d5c2964663215e73519cff'
+const TEST_PROOF = 'f853c18088f8d6e0586b0a20c78365766df842b840b90448f4591df2639873be2914c5560149318b7e2fcf160f7bb8ed13cfd97be2f54e6889606f18e50b2c37308386f840e03a9fff915f57b2164cba27f0206a95'

-let tx = await web3.eth.getTransactionFromBlock(conf.startBlockHeight, 1)
+// Retrieve transaction with error handling
+let tx
+try {
+    tx = await web3.eth.getTransactionFromBlock(conf.startBlockHeight, 1)
+    assert.ok(tx, 'Transaction should exist in the block')
+} catch (error) {
+    throw new Error(`Failed to retrieve transaction: ${error.message}`)
+}

 let verifyCOAOwnershipProofData = deployed.contract.methods.verifyArchCallToVerifyCOAOwnershipProof(
     tx.to,
-    '0x1bacdb569847f31ade07e83d6bb7cefba2b9290b35d5c2964663215e73519cff',
-    web3.utils.hexToBytes('f853c18088f8d6e0586b0a20c78365766df842b840b90448f4591df2639873be2914c5560149318b7e2fcf160f7bb8ed13cfd97be2f54e6889606f18e50b2c37308386f840e03a9fff915f57b2164cba27f0206a95')
+    TEST_HASH,
+    web3.utils.hexToBytes(TEST_PROOF)
 ).encodeABI()
```
storage/register_delta_test.go (2)
215-224: 🛠️ Refactor suggestion
Add database cleanup in runDB function.
The `runDB` function should ensure proper cleanup of database resources. Add cleanup:
```diff
 func runDB(name string, t *testing.T, f func(t *testing.T, db *pebbleStorage.Storage)) {
 	dir := t.TempDir()
 	db, err := pebbleStorage.New(dir, zerolog.New(zerolog.NewTestWriter(t)))
 	require.NoError(t, err)
+	t.Cleanup(func() {
+		require.NoError(t, db.Close())
+	})
 	t.Run(name, func(t *testing.T) {
 		f(t, db)
 	})
 }
```
226-244: 🛠️ Refactor suggestion
Improve error handling in commit function.
The commit function has a few areas for improvement:
- Uses hardcoded height 0 in Store call
- Inconsistent error handling between Store and Commit operations
Consider this improvement:
```diff
 func commit(
 	t *testing.T,
 	db *pebbleStorage.Storage,
 	d *storage.RegisterDelta,
 	r *pebbleStorage.RegisterStorage,
+	height uint64,
 ) error {
 	batch := db.NewBatch()
+	defer batch.Close()

-	err := r.Store(d.GetUpdates(), 0, batch)
+	err := r.Store(d.GetUpdates(), height, batch)
 	if err != nil {
 		return err
 	}

-	err = batch.Commit(pebble.Sync)
-	require.NoError(t, err)
-	return nil
+	return batch.Commit(pebble.Sync)
 }
```
storage/pebble/blocks.go (1)
202-222: 🛠️ Refactor suggestion
Add parameter validation and improve error handling
The InitHeights method has several areas that could be improved:
- The batch parameter should be validated to prevent nil pointer dereference
- The cadenceHeight and cadenceID parameters should be validated
- The error message on line 213 has redundant formatting
Consider applying these improvements:
```diff
 func (b *Blocks) InitHeights(cadenceHeight uint64, cadenceID flow.Identifier, batch *pebble.Batch) error {
+	if batch == nil {
+		return fmt.Errorf("batch cannot be nil")
+	}
+	if cadenceHeight == 0 {
+		return fmt.Errorf("cadenceHeight cannot be zero")
+	}
+	if cadenceID == flow.EmptyID {
+		return fmt.Errorf("cadenceID cannot be empty")
+	}
 	// ... rest of the method ...
-	if err := b.store.set(latestEVMHeightKey, nil, uint64Bytes(0), batch); err != nil {
-		return fmt.Errorf("failed to init latest EVM height at: %d, with: %w", 0, err)
+	if err := b.store.set(latestEVMHeightKey, nil, uint64Bytes(0), batch); err != nil {
+		return fmt.Errorf("failed to init latest EVM height with: %w", err)
 	}
 	// ... rest of the method ...
 }
```

📝 Committable suggestion
```go
func (b *Blocks) InitHeights(cadenceHeight uint64, cadenceID flow.Identifier, batch *pebble.Batch) error {
	if batch == nil {
		return fmt.Errorf("batch cannot be nil")
	}
	if cadenceHeight == 0 {
		return fmt.Errorf("cadenceHeight cannot be zero")
	}
	if cadenceID == flow.EmptyID {
		return fmt.Errorf("cadenceID cannot be empty")
	}

	// sanity check, make sure we don't have any heights stored, disable overwriting the database
	_, err := b.LatestEVMHeight()
	if !errors.Is(err, errs.ErrStorageNotInitialized) {
		return fmt.Errorf("can't init the database that already has data stored")
	}

	if err := b.store.set(latestCadenceHeightKey, nil, uint64Bytes(cadenceHeight), batch); err != nil {
		return fmt.Errorf("failed to init latest Cadence height at: %d, with: %w", cadenceHeight, err)
	}

	if err := b.store.set(latestEVMHeightKey, nil, uint64Bytes(0), batch); err != nil {
		return fmt.Errorf("failed to init latest EVM height with: %w", err)
	}

	// we store genesis block because it isn't emitted over the network
	genesisBlock := models.GenesisBlock(b.chainID)
	if err := b.Store(cadenceHeight, cadenceID, genesisBlock, batch); err != nil {
		return fmt.Errorf("failed to store genesis block at Cadence height: %d, with: %w", cadenceHeight, err)
	}
```
services/replayer/blocks_provider_test.go (1)
58-70:
⚠️ Potential issue
Fix duplicate test name.
The test case name "with new block non-sequential to latest block" is used twice, but the second test (lines 58-70) actually tests sequential blocks. Please update the name to accurately reflect the test scenario.
```diff
-	t.Run("with new block non-sequential to latest block", func(t *testing.T) {
+	t.Run("with new block sequential to latest block", func(t *testing.T) {
```

📝 Committable suggestion
```go
	t.Run("with new block sequential to latest block", func(t *testing.T) {
		_, blocks := setupBlocksDB(t)
		blocksProvider := NewBlocksProvider(blocks, flowGo.Emulator, nil)

		block1 := mocks.NewBlock(10)
		err := blocksProvider.OnBlockReceived(block1)
		require.NoError(t, err)

		block2 := mocks.NewBlock(11)
		err = blocksProvider.OnBlockReceived(block2)
		require.NoError(t, err)
	})
}
```
This PR implements basic EVM state extraction.
It depends on this PR being merged first.
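The extraction test above compares two independently produced EVM states (one extracted from the register store, one from a checkpoint). A minimal sketch of such a comparison, assuming states are flattened to `map[string][]byte` — the helper name `stateDiff` and the flattened representation are assumptions for illustration, not the gateway's actual API:

```go
package main

import (
	"bytes"
	"fmt"
)

// stateDiff reports keys whose values differ between two flattened
// EVM states, plus keys present in only one of them.
func stateDiff(a, b map[string][]byte) []string {
	var diffs []string
	for k, va := range a {
		vb, ok := b[k]
		if !ok {
			diffs = append(diffs, "only in A: "+k)
		} else if !bytes.Equal(va, vb) {
			diffs = append(diffs, "mismatch: "+k)
		}
	}
	for k := range b {
		if _, ok := a[k]; !ok {
			diffs = append(diffs, "only in B: "+k)
		}
	}
	return diffs
}

func main() {
	a := map[string][]byte{"acct/1": {0x01}, "acct/2": {0x02}}
	b := map[string][]byte{"acct/1": {0x01}, "acct/3": {0x03}}
	fmt.Println(stateDiff(a, b))
}
```

An empty diff between the two extractions is what gives confidence that both paths reconstruct the same state.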
Summary by CodeRabbit
Release Notes
New Features
- `debug_flowHeightByBlock` for retrieving flow block height.
- `DebugAPI` for improved transaction and block tracing capabilities.
- `RegisterStorage` for managing register values with new snapshot functionality.
- `BlockExecutor` for executing transactions in a blockchain context.

Bug Fixes
Documentation
Tests
Chores