
Conversation

@julienrbrt
Member

Extract some logic from #2797.
These refactors were done to ease the force-inclusion integration, but they can be extracted and merged sooner.

@github-actions
Contributor

github-actions bot commented Nov 21, 2025

The latest Buf updates on your PR. Results from workflow CI / buf-check (pull_request).

Build: ✅ passed · Format: ⏩ skipped · Lint: ✅ passed · Breaking: ✅ passed · Updated (UTC): Nov 21, 2025, 12:16 PM

@claude
Contributor

claude bot commented Nov 21, 2025

Claude finished @julienrbrt's task


Comprehensive Code Review

I've completed a thorough review of this refactoring PR. This is a well-structured extraction that improves code organization and separation of concerns. Below are my findings organized by priority.


🟢 Positive Aspects

1. Excellent Architectural Improvements

  • DA Client Abstraction: The new block/internal/da/client.go provides a clean, reusable wrapper around the core DA interface with pre-calculated namespace bytes and consistent timeout handling. This is a significant improvement.
  • Cache Interface Split: Separating CacheManager and PendingManager concerns (lines 44-93 in manager.go) provides better API surface control and clearer responsibilities.
  • Comprehensive Testing: The new client_test.go provides excellent coverage with 458 lines of tests covering success cases, error conditions, and timeout scenarios.
  • Clean Migration: Successfully moved DA logic from types/da.go (212 lines) into the new client without breaking existing functionality.

2. Code Quality

  • Good use of Go idioms (context handling, error wrapping with %w)
  • Appropriate logging levels (Debug for expected cases, Error for failures)
  • Clear package documentation

🔴 Critical Issues

1. Unsafe Error Handling in DA Client (block/internal/da/client.go:159-180)

The Retrieve method uses strings.Contains for error checking instead of errors.Is:

if strings.Contains(err.Error(), coreda.ErrBlobNotFound.Error()) {
    // ...
}
if strings.Contains(err.Error(), coreda.ErrHeightFromFuture.Error()) {
    // ...
}

Issues:

  • This is fragile and won't work correctly with wrapped errors
  • The same file uses errors.Is correctly in the Submit method (lines 75-97)
  • Inconsistent with Go best practices and the codebase's own patterns
  • The log message at line 171 is a copy-paste error: Msg("Blobs not found at height") should say Msg("Height is from the future")

Recommendation:

if errors.Is(err, coreda.ErrBlobNotFound) {
    c.logger.Debug().Uint64("height", height).Msg("Blobs not found at height")
    return coreda.ResultRetrieve{
        BaseResult: coreda.BaseResult{
            Code:      coreda.StatusNotFound,
            Message:   coreda.ErrBlobNotFound.Error(),
            Height:    height,
            Timestamp: time.Now(),
        },
    }
}
if errors.Is(err, coreda.ErrHeightFromFuture) {
    c.logger.Debug().Uint64("height", height).Msg("Height is from the future")
    return coreda.ResultRetrieve{
        BaseResult: coreda.BaseResult{
            Code:      coreda.StatusHeightFromFuture,
            Message:   coreda.ErrHeightFromFuture.Error(),
            Height:    height,
            Timestamp: time.Now(),
        },
    }
}

Note: The PR author indicated this code was "simply moved" and wasn't touched. However, this represents a good opportunity to fix the issue during this refactoring, especially since it's being extracted into a new, more visible component.


🟡 Medium Priority Issues

2. Potential Nil Pointer Risk in Cache Manager (block/internal/cache/manager.go:110-161)

The NewCacheManager and NewPendingManager functions return partially initialized *implementation structs:

  • NewCacheManager (lines 129-161) creates an implementation with nil values for pendingHeaders and pendingData
  • NewPendingManager (lines 110-127) creates an implementation with nil values for cache fields

Risk:
While the returned types are constrained to the CacheManager and PendingManager interfaces, Go allows a type assertion back to the concrete type:

cm := NewCacheManager(...) // Returns CacheManager interface
impl := cm.(*implementation) // Type assertion succeeds
impl.GetPendingHeaders() // Nil pointer dereference!

Current Safety: The code appears safe in practice because:

  1. Callers use the interface types appropriately
  2. The full Manager interface is constructed via NewManager with all fields

Recommendation for Future Safety:
Consider separate implementation structs to prevent accidental misuse:

type cacheManager struct {
    headerCache, dataCache, txCache *Cache[...]
    // ... cache-specific fields only
}

type pendingManager struct {
    pendingHeaders *PendingHeaders
    pendingData    *PendingData
    // ... pending-specific fields only
}

type fullManager struct {
    *cacheManager
    *pendingManager
}

This would make incorrect usage a compile-time error rather than a runtime panic. However, this is a non-blocking suggestion for future hardening.


3. Test Helper Error Handling (block/internal/syncing/da_retriever_test.go:66-68)

The test helper now silently ignores errors:

payload, _ := d.MarshalBinary()
sig, err := signer.Sign(payload)
require.NoError(t, err)

Issue: The MarshalBinary error is ignored, which could hide test setup issues.

Recommendation:

payload, err := d.MarshalBinary()
require.NoError(t, err)
sig, err := signer.Sign(payload)
require.NoError(t, err)

This ensures test failures are immediately visible if marshaling fails.


🟡 Code Quality Observations

4. Context Timeout Management (block/internal/da/client.go:154-156, 211-213)

The Retrieve method creates multiple contexts with deferred cancellations:

getIDsCtx, cancel := context.WithTimeout(ctx, c.defaultTimeout)
defer cancel()
// ... later in loop:
getBlobsCtx, cancel := context.WithTimeout(ctx, c.defaultTimeout)
batchBlobs, err := c.da.Get(getBlobsCtx, idsResult.IDs[i:end], namespace)
cancel()

Observation: This pattern is generally safe, but the loop-based timeout handling could lead to long total execution times for large batches (100 IDs per batch × timeout per batch).

Suggestion: Consider adding an overall operation timeout in addition to per-operation timeouts, or documenting the maximum expected duration:

// Maximum retrieval time = timeout × ceil(numIDs/100) batches

5. Magic Numbers

  • client.go:206: batchSize := 100 - Consider making this configurable or documenting why 100 is optimal
  • Multiple timeout values scattered across files - consider centralizing DA-related timeouts
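One conventional way to make the batch size configurable is a functional option with a documented default. This is a sketch only; `Client`, `NewClient`, and `WithBatchSize` are hypothetical names, not the actual DA client API.

```go
package main

import "fmt"

// Client is a hypothetical stand-in for the DA client under review.
type Client struct {
	batchSize int
}

// Option configures a Client at construction time.
type Option func(*Client)

// WithBatchSize overrides the default blob-retrieval batch size.
func WithBatchSize(n int) Option {
	return func(c *Client) {
		if n > 0 {
			c.batchSize = n
		}
	}
}

// NewClient applies options over a documented default of 100 IDs per batch.
func NewClient(opts ...Option) *Client {
	c := &Client{batchSize: 100}
	for _, o := range opts {
		o(c)
	}
	return c
}

func main() {
	fmt.Println(NewClient().batchSize)                  // 100
	fmt.Println(NewClient(WithBatchSize(50)).batchSize) // 50
}
```

This keeps the default in one place and self-documents why it exists, addressing the magic-number concern without changing existing call sites.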

📊 Test Coverage Analysis

The codecov report shows 80.35% patch coverage with 44 missing lines. Key gaps:

  1. block/internal/cache/manager.go: 33 missing lines (0% coverage in report)

    • This is likely the new constructor and interface split logic
    • Recommendation: Add unit tests for NewCacheManager and NewPendingManager to verify correct initialization
  2. block/internal/syncing/syncer.go: 6 missing lines (14.28% coverage)

    • May be error paths or edge cases in syncer integration
  3. block/internal/da/client.go: 4 missing lines (97.29% coverage)

    • Excellent coverage overall; minor gaps likely in edge cases

🔒 Security Considerations

Good Practices:

  • No hardcoded secrets or credentials
  • Proper context propagation for cancellation
  • Signature verification maintained in da_retriever.go:308-333
  • Namespace isolation preserved

Input Validation:

  • Header validation maintained (da_retriever.go:242-245)
  • Proposer address verification (da_retriever.go:299-306)
  • Empty data handling is safe

⚠️ Minor Concern:

  • The duplicate header/data detection at da_retriever.go:155-161, 167-174 logs but doesn't report metrics. Consider adding metrics for detecting malicious resubmissions.

⚡ Performance Considerations

Good Optimizations:

  • Namespace bytes pre-calculated in constructor (avoiding repeated conversions)
  • Batching of blob retrievals (100 per batch)
  • Efficient transient caching in daRetriever struct

Potential Improvement:
Consider parallel batch fetching instead of sequential:

// Current: Sequential batches
for i := 0; i < len(idsResult.IDs); i += batchSize {
    batchBlobs, err := c.da.Get(getBlobsCtx, idsResult.IDs[i:end], namespace)
    // ...
}

// Suggestion: Parallel batches using errgroup
g, ctx := errgroup.WithContext(ctx)
for i := 0; i < len(idsResult.IDs); i += batchSize {
    // Launch parallel fetches with rate limiting
}

This is a future optimization - not critical for this PR.
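For completeness, bounded parallel fetching can also be sketched with only the standard library, using a buffered channel as a semaphore. `fetchBatch` stands in for `c.da.Get` on one batch of IDs; all names here are illustrative, not the client's API.

```go
package main

import (
	"fmt"
	"sync"
)

// fetchBatch stands in for c.da.Get on one batch of IDs (hypothetical).
func fetchBatch(ids []int) []int { return ids }

func main() {
	ids := make([]int, 250)
	for i := range ids {
		ids[i] = i
	}
	const batchSize, maxParallel = 100, 4

	sem := make(chan struct{}, maxParallel) // bounds in-flight fetches
	results := make([][]int, (len(ids)+batchSize-1)/batchSize)
	var wg sync.WaitGroup

	for b, i := 0, 0; i < len(ids); b, i = b+1, i+batchSize {
		end := i + batchSize
		if end > len(ids) {
			end = len(ids)
		}
		wg.Add(1)
		sem <- struct{}{} // acquire a slot before launching
		go func(b int, batch []int) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot
			// Each goroutine writes its own slot, so result order stays stable.
			results[b] = fetchBatch(batch)
		}(b, ids[i:end])
	}
	wg.Wait()

	total := 0
	for _, r := range results {
		total += len(r)
	}
	fmt.Println(total) // 250
}
```

Indexing `results` by batch number preserves the sequential ordering the current loop guarantees, which matters if callers assume blobs come back in ID order.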


📝 Documentation

Good:

  • Package-level documentation (block/internal/da/client.go:1-3)
  • Clear function comments
  • Exported API is documented in block/public.go:34-50

Suggestions:

  1. Add godoc examples for NewDAClient in public.go
  2. Document the relationship between namespaces and how they're used
  3. Consider adding architecture diagrams for the cache split design

✅ Verification & Testing Recommendations

Before merging:

  1. ✅ Ensure CI passes (currently pending)
  2. ⚠️ Add tests for cache manager constructors to improve coverage
  3. ⚠️ Fix the error handling in client.go:159-180 using errors.Is
  4. ⚠️ Fix the test helper error handling in da_retriever_test.go:66
  5. ⚠️ Update log message at client.go:171 to reflect actual error

🎯 Summary

This is a high-quality refactoring that successfully extracts DA client logic and splits cache interfaces for better modularity. The changes are well-tested and maintain backward compatibility.

Key Strengths:

  • Clean architectural separation
  • Comprehensive test coverage (80%+)
  • No breaking changes to public APIs
  • Proper error handling in most places

Must Fix Before Merge:

  1. Error handling in Retrieve method (use errors.Is instead of strings.Contains)
  2. Incorrect log message for ErrHeightFromFuture

Should Fix:

  1. Test helper error handling
  2. Consider adding cache manager constructor tests

Nice to Have:

  1. Separate implementation structs for cache managers (future hardening)
  2. Parallel batch fetching optimization (future improvement)

Overall Assessment: Approve with minor fixes

The refactoring achieves its stated goal of extracting logic for easier integration. The identified issues are straightforward to fix and don't fundamentally impact the design.


@gemini-code-assist
Contributor

Summary of Changes

Hello @julienrbrt, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a DAClient to handle interactions with the Data Availability layer, and splits the cache interface for better separation of concerns. These changes aim to simplify DA handling and improve the modularity of the codebase, particularly for force inclusion integration.

Highlights

  • DA Client Creation: Introduces a DAClient to encapsulate interactions with the Data Availability layer, providing a consistent interface for submitting and retrieving data.
  • Cache Interface Splitting: Refactors the cache manager interface into CacheManager and PendingManager, separating concerns for managing cached data and pending operations.
  • Dependency Injection: Updates components to use the new DAClient and split cache interfaces, promoting loose coupling and testability.

// Start main processing loop
go s.processLoop()
s.wg.Add(1)
@julienrbrt (Member, Author):
reverted as per #2873 (comment)

@gemini-code-assist bot left a comment
Code Review

This pull request introduces a significant and beneficial refactoring by creating a dedicated DA client and splitting the cache manager interface. These changes improve the separation of concerns and make the codebase more modular and testable. The introduction of the da.Client encapsulates all interactions with the DA layer, and the split of cache.Manager into CacheManager and PendingManager clarifies responsibilities.

I've identified a few areas for improvement:

  • A potential for nil pointer panics in the cache package due to how partially initialized manager implementations are created.
  • Inconsistent error handling in the new da.Client.
  • Error handling in a test helper function could be made more robust.

Overall, this is a solid refactoring. Addressing the identified issues will further improve the code's robustness and safety.

Comment on lines +157 to +178
if strings.Contains(err.Error(), coreda.ErrBlobNotFound.Error()) {
    c.logger.Debug().Uint64("height", height).Msg("Blobs not found at height")
    return coreda.ResultRetrieve{
        BaseResult: coreda.BaseResult{
            Code:      coreda.StatusNotFound,
            Message:   coreda.ErrBlobNotFound.Error(),
            Height:    height,
            Timestamp: time.Now(),
        },
    }
}
if strings.Contains(err.Error(), coreda.ErrHeightFromFuture.Error()) {
    c.logger.Debug().Uint64("height", height).Msg("Blobs not found at height")
    return coreda.ResultRetrieve{
        BaseResult: coreda.BaseResult{
            Code:      coreda.StatusHeightFromFuture,
            Message:   coreda.ErrHeightFromFuture.Error(),
            Height:    height,
            Timestamp: time.Now(),
        },
    }
}

medium

Using strings.Contains for error checking is less robust than using errors.Is. The Submit method in this same file already uses errors.Is for handling specific errors, and it would be good to be consistent. Using errors.Is correctly handles wrapped errors and is the idiomatic way to check for specific error values in Go.

Additionally, the log message for ErrHeightFromFuture is a copy-paste from ErrBlobNotFound and can be misleading. It should be updated to reflect the actual error.

if errors.Is(err, coreda.ErrBlobNotFound) {
    c.logger.Debug().Uint64("height", height).Msg("Blobs not found at height")
    return coreda.ResultRetrieve{
        BaseResult: coreda.BaseResult{
            Code:      coreda.StatusNotFound,
            Message:   coreda.ErrBlobNotFound.Error(),
            Height:    height,
            Timestamp: time.Now(),
        },
    }
}
if errors.Is(err, coreda.ErrHeightFromFuture) {
    c.logger.Debug().Uint64("height", height).Msg("Height is from the future")
    return coreda.ResultRetrieve{
        BaseResult: coreda.BaseResult{
            Code:      coreda.StatusHeightFromFuture,
            Message:   coreda.ErrHeightFromFuture.Error(),
            Height:    height,
            Timestamp: time.Now(),
        },
    }
}

@julienrbrt (Member, Author):
this code path wasn't touched. simply moved.

@codecov

codecov bot commented Nov 21, 2025

Codecov Report

❌ Patch coverage is 80.35714% with 44 lines in your changes missing coverage. Please review.
✅ Project coverage is 64.73%. Comparing base (a465969) to head (5336661).
⚠️ Report is 1 commits behind head on main.

Files with missing lines Patch % Lines
block/internal/cache/manager.go 0.00% 33 Missing ⚠️
block/internal/syncing/syncer.go 14.28% 6 Missing ⚠️
block/internal/da/client.go 97.29% 4 Missing ⚠️
block/internal/syncing/da_retriever.go 94.73% 0 Missing and 1 partial ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2878      +/-   ##
==========================================
- Coverage   65.09%   64.73%   -0.36%     
==========================================
  Files          81       81              
  Lines        7268     7328      +60     
==========================================
+ Hits         4731     4744      +13     
- Misses       1995     2043      +48     
+ Partials      542      541       -1     
Flag Coverage Δ
combined 64.73% <80.35%> (-0.36%) ⬇️


@tac0turtle (Contributor) left a comment:
nice separation

@julienrbrt julienrbrt added this pull request to the merge queue Nov 21, 2025
Merged via the queue into main with commit 178b4fe Nov 21, 2025
26 of 28 checks passed
@julienrbrt julienrbrt deleted the julien/extract-fi branch November 21, 2025 13:14
alpe added a commit that referenced this pull request Nov 24, 2025
* main:
  chore: remove extra github action yml file (#2882)
  fix(execution/evm): verify payload status (#2863)
  feat: fetch included da height from store (#2880)
  chore: better output on errors (#2879)
  refactor!: create da client and split cache interface (#2878)
  chore!: rename `evm-single` and `grpc-single` (#2839)
  build(deps): Bump golang.org/x/crypto from 0.42.0 to 0.45.0 in /tools/da-debug in the go_modules group across 1 directory (#2876)
  chore: parallel cache de/serialization (#2868)
  chore: bump blob size (#2877)