# refactor!: create da client and split cache interface #2878
Conversation
The latest Buf updates on your PR. Results from workflow CI / buf-check (pull_request).
Claude finished @julienrbrt's task — View job

## Comprehensive Code Review

I've completed a thorough review of this refactoring PR. This is a well-structured extraction that improves code organization and separation of concerns. Below are my findings organized by priority.

### 🟢 Positive Aspects

1. Excellent Architectural Improvements
2. Code Quality
### 🔴 Critical Issues

**1. Unsafe Error Handling in DA Client**

The retrieval path matches errors by substring:

```go
if strings.Contains(err.Error(), coreda.ErrBlobNotFound.Error()) {
	// ...
}
if strings.Contains(err.Error(), coreda.ErrHeightFromFuture.Error()) {
	// ...
}
```

Issues:

Recommendation:

```go
if errors.Is(err, coreda.ErrBlobNotFound) {
	c.logger.Debug().Uint64("height", height).Msg("Blobs not found at height")
	return coreda.ResultRetrieve{
		BaseResult: coreda.BaseResult{
			Code:      coreda.StatusNotFound,
			Message:   coreda.ErrBlobNotFound.Error(),
			Height:    height,
			Timestamp: time.Now(),
		},
	}
}
if errors.Is(err, coreda.ErrHeightFromFuture) {
	c.logger.Debug().Uint64("height", height).Msg("Height is from the future")
	return coreda.ResultRetrieve{
		BaseResult: coreda.BaseResult{
			Code:      coreda.StatusHeightFromFuture,
			Message:   coreda.ErrHeightFromFuture.Error(),
			Height:    height,
			Timestamp: time.Now(),
		},
	}
}
```

Note: The PR author indicated this code was "simply moved" and wasn't touched. However, this is a good opportunity to fix the issue during this refactoring, especially since the code is being extracted into a new, more visible component.

### 🟡 Medium Priority Issues

**2. Potential Nil Pointer Risk in Cache Manager**

Risk:

```go
cm := NewCacheManager(...)   // Returns CacheManager interface
impl := cm.(*implementation) // Type assertion succeeds
impl.GetPendingHeaders()     // Nil pointer dereference!
```

Current Safety: The code appears safe in practice because:
Recommendation for Future Safety:

```go
type cacheManager struct {
	headerCache, dataCache, txCache *Cache[...]
	// ... cache-specific fields only
}

type pendingManager struct {
	pendingHeaders *PendingHeaders
	pendingData    *PendingData
	// ... pending-specific fields only
}

type fullManager struct {
	*cacheManager
	*pendingManager
}
```

This would make incorrect usage a compile-time error rather than a runtime panic. However, this is a non-blocking suggestion for future hardening.

**3. Test Helper Error Handling**

The test helper now silently ignores errors:

```go
payload, _ := d.MarshalBinary()
sig, err := signer.Sign(payload)
require.NoError(t, err)
```

Issue: The error from `MarshalBinary` is silently discarded, so a marshaling failure would only surface indirectly, if at all.

Recommendation:

```go
payload, err := d.MarshalBinary()
require.NoError(t, err)
sig, err := signer.Sign(payload)
require.NoError(t, err)
```

This ensures test failures are immediately visible if marshaling fails.

### 🟡 Code Quality Observations

**4. Context Timeout Management**

```go
getIDsCtx, cancel := context.WithTimeout(ctx, c.defaultTimeout)
defer cancel()
// ... later in loop:
getBlobsCtx, cancel := context.WithTimeout(ctx, c.defaultTimeout)
batchBlobs, err := c.da.Get(getBlobsCtx, idsResult.IDs[i:end], namespace)
cancel()
```

Observation: This pattern is generally safe, but the loop-based timeout handling could lead to long total execution times for large batches (100 IDs per batch × timeout per batch).

Suggestion: Consider adding an overall operation timeout in addition to per-operation timeouts, or documenting the maximum expected duration:

```go
// Maximum retrieval time = timeout × ceil(numIDs/100) batches
```

**5. Magic Numbers**
### 📊 Test Coverage Analysis

The codecov report shows 80.35% patch coverage with 44 missing lines. Key gaps:

### 🔒 Security Considerations

✅ Good Practices:

✅ Input Validation:

### ⚡ Performance Considerations

✅ Good Optimizations:

Potential Improvement:

```go
// Current: Sequential batches
for i := 0; i < len(idsResult.IDs); i += batchSize {
	batchBlobs, err := c.da.Get(getBlobsCtx, idsResult.IDs[i:end], namespace)
	// ...
}

// Suggestion: Parallel batches using errgroup
g, ctx := errgroup.WithContext(ctx)
for i := 0; i < len(idsResult.IDs); i += batchSize {
	// Launch parallel fetches with rate limiting
}
```

This is a future optimization, not critical for this PR.

### 📝 Documentation

Good:

Suggestions:

### ✅ Verification & Testing Recommendations

Before merging:

### 🎯 Summary

This is a high-quality refactoring that successfully extracts DA client logic and splits cache interfaces for better modularity. The changes are well-tested and maintain backward compatibility.

Key Strengths:

Must Fix Before Merge:

Should Fix:

Nice to Have:

Overall Assessment: ✅ Approve with minor fixes

The refactoring achieves its stated goal of extracting logic for easier integration. The identified issues are straightforward to fix and don't fundamentally impact the design.
## Summary of Changes

Hello @julienrbrt, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a

## Highlights
```diff
  // SetProcessedHeight updates the highest processed block height.
  func (h *P2PHandler) SetProcessedHeight(height uint64) {
-	for {
+	for range 1_000 {
```
```go
	// Start main processing loop
	go s.processLoop()
	s.wg.Add(1)
```
reverted as per #2873 (comment)
## Code Review

This pull request introduces a significant and beneficial refactoring by creating a dedicated DA client and splitting the cache manager interface. These changes improve the separation of concerns and make the codebase more modular and testable. The introduction of the `da.Client` encapsulates all interactions with the DA layer, and the split of `cache.Manager` into `CacheManager` and `PendingManager` clarifies responsibilities.

I've identified a few areas for improvement:

- A potential for nil pointer panics in the `cache` package due to how partially initialized manager implementations are created.
- Inconsistent error handling in the new `da.Client`.
- Error handling in a test helper function could be made more robust.

Overall, this is a solid refactoring. Addressing the identified issues will further improve the code's robustness and safety.
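As a runnable illustration of the `CacheManager`/`PendingManager` split (hypothetical toy types and methods, not the real package): consumers declare only the narrow interface they need, so handing a cache-only implementation to a pending-only consumer becomes a compile-time error instead of a runtime panic.

```go
package main

import "fmt"

// Hypothetical narrow interfaces mirroring the split.
type CacheManager interface {
	GetHeader(height uint64) string
}

type PendingManager interface {
	GetPendingHeaders() []string
}

type cacheManager struct{ headers map[uint64]string }

func (c *cacheManager) GetHeader(h uint64) string { return c.headers[h] }

type pendingManager struct{ pending []string }

func (p *pendingManager) GetPendingHeaders() []string { return p.pending }

// fullManager composes both halves via embedding and satisfies both interfaces.
type fullManager struct {
	*cacheManager
	*pendingManager
}

// countPending compiles only against implementations that really
// provide pending access — a *cacheManager cannot be passed here.
func countPending(pm PendingManager) int {
	return len(pm.GetPendingHeaders())
}

func main() {
	fm := &fullManager{
		cacheManager:   &cacheManager{headers: map[uint64]string{1: "h1"}},
		pendingManager: &pendingManager{pending: []string{"p1", "p2"}},
	}
	fmt.Println(fm.GetHeader(1))   // h1
	fmt.Println(countPending(fm))  // 2
	// countPending(&cacheManager{}) // compile error: missing GetPendingHeaders
}
```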
```go
if strings.Contains(err.Error(), coreda.ErrBlobNotFound.Error()) {
	c.logger.Debug().Uint64("height", height).Msg("Blobs not found at height")
	return coreda.ResultRetrieve{
		BaseResult: coreda.BaseResult{
			Code:      coreda.StatusNotFound,
			Message:   coreda.ErrBlobNotFound.Error(),
			Height:    height,
			Timestamp: time.Now(),
		},
	}
}
if strings.Contains(err.Error(), coreda.ErrHeightFromFuture.Error()) {
	c.logger.Debug().Uint64("height", height).Msg("Blobs not found at height")
	return coreda.ResultRetrieve{
		BaseResult: coreda.BaseResult{
			Code:      coreda.StatusHeightFromFuture,
			Message:   coreda.ErrHeightFromFuture.Error(),
			Height:    height,
			Timestamp: time.Now(),
		},
	}
}
```
Using `strings.Contains` for error checking is less robust than using `errors.Is`. The `Submit` method in this same file already uses `errors.Is` for handling specific errors, and it would be good to be consistent. Using `errors.Is` correctly handles wrapped errors and is the idiomatic way to check for specific error values in Go.

Additionally, the log message for `ErrHeightFromFuture` is a copy-paste from `ErrBlobNotFound` and can be misleading. It should be updated to reflect the actual error.
```go
if errors.Is(err, coreda.ErrBlobNotFound) {
	c.logger.Debug().Uint64("height", height).Msg("Blobs not found at height")
	return coreda.ResultRetrieve{
		BaseResult: coreda.BaseResult{
			Code:      coreda.StatusNotFound,
			Message:   coreda.ErrBlobNotFound.Error(),
			Height:    height,
			Timestamp: time.Now(),
		},
	}
}
if errors.Is(err, coreda.ErrHeightFromFuture) {
	c.logger.Debug().Uint64("height", height).Msg("Height is from the future")
	return coreda.ResultRetrieve{
		BaseResult: coreda.BaseResult{
			Code:      coreda.StatusHeightFromFuture,
			Message:   coreda.ErrHeightFromFuture.Error(),
			Height:    height,
			Timestamp: time.Now(),
		},
	}
}
```
this code path wasn't touched. simply moved.
## Codecov Report

❌ Patch coverage is 80.35% with 44 lines missing coverage.

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main    #2878      +/-   ##
==========================================
- Coverage   65.09%   64.73%   -0.36%
==========================================
  Files          81       81
  Lines        7268     7328      +60
==========================================
+ Hits         4731     4744      +13
- Misses       1995     2043      +48
+ Partials      542      541       -1
```

☔ View full report in Codecov by Sentry.
tac0turtle left a comment

nice separation
* main:
  - chore: remove extra github action yml file (#2882)
  - fix(execution/evm): verify payload status (#2863)
  - feat: fetch included da height from store (#2880)
  - chore: better output on errors (#2879)
  - refactor!: create da client and split cache interface (#2878)
  - chore!: rename `evm-single` and `grpc-single` (#2839)
  - build(deps): Bump golang.org/x/crypto from 0.42.0 to 0.45.0 in /tools/da-debug in the go_modules group across 1 directory (#2876)
  - chore: parallel cache de/serialization (#2868)
  - chore: bump blob size (#2877)
Extract some logic from #2797.

Those refactors were done to ease force-inclusion integration, but they can be extracted and merged sooner.