Upstream merge v2.60.6 #116

Merged
merged 53 commits on Sep 4, 2024
Changes from all commits (53 commits)
d983933
Changing Caplin Finality Checkpoint API response to match spec (#10944)
yperbasis Jun 28, 2024
b7942be
Add zero check in tx.Sender func (#10737)
somnathb1 Jun 28, 2024
30cae52
eth/tracers: fix prestate tracer bug with create with value (#10960)
taratorio Jul 2, 2024
4193e03
eth/tracers: add optional includePrecompiles flag to callTracer - def…
taratorio Jul 3, 2024
7055c2c
Cherry-pick: Caplin's past finalization check (#11006)
Giulio2002 Jul 3, 2024
9710e98
turbo/jsonrpc: add optional includePrecompiles flag to trace_* apis (…
taratorio Jul 4, 2024
1a050db
eth/tracers: always pop precompiles stack in callTracer (#11004)
taratorio Jul 4, 2024
eb7db79
allow to gracefully exit from CL downloading stage (#10887) (#11020)
VBulikov Jul 4, 2024
8d6a173
Less troublesome way of identifying content-type (#10770) (#11018)
VBulikov Jul 4, 2024
1417f89
Diagnostics: loglevel (#11015)
dvovk Jul 4, 2024
fd4f1ac
dl: additional pre-check for having info (#11012)
AskAlexSharov Jul 4, 2024
b4dd12e
Diagnostics: Optimize db write (#11016)
dvovk Jul 4, 2024
bebd788
qa-tests: add Tip-Tracking test for Gnosis (#11053)
mriccobene Jul 8, 2024
1f73ed5
params: version 2.60.3 (#11069)
yperbasis Jul 8, 2024
f55073f
PIP-35 for amoy: enforce 25gwei gas config for amoy for erigon 2 (#11…
manav2401 Jul 9, 2024
72ab70b
params: version 2.60.4 (#11098)
yperbasis Jul 10, 2024
b524c92
qa-test: fix Tip-Tracking test for Gnosis (#11108)
mriccobene Jul 11, 2024
81c28cd
Diagnostics: snapshot stage info gathering (#11105)
dvovk Jul 12, 2024
7e56d99
rpc bottleneck: block files mutex (e2) (#11155)
AskAlexSharov Jul 15, 2024
318275c
Remove cmd/release in release/2.60 branch. (#11162)
lystopad Jul 15, 2024
df66e39
release e3 files to e2 (#11206)
AskAlexSharov Jul 18, 2024
9ee85cd
pool: do fsync by non-empty update (e2) (#11200)
AskAlexSharov Jul 18, 2024
a6a87cb
ots: nil ptr rpc (e2) (#11233)
AskAlexSharov Jul 19, 2024
5577725
tracer: add support bailout on evm.create() on e2 (#11259)
lupin012 Jul 22, 2024
0899d73
Erigon_getLatestLogs fix cherry-pick on e2 (#11258)
lupin012 Jul 22, 2024
f6fcd01
HexOrDecimal - to accept unquoted numbers - in json (e2) (#11264)
AskAlexSharov Jul 22, 2024
0e88a11
PIP-35: enforce 25gwei gas config for all polygon chains for erigon 2…
manav2401 Jul 23, 2024
6fe299c
params: version 2.60.5 (#11327)
yperbasis Jul 25, 2024
1f6f931
Update .goreleaser.yml to fix job (#11330)
VBulikov Jul 25, 2024
ea49def
Removed ghcr from makefile (#11335)
VBulikov Jul 26, 2024
4d45aa9
compatibility with geth - of stateDiff encoding (#11362)
AskAlexSharov Jul 27, 2024
217b5a1
remove e3 metrics from e2 (#11374)
AskAlexSharov Jul 29, 2024
5a37697
bor: "header not found" add debug logs (e2) (#11361)
AskAlexSharov Jul 29, 2024
bcd1372
callTracer: don't capture logs txs failed with error (#11375)
AskAlexSharov Jul 29, 2024
f15e7e1
Don't close `roSn` object while downloading (#11432)
AskAlexSharov Aug 1, 2024
08447f4
changes from main (#11492)
dvovk Aug 6, 2024
4cad1f9
release e3 files to e2 (#11490)
AskAlexSharov Aug 6, 2024
9e9e143
don't use lfs for consensus spec tests (#11545) (#11560)
dvovk Aug 11, 2024
e42dcfa
turbo/snapshotsync: Fmt fix (#11493) (#11559)
dvovk Aug 11, 2024
0f8ad6c
diagnostics: updated sys info to include CPU stats (#11497) (#11561)
dvovk Aug 11, 2024
1f6e0e7
Updated gopsutil version (#11507) (#11562)
dvovk Aug 12, 2024
83482a4
don't use lfs for consensus spec tests (#11545) (#11552)
taratorio Aug 12, 2024
a1e7362
stagedsync: add dbg.SaveHeapProfileNearOOM to headers stage (#11549) …
taratorio Aug 12, 2024
250c70f
diagnostics: export system info (#11567)
dvovk Aug 12, 2024
f0e6013
diagnostics: added flags to report (#11548) (#11570)
dvovk Aug 12, 2024
9b19cd5
dbg: add save heap options for logger and memstats inputs (#11576)
taratorio Aug 12, 2024
aa6c7e6
Cherry-picked essential caplin stuff (#11569)
Giulio2002 Aug 12, 2024
2661ad3
Fix panic in caplin api get validator (#11419) (#11583)
domiwei Aug 13, 2024
60971ea
diagnostics: added api to get sys info data (#11589) (#11592)
dvovk Aug 13, 2024
d24e5d4
Bump version to 2.60.6 (#11607)
VBulikov Aug 14, 2024
a9a0508
Merge branch 'boba-develop' into release/v2.60.6
boyuan-chen Sep 3, 2024
32daa84
Fix go sum
boyuan-chen Sep 3, 2024
4221ddd
Update fork info
boyuan-chen Sep 4, 2024
125 changes: 125 additions & 0 deletions .github/workflows/qa-tip-tracking-gnosis.yml
@@ -0,0 +1,125 @@
name: QA - Tip tracking (Gnosis)

on:
  push:
    branches:
      - 'release/2.*'
  pull_request:
    branches:
      - 'release/2.*'
    types:
      - ready_for_review
  workflow_dispatch: # Run manually

jobs:
  tip-tracking-test:
    runs-on: [self-hosted, Gnosis]
    timeout-minutes: 600
    env:
      ERIGON_REFERENCE_DATA_DIR: /opt/erigon-versions/reference-version/datadir
      ERIGON_TESTBED_DATA_DIR: /opt/erigon-testbed/datadir
      ERIGON_QA_PATH: /home/qarunner/erigon-qa
      TRACKING_TIME_SECONDS: 14400 # 4 hours
      TOTAL_TIME_SECONDS: 28800 # 8 hours
      CHAIN: gnosis

    steps:
      - name: Check out repository
        uses: actions/checkout@v4

      - name: Clean Erigon Build Directory
        run: |
          make clean

      - name: Build Erigon
        run: |
          make erigon
        working-directory: ${{ github.workspace }}

      - name: Pause the Erigon instance dedicated to db maintenance
        run: |
          python3 $ERIGON_QA_PATH/test_system/db-producer/pause_production.py || true

      - name: Restore Erigon Testbed Data Directory
        run: |
          rsync -a --delete $ERIGON_REFERENCE_DATA_DIR/ $ERIGON_TESTBED_DATA_DIR/

      - name: Run Erigon, wait sync and check ability to maintain sync
        id: test_step
        run: |
          set +e # Disable exit on error

          # 1. Launch the testbed Erigon instance
          # 2. Allow time for the Erigon to achieve synchronization
          # 3. Begin timing the duration that Erigon maintains synchronization
          python3 $ERIGON_QA_PATH/test_system/qa-tests/tip-tracking/run_and_check_tip_tracking.py \
            ${{ github.workspace }}/build/bin $ERIGON_TESTBED_DATA_DIR $TRACKING_TIME_SECONDS $TOTAL_TIME_SECONDS Erigon2 $CHAIN

          # Capture monitoring script exit status
          test_exit_status=$?

          # Save the subsection reached status
          echo "::set-output name=test_executed::true"

          # Clean up Erigon process if it's still running
          if kill -0 $ERIGON_PID 2> /dev/null; then
            echo "Terminating Erigon"
            kill $ERIGON_PID
            wait $ERIGON_PID
          fi

          # Check test runner script exit status
          if [ $test_exit_status -eq 0 ]; then
            echo "Tests completed successfully"
            echo "TEST_RESULT=success" >> "$GITHUB_OUTPUT"
          else
            echo "Error detected during tests"
            echo "TEST_RESULT=failure" >> "$GITHUB_OUTPUT"
          fi

      - name: Delete Erigon Testbed Data Directory
        if: always()
        run: |
          rm -rf $ERIGON_TESTBED_DATA_DIR

      - name: Resume the Erigon instance dedicated to db maintenance
        run: |
          python3 $ERIGON_QA_PATH/test_system/db-producer/resume_production.py || true

      - name: Save test results
        if: steps.test_step.outputs.test_executed == 'true'
        env:
          TEST_RESULT: ${{ steps.test_step.outputs.TEST_RESULT }}
        run: |
          db_version=$(python3 $ERIGON_QA_PATH/test_system/qa-tests/uploads/prod_info.py $ERIGON_REFERENCE_DATA_DIR/../production.ini production erigon_repo_commit)
          if [ -z "$db_version" ]; then
            db_version="no-version"
          fi

          python3 $ERIGON_QA_PATH/test_system/qa-tests/uploads/upload_test_results.py \
            --repo erigon \
            --commit $(git rev-parse HEAD) \
            --branch ${{ github.ref_name }} \
            --test_name tip-tracking \
            --chain $CHAIN \
            --runner ${{ runner.name }} \
            --db_version $db_version \
            --outcome $TEST_RESULT \
            --result_file ${{ github.workspace }}/result-$CHAIN.json

      - name: Upload test results
        if: steps.test_step.outputs.test_executed == 'true'
        uses: actions/upload-artifact@v4
        with:
          name: test-results
          path: ${{ github.workspace }}/result-${{ env.CHAIN }}.json

      - name: Action for Success
        if: steps.test_step.outputs.TEST_RESULT == 'success'
        run: echo "::notice::Tests completed successfully"

      - name: Action for Not Success
        if: steps.test_step.outputs.TEST_RESULT != 'success'
        run: |
          echo "::error::Error detected during tests"
          exit 1
14 changes: 0 additions & 14 deletions .goreleaser.yml
@@ -68,7 +68,6 @@ snapshot:
dockers:
- image_templates:
- thorax/{{ .ProjectName }}:{{ .Version }}-amd64
- ghcr.io/ledgerwatch/{{ .ProjectName }}:{{ .Version }}-amd64
dockerfile: Dockerfile.release
use: buildx
skip_push: true
@@ -80,7 +79,6 @@ dockers:

- image_templates:
- thorax/{{ .ProjectName }}:{{ .Version }}-arm64
- ghcr.io/ledgerwatch/{{ .ProjectName }}:{{ .Version }}-arm64
dockerfile: Dockerfile.release
skip_push: true
use: buildx
@@ -97,24 +95,12 @@ docker_manifests:
- thorax/{{ .ProjectName }}:{{ .Version }}-amd64
- thorax/{{ .ProjectName }}:{{ .Version }}-arm64

- name_template: ghcr.io/ledgerwatch/{{ .ProjectName }}:{{ .Version }}
skip_push: true
image_templates:
- ghcr.io/ledgerwatch/{{ .ProjectName }}:{{ .Version }}-amd64
- ghcr.io/ledgerwatch/{{ .ProjectName }}:{{ .Version }}-arm64

- name_template: thorax/{{ .ProjectName }}:latest
skip_push: true
image_templates:
- thorax/{{ .ProjectName }}:{{ .Version }}-amd64
- thorax/{{ .ProjectName }}:{{ .Version }}-arm64

- name_template: ghcr.io/ledgerwatch/{{ .ProjectName }}:latest
skip_push: true
image_templates:
- ghcr.io/ledgerwatch/{{ .ProjectName }}:{{ .Version }}-amd64
- ghcr.io/ledgerwatch/{{ .ProjectName }}:{{ .Version }}-arm64

announce:
slack:
enabled: false
6 changes: 5 additions & 1 deletion cl/antiquary/antiquary.go
@@ -20,7 +20,7 @@ import (
"github.com/ledgerwatch/log/v3"
)

const safetyMargin = 2_000 // We retire snapshots 2k blocks after the finalized head
const safetyMargin = 10_000 // We retire snapshots 10k blocks after the finalized head

// Antiquary is where the snapshots go, aka old history, it is what keep track of the oldest records.
type Antiquary struct {
@@ -304,6 +304,10 @@ func (a *Antiquary) antiquateBlobs() error {
defer roTx.Rollback()
// perform blob antiquation if it is time to.
currentBlobsProgress := a.sn.FrozenBlobs()
// We should NEVER get ahead of the block snapshots.
if currentBlobsProgress >= a.sn.BlocksAvailable() {
return nil
}
minimunBlobsProgress := ((a.cfg.DenebForkEpoch * a.cfg.SlotsPerEpoch) / snaptype.Erigon2MergeLimit) * snaptype.Erigon2MergeLimit
currentBlobsProgress = utils.Max64(currentBlobsProgress, minimunBlobsProgress)
// read the finalized head
30 changes: 15 additions & 15 deletions cl/beacon/beaconhttp/api.go
@@ -97,41 +97,41 @@ func HandleEndpoint[T any](h EndpointHandler[T]) http.HandlerFunc {
}
// TODO: potentially add a context option to buffer these
contentType := r.Header.Get("Accept")
contentTypes := strings.Split(contentType, ",")

// early return for event stream
if slices.Contains(w.Header().Values("Content-Type"), "text/event-stream") {
return
}
switch {
case slices.Contains(contentTypes, "application/octet-stream"):
case contentType == "*/*", contentType == "", strings.Contains(contentType, "text/html"), strings.Contains(contentType, "application/json"):
if !isNil(ans) {
w.Header().Set("Content-Type", "application/json")
err := json.NewEncoder(w).Encode(ans)
if err != nil {
// this error is fatal, log to console
log.Error("beaconapi failed to encode json", "type", reflect.TypeOf(ans), "err", err)
}
} else {
w.WriteHeader(200)
}
case strings.Contains(contentType, "application/octet-stream"):
sszMarshaler, ok := any(ans).(ssz.Marshaler)
if !ok {
NewEndpointError(http.StatusBadRequest, ErrorSszNotSupported).WriteTo(w)
return
}
w.Header().Set("Content-Type", "application/octet-stream")
// TODO: we should probably figure out some way to stream this in the future :)
encoded, err := sszMarshaler.EncodeSSZ(nil)
if err != nil {
WrapEndpointError(err).WriteTo(w)
return
}
w.Write(encoded)
case contentType == "*/*", contentType == "", slices.Contains(contentTypes, "text/html"), slices.Contains(contentTypes, "application/json"):
if !isNil(ans) {
w.Header().Add("content-type", "application/json")
err := json.NewEncoder(w).Encode(ans)
if err != nil {
// this error is fatal, log to console
log.Error("beaconapi failed to encode json", "type", reflect.TypeOf(ans), "err", err)
}
} else {
w.WriteHeader(200)
}
case slices.Contains(contentTypes, "text/event-stream"):
case strings.Contains(contentType, "text/event-stream"):
return
default:
http.Error(w, "content type must be application/json, application/octet-stream, or text/event-stream", http.StatusBadRequest)
http.Error(w, fmt.Sprintf("content type must include application/json, application/octet-stream, or text/event-stream, got %s", contentType), http.StatusBadRequest)
}
})
}
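The substantive change in this handler is the switch from exact matching of comma-split Accept values to substring matching, with the JSON case now handled before the octet-stream case. A minimal Go sketch of the matching difference (illustrative only, not part of the diff; the header value is made up):

package main

import (
	"fmt"
	"slices"
	"strings"
)

func main() {
	// Hypothetical Accept header carrying a quality parameter, as many clients send.
	accept := "application/octet-stream;q=0.9, application/json"
	parts := strings.Split(accept, ",")

	// Previous behaviour: exact match against comma-split segments misses values
	// that carry parameters or surrounding whitespace.
	fmt.Println(slices.Contains(parts, "application/octet-stream")) // false

	// New behaviour: substring match tolerates parameters and whitespace.
	fmt.Println(strings.Contains(accept, "application/octet-stream")) // true
}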
6 changes: 3 additions & 3 deletions cl/beacon/handler/states.go
@@ -205,9 +205,9 @@ func (a *ApiHandler) getFullState(w http.ResponseWriter, r *http.Request) (*beac
}

type finalityCheckpointsResponse struct {
FinalizedCheckpoint solid.Checkpoint `json:"finalized_checkpoint"`
CurrentJustifiedCheckpoint solid.Checkpoint `json:"current_justified_checkpoint"`
PreviousJustifiedCheckpoint solid.Checkpoint `json:"previous_justified_checkpoint"`
FinalizedCheckpoint solid.Checkpoint `json:"finalized"`
CurrentJustifiedCheckpoint solid.Checkpoint `json:"current_justified"`
PreviousJustifiedCheckpoint solid.Checkpoint `json:"previous_justified"`
}

func (a *ApiHandler) getFinalityCheckpoints(w http.ResponseWriter, r *http.Request) (*beaconhttp.BeaconResponse, error) {
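For context, the tag rename is what changes the wire format, since encoding/json emits the struct tag as the field name. A minimal sketch with a stand-in checkpoint type (hypothetical; the handler uses solid.Checkpoint):

package main

import (
	"encoding/json"
	"fmt"
)

// Stand-in checkpoint type, used here only to show the effect of the tags.
type checkpoint struct {
	Epoch string `json:"epoch"`
	Root  string `json:"root"`
}

// Same tag layout as the updated finalityCheckpointsResponse above.
type finalityCheckpoints struct {
	FinalizedCheckpoint         checkpoint `json:"finalized"`
	CurrentJustifiedCheckpoint  checkpoint `json:"current_justified"`
	PreviousJustifiedCheckpoint checkpoint `json:"previous_justified"`
}

func main() {
	out, _ := json.Marshal(finalityCheckpoints{
		FinalizedCheckpoint: checkpoint{Epoch: "1", Root: "0xde46..."},
	})
	// Keys are now "finalized", "current_justified" and "previous_justified",
	// matching the expected response asserted in the updated test below.
	fmt.Println(string(out))
}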
2 changes: 1 addition & 1 deletion cl/beacon/handler/states_test.go
@@ -426,7 +426,7 @@ func TestGetStateFinalityCheckpoints(t *testing.T) {
code: http.StatusOK,
},
}
expected := `{"data":{"finalized_checkpoint":{"epoch":"1","root":"0xde46b0f2ed5e72f0cec20246403b14c963ec995d7c2825f3532b0460c09d5693"},"current_justified_checkpoint":{"epoch":"3","root":"0xa6e47f164b1a3ca30ea3b2144bd14711de442f51e5b634750a12a1734e24c987"},"previous_justified_checkpoint":{"epoch":"2","root":"0x4c3ee7969e485696669498a88c17f70e6999c40603e2f4338869004392069063"}},"execution_optimistic":false,"finalized":false,"version":"bellatrix"}` + "\n"
expected := `{"data":{"finalized":{"epoch":"1","root":"0xde46b0f2ed5e72f0cec20246403b14c963ec995d7c2825f3532b0460c09d5693"},"current_justified":{"epoch":"3","root":"0xa6e47f164b1a3ca30ea3b2144bd14711de442f51e5b634750a12a1734e24c987"},"previous_justified":{"epoch":"2","root":"0x4c3ee7969e485696669498a88c17f70e6999c40603e2f4338869004392069063"}},"execution_optimistic":false,"finalized":false,"version":"bellatrix"}` + "\n"
for _, c := range cases {
t.Run(c.blockID, func(t *testing.T) {
server := httptest.NewServer(handler.mux)
2 changes: 1 addition & 1 deletion cl/beacon/handler/test_data/states_1.yaml
@@ -1,2 +1,2 @@
finality_checkpoint: {"data":{"finalized_checkpoint":{"epoch":"1","root":"0xde46b0f2ed5e72f0cec20246403b14c963ec995d7c2825f3532b0460c09d5693"},"current_justified_checkpoint":{"epoch":"3","root":"0xa6e47f164b1a3ca30ea3b2144bd14711de442f51e5b634750a12a1734e24c987"},"previous_justified_checkpoint":{"epoch":"2","root":"0x4c3ee7969e485696669498a88c17f70e6999c40603e2f4338869004392069063"}},"finalized":false,"version":2,"execution_optimistic":false}
finality_checkpoint: {"data":{"finalized":{"epoch":"1","root":"0xde46b0f2ed5e72f0cec20246403b14c963ec995d7c2825f3532b0460c09d5693"},"current_justified":{"epoch":"3","root":"0xa6e47f164b1a3ca30ea3b2144bd14711de442f51e5b634750a12a1734e24c987"},"previous_justified":{"epoch":"2","root":"0x4c3ee7969e485696669498a88c17f70e6999c40603e2f4338869004392069063"}},"finalized":false,"version":2,"execution_optimistic":false}
randao: {"data":{"randao":"0xdeec617717272914bfd73e02ca1da113a83cf4cf33cd4939486509e2da4ccf4e"},"finalized":false,"execution_optimistic":false}
7 changes: 7 additions & 0 deletions cl/beacon/handler/validators.go
@@ -404,12 +404,19 @@ func (a *ApiHandler) GetEthV1BeaconStatesValidator(w http.ResponseWriter, r *htt
if err != nil {
return nil, err
}
if validatorSet == nil {
return nil, beaconhttp.NewEndpointError(http.StatusNotFound, errors.New("validators not found"))
}
balances, err := a.stateReader.ReadValidatorsBalances(tx, *slot)
if err != nil {
return nil, err
}
if balances == nil {
return nil, beaconhttp.NewEndpointError(http.StatusNotFound, errors.New("balances not found"))
}
return responseValidator(validatorIndex, stateEpoch, balances, validatorSet, true)
}

balances, err := a.forkchoiceStore.GetBalances(blockRoot)
if err != nil {
return nil, err
8 changes: 3 additions & 5 deletions cl/clstages/clstages.go
@@ -32,14 +32,12 @@ func (s *StageGraph[CONFIG, ARGUMENTS]) StartWithStage(ctx context.Context, star
errch := make(chan error)
start := time.Now()
go func() {
sctx, cn := context.WithCancel(ctx)
defer cn()
// we run this is a goroutine so that the process can exit in the middle of a stage
// since caplin is designed to always be able to recover regardless of db state, this should be safe
select {
case errch <- currentStage.ActionFunc(sctx, lg, cfg, args):
case <-sctx.Done():
errch <- sctx.Err()
case errch <- currentStage.ActionFunc(ctx, lg, cfg, args):
case <-ctx.Done(): // we are not sure if actionFunc exits on ctx
errch <- ctx.Err()
}
}()
err := <-errch
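Per the comments in the new code, the stage action may not exit promptly when ctx is cancelled, and Caplin is designed to recover regardless of db state, so the runner only needs to stop waiting. A small generic sketch of that pattern (an illustration under those assumptions, not Erigon's exact code):

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// runStage races an action against ctx cancellation so the caller can return
// even if the action does not honour ctx promptly. The channel is buffered so
// the goroutine can always deliver its result and exit without leaking.
func runStage(ctx context.Context, action func(context.Context) error) error {
	errch := make(chan error, 1)
	go func() { errch <- action(ctx) }()
	select {
	case err := <-errch:
		return err
	case <-ctx.Done():
		return ctx.Err() // the action may still be running and is abandoned
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
	defer cancel()
	err := runStage(ctx, func(ctx context.Context) error {
		time.Sleep(time.Second) // a stage that ignores ctx
		return nil
	})
	fmt.Println(errors.Is(err, context.DeadlineExceeded)) // true
}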
6 changes: 5 additions & 1 deletion cl/phase1/network/services/blob_sidecar_service.go
@@ -78,11 +78,15 @@ func (b *blobSidecarService) ProcessMessage(ctx context.Context, subnetId *uint6
currentSlot := b.ethClock.GetCurrentSlot()
sidecarSlot := msg.SignedBlockHeader.Header.Slot
// [IGNORE] The block is not from a future slot (with a MAXIMUM_GOSSIP_CLOCK_DISPARITY allowance) -- i.e. validate that
//signed_beacon_block.message.slot <= current_slot (a client MAY queue future blocks for processing at the appropriate slot).
// signed_beacon_block.message.slot <= current_slot (a client MAY queue future blocks for processing at the appropriate slot).
if currentSlot < sidecarSlot && !b.ethClock.IsSlotCurrentSlotWithMaximumClockDisparity(sidecarSlot) {
return ErrIgnore
}

if b.forkchoiceStore.FinalizedSlot() >= sidecarSlot {
return ErrIgnore
}

blockRoot, err := msg.SignedBlockHeader.Header.HashSSZ()
if err != nil {
return err
2 changes: 1 addition & 1 deletion cl/phase1/stages/clstages.go
@@ -252,7 +252,7 @@ func ConsensusClStages(ctx context.Context,
}
// This stage is special so use context.Background() TODO(Giulio2002): make the context be passed in
startingSlot := cfg.state.LatestBlockHeader().Slot
downloader := network2.NewBackwardBeaconDownloader(context.Background(), cfg.rpc, cfg.executionClient, cfg.indiciesDB)
downloader := network2.NewBackwardBeaconDownloader(ctx, cfg.rpc, cfg.executionClient, cfg.indiciesDB)

if err := SpawnStageHistoryDownload(StageHistoryReconstruction(downloader, cfg.antiquary, cfg.sn, cfg.indiciesDB, cfg.executionClient, cfg.beaconCfg, cfg.backfilling, cfg.blobBackfilling, false, startingRoot, startingSlot, cfg.tmpdir, 600*time.Millisecond, cfg.blockCollector, cfg.blockReader, cfg.blobStore, logger), context.Background(), logger); err != nil {
cfg.hasDownloaded = false
20 changes: 12 additions & 8 deletions cl/phase1/stages/stage_history_download.go
@@ -204,18 +204,18 @@ func SpawnStageHistoryDownload(cfg StageHistoryReconstructionCfg, ctx context.Co
close(finishCh)
if cfg.blobsBackfilling {
go func() {
if err := downloadBlobHistoryWorker(cfg, ctx, logger); err != nil {
if err := downloadBlobHistoryWorker(cfg, ctx, true, logger); err != nil {
logger.Error("Error downloading blobs", "err", err)
}
// set a timer every 1 hour as a failsafe
ticker := time.NewTicker(time.Hour)
// set a timer every 15 minutes as a failsafe
ticker := time.NewTicker(15 * time.Minute)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
return
case <-ticker.C:
if err := downloadBlobHistoryWorker(cfg, ctx, logger); err != nil {
if err := downloadBlobHistoryWorker(cfg, ctx, false, logger); err != nil {
logger.Error("Error downloading blobs", "err", err)
}
}
@@ -249,7 +249,7 @@ func SpawnStageHistoryDownload(cfg StageHistoryReconstructionCfg, ctx context.Co
}

// downloadBlobHistoryWorker is a worker that downloads the blob history by using the already downloaded beacon blocks
func downloadBlobHistoryWorker(cfg StageHistoryReconstructionCfg, ctx context.Context, logger log.Logger) error {
func downloadBlobHistoryWorker(cfg StageHistoryReconstructionCfg, ctx context.Context, shouldLog bool, logger log.Logger) error {
currentSlot := cfg.startingSlot + 1
blocksBatchSize := uint64(8) // requests 8 blocks worth of blobs at a time
tx, err := cfg.indiciesDB.BeginRo(ctx)
@@ -263,7 +263,7 @@ func downloadBlobHistoryWorker(cfg StageHistoryReconstructionCfg, ctx context.Co
prevLogSlot := currentSlot
prevTime := time.Now()
targetSlot := cfg.beaconCfg.DenebForkEpoch * cfg.beaconCfg.SlotsPerEpoch
cfg.logger.Info("Downloading blobs backwards", "from", currentSlot, "to", targetSlot)

for currentSlot >= targetSlot {
if currentSlot <= cfg.sn.FrozenBlobs() {
break
@@ -312,7 +312,9 @@ func downloadBlobHistoryWorker(cfg StageHistoryReconstructionCfg, ctx context.Co
case <-ctx.Done():
return ctx.Err()
case <-logInterval.C:

if !shouldLog {
continue
}
blkSec := float64(prevLogSlot-currentSlot) / time.Since(prevTime).Seconds()
blkSecStr := fmt.Sprintf("%.1f", blkSec)
// round to 1 decimal place and convert to string
@@ -353,7 +355,9 @@ func downloadBlobHistoryWorker(cfg StageHistoryReconstructionCfg, ctx context.Co
continue
}
}
log.Info("Blob history download finished successfully")
if shouldLog {
logger.Info("Blob history download finished successfully")
}
cfg.antiquary.NotifyBlobBackfilled()
return nil
}
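Taken together, the changes above amount to: run the blob backfill once with progress logging, then keep retrying quietly on a 15-minute failsafe ticker. A condensed sketch of that loop (illustrative; backfill stands in for downloadBlobHistoryWorker):

package main

import (
	"context"
	"log"
	"time"
)

// retryLoop runs the backfill once with logging enabled, then retries quietly
// every 15 minutes until the context is cancelled.
func retryLoop(ctx context.Context, backfill func(ctx context.Context, shouldLog bool) error) {
	if err := backfill(ctx, true); err != nil {
		log.Println("Error downloading blobs:", err)
	}
	ticker := time.NewTicker(15 * time.Minute)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			if err := backfill(ctx, false); err != nil {
				log.Println("Error downloading blobs:", err)
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	retryLoop(ctx, func(ctx context.Context, shouldLog bool) error { return nil })
}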