53 commits
25140e2
feat(ci): add performance tracking workflow for AvalancheGo benchmarks
Elvis339 Nov 27, 2025
1ea99bb
ci: track performance
Elvis339 Nov 27, 2025
1ce1481
test(ci): add PR label trigger for testing
Elvis339 Nov 27, 2025
8794250
temp fix ci
Elvis339 Nov 27, 2025
4aa0e08
ci: use switch CI token
Elvis339 Dec 2, 2025
32fd06f
ci: lint
Elvis339 Dec 2, 2025
7a75021
fix
Elvis339 Dec 2, 2025
e35fa5d
docs
Elvis339 Dec 2, 2025
83a113a
Merge branch 'main' into es/enable-firewood-dev-workflow
Elvis339 Dec 2, 2025
18f4035
ci: push performance to benchmark data
Elvis339 Dec 4, 2025
73fc781
ci(perf): add benchmark workflow with nix-based just commands
Elvis339 Dec 4, 2025
a19f1e8
Merge branch 'es/enable-firewood-dev-workflow' of https://github.com/…
Elvis339 Dec 4, 2025
668cef3
docs
Elvis339 Dec 4, 2025
6e65816
chore: remove "\n" that got URL encoded
Elvis339 Dec 4, 2025
1e2bdd3
docs
Elvis339 Dec 4, 2025
01ee24f
ci(track-performance): remove `if: always`
Elvis339 Dec 8, 2025
19d921e
address PR
Elvis339 Dec 8, 2025
1131a1d
Update .github/workflows/track-performance.yml
Elvis339 Dec 8, 2025
5e32e9d
Update .github/workflows/track-performance.yml
Elvis339 Dec 8, 2025
8008264
Update justfile
Elvis339 Dec 8, 2025
f2cf502
fix(ci): replace sleep with retry loop, improve justfile commands
Elvis339 Dec 8, 2025
39c2050
Merge branch 'es/enable-firewood-dev-workflow' of https://github.com/…
Elvis339 Dec 8, 2025
ffcb333
lint: descriptive link text
Elvis339 Dec 8, 2025
02e7e94
docs
Elvis339 Dec 8, 2025
7b16833
Merge branch 'main' into es/enable-firewood-dev-workflow
Elvis339 Dec 8, 2025
625049e
chore: fix syntax error for benchmark command
Elvis339 Dec 9, 2025
bb2c97e
ci: update workflow for C-Chain reexecution benchmarks and improve ju…
Elvis339 Dec 30, 2025
c84dc10
Merge branch 'main' into es/enable-firewood-dev-workflow
Elvis339 Jan 22, 2026
37f1325
ci(perf): add C-Chain reexecution benchmark workflow
Elvis339 Jan 22, 2026
33e6921
chore: revert changes from gh-pages
Elvis339 Jan 25, 2026
268c05f
ci(track-performance): remove checkout fetch depth 0
Elvis339 Jan 25, 2026
7055e9c
refactor(bench): simplify C-Chain benchmark workflow
Elvis339 Jan 25, 2026
8b92cc6
docs
Elvis339 Jan 25, 2026
98c9d00
ci(track-performance): empty `inputs.test` was causing CI workflow to…
Elvis339 Jan 25, 2026
2b128a8
debug
Elvis339 Jan 25, 2026
a3966e6
refactor(benchmark): trigger C-Chain benchmarks via Firewood CI workflow
Elvis339 Jan 25, 2026
65224f8
temp
Elvis339 Jan 25, 2026
25cde31
temp
Elvis339 Jan 25, 2026
e19e951
fix(track-performance): improve concurrent trigger handling
Elvis339 Jan 25, 2026
2b144f0
docs
Elvis339 Jan 25, 2026
fb0a9cc
docs
Elvis339 Jan 25, 2026
bfc7c8a
ci(gh-pages): preserve benchmark data when deploying docs
Elvis339 Jan 25, 2026
7d12619
ci(gh-pages): temp. add workflow_dispatch to rebuild Pages
Elvis339 Jan 25, 2026
09e9f46
fix(gh-pages)
Elvis339 Jan 25, 2026
c7cad22
ci(gh-pages): remove temp. set workflow_dispatch
Elvis339 Jan 25, 2026
84b381a
refactor(bench-cchain-reexecution): rename verify_run_inputs to run_i…
Elvis339 Jan 26, 2026
a8db04d
Merge branch 'main' into es/enable-firewood-dev-workflow
Elvis339 Jan 26, 2026
18d4047
chore: exclude all scripts from license header check
Elvis339 Jan 26, 2026
79a2def
Merge branch 'es/enable-firewood-dev-workflow' of https://github.com/…
Elvis339 Jan 26, 2026
cb0f6be
chore: split PR - extract local tooling to separate PR
Elvis339 Jan 26, 2026
5ef9313
chore(bench-cchain-reexecution): replace hardcoded AvalancheGo branch
Elvis339 Jan 27, 2026
809c8ea
docs: clarify benchmark workflow comments and simplify env var docs
Elvis339 Jan 28, 2026
2236393
Merge branch 'main' into es/enable-firewood-dev-workflow
Elvis339 Jan 28, 2026
2 changes: 1 addition & 1 deletion .github/check-license-headers.yaml
@@ -58,7 +58,7 @@
"clippy.toml",
"**/tests/compile_*/**",
"justfile",
"scripts/run-just.sh",
"scripts/**",
Contributor Author:
Added a new benchmark script (bench-cchain-reexecution.sh), which
caused CI to fail with "Config does not cover the file". Shell
scripts aren't checked for license content (only .rs/.go/.h files are),
but they must still be explicitly listed in the config. Excluding the
entire scripts/ directory avoids listing each script individually.

https://github.com/ava-labs/firewood/blob/main/.github/check-license-headers.yaml

],
}
]
26 changes: 26 additions & 0 deletions .github/workflows/gh-pages.yaml
@@ -30,6 +30,32 @@ jobs:
run: |
cp -rv target/doc/* ./_site
cp -rv docs/assets ./_site
# GitHub Pages deploys from a single source (overwrites, doesn't merge).
# Benchmark history lives on benchmark-data branch (for append-only storage).
# We merge both into _site/ so a single deployment serves docs + benchmarks.
# See track-performance.yml for how benchmark data is collected and stored.
#
# Structure on benchmark-data branch (see track-performance.yml for how this is populated):
# bench/ - Official benchmark history (main branch only)
# dev/bench/{branch}/ - Feature branch benchmarks (experimental)
- name: Include benchmark data
Contributor Author:
Build working: https://github.com/ava-labs/firewood/actions/runs/21334822723/job/61405063779
Deploy only works from main due to environment protection rules.

run: |
# Fetch benchmark-data branch (may not exist on first run)
if ! git fetch origin benchmark-data 2>/dev/null; then
echo "No benchmark-data branch yet, skipping"
exit 0
fi

# Extract benchmark directories from benchmark-data branch into current dir
# These may not exist if no benchmarks have been run yet for that category
git checkout origin/benchmark-data -- dev 2>/dev/null || echo "No dev/ directory (no feature branch benchmarks yet)"
git checkout origin/benchmark-data -- bench 2>/dev/null || echo "No bench/ directory (no main branch benchmarks yet)"

# Copy to _site - at least one must exist
[[ -d dev ]] && cp -rv dev _site/
[[ -d bench ]] && cp -rv bench _site/

[[ -d dev || -d bench ]] || { echo "::error::No benchmark data (dev/ or bench/) found"; exit 1; }
- uses: actions/upload-artifact@v4
with:
name: pages
143 changes: 143 additions & 0 deletions .github/workflows/track-performance.yml
@@ -0,0 +1,143 @@
# Triggers AvalancheGo's C-Chain reexecution benchmark and publishes
# results to GitHub Pages for trend analysis.
name: C-Chain Reexecution Performance Tracking

on:
workflow_dispatch:
Member:
Why is this manually dispatched? Shouldn't this happen automatically on pushes to main, or is this just an intermediate step for testing purposes?

If the latter, please change the PR description to show the followup or create another PR/task and link it.

Contributor Author:
It's an intermediate step for testing purposes. I created a new issue for the follow-up (#1639) and updated the PR description.

inputs:
firewood:
description: 'Firewood commit/branch/tag to test (leave empty to use the commit that triggered the workflow)'
default: ''
libevm:
description: 'libevm commit/branch/tag to test (leave empty to skip)'
default: ''
avalanchego:
description: 'AvalancheGo commit/branch/tag to test against'
default: 'master'
test:
description: 'Predefined test (leave empty to use custom parameters below)' # https://github.com/ava-labs/avalanchego/blob/a85295d87193b30ff17c594680dadd6618022f5e/scripts/benchmark_cchain_range.sh#L63
default: ''
config:
description: 'Config (e.g., firewood, hashdb)'
default: ''
start-block:
default: ''
end-block:
default: ''
block-dir-src:
description: 'Block directory source (e.g., cchain-mainnet-blocks-1m-ldb [without S3 path])'
default: ''
current-state-dir-src:
description: 'Current state directory source (e.g., cchain-mainnet-blocks-30m-40m-ldb [without S3 path])'
default: ''
runner:
description: 'Runner to use in AvalancheGo'
required: true
type: choice
options:
- avalanche-avalanchego-runner-2ti
- avago-runner-i4i-4xlarge-local-ssd
- avago-runner-m6i-4xlarge-ebs-fast
timeout-minutes:
description: 'Timeout in minutes'
default: ''

jobs:
record-benchmark-to-gh-pages:
runs-on: ubuntu-latest
permissions:
contents: write # Required for github-action-benchmark to push to gh-pages
steps:
# NOTE: This checkout is only to get the bench-cchain-reexecution.sh script.
# We're not building or testing Firewood here—the script triggers AvalancheGo's
# workflow via API and passes FIREWOOD_REF to it. AvalancheGo is responsible
# for checking out and building Firewood at that ref.
- name: Checkout Firewood
uses: actions/checkout@v4
Member:
Doesn't this always check out the default branch? Shouldn't it be parameterized with something like:

ref: ${{ inputs.firewood || github.sha }}

Otherwise I think this checks out main, and then the script later switches it to a different branch, which is less efficient and perhaps confusing.

Contributor Author:
The checkout here is only to get the bench-cchain-reexecution.sh script; we're not building or testing Firewood in this workflow. The script just triggers AvalancheGo's workflow via API and passes FIREWOOD_REF to it. AvalancheGo is responsible for checking out and building Firewood at that ref.

So the checkout ref doesn't affect which Firewood version gets benchmarked; that's controlled by FIREWOOD_REF (inputs.firewood) passed to AvalancheGo. The script itself is stable and backward compatible.

Member:
Let's add some comments explaining this to avoid confusion later. In particular we should mention that the only reason to check this out is to get the script.

One interesting side effect of this decision is that if you want to update the script, it won't use the new one until it lands in main I think.

Contributor Author:
Added a clarifying comment to avoid confusion later.

One interesting side effect of this decision is that if you want to update the script, it won't use the new one until it lands in main I think.

Since the checkout doesn't specify a ref, it uses whichever branch you trigger from for manual runs. So you can test script changes by triggering the workflow from your feature branch. For scheduled runs (coming later), it will use main, which is the expected behavior for automated tracking.
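As a rough illustration of the cross-repo dispatch described above, here is a minimal sketch built on the gh CLI. The workflow file name and input names are assumptions for illustration; the real logic lives in scripts/bench-cchain-reexecution.sh.

```shell
#!/usr/bin/env bash
# Sketch only: build the argument list for a cross-repo `gh workflow run`
# dispatch. `benchmark-cchain-range.yml` and `firewood-ref` are illustrative
# names, not confirmed from the actual AvalancheGo workflow.
set -euo pipefail

build_trigger_args() {
  # Emit one argument per line so the final invocation is easy to inspect.
  printf '%s\n' \
    workflow run benchmark-cchain-range.yml \
    --repo ava-labs/avalanchego \
    --ref "${AVALANCHEGO_REF:-master}" \
    -f "firewood-ref=${FIREWOOD_REF:?FIREWOOD_REF must be set}"
}

# Dry run: print the arguments instead of invoking `gh` directly.
FIREWOOD_REF="1234abc" build_trigger_args
```

In the real script the same arguments would be passed to `gh` with GH_TOKEN set, which is why the workflow exports the cross-repo token in the step's env.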


- name: Trigger C-Chain Reexecution Benchmark
run: |
if [[ -n "${{ inputs.test }}" ]]; then
./scripts/bench-cchain-reexecution.sh trigger "${{ inputs.test }}"
else
./scripts/bench-cchain-reexecution.sh trigger
fi
env:
GH_TOKEN: ${{ secrets.FIREWOOD_AVALANCHEGO_GITHUB_TOKEN }}
# Custom mode (ignored when test is specified)
CONFIG: ${{ inputs.config }}
START_BLOCK: ${{ inputs.start-block }}
Member:
It seems like if you set START_BLOCK you better also be setting CURRENT_STATE_DIR_SRC to let it know where to get the bootstrap database, is that correct?

If so, we should verify that either neither is provided or both are.

Contributor Author:
CURRENT_STATE_DIR_SRC is optional, not required, because you might want to start from genesis.

# CURRENT_STATE_DIR_SRC (optional) S3 state directory (empty = genesis run)

Validation:

[[ -z "${START_BLOCK:-}${END_BLOCK:-}${BLOCK_DIR_SRC:-}" ]] && \

Member:
If you want to start from genesis, then you either must set START_BLOCK to 0 (1?) or not set it. So, if START_BLOCK is not 0, then CURRENT_STATE_DIR_SRC must be set. Isn't that correct?

Contributor Author:
Close: START_BLOCK should be 1 (not 0) for genesis, and an empty CURRENT_STATE_DIR_SRC means starting from genesis (no pre-existing state to bootstrap from). So the valid combinations are:

  • Genesis run: START_BLOCK=1, CURRENT_STATE_DIR_SRC empty
  • Resume run: START_BLOCK=N, CURRENT_STATE_DIR_SRC points to state at block N-1
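The rule above can be sketched as a small shell check. This is a sketch of the constraint, not the actual script's code; `validate_run_inputs` is an illustrative name.

```shell
#!/usr/bin/env bash
# Sketch of the START_BLOCK / CURRENT_STATE_DIR_SRC rule:
#   genesis run: START_BLOCK=1 and no state directory;
#   resume run:  START_BLOCK=N (N>1) requires state at block N-1.
set -euo pipefail

validate_run_inputs() {
  local start_block="$1" state_dir_src="$2"
  if (( start_block > 1 )) && [[ -z "$state_dir_src" ]]; then
    echo "error: START_BLOCK=$start_block requires CURRENT_STATE_DIR_SRC (state at block $((start_block - 1)))" >&2
    return 1
  fi
  if (( start_block == 1 )) && [[ -n "$state_dir_src" ]]; then
    echo "error: genesis run (START_BLOCK=1) must not set CURRENT_STATE_DIR_SRC" >&2
    return 1
  fi
}

validate_run_inputs 1 ""                                          # genesis run: valid
validate_run_inputs 30000001 "cchain-mainnet-blocks-30m-40m-ldb"  # resume run: valid
```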

END_BLOCK: ${{ inputs.end-block }}
BLOCK_DIR_SRC: ${{ inputs.block-dir-src }}
CURRENT_STATE_DIR_SRC: ${{ inputs.current-state-dir-src }}
# Refs
FIREWOOD_REF: ${{ inputs.firewood || github.sha }}
AVALANCHEGO_REF: ${{ inputs.avalanchego }}
LIBEVM_REF: ${{ inputs.libevm }}
# Execution
RUNNER: ${{ inputs.runner }}
TIMEOUT_MINUTES: ${{ inputs.timeout-minutes }}

# github.ref controls where results are stored (not what gets benchmarked):
# - main branch → bench/ (official history)
# - feature branches → dev/bench/{branch}/ (experimental, won't pollute trends)
# inputs.firewood controls what gets benchmarked (passed to AvalancheGo).
- name: Determine results location
id: location
run: |
if [[ "${{ github.ref }}" == "refs/heads/main" ]]; then
Member:
Is this correct? I'm not following if github.ref is the tag we really want here, especially if inputs.firewood points to something completely different?

Shouldn't this be doing a git rev-parse --abbrev-ref HEAD or something to get the correct firewood tag?

Contributor Author:
github.ref here is intentional: it determines who can write to official benchmark history, not what is being benchmarked.

The logic is:

  • Only workflows triggered from main write to bench/ (official history)
  • Feature branches write to dev/bench/{branch}/ (experimental, won't pollute official trends)

This prevents a feature branch from accidentally (or intentionally) writing to official history by setting inputs.firewood=main.

inputs.firewood controls what gets benchmarked; github.ref controls where results are stored.

Member:
A comment here would be helpful as it's not obvious.

Contributor Author:
There was a comment:

# Store main branch results in bench/, feature branches in dev/bench/{branch}/

but I updated it to be clearer.
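The split between official and experimental results can be sketched as a small mapping, mirroring the logic of the "Determine results location" step (the function name here is illustrative):

```shell
#!/usr/bin/env bash
# Sketch: map the triggering ref to the benchmark results directory.
# main writes to bench/ (official history); other branches write to
# dev/bench/{branch}/ with '/' flattened to '-' to avoid nested paths.
set -euo pipefail

data_dir_for_ref() {
  local ref="$1" ref_name="$2"
  if [[ "$ref" == "refs/heads/main" ]]; then
    echo "bench"
  else
    echo "dev/bench/$(echo "$ref_name" | tr '/' '-')"
  fi
}

data_dir_for_ref "refs/heads/main" "main"
data_dir_for_ref "refs/heads/es/enable-firewood-dev-workflow" "es/enable-firewood-dev-workflow"
```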

echo "data-dir=bench" >> "$GITHUB_OUTPUT"
else
echo "data-dir=dev/bench/$(echo '${{ github.ref_name }}' | tr '/' '-')" >> "$GITHUB_OUTPUT"
fi

- name: Publish benchmark results
id: store
uses: benchmark-action/github-action-benchmark@v1
with:
name: C-Chain Reexecution with Firewood
tool: 'customBiggerIsBetter'
output-file-path: ./results/benchmark-output.json
github-token: ${{ secrets.GITHUB_TOKEN }}
summary-always: true
auto-push: true
fail-on-alert: true
comment-on-alert: false
gh-pages-branch: benchmark-data
benchmark-data-dir-path: ${{ steps.location.outputs.data-dir }}

- name: Summary
run: |
if [ "${{ steps.store.outcome }}" == "failure" ]; then
echo "::warning::Benchmark storage failed - results were not saved to GitHub Pages"
fi

{
echo "## Firewood Performance Benchmark Results"
echo
echo "**Configuration:**"

if [ -n "${{ inputs.test }}" ]; then
echo "- Mode: Predefined test"
echo "- Test: \`${{ inputs.test }}\`"
else
echo "- Mode: Custom parameters"
echo "- Config: \`${{ inputs.config }}\`"
echo "- Blocks: \`${{ inputs.start-block }}\` → \`${{ inputs.end-block }}\`"
echo "- Block source: \`${{ inputs.block-dir-src }}\`"
echo "- State source: \`${{ inputs.current-state-dir-src }}\`"
fi

echo "- Firewood: \`${{ inputs.firewood || github.sha }}\`"
if [ -n "${{ inputs.libevm }}" ]; then
echo "- libevm: \`${{ inputs.libevm }}\`"
fi
echo "- AvalancheGo: \`${{ inputs.avalanchego }}\`"
echo "- Runner: \`${{ inputs.runner }}\`"
echo "- Timeout: \`${{ inputs.timeout-minutes }}\` minutes"
echo

echo "**Links:**"
echo "- [Performance Trends](https://ava-labs.github.io/firewood/${{ steps.location.outputs.data-dir }}/)"
} >> "$GITHUB_STEP_SUMMARY"
