Draft
Changes from 13 commits
Commits
45 commits
25140e2
feat(ci): add performance tracking workflow for AvalancheGo benchmarks
Elvis339 Nov 27, 2025
1ea99bb
ci: track performance
Elvis339 Nov 27, 2025
1ce1481
test(ci): add PR label trigger for testing
Elvis339 Nov 27, 2025
8794250
temp fix ci
Elvis339 Nov 27, 2025
4aa0e08
ci: use switch CI token
Elvis339 Dec 2, 2025
32fd06f
ci: lint
Elvis339 Dec 2, 2025
7a75021
fix
Elvis339 Dec 2, 2025
e35fa5d
docs
Elvis339 Dec 2, 2025
83a113a
Merge branch 'main' into es/enable-firewood-dev-workflow
Elvis339 Dec 2, 2025
18f4035
ci: push performance to benchmark data
Elvis339 Dec 4, 2025
73fc781
ci(perf): add benchmark workflow with nix-based just commands
Elvis339 Dec 4, 2025
a19f1e8
Merge branch 'es/enable-firewood-dev-workflow' of https://github.com/…
Elvis339 Dec 4, 2025
668cef3
docs
Elvis339 Dec 4, 2025
6e65816
chore: remove "\n" that got URL encoded
Elvis339 Dec 4, 2025
1e2bdd3
docs
Elvis339 Dec 4, 2025
01ee24f
ci(track-performance): remove `if: always`
Elvis339 Dec 8, 2025
19d921e
address PR
Elvis339 Dec 8, 2025
1131a1d
Update .github/workflows/track-performance.yml
Elvis339 Dec 8, 2025
5e32e9d
Update .github/workflows/track-performance.yml
Elvis339 Dec 8, 2025
8008264
Update justfile
Elvis339 Dec 8, 2025
f2cf502
fix(ci): replace sleep with retry loop, improve justfile commands
Elvis339 Dec 8, 2025
39c2050
Merge branch 'es/enable-firewood-dev-workflow' of https://github.com/…
Elvis339 Dec 8, 2025
ffcb333
lint: descriptive link text
Elvis339 Dec 8, 2025
02e7e94
docs
Elvis339 Dec 8, 2025
7b16833
Merge branch 'main' into es/enable-firewood-dev-workflow
Elvis339 Dec 8, 2025
625049e
chore: fix syntax error for benchmark command
Elvis339 Dec 9, 2025
bb2c97e
ci: update workflow for C-Chain reexecution benchmarks and improve ju…
Elvis339 Dec 30, 2025
c84dc10
Merge branch 'main' into es/enable-firewood-dev-workflow
Elvis339 Jan 22, 2026
37f1325
ci(perf): add C-Chain reexecution benchmark workflow
Elvis339 Jan 22, 2026
33e6921
chore: revert changes from gh-pages
Elvis339 Jan 25, 2026
268c05f
ci(track-performance): remove checkout fetch depth 0
Elvis339 Jan 25, 2026
7055e9c
refactor(bench): simplify C-Chain benchmark workflow
Elvis339 Jan 25, 2026
8b92cc6
docs
Elvis339 Jan 25, 2026
98c9d00
ci(track-performance): empty `inputs.test` was causing CI workflow to…
Elvis339 Jan 25, 2026
2b128a8
debug
Elvis339 Jan 25, 2026
a3966e6
refactor(benchmark): trigger C-Chain benchmarks via Firewood CI workflow
Elvis339 Jan 25, 2026
65224f8
temp
Elvis339 Jan 25, 2026
25cde31
temp
Elvis339 Jan 25, 2026
e19e951
fix(track-performance): improve concurrent trigger handling
Elvis339 Jan 25, 2026
2b144f0
docs
Elvis339 Jan 25, 2026
fb0a9cc
docs
Elvis339 Jan 25, 2026
bfc7c8a
ci(gh-pages): preserve benchmark data when deploying docs
Elvis339 Jan 25, 2026
7d12619
ci(gh-pages): temp. add workflow_dispatch to rebuild Pages
Elvis339 Jan 25, 2026
09e9f46
fix(gh-pages)
Elvis339 Jan 25, 2026
c7cad22
ci(gh-pages): remove temp. set workflow_dispatch
Elvis339 Jan 25, 2026
8 changes: 8 additions & 0 deletions .github/workflows/gh-pages.yaml
@@ -30,6 +30,14 @@ jobs:
run: |
cp -rv target/doc/* ./_site
cp -rv docs/assets ./_site
- name: Include benchmark data
Contributor Author:
Build working: https://github.com/ava-labs/firewood/actions/runs/21334822723/job/61405063779
Deploy only works from main due to environment protection rules.

run: |
git fetch origin benchmark-data || true
if git rev-parse origin/benchmark-data >/dev/null 2>&1; then
git checkout origin/benchmark-data -- dev bench 2>/dev/null || true
cp -rv dev _site/ 2>/dev/null || true
cp -rv bench _site/ 2>/dev/null || true
fi
- uses: actions/upload-artifact@v4
with:
name: pages
196 changes: 196 additions & 0 deletions .github/workflows/track-performance.yml
@@ -0,0 +1,196 @@
name: Track Performance

on:
workflow_dispatch:
Member:
Why is this manually dispatched? Shouldn't this happen automatically on pushes to main, or is this just an intermediate step for testing purposes?

If the latter, please change the PR description to show the follow-up, or create another PR/task and link it.

Contributor Author:
It's an intermediate step for testing purposes. I created a new issue for the follow-up (#1639) and updated the PR description.

inputs:
firewood:
description: 'Firewood commit/branch/tag to test (leave empty for current HEAD)'
default: ''
libevm:
description: 'libevm commit/branch/tag to test (leave empty to skip)'
default: ''
avalanchego:
description: 'AvalancheGo commit/branch/tag to test against'
default: 'master'
task:
description: 'Predefined task (leave empty to use custom parameters below)'
type: choice
options:
- c-chain-reexecution-firewood-101-250k
- c-chain-reexecution-firewood-33m-33m500k
- c-chain-reexecution-firewood-33m-40m
config:
description: 'VM config (e.g., firewood, hashdb)'
default: ''
start-block:
default: ''
end-block:
default: ''
block-dir-src:
description: 'Block directory source e.g., cchain-mainnet-blocks-1m-ldb (without S3 path)'
default: ''
current-state-dir-src:
description: 'Current state directory source e.g., cchain-mainnet-blocks-30m-40m-ldb (without S3 path)'
default: ''
runner:
description: 'Runner to use in AvalancheGo'
required: true
type: choice
options:
- avalanche-avalanchego-runner-2ti
- avago-runner-i4i-4xlarge-local-ssd
- avago-runner-m6i-4xlarge-ebs-fast

jobs:
c-chain-benchmark:
runs-on: ubuntu-latest
permissions:
contents: write # Required for github-action-benchmark to push to gh-pages
steps:
- name: Checkout Firewood
uses: actions/checkout@v4
with:
fetch-depth: 0 # Needed for github-action-benchmark

- name: Install Nix
uses: cachix/install-nix-action@02a151ada4993995686f9ed4f1be7cfbb229e56f # v31
with:
github_access_token: ${{ secrets.GITHUB_TOKEN }}

- name: Validate inputs
run: |
if [ -z "${{ inputs.task }}" ]; then
missing=()
[ -z "${{ inputs.config }}" ] && missing+=("config")
[ -z "${{ inputs.start-block }}" ] && missing+=("start-block")
[ -z "${{ inputs.end-block }}" ] && missing+=("end-block")
[ -z "${{ inputs.block-dir-src }}" ] && missing+=("block-dir-src")
[ -z "${{ inputs.current-state-dir-src }}" ] && missing+=("current-state-dir-src")
if [ ${#missing[@]} -gt 0 ]; then
echo "Error: When using custom mode, these fields are required: ${missing[*]}"
exit 1
fi
fi
- name: Trigger AvalancheGo benchmark
id: trigger
shell: nix develop ./ffi --command bash {0}
run: |
FIREWOOD="${{ inputs.firewood || github.sha }}"
echo "firewood=$FIREWOOD" >> "$GITHUB_OUTPUT"
if [ -n "${{ inputs.task }}" ]; then
# Task-based mode: use just command
RUN_ID=$(just benchmark-trigger "$FIREWOOD" "${{ inputs.avalanchego }}" "${{ inputs.task }}" "${{ inputs.runner }}" "${{ inputs.libevm }}")
else
# Granular mode: use direct gh with custom params
LIBEVM_FLAG=""
if [ -n "${{ inputs.libevm }}" ]; then
LIBEVM_FLAG="-f libevm=${{ inputs.libevm }}"
fi
gh workflow run "Firewood Reexecution Benchmark" \
--repo ava-labs/avalanchego \
--ref "${{ inputs.avalanchego }}" \
-f firewood="$FIREWOOD" \
-f config="${{ inputs.config }}" \
-f start-block="${{ inputs.start-block }}" \
-f end-block="${{ inputs.end-block }}" \
-f block-dir-src="${{ inputs.block-dir-src }}" \
-f current-state-dir-src="${{ inputs.current-state-dir-src }}" \
-f runner="${{ inputs.runner }}" \
$LIBEVM_FLAG
sleep 10
RUN_ID=$(gh run list \
--repo ava-labs/avalanchego \
--workflow "Firewood Reexecution Benchmark" \
--limit 5 \
--json databaseId,createdAt \
--jq '[.[] | select(.createdAt | fromdateiso8601 > (now - 60))] | .[0].databaseId')
if [ -z "$RUN_ID" ] || [ "$RUN_ID" = "null" ]; then
echo "Could not find triggered workflow run"
exit 1
fi
fi
echo "run_id=$RUN_ID" >> "$GITHUB_OUTPUT"
echo "run_url=https://github.com/ava-labs/avalanchego/actions/runs/$RUN_ID" >> "$GITHUB_OUTPUT"
env:
GH_TOKEN: ${{ secrets.FIREWOOD_AVALANCHEGO_GITHUB_TOKEN }}

- name: Wait for benchmark completion
shell: nix develop ./ffi --command bash {0}
run: just benchmark-wait "${{ steps.trigger.outputs.run_id }}"
env:
GH_TOKEN: ${{ secrets.FIREWOOD_AVALANCHEGO_GITHUB_TOKEN }}
timeout-minutes: 60

- name: Download benchmark results
id: download
shell: nix develop ./ffi --command bash {0}
run: |
just benchmark-download "${{ steps.trigger.outputs.run_id }}"
# Determine target dashboard
if [[ "${{ github.ref }}" == "refs/heads/main" ]]; then
echo "data-dir=bench" >> "$GITHUB_OUTPUT"
else
SAFE_NAME=$(echo "${{ github.ref_name }}" | tr '/' '-')
echo "data-dir=dev/bench/$SAFE_NAME" >> "$GITHUB_OUTPUT"
fi
env:
GH_TOKEN: ${{ secrets.FIREWOOD_AVALANCHEGO_GITHUB_TOKEN }}

- name: Store benchmark results
uses: benchmark-action/github-action-benchmark@v1
with:
name: C-Chain Reexecution Performance
tool: 'go'
output-file-path: ./results/benchmark-output.txt
github-token: ${{ secrets.GITHUB_TOKEN }}
auto-push: true
gh-pages-branch: benchmark-data
benchmark-data-dir-path: ${{ steps.download.outputs.data-dir }}
# Don't fail the workflow if there's an issue with benchmark storage
fail-on-alert: false
comment-on-alert: false

- name: Summary
if: always()
run: |
echo "## Firewood Performance Benchmark Results" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Configuration:**" >> $GITHUB_STEP_SUMMARY
if [ -n "${{ inputs.task }}" ]; then
echo "- Mode: Task-based" >> $GITHUB_STEP_SUMMARY
echo "- Task: \`${{ inputs.task }}\`" >> $GITHUB_STEP_SUMMARY
else
echo "- Mode: Custom parameters" >> $GITHUB_STEP_SUMMARY
echo "- Config: \`${{ inputs.config }}\`" >> $GITHUB_STEP_SUMMARY
echo "- Blocks: \`${{ inputs.start-block }}\` → \`${{ inputs.end-block }}\`" >> $GITHUB_STEP_SUMMARY
echo "- Block source: \`${{ inputs.block-dir-src }}\`" >> $GITHUB_STEP_SUMMARY
echo "- State source: \`${{ inputs.current-state-dir-src }}\`" >> $GITHUB_STEP_SUMMARY
fi
echo "- Firewood: \`${{ steps.trigger.outputs.firewood }}\`" >> $GITHUB_STEP_SUMMARY
if [ -n "${{ inputs.libevm }}" ]; then
echo "- libevm: \`${{ inputs.libevm }}\`" >> $GITHUB_STEP_SUMMARY
fi
echo "- AvalancheGo: \`${{ inputs.avalanchego }}\`" >> $GITHUB_STEP_SUMMARY
echo "- Runner: \`${{ inputs.runner }}\`" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
if [ "${{ steps.download.outcome }}" = "success" ]; then
echo "**Links:**" >> $GITHUB_STEP_SUMMARY
echo "- [AvalancheGo Workflow](${{ steps.trigger.outputs.run_url }})" >> $GITHUB_STEP_SUMMARY
echo "- [Performance Trends](https://ava-labs.github.io/firewood/${{ steps.download.outputs.data-dir }}/)" >> $GITHUB_STEP_SUMMARY
else
echo "**Status:** Failed" >> $GITHUB_STEP_SUMMARY
echo "Check [workflow logs](${{ steps.trigger.outputs.run_url }}) for details." >> $GITHUB_STEP_SUMMARY
fi
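The "Download benchmark results" step above routes results to one of two dashboards: runs on `main` publish to `bench/`, while runs from any other branch publish under `dev/bench/` with the branch name flattened. The routing logic can be sketched in plain bash (the function name and branch names below are illustrative, not part of the workflow):

```shell
# Sketch of the dashboard-path selection in the step above: main goes to
# bench/, any other ref to dev/bench/<sanitized-branch>.
pick_data_dir() {
  local ref="$1" ref_name="$2"
  if [ "$ref" = "refs/heads/main" ]; then
    echo "bench"
  else
    # Slashes in branch names would create nested directories, so flatten them
    echo "dev/bench/$(echo "$ref_name" | tr '/' '-')"
  fi
}

pick_data_dir "refs/heads/main" "main"
pick_data_dir "refs/heads/es/enable-firewood-dev-workflow" "es/enable-firewood-dev-workflow"
```

The `tr '/' '-'` flattening matters because the chosen path is handed to `benchmark-data-dir-path`, and a literal `/` in a branch name would otherwise scatter data across nested directories.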
43 changes: 43 additions & 0 deletions METRICS.md
@@ -284,3 +284,46 @@ rate(firewood_space_from_end[5m])
rate(firewood_proposal_commit{success="false"}[5m]) /
rate(firewood_proposal_commit[5m])
```

## Performance Tracking

Firewood tracks its performance over time by running C-Chain reexecution benchmarks in AvalancheGo. This allows us to:

- Monitor performance across commits and releases
- Catch performance regressions early
- Validate optimizations against real-world blockchain workloads

Performance data is collected via the `Track Performance` workflow and published to GitHub Pages.

### Running Benchmarks Locally

Benchmarks can be triggered locally using `just` commands (requires Nix):

```bash
# Run full benchmark: trigger, wait, download results
just benchmark

# With specific versions
just benchmark v0.0.15 master c-chain-reexecution-firewood-101-250k avalanche-avalanchego-runner-2ti

# With libevm
just benchmark v0.0.15 master c-chain-reexecution-firewood-101-250k avalanche-avalanchego-runner-2ti v1.0.0
```

### Composable Commands

Individual steps can be run separately:

```bash
# Trigger benchmark, returns run_id
just benchmark-trigger <firewood> <avalanchego> <task> <runner> [libevm]

# Wait for a specific run to complete
just benchmark-wait <run_id>

# Download results from a specific run
just benchmark-download <run_id>

# List recent benchmark runs
just benchmark-list
```
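The optional `[libevm]` argument in `benchmark-trigger` uses a pattern worth noting: an empty argument expands to no flag at all, so the downstream `gh workflow run` only receives `-f libevm=...` when a version was actually supplied. A minimal sketch of that pattern (the helper function name is hypothetical):

```shell
# Build an optional -f flag only when the argument is non-empty;
# an empty value produces no flag rather than "-f libevm=".
build_libevm_flag() {
  local libevm="$1"
  if [ -n "$libevm" ]; then
    echo "-f libevm=$libevm"
  fi
}

build_libevm_flag ""        # prints nothing: the flag is omitted entirely
build_libevm_flag "v1.0.0"  # prints the flag with its value
```

This keeps the workflow dispatch payload free of empty fields, which matters because the target workflow treats an absent `libevm` input as "skip".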
6 changes: 6 additions & 0 deletions ffi/flake.nix
@@ -121,11 +121,17 @@
program = "${pkgs.just}/bin/just";
};

apps.gh = {
type = "app";
program = "${pkgs.gh}/bin/gh";
};

devShells.default = craneLib.devShell {
inputsFrom = [ firewood-ffi ];

packages = with pkgs; [
firewood-ffi
gh
go
jq
just
93 changes: 93 additions & 0 deletions justfile
@@ -138,3 +138,96 @@ update-ffi-flake: check-nix

echo "checking for a consistent golang version"
../scripts/run-just.sh check-golang-version

# Trigger Reexecution Benchmark
# Usage: just benchmark-trigger <firewood> <avalanchego> <task> <runner> [libevm]
benchmark-trigger firewood avalanchego task runner libevm="": check-nix
#!/usr/bin/env bash
set -euo pipefail

GH="nix run ./ffi#gh --"

LIBEVM_FLAG=""
if [ -n "{{ libevm }}" ]; then
LIBEVM_FLAG="-f libevm={{ libevm }}"
fi

$GH workflow run "Firewood Reexecution Benchmark" \
--repo ava-labs/avalanchego \
--ref "{{ avalanchego }}" \
-f firewood="{{ firewood }}" \
-f task="{{ task }}" \
-f runner="{{ runner }}" \
$LIBEVM_FLAG

sleep 10

RUN_ID=$($GH run list \
--repo ava-labs/avalanchego \
--workflow "Firewood Reexecution Benchmark" \
--limit 5 \
--json databaseId,createdAt \
--jq '[.[] | select(.createdAt | fromdateiso8601 > (now - 60))] | .[0].databaseId')

if [ -z "$RUN_ID" ] || [ "$RUN_ID" = "null" ]; then
echo "Error: Could not find triggered workflow run" >&2
exit 1
fi

echo "$RUN_ID"

# Wait for reexecution benchmark run to complete
# Usage: just benchmark-wait <run_id>
benchmark-wait run_id: check-nix
#!/usr/bin/env bash
set -euo pipefail
nix run ./ffi#gh -- run watch "{{ run_id }}" --repo ava-labs/avalanchego --exit-status

# Download benchmark results
# Usage: just benchmark-download <run_id>
benchmark-download run_id: check-nix
#!/usr/bin/env bash
set -euo pipefail
mkdir -p ./results
nix run ./ffi#gh -- run download "{{ run_id }}" \
--repo ava-labs/avalanchego \
--name benchmark-output \
--dir ./results
cat ./results/benchmark-output.txt

# Run full benchmark: trigger, wait, download (composes the above)
# Usage: just benchmark [firewood] [avalanchego] [task] [runner] [libevm]
benchmark firewood="HEAD" avalanchego="master" task="c-chain-reexecution-firewood-101-250k" runner="avalanche-avalanchego-runner-2ti" libevm="": check-nix
#!/usr/bin/env bash
set -euo pipefail

FIREWOOD="{{ firewood }}"
if [[ "$FIREWOOD" == "HEAD" ]]; then
FIREWOOD=$(git rev-parse HEAD)
fi

echo "Firewood: $FIREWOOD"
echo "AvalancheGo: {{ avalanchego }}"
echo "Task: {{ task }}"
echo "Runner: {{ runner }}"
if [ -n "{{ libevm }}" ]; then
echo "LibEVM: {{ libevm }}"
fi

echo "Triggering reexecution benchmark in AvalancheGo..."
RUN_ID=$(just benchmark-trigger "$FIREWOOD" "{{ avalanchego }}" "{{ task }}" "{{ runner }}" "{{ libevm }}")
echo " Run ID: $RUN_ID"
echo " URL: https://github.com/ava-labs/avalanchego/actions/runs/$RUN_ID"

echo "Waiting for benchmark completion..."
just benchmark-wait "$RUN_ID"

echo "Downloading results..."
just benchmark-download "$RUN_ID"
echo "Results saved to: ./results/benchmark-output.txt"

# List recent AvalancheGo benchmark runs
benchmark-list: check-nix
#!/usr/bin/env bash
set -euo pipefail
nix run ./ffi#gh -- run list --repo ava-labs/avalanchego --workflow="Firewood Reexecution Benchmark" --limit 10
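Because `gh workflow run` does not return the ID of the run it dispatches, `benchmark-trigger` above recovers it by listing recent runs and keeping only those created within the last 60 seconds (`fromdateiso8601 > (now - 60)` in the `--jq` filter). The recency check can be illustrated in plain bash (the function and the two sample timestamps are hypothetical):

```shell
# Plain-bash sketch of the recency check performed by benchmark-trigger's
# --jq filter: a run counts as "ours" only if it was created < 60 s ago.
now=$(date -u +%s)

is_recent() {
  local created_epoch="$1"
  [ $((now - created_epoch)) -lt 60 ]
}

recent=$((now - 30))   # a run triggered ~30 s ago: selected
stale=$((now - 300))   # an unrelated run from 5 minutes ago: ignored

is_recent "$recent" && echo "recent run: selected"
is_recent "$stale" || echo "stale run: ignored"
```

The 60-second window is a heuristic: if two dispatches land within the same minute the filter can pick the wrong run, which is why the recipe also sleeps briefly after dispatching before querying `gh run list`.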