101 changes: 101 additions & 0 deletions METRICS.md
@@ -298,3 +298,104 @@ rate(firewood_space_from_end[5m])
rate(firewood_proposal_commit{success="false"}[5m]) /
rate(firewood_proposal_commit[5m])
```

## Performance Tracking

Firewood tracks its performance over time by running [C-Chain reexecution benchmarks](https://github.com/ava-labs/avalanchego/blob/master/tests/reexecute/c/README.md) in AvalancheGo. These benchmarks re-execute historical mainnet C-Chain blocks against a state snapshot, measuring throughput in mgas/s (million gas per second).

This allows us to:

- Monitor performance across commits and releases
- Catch performance regressions early
- Validate optimizations against real-world blockchain workloads

Performance data is collected via the `Track Performance` workflow and published to GitHub Pages.

### Running Benchmarks from GitHub UI

The easiest way to trigger a benchmark is via the GitHub Actions UI:

1. Go to [Actions → Track Performance](https://github.com/ava-labs/firewood/actions/workflows/track-performance.yml)
2. Click "Run workflow"
3. Select parameters from the dropdowns (task, runner) or enter custom values
4. Click "Run workflow"

### Triggering Benchmarks via CLI

Benchmarks run on AvalancheGo's self-hosted runners, not locally. This enables end-to-end integration testing where:

- Firewood team can benchmark changes against the full AvalancheGo stack
- AvalancheGo team can iterate on their Firewood integration

```mermaid
sequenceDiagram
participant F as Firewood
participant A as AvalancheGo
participant G as GitHub Pages

F->>A: 1. trigger workflow
A->>A: 2. run benchmark
A-->>F: 3. download results
F->>G: 4. publish
```

The CLI commands trigger the remote workflow, wait for completion, and download the results.

```bash
# Authenticate once (use plain gh if it is already on your PATH)
nix run ./ffi#gh -- auth login
export GH_TOKEN=$(nix run ./ffi#gh -- auth token)

# Predefined test
just bench-cchain firewood-101-250k

# With specific Firewood version
FIREWOOD_REF=v0.1.0 just bench-cchain firewood-33m-40m

# Custom block range
START_BLOCK=101 END_BLOCK=250000 \
BLOCK_DIR_SRC=cchain-mainnet-blocks-1m-ldb \
CURRENT_STATE_DIR_SRC=cchain-current-state-firewood-100 \
just bench-cchain
```

**Command:**

```bash
just bench-cchain [test]
```

Triggers Firewood's `track-performance.yml` workflow, which orchestrates the AvalancheGo benchmark. The command polls until the workflow run is registered, then watches its progress in the terminal.

> **Note:** Changes must be pushed to the remote branch for the workflow to use them. By default, the workflow builds Firewood from the current commit. To benchmark a specific version (e.g., a release tag), set `FIREWOOD_REF` explicitly.
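
If you prefer to drive the workflow without `just`, the recipe boils down to a few `gh` calls. A minimal sketch (it omits the timestamp-based run lookup the recipe uses to avoid racing against other runs; see the justfile below):

```bash
# Trigger the workflow on the current branch with a predefined test.
branch=$(git rev-parse --abbrev-ref HEAD)
gh workflow run track-performance.yml --ref "$branch" \
  -f runner="avalanche-avalanchego-runner-2ti" \
  -f test="firewood-101-250k"

# Give GitHub a moment to register the run, then follow the newest one.
sleep 5
run_id=$(gh run list --workflow=track-performance.yml --limit=1 \
  --json databaseId --jq '.[0].databaseId')
gh run watch "$run_id"
```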

**Environment variables:**

| Variable | Default | Description |
|----------|---------|-------------|
| `FIREWOOD_REF` | current commit | Firewood commit/tag/branch to build |
| `AVALANCHEGO_REF` | master | AvalancheGo ref to test against |
| `LIBEVM_REF` | - | Optional libevm ref |
| `RUNNER` | avalanche-avalanchego-runner-2ti | GitHub Actions runner |
| `TIMEOUT_MINUTES` | - | Workflow timeout |
| `DOWNLOAD_DIR` | ./results | Directory for downloaded artifacts |
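
For example, to benchmark a tagged release against AvalancheGo `master` and keep the downloaded artifacts in a custom directory (values here are illustrative):

```bash
# Pin both sides of the integration and choose where artifacts land.
FIREWOOD_REF=v0.1.0 \
AVALANCHEGO_REF=master \
DOWNLOAD_DIR=./bench-results \
just bench-cchain firewood-33m-40m
```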

**Custom mode variables** (when no test specified):

| Variable | Default | Description |
|----------|---------|-------------|
| `CONFIG` | firewood | VM config (firewood, hashdb, etc.) |
| `START_BLOCK` | required | First block number |
| `END_BLOCK` | required | Last block number |
| `BLOCK_DIR_SRC` | required | S3 block directory |
| `CURRENT_STATE_DIR_SRC` | - | S3 state directory (empty = genesis run) |
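
Leaving `CURRENT_STATE_DIR_SRC` unset starts the re-execution from genesis, so no state snapshot is needed:

```bash
# Re-execute the first 100 blocks from genesis (no state snapshot).
START_BLOCK=1 END_BLOCK=100 \
BLOCK_DIR_SRC=cchain-mainnet-blocks-200-ldb \
just bench-cchain
```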

**Tests and runners** are defined in AvalancheGo:

- [Available tests](https://github.com/ava-labs/avalanchego/blob/master/scripts/benchmark_cchain_range.sh)
- [C-Chain benchmark docs](https://github.com/ava-labs/avalanchego/blob/master/tests/reexecute/c/README.md)

### Viewing Results

Results are published to GitHub Pages via [github-action-benchmark](https://github.com/benchmark-action/github-action-benchmark). View trends at:

- [Performance Trends](https://ava-labs.github.io/firewood/bench/)
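
The underlying series is stored as a `data.js` file alongside the page; a quick sketch for pulling the raw numbers (the path and the `window.BENCHMARK_DATA = ...` wrapper are assumed from github-action-benchmark's default layout):

```bash
# Fetch the published series and strip the JS wrapper to get plain JSON.
curl -s https://ava-labs.github.io/firewood/bench/data.js \
  | sed -e 's/^window.BENCHMARK_DATA = //' \
  | jq '.entries | keys'
```
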
6 changes: 6 additions & 0 deletions ffi/flake.nix
@@ -120,11 +120,17 @@
program = "${pkgs.just}/bin/just";
};

apps.gh = {
type = "app";
program = "${pkgs.gh}/bin/gh";
};

devShells.default = craneLib.devShell {
inputsFrom = [ firewood-ffi ];

packages = with pkgs; [
firewood-ffi
gh
go
jq
just
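
With the new `apps.gh` output, the pinned gh CLI can be run without installing it globally, for example:

```bash
nix run ./ffi#gh -- auth status
```
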
90 changes: 90 additions & 0 deletions justfile
@@ -168,3 +168,93 @@ release-step-refresh-changelog tag:

echo "Generating changelog..."
git cliff -o CHANGELOG.md --tag "{{tag}}"

# Run a C-Chain reexecution benchmark
# Triggers Firewood's track-performance.yml which then triggers AvalancheGo.
# This ensures results appear in Firewood's workflow summary and get published
# to GitHub Pages for the current branch.
#
# Note: Changes must be pushed to the remote branch for the workflow to use them.
#
# By default, uses the current HEAD commit to build Firewood. If you want to
# benchmark a specific version (e.g., a release tag), set FIREWOOD_REF explicitly:
# FIREWOOD_REF=v0.1.0 just bench-cchain firewood-101-250k
#
# Examples:
# just bench-cchain firewood-101-250k
# FIREWOOD_REF=v0.1.0 just bench-cchain firewood-101-250k
# START_BLOCK=1 END_BLOCK=100 BLOCK_DIR_SRC=cchain-mainnet-blocks-200-ldb just bench-cchain
bench-cchain test="" runner="avalanche-avalanchego-runner-2ti":
#!/usr/bin/env -S bash -euo pipefail

# Resolve gh CLI
if command -v gh &>/dev/null; then
GH=gh
elif command -v nix &>/dev/null; then
GH="nix run ./ffi#gh --"
else
echo "error: 'gh' CLI not found. Install it or use 'nix develop ./ffi'" >&2
exit 1
fi

test="{{ test }}"

# Validate: need either test name OR custom block params
if [[ -z "$test" && -z "${START_BLOCK:-}" ]]; then
echo "error: Provide a test name or set START_BLOCK, END_BLOCK, BLOCK_DIR_SRC" >&2
echo "" >&2
echo "Predefined tests:" >&2
echo " firewood-101k-250k, firewood-33m-33m500k, firewood-33m-40m" >&2
echo " firewood-archive-101-250k, firewood-archive-33m-33m500k, firewood-archive-33m-40m" >&2
echo "" >&2
echo "Custom mode example:" >&2
echo " START_BLOCK=1 END_BLOCK=100 BLOCK_DIR_SRC=cchain-mainnet-blocks-200-ldb just bench-cchain" >&2
exit 1
fi

# Build workflow args
args=(-f runner="{{ runner }}")
[[ -n "$test" ]] && args+=(-f test="$test")
[[ -n "${FIREWOOD_REF:-}" ]] && args+=(-f firewood="$FIREWOOD_REF")
[[ -n "${LIBEVM_REF:-}" ]] && args+=(-f libevm="$LIBEVM_REF")
[[ -n "${AVALANCHEGO_REF:-}" ]] && args+=(-f avalanchego="$AVALANCHEGO_REF")
[[ -n "${CONFIG:-}" ]] && args+=(-f config="$CONFIG")
[[ -n "${START_BLOCK:-}" ]] && args+=(-f start-block="$START_BLOCK")
[[ -n "${END_BLOCK:-}" ]] && args+=(-f end-block="$END_BLOCK")
[[ -n "${BLOCK_DIR_SRC:-}" ]] && args+=(-f block-dir-src="$BLOCK_DIR_SRC")
[[ -n "${CURRENT_STATE_DIR_SRC:-}" ]] && args+=(-f current-state-dir-src="$CURRENT_STATE_DIR_SRC")
[[ -n "${TIMEOUT_MINUTES:-}" ]] && args+=(-f timeout-minutes="$TIMEOUT_MINUTES")

branch=$(git rev-parse --abbrev-ref HEAD)

[[ -n "$test" ]] && echo "==> Test: $test"
[[ -n "${START_BLOCK:-}" ]] && echo "==> Custom: blocks $START_BLOCK-${END_BLOCK:-?}"
echo "==> Runner: {{ runner }}"

# Record time before triggering to find our run (avoid race conditions)
trigger_time=$(date -u +%Y-%m-%dT%H:%M:%SZ)

$GH workflow run track-performance.yml --ref "$branch" "${args[@]}"

# Poll for workflow registration (runs created after trigger_time)
echo ""
echo "Polling for workflow to register..."
for i in {1..30}; do
sleep 1
run_id=$($GH run list --workflow=track-performance.yml --limit=10 --json databaseId,createdAt \
--jq "[.[] | select(.createdAt > \"$trigger_time\")] | .[-1].databaseId // empty")
[[ -n "$run_id" ]] && break
done

if [[ -z "$run_id" ]]; then
echo "warning: Could not find run ID. Check manually at:"
echo " https://github.com/ava-labs/firewood/actions/workflows/track-performance.yml"
exit 0
fi

echo ""
echo "Monitor this workflow with cli: $GH run watch $run_id"
echo " or with this URL: https://github.com/ava-labs/firewood/actions/runs/$run_id"
echo ""

$GH run watch "$run_id"
**Review comment** (Member, on `$GH run watch "$run_id"`):

> Is it better to use this or watch it on the web? Maybe we shouldn't run this and let them choose.

**Reply** (Contributor, Author, referencing the `# Resolve gh CLI` block):

> Since the script already resolves and wraps the gh CLI (falling back to nix if needed), waiting by default keeps the experience consistent for users who have the tooling set up. The URL is printed before the watch starts, so users can open it in their browser and Ctrl+C to exit if they prefer the web UI.
>
> Does that work for you, or would you prefer adding something like a no-wait flag?