ci(perf): Track Firewood Performance via AvalancheGo Benchmarks #1493
Conversation
Implements workflow to trigger C-Chain reexecution benchmarks in AvalancheGo and track Firewood performance over time. Supports task-based and custom parameter modes. Results stored in GitHub Pages via github-action-benchmark.
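The publishing step described above uses github-action-benchmark. As a hedged sketch (not the PR's actual workflow; file paths, the `tool` format, and the data directory are assumptions for illustration), the step could look like:

```yaml
# Sketch of a github-action-benchmark publishing step.
# output-file-path and benchmark-data-dir-path are assumed values.
- name: Publish benchmark results
  uses: benchmark-action/github-action-benchmark@v1
  with:
    name: C-Chain reexecution
    tool: customSmallerIsBetter        # assumed result format
    output-file-path: results.json     # results downloaded from AvalancheGo
    github-token: ${{ secrets.GITHUB_TOKEN }}
    gh-pages-branch: gh-pages
    benchmark-data-dir-path: dev/bench # assumed branch layout
    auto-push: true
```

github-action-benchmark commits each run's data to the Pages branch, which is what allows performance to be charted over time.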
Unblocked, test runs:
…nputs_match reducing cognitive load
"**/tests/compile_*/**",
"justfile",
"scripts/run-just.sh",
"scripts/**",
Added a new benchmark script (bench-cchain-reexecution.sh), which caused CI to fail with "Config does not cover the file". Shell scripts aren't checked for license content (only .rs/.go/.h files are), but they must still be explicitly listed in the config. Exclude the entire scripts/ directory to avoid listing each script individually.
https://github.com/ava-labs/firewood/blob/main/.github/check-license-headers.yaml
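A hedged sketch of what the exclusion could look like (the exact schema of check-license-headers.yaml is an assumption here; the point is that scripts/ is excluded wholesale rather than per-file):

```yaml
# Assumed shape of .github/check-license-headers.yaml exclusions.
exclude:
  - "**/tests/compile_*/**"
  - "justfile"
  - "scripts/**"   # covers bench-cchain-reexecution.sh and future scripts
```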
Early review, could use another pass
There's a lot of code here, and I was barely able to complete the review within my maximum review time. Please consider breaking this up for more timely reviews, especially if future changes are any larger than this.
I'm also a little confused about how we track which firewood and avalanchego versions we ran the test against.
Let's say the performance loss was due to a change in avalanchego, how can we know that?
Remove local developer tooling (justfile recipe, flake.nix, METRICS.md) to reduce PR scope. These will be submitted in a follow-up PR after the CI workflow changes are merged.
I've split into two PRs per your feedback:
On version tracking: this is a conscious tradeoff to get data flowing now with minimal setup, then iterate toward something more robust (S3 storage, richer metadata, export for analysis). Planning to revisit after ~2 weeks of data collection. For now, each run logs the Firewood / AvalancheGo refs in the GitHub Actions summary, so the info exists, it just isn't queryable. Added your concern to the tracking doc as something to think through for the next iteration. How does that sound? Summary example: https://github.com/ava-labs/firewood/actions/runs/21334552166
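As a sketch of the kind of summary logging described above (an assumption, not the PR's exact script; `FIREWOOD_REF` / `AVALANCHEGO_REF` are assumed env inputs), the refs could be appended to the job summary like this:

```shell
# Record the refs used for this run in the GitHub Actions job summary
# so each data point is attributable. Falls back to summary.md when
# GITHUB_STEP_SUMMARY is unset (e.g. when run locally).
FIREWOOD_REF="${FIREWOOD_REF:-unknown}"
AVALANCHEGO_REF="${AVALANCHEGO_REF:-unknown}"
SUMMARY_FILE="${GITHUB_STEP_SUMMARY:-summary.md}"
{
  echo "## Benchmark run metadata"
  echo "| Component | Ref |"
  echo "| --- | --- |"
  echo "| Firewood | ${FIREWOOD_REF} |"
  echo "| AvalancheGo | ${AVALANCHEGO_REF} |"
} >> "$SUMMARY_FILE"
```

This keeps the version information human-readable per run; making it queryable would require exporting the same metadata alongside the benchmark data.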
Executing: Triggered reexecution benchmark in AvalancheGo https://github.com/ava-labs/avalanchego/actions/runs/21408080471
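The cross-repo trigger shown above can be done through GitHub's `workflow_dispatch` REST API. A hedged sketch (the workflow file name, input names, and `build_dispatch_payload` helper are all assumptions, not the PR's actual values):

```shell
# Build the JSON body for a workflow_dispatch call against the
# hypothetical benchmark workflow in ava-labs/avalanchego.
build_dispatch_payload() {
  local firewood_ref="$1" task="$2"
  printf '{"ref":"master","inputs":{"firewood-ref":"%s","task":"%s"}}' \
    "$firewood_ref" "$task"
}

PAYLOAD="$(build_dispatch_payload "${FIREWOOD_REF:-main}" reexecute-c-chain)"
echo "$PAYLOAD"
# The actual dispatch would then be (requires a token with actions:write):
# curl -X POST -H "Authorization: Bearer $GITHUB_TOKEN" \
#   "https://api.github.com/repos/ava-labs/avalanchego/actions/workflows/benchmark.yml/dispatches" \
#   -d "$PAYLOAD"
```

Separating payload construction from the POST makes the trigger easy to dry-run and log before anything is actually dispatched.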
…ith err and intentional exit code status
RodrigoVillar left a comment
Approving on the condition that #1493 (comment) be addressed
Thank you, it was addressed:
@Elvis339 I meant this: firewood/scripts/bench-cchain-reexecution.sh, line 333 (commit 06c4d0a)
Why
Track C-Chain reexecution benchmark performance over time. Catch regressions before production.
Closes #1494
How
Firewood → triggers AvalancheGo benchmark → downloads results → publishes to GitHub Pages
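As a hedged sketch of the "downloads results" stage of that pipeline (run ID source, artifact name, and output directory are assumptions), a workflow step could pull the artifact from the AvalancheGo run with the gh CLI:

```yaml
# Download the benchmark artifact produced by the AvalancheGo run.
# AVALANCHEGO_RUN_ID and the artifact name are assumed values.
- name: Download benchmark results from AvalancheGo
  env:
    GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  run: |
    gh run download "$AVALANCHEGO_RUN_ID" \
      --repo ava-labs/avalanchego \
      --name benchmark-results \
      --dir results/
```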
Changes
- scripts/bench-cchain-reexecution.sh
- .github/workflows/track-performance.yml
- GitHub Pages publishing (main → dev/bench/, branches → dev/bench/{branch}/)

Usage
- Set FIREWOOD_REF=v0.0.18 explicitly. Without it, the workflow builds from HEAD, which currently fails due to changes in the FFI layer.

Related
AvalancheGo: ava-labs/avalanchego#4650
Local iteration PR: #1642
Followup: #1639