ci: add benchmark workflow #1643
Conversation
    jobs:
      bench:
        runs-on: ubuntu-latest
Should this run on a dedicated/self-hosted runner instead of ubuntu-latest? Shared runners have variable performance, which can make benchmark results unreliable for tracking regressions across runs.
What dedicated/self-hosted runners are available to use? Open to switching but I'm unaware of the options.
For reference, these self-hosted runners are available (managed via ARC):
| Runner | Spec | Storage |
|---|---|---|
| avalanche-avalanchego-runner | 4 vCPU, 16 GB | EBS |
| avago-runner-m6i-4xlarge-ebs-fast | 16 vCPU, 64 GB | Fast EBS |
| avago-runner-i4i-4xlarge-local-ssd | 16 vCPU, 128 GB | Local NVMe |
Worth reaching out to security/GitHub admin for a full list. Not a blocker, though; ubuntu-latest is fine to start.
Full list of self-hosted runners: https://github.com/ava-labs/devops-argocd/blob/main/base/system/actions-runners/action-runner.yaml
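For illustration, pointing the job at one of the self-hosted runners above would be a one-line change to the job's runs-on field. This is a hypothetical sketch: the label is copied from the table and assumed to match the ARC runner label, which may differ in practice.

```yaml
jobs:
  bench:
    # Assumed label, taken from the runner table above; verify against the ARC config.
    runs-on: avago-runner-i4i-4xlarge-local-ssd
```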
Having benchmarks in CI is a great step forward. One question, though: the goal is seeing trends, but artifacts alone don't provide visualization or comparison. Was there a reason for deferring the github-action-benchmark integration?
With PRs such as #1645 still in review, I didn't want to integrate GAB only for those changes to be modified. If you want, you can take ownership of adding these benchmarks to GAB once all the prerequisite PRs are merged.
Why this should be merged
A key component of measuring our performance is seeing trends in our benchmarks. Currently, we have benchmarks in Firewood that are not part of our CI. By automatically running these benchmarks on a daily basis, we can start getting an idea of which components are getting faster/slower.
In the future, these benchmarks can be consumed by tools such as github-action-benchmark, but for now the results are uploaded as artifacts.
How this works
Adds a benchmark workflow which runs daily and can also be triggered via manual dispatch. The benchmarks run are listed below, followed by a sketch of the workflow shape:
- hash
- ops
- serializer
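A minimal sketch of the workflow shape described above (daily schedule plus manual dispatch, results uploaded as artifacts). The job name, cron time, bench invocations, and artifact layout are assumptions, not the exact file added in this PR.

```yaml
name: benchmark

on:
  schedule:
    - cron: "0 6 * * *"  # daily run; the actual time is an assumption
  workflow_dispatch:

jobs:
  bench:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Assumes hash, ops, and serializer are Cargo bench targets;
      # the exact invocation in this PR may differ.
      - name: Run benchmarks
        run: |
          cargo bench --bench hash | tee hash.txt
          cargo bench --bench ops | tee ops.txt
          cargo bench --bench serializer | tee serializer.txt
      # Results are kept as workflow artifacts until a tool such as
      # github-action-benchmark consumes them.
      - name: Upload results
        uses: actions/upload-artifact@v4
        with:
          name: benchmark-results
          path: "*.txt"
```

Once github-action-benchmark is adopted, the same bench output could be fed to it instead of (or in addition to) the artifact upload.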
How this was tested
https://github.com/ava-labs/firewood/actions/runs/21412797668/job/61653777101?pr=1643