Collect timing information in CI scripts, & detect regressions #54
I talked with my colleague Kevin Quick about how this should work. Here's one possible approach:
Sounds good to me. For simplicity I might skip the aggregation: just keep the commit timestamp in the CSV and ask gnuplot to regenerate the graph on each commit (with the x axis being either commit index or time).
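The per-commit CSV idea could be sketched roughly as follows. All file names and values here are placeholders; in a real CI job the timestamp would come from git and the timing from the test runner.

```shell
# Sketch: append one timing row per commit to a rolling CSV.
# File name and values are placeholders.
CSV=timings.csv
COMMIT_TS=1721970000      # in CI: git show -s --format=%ct HEAD
BENCH_SECONDS=12.34       # in CI: measured runtime of the test suite
[ -f "$CSV" ] || echo "timestamp,seconds" > "$CSV"
echo "${COMMIT_TS},${BENCH_SECONDS}" >> "$CSV"
# gnuplot could then regenerate the graph on each commit, e.g.:
#   gnuplot -e 'set datafile separator ","; set term png; set output "graph.png";
#               plot "timings.csv" using 1:2 with linespoints'
cat "$CSV"
```

Keeping the raw timestamp in each row means the plot can switch between commit-index and wall-clock x axes without changing the data format.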
GitHub artifacts are deleted after 90 days, so it depends on whether we care about long-term retention of the performance logs. If we do, we'll need some other approach to aggregation and storage.
Oh, maybe you mean we just keep a rolling CSV artifact with all the previous logs? That could work.
Ideally I think we'd end up with all the logs and the current graphs checked in someplace, rather than relying on that transient thing - but I don't speak enough GitHub to have an opinion on how.
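One way to make the data non-transient, as suggested above, would be for CI to commit the updated logs back to a branch. A rough sketch, with the branch name, paths, and committer identity all hypothetical (the real workflow would also need a token with write access for the final push):

```shell
# Sketch: have CI commit the updated timing log back to the repository.
# Everything here is a placeholder; the temp repo stands in for the CI checkout.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
echo "timestamp,seconds" > timings.csv
git add timings.csv
git -c user.name=ci -c user.email=ci@example.invalid \
    commit -q -m "perf: update timing log"
git log --oneline -1
# in a real workflow, followed by something like: git push origin HEAD:perf-logs
```

This sidesteps the 90-day artifact expiry entirely, since the history lives in the repository itself.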
That sounds good to me. And it would indeed be very useful to keep long-term data. However we implement items 2 and 3 exactly, I think it's good to have the data from item 3 committed in the repository, to reduce the risk of losing it.
I've opened a PR that runs benchmarks on every update to master. Next I'll work on running benchmarks on every PR so that we can detect potential regressions. Ideally this will add a comment with graphs of the updated performance numbers. If we can't get that working quickly, the alternative is to have the (optional) benchmarking CI job fail and add a comment if the performance degradation exceeds a threshold (2x-3x slower?). In the future, it would also be nice to compare CVC5 vs. Z3 timings.
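The threshold check described above could look something like this. The baseline and current timings are dummy values; in CI they would come from the stored data and the PR's benchmark run.

```shell
# Sketch: fail the (optional) benchmarking job if the PR timing exceeds
# the baseline by a threshold factor (2x here; all values are placeholders).
BASELINE=10.0
CURRENT=25.0
THRESHOLD=2.0
if awk -v b="$BASELINE" -v c="$CURRENT" -v t="$THRESHOLD" \
       'BEGIN { exit !(c > b * t) }'; then
  echo "regression: ${CURRENT}s vs baseline ${BASELINE}s (more than ${THRESHOLD}x slower)"
  # exit 1   # in CI this would fail the job and trigger the comment
else
  echo "ok: within threshold"
fi
```

Using awk for the comparison avoids depending on shell floating-point arithmetic, which plain `test`/`[` does not support.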
This PR adds a CI workflow that generates performance graphs on every update to master. It stores the data as JSON in the `gh-pages` branch and renders the graphs on a GitHub Page. Here's [an example](https://galoisinc.github.io/cerberus/dev/bench/) of what this looks like with [dummy data](https://github.com/GaloisInc/cerberus/blob/gh-pages/dev/bench/data.js). This currently runs `cn` on all the (successful) .c files in the test suite. Eventually, we probably want to make a proper benchmark suite using something like [Core_bench](https://blog.janestreet.com/core_bench-micro-benchmarking-for-ocaml/). It relies on the existence of the (currently empty) `gh-pages` branch. Implements part of rems-project/cn-tutorial/issues/54.
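The per-file timing step could be sketched as below. Here `true` stands in for `cn`, the file path is illustrative, the JSON field names are a guess at what a benchmark page might consume (not the actual `data.js` schema), and `date +%s%N` assumes GNU coreutils.

```shell
# Sketch: time a checker on each .c test file and emit one JSON record per file.
# `true` is a stand-in for `cn`; path and field names are illustrative.
run_one () {
  file=$1
  start=$(date +%s%N)
  true "$file"               # stand-in for: cn "$file"
  end=$(date +%s%N)
  ms=$(( (end - start) / 1000000 ))
  printf '{"name": "%s", "unit": "ms", "value": %d}\n' "$file" "$ms"
}
run_one tests/example.c
```

In the real workflow this loop would run over every (successful) .c file and the collected records would be appended to the JSON stored on `gh-pages`.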
Per @cp526 in rems-project/cerberus#365 (comment)