From discussion here:
Does it make sense to run against the previous release as well? Otherwise, I'm thinking regressions could be introduced bit by bit, where each part is small enough to disappear in the noise.
After we gather enough nightly data, we'll be able to start looking at trends over longer time intervals, which helps with this. My idea was to have @nanosoldier generate a weekly/monthly report using the existing benchmark data, in addition to the daily reports. Then we'd have daily comparisons, weekly comparisons, and monthly comparisons, as well as the data to do more advanced trend analysis later on.
We can simply run the benchmarks for the current stable release once, and reuse that data when doing comparisons. Then we could just generate `report_vs_release.md` in addition to our current `report.md`.
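The comparison scheme described above could be sketched as follows. This is a minimal illustration, not Nanosoldier's actual implementation: the function name, data shapes, and the 10% noise tolerance are all assumptions for the example.

```python
# Hypothetical sketch: compare fresh nightly timings against cached
# stable-release timings and flag benchmarks that regressed beyond a
# noise tolerance. The threshold value is an illustrative assumption.

def compare_vs_release(current, release, tolerance=0.10):
    """Return {benchmark: slowdown_ratio} for benchmarks slower than release."""
    regressions = {}
    for name, time in current.items():
        base = release.get(name)
        if base is not None and time > base * (1 + tolerance):
            regressions[name] = time / base
    return regressions

release_data = {"sum": 1.00, "sort": 2.00}   # measured once per release
nightly_data = {"sum": 1.25, "sort": 1.90}   # today's nightly run
print(compare_vs_release(nightly_data, release_data))
# → {'sum': 1.25}
```

Because the release-side data is computed once and cached, each nightly run only pays for its own benchmarks; the same cached baseline can feed both the daily report and a longer-horizon trend report.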