
Releases: smarr/ReBench

1.2.0 Custom Gauge Adapters - 2023-08-06

06 Aug 15:12 · fd8fa6b

The main feature of this release is the new support for custom gauge adapters. This allows a ReBench config to reference a Python file that parses arbitrary output from a benchmark; see #209.
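
As a rough illustration of the feature: the config points to a Python file, and that file provides an adapter class that turns benchmark output into data points. The names and signatures below follow ReBench's adapter interface as I understand it, but treat them as assumptions and consult the documentation referenced in #209 for the authoritative interface.

    # my_adapter.py - minimal sketch of a custom gauge adapter.
    # The imports, class shape, and Measurement signature are assumptions
    # based on ReBench's adapter interface; check the docs before relying on them.
    from rebench.interop.adapter import GaugeAdapter
    from rebench.model.data_point import DataPoint
    from rebench.model.measurement import Measurement

    class MyAdapter(GaugeAdapter):
        """Parse lines of the form 'time: 123.4' (milliseconds)."""

        def parse_data(self, data, run_id, invocation):
            data_points = []
            for line in data.split("\n"):
                if not line.startswith("time:"):
                    continue
                value = float(line[len("time:"):].strip())
                point = DataPoint(run_id)
                point.add_measurement(
                    Measurement(invocation, 1, value, "ms", run_id))
                data_points.append(point)
            return data_points

The config would then refer to the file with something like gauge_adapter: {MyAdapter: my_adapter.py}, where the key is expected to match the class name.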

Furthermore, ReBench dropped support for Python 2. If you require ReBench to run with Python 2, please see #208 for the removed support code, or use version 1.1.0, which was the last version with Python 2 support.

Other new features:

  • add command-line option -D to disable the use of denoise (#217)
  • include CSV headers in .data files (#220, #227)
  • abort all benchmarks for which the executor is missing (#224)
  • make the current invocation accessible in the command as %(invocation)s
    (#230); a config sketch follows this list
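
As a hedged sketch of the %(invocation)s variable: ReBench substitutes the current invocation number into the command template, so a benchmark can, for example, write a distinct log file per invocation. The suite and benchmark names below are made up for illustration.

    # excerpt of a ReBench config (YAML); all names are hypothetical
    benchmark_suites:
      example-suite:
        gauge_adapter: RebenchLog
        command: "harness.py %(benchmark)s --log out-%(invocation)s.txt"
        benchmarks:
          - Bounce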

Other changes:

  • fix bug where 'None' instead of null was reported to ReBenchDB (#232)
  • fix handling of environment variables when sudo is used (#210)
  • try gtime from MacPorts as an alternative time command on macOS (#212)
  • update py-cpuinfo to work on macOS with ARM-based CPUs (#212)
  • make error more readable when executor is not available (#213)
  • add testing on macOS on GitHub Actions (#226)

Thanks to @naomiGrew for the contributions!

1.1.0 Denoise - 2023-02-21

21 Feb 08:22 · 1aaba13

This release focuses on reducing the noise from the system (#143, #144).
For this purpose, it introduces the rebench-denoise tool, which adapts
system parameters to:

  • change the CPU governor to the performance setting
  • disable turbo boost
  • reduce the sampling frequency allowed by the kernel
  • execute benchmarks with CPU shielding and nice -n-20

rebench-denoise can also be used as a stand-alone tool and is documented here:
https://rebench.readthedocs.io/en/latest/denoise/

Note that the use of rebench-denoise requires root privileges.

Other new features include:

  • add support for configuring environment variables (#174);
    a config sketch follows this list
  • add support for recording profiling information (#190)
  • add support for printing the execution plan without running it (#171)
  • add a marker to the configuration that makes a setting important,
    so that it overrides previous settings; this gives more flexibility
    when composing configuration values (#170)
  • add support for filtering experiments by machines (#161)
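
A hedged sketch of the environment-variable support (#174): an env mapping in the config defines the environment for the benchmark process. The executor name below is hypothetical, and the exact placement of env should be checked against the documentation.

    # excerpt of a ReBench config (YAML); executor name is hypothetical
    executors:
      my-vm:
        path: bin
        executable: my-vm
        env:
          JAVA_OPTS: "-Xmx1g"     # environment visible to the benchmark process
          BENCH_MODE: "steady"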

Thanks to @tobega, @qinsoon, @cmccandless, @OctaveLarose, and @raehik for their contributions.

Other notable improvements:

  • -R now disables data reporting, replacing the previous -S (#145)
  • added support to report experiment completion to ReBenchDB (#149)
  • fixed JMH support (#147)
  • fixed string/byte encoding issues between Python 2 and 3 (#142)
  • updated py-cpuinfo (#137, #138, #141)
  • allow the use of float values in the ReBenchLogAdapter parser (#201)
  • make gauge adapter names in configurations case-insensitive (#202)
  • improve documentation (#197, #198)
  • use PyTest for unit tests (#192)

Full Changelog: v1.0.1...v1.1.0

1.0.1 - 2020-06-23

23 Jun 10:29 · 511fbb6

This is a bug fix release.

  • adopt py-cpuinfo 6.0.0 and pin version to avoid issues with changing APIs (#138)
    Thanks to @tobega for the fix!

1.0.0 Foundations

02 May 23:27 · 123a918

This is the first official release of ReBench as a "feature-complete" product.
Feature-complete here means that it is a tried and tested tool for benchmark
execution. It is highly configurable, documented, and successfully used.

This 1.0 release does not signify any new major features; instead, it marks a
point where ReBench has been stable and reliable for a long time.

ReBench is designed to

  • enable reproduction of experiments;
  • document all benchmark parameters;
  • provide a flexible execution model,
    with support for interrupting and continuing benchmarking;
  • enable the definition of complex sets of comparisons
    and their flexible execution;
  • report results to continuous performance monitoring systems,
    e.g., Codespeed or ReBenchDB;
  • provide basic support for building/compiling benchmarks/experiments
    on demand;
  • be extensible to parse output of custom benchmark harnesses.

ReBench isn't

  • a framework for microbenchmarks.
    Instead, it relies on existing harnesses and can be extended to parse their
    output.
  • a performance analysis tool. It is meant to execute experiments and
    record the corresponding measurements.
  • a data analysis tool. It provides only a bare minimum of statistics,
    but has an easily parseable data format that can be processed, e.g., with R.

To use ReBench, install it with Python's pip:

pip install rebench
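
ReBench is then driven by a YAML configuration file that documents the benchmark parameters. The following is a minimal sketch, with all suite, executor, and experiment names made up for illustration; the documented configuration schema is authoritative.

    # rebench.conf - minimal sketch of a ReBench configuration (YAML)
    benchmark_suites:
      example-suite:
        gauge_adapter: RebenchLog
        command: "harness.py %(benchmark)s"
        benchmarks:
          - Bounce
          - Mandelbrot

    executors:
      my-vm:
        path: bin
        executable: my-vm

    experiments:
      Example:
        suites:
          - example-suite
        executor: my-vm

With such a file in place, running rebench rebench.conf executes the configured experiment and records the measurements in a .data file.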

Acknowledgements

ReBench has been used by a number of people over the years, and their feedback and contributions made it what it is today. Not all of these contributions are recorded, but I'd still like to thank everyone, from the anonymous reviewer of artifacts to the students who had to wade through bugs and missing documentation.

Thank you!

Changes Since 1.0rc2

  • moved CI to use GitHub Actions (#134)

  • added testing of Python 3.7 (#121) and ruamel.yaml (#123)

  • ensure config is YAML 1.2 compliant (#123)

  • added support for ReBenchDB (#129, #130)

  • fixed issues with error reporting (#128)

  • fixed handling of input size configuration (#117)

1.0 Release Candidate 2

09 Jun 15:22
  • added --setup-only option, to run one benchmark for each setup (#110, #115)

  • added ignore_timeout setting to accept known timeouts without error
    (#118); a config sketch follows this list

  • added retries_after_failure setting (#107, #108)

  • fixed data loading, which ignored the warmup setting (#111, #116)

  • fixed how settings are inherited to follow the documentation (#112, #113)

  • fixed message for consecutive failures (#109)

  • fixed some reporting issues (#106)
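
A hedged sketch of the ignore_timeout and retries_after_failure settings: placing them at the suite level is an assumption, and the suite and benchmark names are hypothetical.

    # excerpt of a ReBench config (YAML); names are hypothetical
    benchmark_suites:
      example-suite:
        gauge_adapter: RebenchLog
        command: "harness.py %(benchmark)s"
        ignore_timeout: true       # accept known timeouts without reporting an error
        retries_after_failure: 3   # re-run a failing benchmark up to three times
        benchmarks:
          - Bounce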

1.0 Release Candidate 1

02 Aug 22:23
  • made user interface more consistent and concise (#83, #85, #92, #101, #102)
  • added concept of iterations/invocations (#82, #87); a config sketch follows this list
  • added executor and suite name as command variables (#95, #101)
  • added and improved support for building suites before execution (#59, #78, #84, #96)
  • revised configuration format to be more consistent and added a schema (#74, #82, #66, #94, #101)
  • fixed memory usage to avoid running out of memory for large experiments (#103)
  • added support to verify parameters and the config file (#104)
  • added documentation (#66, #101)
  • use PyLint (#79)
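
A hedged sketch of the iterations/invocations distinction: invocations counts how often a fresh benchmark process is started, while iterations counts the in-process measurements per invocation. The surrounding names are hypothetical.

    # excerpt of a ReBench config (YAML); names are hypothetical
    benchmark_suites:
      example-suite:
        gauge_adapter: RebenchLog
        command: "harness.py %(benchmark)s %(iterations)s"
        invocations: 10    # start the benchmark process ten times
        iterations: 100    # 100 in-process measurements per invocation
        benchmarks:
          - Bounce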

Bug fix release 2018-06-08

13 Jul 14:24 · c4c6720
  • fix experiment filters and reporting on codespeed submission errors (#77)

Python 3 Support and improved command-line help

13 Jul 14:21 · 72b535d
  • Restructure command-line options in help, and use argparse (#73)
  • Add support for Python 3 and PyPy (#65)
  • Add support for extra criteria (things beside run time) (#64)
  • Add support for path names in ReBenchLog benchmark names

Bug fix release 2017-12-21

21 Dec 18:38
  • Fix time-left reporting of invalid times (#60)
  • Take the number of data points per run into account for estimated time left (#62)
  • Obtain process output on timeout to enable results of partial runs
  • Fix incompatibility with latest setuptools

v0.9.0

23 Apr 19:54
New Features
  • added support for building VMs before execution (#58)
  • added support for using binaries on the system's path; a path no longer
    needs to be provided for a VM (#57)