Implement unit tests (and benchmarks?) #5

@PyPylia

Description

Currently, all testing for this crate has been done manually (and honestly not a great amount of it). Implementing unit tests would be nice for detecting regressions.
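A minimal sketch of what such a regression test might look like (the function names here are hypothetical placeholders, not this crate's actual API): check that a function that may be specialised at runtime produces the same result as a plain reference implementation, so a bug in the dispatch logic surfaces as a test failure.

```rust
// Hypothetical placeholder for a plain reference implementation.
fn sum_reference(data: &[u32]) -> u32 {
    data.iter().sum()
}

// Hypothetical stand-in for the specialised version; in the real crate
// this would be the function with the specialisation attribute applied.
fn sum_maybe_special(data: &[u32]) -> u32 {
    data.iter().sum()
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn specialised_matches_reference() {
        let data: Vec<u32> = (0..1024).collect();
        assert_eq!(sum_maybe_special(&data), sum_reference(&data));
    }
}
```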

Additionally, benchmarks would be nice to detect performance-related regressions (such as with #4), but would have to be implemented carefully: the initial call will be much slower than any subsequent calls, and if the program is compiled natively (e.g. with the target-cpu option) the dynamic dispatch may be optimised out entirely. Benchmarks showing the difference between an unspecialised function and a function with maybe_special applied, for various numbers of calls, would be really cool, but hard to implement as it would require somehow undoing the initialisation.
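A rough sketch of the warm-up issue (all names hypothetical; a real benchmark would use a harness like criterion): the timing loop has to discard the first call, since that call pays the one-off initialisation cost, and `black_box` is needed to stop the optimiser from deleting the measured work entirely.

```rust
use std::time::Instant;

// Hypothetical placeholder for a function whose first call performs
// one-off initialisation (e.g. runtime feature detection) and is
// therefore much slower than subsequent calls.
fn maybe_special_work(data: &[u64]) -> u64 {
    data.iter().copied().fold(0, |acc, x| acc ^ x.rotate_left(7))
}

/// Times `iters` calls, discarding the first call so the one-off
/// initialisation cost does not skew the per-call average.
fn bench(iters: u32, data: &[u64]) -> f64 {
    // Warm-up call: triggers initialisation; the result is thrown away.
    std::hint::black_box(maybe_special_work(data));

    let start = Instant::now();
    for _ in 0..iters {
        // black_box on input and output keeps the call from being
        // hoisted or removed by the optimiser.
        std::hint::black_box(maybe_special_work(std::hint::black_box(data)));
    }
    start.elapsed().as_secs_f64() / f64::from(iters)
}

fn main() {
    let data: Vec<u64> = (0..4096).collect();
    let per_call = bench(1_000, &data);
    println!("avg per call: {per_call:.9}s");
}
```

Benchmarking the initialisation itself (undoing it between runs) is the hard part this sketch does not solve; it only shows how to keep the warm-up cost out of the steady-state numbers.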

Finally, if some form of tests could be conceived that detect regressions for every supported target at once, that would be amazing, but I don't see any way to do that currently (and it would likely require installing an ungodly number of rustc toolchains).

Metadata

    Labels

    enhancement (New feature or request)
