Description
I keep finding myself writing benchmarks like this:
```swift
Benchmark(
    "...",
    configuration: .init(
        ...
        thresholds: [
            .throughput: .init(relative: [.p90: 4])
        ]
    )
) { benchmark in
    ...
}
```

While having static threshold files containing something like:
```json
{
    "throughput": 772500
}
```

These are all running on dedicated-vCPU machines, but I still need to declare thresholds to tolerate noise.
The bad part is how I need to calculate the static threshold number and the percentage:

- I need to run the benchmark a few times (eh, fine), then read all the results and figure out the minimum and maximum that ever occurred. For example, in this case those were 757K and 786K.
- Then I have to round those numbers to something I think will be appropriate; in this case, 750K and 795K.
- Then I take the average of them: 772500.
- Then I have to see how many percent 750K is away from 772500: 3%.
- Finally, I decide to use 4 as the percent in the thresholds, just in case: I've seen this package complain when the diff is 3% and the threshold is also set to 3, and in my experience it doesn't handle fractional percentages, meaning it rounds them to something that might cross the threshold line.
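For concreteness, the manual arithmetic described above can be sketched as follows (a minimal Swift sketch; the variable names are mine, and the values are from the example in this issue):

```swift
// Observed extremes across a few benchmark runs (from the example).
let observedMin = 757_000.0
let observedMax = 786_000.0

// Step 1: round outward to "nice" bounds.
let lowerBound = 750_000.0
let upperBound = 795_000.0

// Step 2: the static threshold is the midpoint of the bounds.
let staticThreshold = (lowerBound + upperBound) / 2   // 772_500

// Step 3: percent distance from the midpoint to a bound (~2.91%).
let percent = (staticThreshold - lowerBound) / staticThreshold * 100

// Step 4: round up, then add one extra percent as a safety margin,
// since fractional percentages are reportedly rounded.
let relativeThreshold = percent.rounded(.up) + 1      // 4
```

This is exactly the bookkeeping a range-based threshold would make unnecessary.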
What I propose is for this package to support a range of numbers as a threshold. While that doesn't solve every single problem I mentioned above, it simplifies the process a lot.
The API could look like:

```swift
thresholds: [
    .throughput: .init(range: [.p90: 750_000..<795_000])
]
```
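The check itself would then be trivial. A hypothetical sketch (this is not an existing package API, just the semantics I have in mind, using a standard Swift half-open range):

```swift
// Hypothetical range check: a measurement passes if it falls
// inside the declared half-open range.
func withinThreshold(measured: Double, allowed: Range<Double>) -> Bool {
    allowed.contains(measured)
}

let passes = withinThreshold(measured: 772_500, allowed: 750_000..<795_000)
let fails = withinThreshold(measured: 800_000, allowed: 750_000..<795_000)
```

No rounding, no midpoint, no percentage math: the bounds I already computed from the observed runs are used directly.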