Consider adding high-precision microbenchmarks for small sequences #148
Forking #124 to discuss high-precision microbenchmarks for small inputs in more detail. The idea is that we'd like to have a precise view of the behavior of algorithms on small sequences, because this is how they are mostly (but not only) used in the wild. I see two different approaches:

1. For each sequence length `N`, generate `K` different sequences of length `N` and call the algorithm once on each sequence. By having a sufficiently large `K`, the total compilation time is increased and the relative error is reduced.
2. Do the same, but also divide the total compilation time by `K` to find an approximation of the absolute time taken for a single algorithm. I have reservations with this approach, because the compilation time as we increase `K` (the number of small sequences) is not necessarily linear. (A minimal sketch of this second approach follows the list.)
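To make the trade-off concrete, here is a minimal sketch of approach (2), assuming a driver script outside the project: it generates one translation unit containing `K` distinct instantiations of a toy metaprogram over sequences of length `N`, times a single compiler invocation, subtracts a no-work baseline, and divides by `K`. The toy `count` metafunction, the compiler flags, and the values of `N` and `K` are all illustrative choices, not this project's actual harness.

```python
import os
import subprocess
import tempfile
import textwrap
import time

HEADER = textwrap.dedent("""\
    template <int, int> struct tag {};
    template <typename...> struct list {};

    // Toy metafunction whose instantiation cost grows with the length
    // of the list, standing in for a real algorithm under test.
    template <typename> struct count;
    template <> struct count<list<>> { static constexpr int value = 0; };
    template <typename H, typename... T>
    struct count<list<H, T...>> {
        static constexpr int value = 1 + count<list<T...>>::value;
    };
""")

def make_source(n, k):
    """One translation unit with k distinct length-n instantiations."""
    lines = [HEADER]
    for i in range(k):
        # Distinct tag<i, j> types keep the compiler from reusing
        # memoized instantiations across the k copies.
        types = ", ".join(f"tag<{i}, {j}>" for j in range(n))
        lines.append(f'static_assert(count<list<{types}>>::value == {n}, "");')
    return "\n".join(lines)

def time_compilation(source, cxx="c++"):
    """Time a single compiler invocation on the given source."""
    with tempfile.NamedTemporaryFile("w", suffix=".cpp", delete=False) as f:
        f.write(source)
        path = f.name
    try:
        start = time.perf_counter()
        subprocess.run([cxx, "-std=c++14", "-fsyntax-only", path], check=True)
        return time.perf_counter() - start
    finally:
        os.unlink(path)

if __name__ == "__main__":
    N, K = 20, 200  # illustrative values
    total = time_compilation(make_source(N, K))
    baseline = time_compilation(make_source(N, 0))  # compiler startup, etc.
    print(f"total: {total:.3f}s, "
          f"per instantiation: {(total - baseline) / K * 1000:.3f}ms")
```

Note that the per-instantiation figure is only meaningful if total compilation time grows roughly linearly in `K`, which is exactly the reservation raised above.

Comments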
Edit: Just to clarify, since the overhead already decreases for larger …
We can choose to go either way. I agree that decreasing …
I believe choosing …
So let us take one step at a time. Since the first option is basically a special case of the second one, why don't we implement …
Sounds good. That raises (at least) one question: do we want to tweak the existing benchmarks so that they have higher precision for smaller sequences, or do we want to create a new set of benchmarks for "small sequences"? The first option would obviously be better if it can be done, but it might end up taking too much Travis time, depending on how smart we can be.
I'd say a bit of both: reduce the maximum size of sequences so that we don't abuse Travis too much, and make sure all measurements are precise enough that results don't vary much across runs. As for a number, @ericniebler suggested we focus on < 100 elements. I have no strong opinion about this, but I think it's a reasonable suggestion. I'm just not very comfortable with the idea of fine-tuning …
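For what it's worth, "precise enough that results don't vary much across runs" can be checked mechanically: repeat each measurement a few times and compare the run-to-run spread against a tolerance. A minimal sketch, where the 5% threshold and the `run_benchmark` callable are illustrative assumptions rather than anything from this project:

```python
import statistics

def relative_spread(samples):
    """Sample standard deviation as a fraction of the mean."""
    return statistics.stdev(samples) / statistics.mean(samples)

def precise_enough(run_benchmark, runs=5, tolerance=0.05):
    """Run the benchmark `runs` times; accept it only if the
    run-to-run spread stays within `tolerance` (e.g. 5%)."""
    samples = [run_benchmark() for _ in range(runs)]
    return relative_spread(samples) <= tolerance
```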
I don't see a better way either.

This has been resolved by #150.