
Conditions of Simple Interval Root-finding problem aren't fair #810

Open
ivan-pi opened this issue Jan 1, 2024 · 1 comment
ivan-pi commented Jan 1, 2024

Describe the bug 🐞

The results of the Simple Interval Root-finding benchmark claim a 33x speed-up over the MATLAB solver; however, the conditions of the two runs were not the same.

The main differences are:

  • MATLAB uses a modified Dekker's algorithm, whereas the fast Julia result used the Newton-Raphson method, which relies on derivative information
  • The default termination conditions of MATLAB and NonlinearSolve.jl are different.
  • The Julia result was obtained on newer hardware
  • The operating system was different
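The first two differences can be illustrated outside of either MATLAB or Julia. As a minimal sketch in Python with SciPy (not the benchmark's actual code, and using a hypothetical test function), a bracketing Brent-style method, which descends from Dekker's algorithm, does more work per root than a derivative-based Newton iteration, and the two solvers also ship with different default stopping tolerances (`xtol=2e-12` for `brentq` vs. `tol=1.48e-8` for `newton`):

```python
from scipy import optimize

# Hypothetical scalar test problem (not the one from the benchmark):
# f(u) = u^2 - 2, with a root at sqrt(2) inside [1, 2].
f = lambda u: u * u - 2.0
fprime = lambda u: 2.0 * u

# Bracketing method (Brent's method, a refinement of Dekker's algorithm,
# comparable in spirit to MATLAB's solver): needs only a sign change.
root_brent, res_brent = optimize.brentq(f, 1.0, 2.0, full_output=True)

# Derivative-based Newton-Raphson: needs an initial guess and f'(u),
# and typically converges in fewer iterations when f' is available.
root_newton, res_newton = optimize.newton(f, 1.5, fprime=fprime,
                                          full_output=True)

print(f"brentq: root={root_brent:.12f}, iterations={res_brent.iterations}")
print(f"newton: root={root_newton:.12f}, iterations={res_newton.iterations}")
```

Comparing the iteration counts (and noting the differing default tolerances) shows why pitting a derivative-based solver against a derivative-free bracketing solver is not an apples-to-apples timing comparison.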

Expected behavior

The benchmarks should be "optimal, fair, and reproducible". This one fails in the "fair" comparison category.

When I run the MATLAB code on an Intel i7-11700K @ 3.60 GHz (Rocket Lake), the elapsed time is ~0.12 s, roughly the same as the NonlinearSolve.jl bisection method. (Caveat: the hardware is different.)

Additional context

The benchmark was presented at JuliaCon 2023 (see https://youtu.be/O-2F8fBuRRg?si=GF24GyZEBek0Yi-Y&t=1022) with a claim of an additional 5-fold speed-up, i.e., a time under 0.01 s. An unmerged pull request is mentioned in the video, but I was not able to locate it.

@ivan-pi ivan-pi added the bug label Jan 1, 2024
@ChrisRackauckas (Member) commented:

Yes, right now that benchmark is not ideal since we have not had a MATLAB license to update it. So it's currently published with all of the information about what was run on which computers, for full transparency, knowing that we have a somewhat non-ideal scenario here. If anyone has a license and can run the benchmark, we would be really happy to accept a PR with a locally run version that improves it.
