ImplicitEulerExtrapolation keywords #2371
Comments
Yeah, that branch is trivial; it seems like a mistake. There are a few things about the extrapolation methods that are a bit odd right now. They are among the best methods when tuned (for example https://docs.sciml.ai/SciMLBenchmarksOutput/stable/StiffODE/Pollution/), but the default tuning is pretty bad. @utkarsh530, can we take a second to work with @ArnoStrouwen here on documenting the tuning process and changing the default tuning? We should include https://ieeexplore.ieee.org/abstract/document/9926357 in the documentation of the methods. The original resource is Hairer I and II, but our implementation goes well beyond that.

These methods only ever seem to make sense when they use the multithreading; without it they are less efficient than other methods for both stiff and non-stiff problems. So when documenting them, that should be documented, along with which variant is generally the go-to. Bumping the minimum order tends to improve multithreaded efficiency since it requires more calculations per step, so the solver can take larger stepsizes and fill your CPU much better if you force it. The init order should start higher than the minimum.

For implicit extrapolation, you only get better multithreading when the system is below the threshold at which BLAS LU factorization multithreads efficiently, which is around 200x200 matrices or so. So its sweet spot is somewhere around 20-500 ODEs. For explicit extrapolation, higher-order Runge-Kutta methods like the Verner methods just tend to do better. But both the implicit and explicit extrapolation methods are arbitrary order, and they also multithread better at higher orders, so there is a precision at which they will simply be the best. For explicit extrapolation versus something like Vern9, that crossover seems to be outside of Float128 tolerances. For implicit extrapolation, it's hard to tell how it matches up against a good Radau implementation: we just got Radau9, and Hairer's radau doesn't have our improved linear algebra but does really well with its adaptive order 5/9/13. We plan to complete the 5/9/13/beyond version this summer though, in which case it's really a toss-up between the two.
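As a rough sketch of what that tuning looks like in user code, the snippet below assumes the `min_order`, `init_order`, `max_order`, and `threading` keywords of the current `ImplicitEulerExtrapolation` constructor; the accepted values for `threading` (a `Bool` versus a thread-trait object like `OrdinaryDiffEq.PolyesterThreads()`) may differ between versions, and the specific numbers here are placeholders, not recommended defaults.

```julia
using OrdinaryDiffEq

# Small stiff test problem (ROBER). The multithreading payoff described above
# really shows up for medium-sized systems (~20-500 ODEs); this is just syntax.
function rober!(du, u, p, t)
    y1, y2, y3 = u
    k1, k2, k3 = p
    du[1] = -k1 * y1 + k3 * y2 * y3
    du[2] = k1 * y1 - k2 * y2^2 - k3 * y2 * y3
    du[3] = k2 * y2^2
end
prob = ODEProblem(rober!, [1.0, 0.0, 0.0], (0.0, 1e5), (0.04, 3e7, 1e4))

# Default tuning.
sol_default = solve(prob, ImplicitEulerExtrapolation(), abstol = 1e-8, reltol = 1e-8)

# Tuned: raise the minimum order (more work per step -> larger steps and better
# CPU utilization when threaded), start the adaptive order above the minimum,
# and enable within-method threading.
alg = ImplicitEulerExtrapolation(min_order = 5, init_order = 7, max_order = 12,
                                 threading = true)
sol_tuned = solve(prob, alg, abstol = 1e-8, reltol = 1e-8)
```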
I'm not sure this branch is doing anything:
`OrdinaryDiffEq.jl/lib/OrdinaryDiffEqExtrapolation/src/algorithms.jl`, lines 50 to 57 at commit `dc0e1e7`.
The lpad stuff seems overkill also.
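For context on the `lpad` point, here is a hypothetical illustration (not the actual lines from `algorithms.jl`) of the pattern: left-padding the order numbers with `lpad` when assembling a warning message, versus plain string interpolation, which produces an equivalent message with less machinery.

```julia
min_order, n_min = 1, 3  # hypothetical values, just to show the formatting

# lpad-style construction (the pattern being questioned):
msg_lpad = "Minimum order: " * lpad(min_order, 2, " ") * " --> " * lpad(n_min, 2, " ")

# Plain interpolation:
msg_plain = "Minimum order: $min_order --> $n_min"
```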