I added DFT benchmarks in #103. Comparing different solvers in a single benchmark is not particularly useful, in my opinion, because it gives only limited insight into the capabilities of each solver.
Do you have a specific thought on how to identify Python overhead? For DFT it appears to be on the order of 2-3 ms, but it is very difficult to assess properly.
> Do you have a specific thought on how to identify Python overhead? For DFT it appears to be on the order of 2-3 ms
I am not sure. There are options to run Python code using pyo3 (`run`, `eval`, `from_code`, etc.), but I am not sure whether we can use these to make proper comparisons or to get an idea of the overhead of a function/method. 2-3 ms seems like a lot, though. Does that include calling a property that returns a numpy array?
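One possible way to isolate the pure dispatch cost would be to time a no-op Python function from Rust via pyo3. This is only a sketch, assuming pyo3 with the `auto-initialize` feature enabled; it measures the Rust→Python call overhead, not the overhead a user sees from a Python script, and the `noop` function is just a placeholder:

```rust
use pyo3::prelude::*;
use std::time::Instant;

fn main() -> PyResult<()> {
    // Requires pyo3 with the `auto-initialize` feature so that
    // `Python::with_gil` can start an embedded interpreter.
    Python::with_gil(|py| {
        // A no-op Python function; timing it from Rust gives a rough
        // lower bound on the per-call dispatch overhead.
        let module = PyModule::from_code(
            py,
            "def noop():\n    return None",
            "bench.py",
            "bench",
        )?;
        let noop = module.getattr("noop")?;

        let n: u32 = 100_000;
        let start = Instant::now();
        for _ in 0..n {
            noop.call0()?;
        }
        println!("per-call overhead: {:?}", start.elapsed() / n);
        Ok(())
    })
}
```

Replacing `noop` with a call that returns a numpy array (and comparing the two timings) should show how much of the 2-3 ms is plain call overhead versus array conversion.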
Following #65, we should add benchmarks for possible future performance investigations.
Feel free to add to the list of functions to benchmark.
- `StateHD` (circumventing closure construction and cache) [Add benchmarks and Cargo profile using link-time optimization #89]
  - `Dual64`
  - `HyperDual64`
  - `Dual3`
- `State` (creating a `State` inside the benchmark or manually clearing the cache) [Add benchmarks and Cargo profile using link-time optimization #89]
- `PhaseEquilibrium`: constructors
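A minimal Criterion skeleton for one of these benchmarks could look like the sketch below. The `build_and_evaluate` helper is a hypothetical placeholder, not the actual feos API; a real benchmark would construct a `State` or `StateHD` there:

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};

// Hypothetical placeholder for the real workload; an actual benchmark
// would construct a feos `State` (or `StateHD` with `Dual64` etc.) here.
fn build_and_evaluate() -> f64 {
    (0..1000).map(|i| (i as f64).sqrt()).sum()
}

fn state_benchmark(c: &mut Criterion) {
    // Building the value inside the closure ensures every iteration pays
    // the full construction cost instead of hitting a cached State.
    c.bench_function("state_construction", |b| {
        b.iter(|| black_box(build_and_evaluate()))
    });
}

criterion_group!(benches, state_benchmark);
criterion_main!(benches);
```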
Notes

- `lto = "thin"` versus `lto = true`
- `target-cpu=native`
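For reference, the two LTO variants are ordinary Cargo profile settings; a sketch of a bench profile (the comment on `codegen-units` reflects a common pairing, not a measured result here):

```toml
# Profile used by `cargo bench`; swap the lto value to compare variants.
[profile.bench]
lto = "thin"       # compare against `lto = true` (fat LTO)
codegen-units = 1  # often paired with LTO for maximum optimization
```

`target-cpu=native` is passed via RUSTFLAGS instead, e.g. `RUSTFLAGS="-C target-cpu=native" cargo bench`.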