Compatibility with new DPPL version #1900
So it seems like on macOS the estimates are a bit more noisy, e.g. some tests are failing because the error is now slightly above the threshold. And there's a bug with …
@yebai @devmotion Regarding this, should I just increase the threshold? It definitely seems like the samplers are doing the right thing (and locally on Linux, the tests are passing).
Sure.
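For context, the failing checks are of this general shape: a Monte Carlo estimate is compared against the true value with an absolute tolerance, and a noisier estimate (as apparently happens on macOS) pushes the error just above it. The snippet below is a self-contained illustration with made-up numbers, not the actual Turing test.

```julia
using Test, Statistics, Random

Random.seed!(42)
# Stand-in for posterior samples of a parameter whose true value is 0.8.
samples = 0.8 .+ 0.3 .* randn(1_000)

# If the estimate is noisier, either this tolerance or the number of samples
# has to be increased for the check to pass reliably.
@test mean(samples) ≈ 0.8 atol = 0.05
```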
Hmm, one thing to note here: it seems the tests take ~2x as long to run as before (I just looked at some previously closed PRs) 😕 I'm uncertain whether this is compile time or runtime, but it's maybe something to be aware of.
Hmm, that's very unfortunate. It would be good to know where the regression comes from: whether it is runtime or compilation time, whether it is AD-backend specific, etc. Maybe comparing the timings and allocations in the logs (e.g., https://github.com/TuringLang/Turing.jl/actions/runs/3441679090/jobs/5741483837#step:6:825) could give some hints?
@torfjelde Re the performance regression, can you check …
Also, if this PR doesn't break any tests, I am happy to merge it as is and then fix the regression in a new PR. That will be alright as long as we don't make any new releases until the performance regression is fixed.
Good news! It looks like the …
I need to have a look at what's causing this, though. It's also "interesting" that the effect is significantly different between architectures.
And just for the record, we don't do any execution of the model in this. In particular, for …
Co-authored-by: David Widmann <[email protected]>
Okay, so this is all very weird. If I run the tests locally, the same slowdown occurs when I hit … But if I copy-paste the code from … So I'm very confused. Have you seen anything like this before, @devmotion?
Maybe, I'm not sure. But I've definitely seen cases where … The more obvious difference is that … I've also seen differences due to the use of Revise, so it might be good to also check the behaviour when Revise is not loaded (e.g., by starting a clean Julia process with …). I guess you already checked that the same package versions are used? And then, of course, scoping is different in the REPL: https://docs.julialang.org/en/v1/manual/variables-and-scoping/
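One low-tech way to act on those suggestions is sketched below. It is illustrative only: the test-file path is a hypothetical example, not a file referenced in this thread.

```julia
# Start a fresh process without startup.jl/Revise first, e.g.:
#   julia --startup-file=no --project=.
using Pkg
Pkg.status()   # confirm the REPL and the test run use the same package versions

# Timing the same include twice separates compilation from runtime:
# the first call pays the compilation cost, the second is (mostly) pure runtime.
@time include("test/inference/emcee.jl")   # hypothetical test-file path
@time include("test/inference/emcee.jl")
```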
After reading your reply, @devmotion, I was reminded of how …
This PR fixes the performance regressions seen for `Emcee` in TuringLang/Turing.jl#1900. @yebai @devmotion This should be an easy merge.
Oh, you reverted some of my changes, @yebai. Increasing the number of samples helped but didn't solve it. I guess I'll just increase it further.
Codecov Report
Base: 81.49% // Head: 81.24% // Decreases project coverage by -0.26%.

Additional details and impacted files:

```diff
@@            Coverage Diff             @@
##           master    #1900      +/-   ##
==========================================
- Coverage   81.49%   81.24%   -0.26%
==========================================
  Files          21       21
  Lines        1421     1418       -3
==========================================
- Hits         1158     1152       -6
- Misses        263      266       +3
```
The goal of this PR is just to make Turing work with the new DynamicPPL version, not to obtain full feature parity of `SimpleVarInfo` and `VarInfo` (though it is a significant step towards exactly this).

Main changes are (a rough sketch of the resulting call pattern is included below):
- `unflatten` is now used to convert a vector into an `AbstractVarInfo`.
- `(inv)link!!` instead of the deprecated `(inv)link!`.
- `setindex!!` instead of `setindex!`.
- `evaluate` and capture the resulting `AbstractVarInfo`, to also support immutable implementations of `AbstractVarInfo`.

Closes #1899 and #1898
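For readers unfamiliar with the new API, here is a minimal sketch of the call pattern these changes converge on. It is illustrative only, not code from this PR: the model, the variable name, and the exact signatures (`link!!`, `unflatten`, `setindex!!`, `evaluate!!`, written from memory of the targeted DynamicPPL release) are assumptions and may differ from the real API.

```julia
using DynamicPPL, Distributions

@model function demo(x)
    m ~ Normal()
    x ~ Normal(m, 1)
end

model = demo(1.5)
vi = VarInfo(model)

# The `!!` functions may return a *new* object instead of mutating in place,
# so their result is always re-bound; this is what makes immutable
# `AbstractVarInfo` implementations such as `SimpleVarInfo` usable.
vi = DynamicPPL.link!!(vi, model)                   # instead of the deprecated link!
θ  = vi[:]                                          # flat parameter vector (assumed accessor)
vi = DynamicPPL.unflatten(vi, θ)                    # vector -> AbstractVarInfo
vi = DynamicPPL.setindex!!(vi, θ[1], @varname(m))   # instead of setindex!
# Evaluate the model and capture the returned varinfo rather than relying on mutation.
_, vi = DynamicPPL.evaluate!!(model, vi, DefaultContext())
```

The common thread is that every call is treated as returning the `AbstractVarInfo` to use from then on, which is what an immutable `SimpleVarInfo` path relies on.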