Dpananos changed the title from "BUG: <Please write a comprehensive title after the 'BUG: ' prefix>" to "BUG: HurdleGamma results in large number of divergences, even under the correct model" on Dec 27, 2024
Let me know if this isn't helpful. I've been reading Some Mixture Modeling Basics by Michael Betancourt, which has informed the contents of this comment.
If there were a continuous analog of DiracDelta that placed probability 0 on value != c and probability 1 on value == c (i.e., whose logp returned -inf away from c and 0 at it), then the mixture probabilities would be correct (I think), and there would be no need for the machine-epsilon "hack", which I suspect is the source of the problem.
Would it be possible to use pytensor.tensor.switch here to return the appropriate density/probability value when needed?
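For instance, something along these lines -- a minimal sketch of the idea, where `hurdle_gamma_logp` is a hypothetical helper (not PyMC's internal implementation) and a shape/rate Gamma parameterization is assumed:

```python
import pytensor.tensor as pt

def hurdle_gamma_logp(value, psi, alpha, beta):
    """Hypothetical hurdle-Gamma logp: an exact point mass at zero via
    pt.switch, rather than a machine-epsilon-width density."""
    # Gamma(alpha, beta) log-density for the positive part (shape/rate)
    gamma_logp = (
        alpha * pt.log(beta)
        - pt.gammaln(alpha)
        + (alpha - 1) * pt.log(value)
        - beta * value
    )
    return pt.switch(
        pt.eq(value, 0),
        pt.log1p(-psi),            # log P(zero)
        pt.log(psi) + gamma_logp,  # log P(non-zero) + Gamma log-density
    )
```

One caveat: `switch` evaluates both branches, so `pt.log(value)` still produces `-inf` at zero and can poison the gradient; in practice the value fed to the Gamma branch would likely need to be clipped away from zero.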
Describe the issue:
A simple HurdleGamma experiences a very high number of divergences, even when priors are tightly centered around the true values and the data generating process is correct. Some chains get "stuck" -- they do not move from their initialized values.
For more, please see this thread in the PyMC community forums.
Reproducible code example:
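(The original snippet is not shown above; the following is a hedged sketch of a minimal reproducer consistent with the description -- the sample size, priors, and parameter values are illustrative assumptions, not the original code.)

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)

# True parameters: psi is the probability of a non-zero (Gamma) observation
psi_true, mu_true, sigma_true = 0.7, 2.0, 1.0

# Simulate hurdle-Gamma data (assumed values, for illustration)
n = 500
nonzero = rng.binomial(1, psi_true, size=n)
shape = (mu_true / sigma_true) ** 2   # mean/sd -> shape
scale = sigma_true**2 / mu_true       # mean/sd -> scale
y = nonzero * rng.gamma(shape, scale, size=n)

with pm.Model():
    # Priors tightly centered on the true values
    psi = pm.Beta("psi", 70, 30)
    mu = pm.TruncatedNormal("mu", mu=2.0, sigma=0.1, lower=0)
    sigma = pm.TruncatedNormal("sigma", mu=1.0, sigma=0.1, lower=0)
    pm.HurdleGamma("y", psi=psi, mu=mu, sigma=sigma, observed=y)
    idata = pm.sample()  # reported to yield many divergences / stuck chains
```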
Error message:
No response
PyMC version information:
5.19.1
Context for the issue:
No response