Replies: 13 comments
-
Every solution other than returning a new model will have nasty side effects on the current model. But this may actually result in an okay API:

```python
with pmodel.set_data({
    "x": ...,
}) as postpred_model:
    idatapp = pm.sample_posterior_predictive()
```
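The proposed API amounts to a temporary data swap that is undone on exit. A minimal pure-Python sketch of that idea (the names `SimpleModel` and `swap_data` are illustrative stand-ins, not PyMC API):

```python
from contextlib import contextmanager

class SimpleModel:
    """Toy stand-in for a model that holds named data arrays."""
    def __init__(self, data):
        self.data = dict(data)

@contextmanager
def swap_data(model, new_data):
    """Temporarily replace entries in model.data; restore them on exit."""
    saved = {k: model.data[k] for k in new_data}
    model.data.update(new_data)
    try:
        yield model
    finally:
        model.data.update(saved)

pmodel = SimpleModel({"x": [1, 2, 3]})
with swap_data(pmodel, {"x": [10, 20, 30, 40]}) as postpred_model:
    inside = list(postpred_model.data["x"])  # [10, 20, 30, 40] inside the context
print(pmodel.data["x"])  # [1, 2, 3] -- original model is left unchanged
```

The `try`/`finally` restore is what would keep the side effects from leaking into the current model.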
-
So
-
Has this been solved in the meantime? The following works for me, and the shape change propagates:

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(seed=0)
input_data = np.arange(10, dtype=float)
output_data = 0.5 * input_data + 10 + rng.random(size=input_data.shape[0])

with pm.Model() as m:
    x = pm.MutableData("x", input_data)
    y_obs = pm.MutableData("y_obs", output_data)
    beta = pm.Normal("beta")
    y_est = x * beta
    y = pm.Normal("y", y_est, observed=y_obs)
    samples = pm.sample(100, tune=100, cores=1)

print([v.eval().shape for v in [x, y_est, y]])  # All shape (10,)

with m:
    pm.set_data({"x": np.linspace(0, 10, 100)})
    prediction_samples = pm.sample_posterior_predictive(samples)

print([v.eval().shape for v in [x, y_est, y]])  # All shape (100,)
```

I ran this on the latest commit (ad16bf4).
-
I've just found out that the example fails when I explicitly set the shape of the likelihood: `y = pm.Normal("y", y_est, observed=y_obs, shape=(10,))`. In this case, I get a shape error when trying to sample the posterior. Thus, explicitly fixing the shape seems to prevent the shape change from propagating.
-
(I think) you can specify the shape symbolically:

```python
with pm.Model() as m:
    x = pm.MutableData("x", input_data)
    beta = pm.Normal("beta")
    y_est = x * beta
    y = pm.Normal("y", y_est, observed=y_obs, shape=x.shape)
```
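A toy illustration of why a symbolic `shape=x.shape` keeps working after the data changes while a literal `shape=(10,)` does not (pure Python, not PyMC internals; the `Shared` class is a hypothetical stand-in for mutable data):

```python
import numpy as np

class Shared:
    """Toy mutable container, loosely mimicking pm.MutableData."""
    def __init__(self, value):
        self.value = np.asarray(value)

    @property
    def shape(self):
        return self.value.shape  # re-evaluated on every access

x = Shared(np.arange(10.0))
frozen_shape = x.shape           # captured once, like shape=(10,)
derived_shape = lambda: x.shape  # looked up lazily, like shape=x.shape

x.value = np.linspace(0, 10, 100)  # analogous to pm.set_data({"x": ...})
print(frozen_shape)     # (10,)  -- stale, the source of the shape error
print(derived_shape())  # (100,) -- follows the new data
```

The literal shape is baked in at model-definition time, while the symbolic one is re-derived from the current data, which is why only the latter survives a resize.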
-
Yes, using `shape=x.shape` works. But then the current behavior is already reasonable, isn't it?
-
Yes, I think it is reasonable, but we perhaps need to advertise it better (e.g., in one of the GLM notebooks). The problem is that users don't usually think about specifying a shape for likelihoods, because PyMC is so handy and infers it from the observed data. But sometimes users know something more about the shape of the likelihood (e.g., that it should be as large as `x`).
-
In this particular example, the inference even works and allows for changing `x`.
Maybe I am missing the point of the issue, but for me it works as I would expect...
-
The first case does not always work, and not for all distributions; for instance, if you have multiple observations per mean parameter. The last case should indeed not work, so it behaves as expected.
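A minimal NumPy sketch (hypothetical data, not from the thread) of why the likelihood shape cannot always be inferred from the mean: with repeated observations per mean parameter, the observed array is larger than the mean, so the mean's shape alone is not enough.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
mu = np.array([0.0, 5.0, 10.0])          # one mean per group, shape (3,)
observed = mu + rng.normal(size=(4, 3))  # 4 observations per group, shape (4, 3)

# Inferring the likelihood shape from the mean alone would give (3,),
# but the observed data broadcast the mean up to (4, 3).
print(mu.shape)                                       # (3,)
print(observed.shape)                                 # (4, 3)
print(np.broadcast_shapes(mu.shape, observed.shape))  # (4, 3)
```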
-
The first example is still the point of the issue. I think adding an example to the GLM notebook where `shape=x.shape` is used would help.
-
Got stuck on this recently too. I really like the `shape=x.shape` approach.
-
Sounds good.
-
Cool, put a PR up here: #6087
-
When I have a simple model:

Now when I want to predict on new data, I have to do:

because `x` and `y` need to have the same shape, or I get an error. But it seems wrong that I need to set `y` to some dummy thing just for the shape. So what I think it should look like is this:

I'm not sure what the right solution is. Perhaps there is a way to link the shapes somehow, so that when I change `x`, `y` automatically also gets changed? Or that when I call `pm.sample_posterior_predictive()`, the data in `observed` just gets removed and the shape should automatically propagate anyway?

CC @ricardoV94