Added MVN_cdf.py #60

Open. Wants to merge 4 commits into base: main. Changes from 1 commit.
36 changes: 23 additions & 13 deletions pymc_experimental/distributions/mvncdf.py
@@ -2,7 +2,8 @@
from aesara.graph.basic import Apply
from aesara.graph.op import Op
from aesara.tensor.type import TensorType

from aesara.tensor.random.basic import MvNormalRV
from aeppl.logprob import _logcdf
# Op calculates mvncdf


@@ -22,38 +23,40 @@ def conditional_covariance(Sigma, mu, conditioned, conditioned_values):
    Sigma_12 = Sigma[target][:, conditioned]
    Sigma_11 = Sigma[target][:, target]

-    Sigma_cond = Sigma_11 - np.matmul(np.matmul(Sigma_12, np.linalg.inv(Sigma_22)), Sigma_21)
-    mean_cond = np.delete(mu, conditioned) + np.matmul(Sigma_12, np.linalg.inv(Sigma_22)).dot(
-        conditioned_values - mu[conditioned]
-    )
+    inv = pm.math.matrix_inverse(Sigma_22)
+    Sigma_cond = Sigma_11 - pm.math.matrix_dot(Sigma_12, inv, Sigma_21)
+    mean_cond = pm.math.matrix_dot(Sigma_12, inv, conditioned_values)

    return Sigma_cond, mean_cond
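For reference, the quantities computed above are the standard Gaussian conditioning (Schur complement) formulas. A self-contained NumPy sketch of the same computation, separate from this diff's Aesara code (a hypothetical helper mirroring the function here; it keeps the mean offset terms that the pre-change NumPy version carried):

```python
import numpy as np

def conditional_covariance(Sigma, mu, conditioned, conditioned_values):
    """Mean and covariance of the remaining coordinates of a multivariate
    normal, given fixed values for the `conditioned` coordinates."""
    conditioned = np.atleast_1d(conditioned)
    conditioned_values = np.atleast_1d(conditioned_values)
    target = np.setdiff1d(np.arange(Sigma.shape[0]), conditioned)
    Sigma_22 = Sigma[np.ix_(conditioned, conditioned)]
    Sigma_21 = Sigma[np.ix_(conditioned, target)]
    Sigma_12 = Sigma[np.ix_(target, conditioned)]
    Sigma_11 = Sigma[np.ix_(target, target)]
    inv = np.linalg.inv(Sigma_22)
    # Schur complement: covariance of the target block given the conditioned block
    Sigma_cond = Sigma_11 - Sigma_12 @ inv @ Sigma_21
    # Conditional mean, including the mu offset terms
    mean_cond = mu[target] + Sigma_12 @ inv @ (conditioned_values - mu[conditioned])
    return Sigma_cond, mean_cond
```

For Sigma = [[2, 1], [1, 2]], mu = 0, conditioning on x2 = 1 gives conditional variance 2 - 1/2 = 1.5 and conditional mean 0.5.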


class Mvncdf(Op):
    __props__ = ()

-    def make_node(self, upper, mu, cov):
-        upper = at.as_tensor_variable(upper)
+    def make_node(self, value, mu, cov):
+        value = at.as_tensor_variable(value)
        mu = at.as_tensor_variable(mu)
        cov = at.as_tensor_variable(cov)

-        return Apply(self, [x], [x.type()])
+        return Apply(self, [value], [value.type()])

    def perform(self, node, inputs, output_storage):
        # here do the integration
-        return scipy.stats.multivariate_normal.logcdf(upper, mu, cov)
+        value, mu, cov = inputs
+        output_storage[0] = scipy.stats.multivariate_normal.logcdf(value, mu, cov)
+
+        return output_storage
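As a sanity check of the SciPy call that `perform` delegates to: with zero mean and identity covariance the joint CDF factorises into univariate terms, so the log-CDF at the origin is 2 * log Phi(0) = log(1/4). A minimal standalone check (NumPy/SciPy only, outside the Op machinery):

```python
import numpy as np
from scipy import stats

# With mean 0 and identity covariance, P(X1 <= 0, X2 <= 0) = 0.5 * 0.5 = 0.25,
# so the log-CDF at the origin should be log(0.25).
mu = np.zeros(2)
cov = np.eye(2)
value = np.zeros(2)
log_p = stats.multivariate_normal.logcdf(value, mean=mu, cov=cov)
print(log_p)  # close to log(0.25) ≈ -1.386
```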

    def infer_shape(self, fgraph, node, i0_shapes):
-        return
+        return i0_shapes[0]

    def grad(self, inputs, output_grads):

+        value, mu, cov = inputs
        grad_ = []
        for i in range(len(mu)):
-            grad_.append(conditional_covariance(cov, mu, i, upper[i]))
+            grad_.append(conditional_covariance(cov, mu, i, value[i]))

-        return np.array(grad_)
+        return np.array(grad_) * output_grads[0]
Review comment (Member):

The returned gradients need to be symbolic Aesara expressions, not the result of Numpy computations. You might need a separate Op for the gradient. See an example here: https://www.pymc.io/projects/examples/en/latest/case_studies/blackbox_external_likelihood_numpy.html#blackbox_external_likelihood_numpy and https://www.pymc.io/projects/examples/en/latest/case_studies/wrapping_jax_function.html
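The linked examples wrap a numerical gradient in a second Op so that `Op.grad` can return a symbolic expression. The numerical core of that pattern can be sketched with plain SciPy central finite differences (an illustration of the inner computation only, not the Op wiring; function names here are hypothetical):

```python
import numpy as np
from scipy import stats

def logcdf(value, mu, cov):
    # The quantity the Op computes in perform()
    return stats.multivariate_normal.logcdf(value, mean=mu, cov=cov)

def logcdf_grad(value, mu, cov, eps=1e-3):
    # Central finite differences of the logcdf w.r.t. `value`.
    # Inside a real Op.grad, this array would be produced by a second Op
    # so that the returned gradient stays symbolic.
    grad = np.empty_like(value)
    for i in range(value.size):
        step = np.zeros_like(value)
        step[i] = eps
        grad[i] = (logcdf(value + step, mu, cov)
                   - logcdf(value - step, mu, cov)) / (2 * eps)
    return grad
```

At the origin with identity covariance, each component of the gradient should be close to pdf(0)/Phi(0) of a standard normal (about 0.8), up to integration noise in SciPy's CDF.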

Reply (Author):
I think for now it might be best not to implement the gradient method; there is no autograd support for the multivariate normal CDF in SciPy. For now I can just try sampling with Metropolis-Hastings.


    def R_op(self, inputs, eval_points):
        # R_op can receive None as eval_points.
@@ -63,3 +66,10 @@ def R_op(self, inputs, eval_points):
        if eval_points[0] is None:
            return eval_points
        return self.grad(inputs, eval_points)



+@_logcdf.register(MvNormalRV)
+def mvnormal_logcdf(value, mu, cov):
+    return pm.logcdf(MvNormal.dist(mu, cov), value)
Review comment (Member, @ricardoV94, Nov 3, 2022):
Should be something like:

Suggested change:
-@_logcdf.register(MvNormalRV)
-def mvnormal_logcdf(value, mu, cov):
-    return pm.logcdf(MvNormal.dist(mu, cov), value)
+mvncdf = Mvncdf()
+
+@_logcdf.register(MvNormalRV)
+def mvnormal_logcdf(op, value, rng, size, dtype, mu, cov, **kwargs):
+    return mvncdf(value, mu, cov)

Reply (Member):

You can then test that it is working by calling pm.logcdf(MvNormal.dist(mu, cov), value).

Reply (Author):

Is there any documentation that shows how to do this? I tried searching for "dispatch function" but I have no idea how it works or what it does, so it's quite tricky for me to know how to fix this.

Reply (Member):

No documentation AFAIK, but you can see how we do it for some custom distributions: https://github.com/pymc-devs/pymc/blob/9b9826c586f04367b5027a0e472122ebcc636139/pymc/distributions/mixture.py#L359-L383

The idea is that the function should return an Aesara expression, which in your case is just the new Op you created.