#4 suggestion for approximate kl divergence
DavAug committed Feb 6, 2021
1 parent 8368857 commit 8e9d155
Showing 1 changed file with 32 additions and 0 deletions.
32 changes: 32 additions & 0 deletions pintsfunctest/_models.py
@@ -87,3 +87,35 @@ def __call__(self, parameters):
        score = logsumexp(scores)

        return score

    def compute_kullback_leibler_divergence(
            self, parameters, chain, n_samples):
        r"""
        Approximately computes the Kullback-Leibler divergence and its
        estimated error.

        The Kullback-Leibler divergence is defined as

        .. math::
            D(f || g) = \mathbb{E}\left[ \log \frac{f(x)}{g(x)} \right] ,

        where the expectation is taken w.r.t. :math:`f(x)`. We approximate
        the divergence by drawing :math:`n` i.i.d. samples from :math:`f(x)`
        and computing

        .. math::
            \hat{D}(f || g) \approx \frac{1}{n}
            \sum _{i=1}^n \log \frac{f(x_i)}{g(x_i)}.

        Note that the draws and the score :math:`f(x)` can be computed
        exactly from the mixture model, while :math:`g(x)` is computed by
        normalising the chain samples.

        The variance of the Kullback-Leibler divergence estimation error can
        be estimated with

        .. math::
            \text{Var}\left[ \hat{D}(f || g)\right] =
            \frac{1}{n} \text{Var}\left[ \log \frac{f(x)}{g(x)} \right].
        """
