universal-divergence

universal-divergence is a Python module for estimating the divergence between two sets of samples drawn from their underlying distributions. The estimator is based on k-nearest-neighbor distances, following the method of Q. Wang et al. [1].
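For n samples x_1, ..., x_n from p and m samples y_1, ..., y_m from q in d dimensions, the estimator in [1] has the form D(p||q) ≈ (d/n) Σ_i log(ν_k(i)/ρ_k(i)) + log(m/(n−1)), where ρ_k(i) is the distance from x_i to its k-th nearest neighbor among the other x samples and ν_k(i) is the distance from x_i to its k-th nearest neighbor among the y samples. The following is a minimal sketch of that formula, not this module's actual implementation; the function name knn_divergence and the SciPy dependency are illustrative only.

import numpy as np
from scipy.spatial import cKDTree

def knn_divergence(x, y, k=1):
    """Sketch of the k-NN estimate of D(p||q) from samples x ~ p and y ~ q."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n, d = x.shape
    m = len(y)
    # rho: distance from each x_i to its k-th nearest neighbor in x;
    # query k+1 neighbors because the nearest one is x_i itself
    rho = cKDTree(x).query(x, k=k + 1)[0][:, -1]
    # nu: distance from each x_i to its k-th nearest neighbor in y
    nu = cKDTree(y).query(x, k=k)[0]
    if k > 1:
        nu = nu[:, -1]
    return d * np.mean(np.log(nu / rho)) + np.log(m / (n - 1))

This sketch skips edge cases such as duplicate points (which make ρ_k zero); use the packaged estimate function shown below for real work.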

Install

pip install universal-divergence

Example

from __future__ import print_function

import numpy as np
from universal_divergence import estimate

mean = [0, 0]
cov = [[1, 0], [0, 10]]
x = np.random.multivariate_normal(mean, cov, 100)
y = np.random.multivariate_normal(mean, cov, 100)
print(estimate(x, y))  # will be close to 0.0

mean2 = [10, 0]
cov2 = [[5, 0], [0, 5]]
z = np.random.multivariate_normal(mean2, cov2, 100)
print(estimate(x, z))  # will be bigger than 0.0
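Note that the divergence estimated in [1] is the Kullback-Leibler divergence, which is asymmetric, so swapping the arguments generally changes the result:

print(estimate(z, x))  # generally differs from estimate(x, z)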

References

[1] Qing Wang, Sanjeev R. Kulkarni, and Sergio Verdú. "Divergence estimation for multidimensional densities via k-nearest-neighbor distances." IEEE Transactions on Information Theory 55.5 (2009): 2392–2405.
