Just Relax It

Discrete Variables Relaxation

Compatible with PyTorch · Inspired by Pyro


"Just Relax It" is a cutting-edge Python library designed to streamline the optimization of discrete probability distributions in neural networks, offering a suite of advanced relaxation techniques compatible with PyTorch.

📬 Assets

  1. Technical Meeting 1 - Presentation
  2. Technical Meeting 2 - Jupyter Notebook
  3. Technical Meeting 3 - Jupyter Notebook
  4. Blog Post
  5. Documentation
  6. Tests

💡 Motivation

Many mathematical problems require sampling discrete random variables. The difficulty is that deep learning optimization relies on continuous gradients, so using truly discrete random variables directly is infeasible; instead, relaxation methods are used. One of them, the Concrete distribution, also known as Gumbel-Softmax (a single distribution proposed in parallel by two research groups), is already implemented in most DL packages. In this project we implement alternatives to it.
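For context, here is a minimal sketch of the standard Gumbel-Softmax (Concrete) relaxation written in plain PyTorch. It is not part of relaxit; it only illustrates the kind of pathwise relaxation that the library provides alternatives to.

import torch
import torch.nn.functional as F

logits = torch.randn(4, requires_grad=True)  # unnormalized log-probabilities
temperature = 0.5

# add Gumbel(0, 1) noise and replace the hard argmax with a softmax
gumbel = -torch.log(-torch.log(torch.rand_like(logits)))
relaxed_sample = F.softmax((logits + gumbel) / temperature, dim=-1)

# the sample is continuous, so gradients flow back to the logits
relaxed_sample.sum().backward()
print(logits.grad)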

🗃 Algorithms

🛠️ Install

Install using pip

pip install relaxit

Install from source

pip install git+https://github.com/intsystems/relaxit

Install via Git clone

git clone https://github.com/intsystems/relaxit
cd relaxit
pip install -e .

🚀 Quickstart

Open In Colab

import torch
from relaxit.distributions import InvertibleGaussian

# initialize distribution parameters
loc = torch.zeros(3, 4, 5, requires_grad=True)
scale = torch.ones(3, 4, 5, requires_grad=True)
temperature = torch.tensor([1e-0])

# initialize distribution
distribution = InvertibleGaussian(loc, scale, temperature)

# sample with reparameterization
sample = distribution.rsample()
print('sample.shape:', sample.shape)
print('sample.requires_grad:', sample.requires_grad)
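Since rsample uses the reparameterization trick, gradients from any downstream loss flow back to loc and scale. A minimal continuation of the snippet above (the squared-norm loss is only a placeholder):

# any differentiable function of the sample can serve as a loss
loss = sample.pow(2).mean()
loss.backward()
print('loc.grad.shape:', loc.grad.shape)
print('scale.grad.shape:', scale.grad.shape)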

🎮 Demo

  • Laplace Bridge (Open In Colab)
  • REINFORCE in Acrobot environment (Open In Colab)
  • VAE with discrete latents (Open In Colab)

For demonstration purposes, we divide our algorithms into three¹ groups; each group corresponds to one of the demo notebooks above.

We describe our demo experiments here.

📚 Stack

Some of the alternatives to Gumbel-Softmax are already implemented in Pyro, so we base our library on its codebase.

🧩 Some details

To keep the library consistent, we integrate the distribution imports from Pyro and PyTorch into the library, so that all categorical distributions can be imported from a single entrypoint.
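For illustration, a minimal sketch of what this single entrypoint looks like; InvertibleGaussian is confirmed by the quickstart above, while the comment about re-exported Pyro/PyTorch distributions is an assumption about the layout rather than a documented API.

# relaxit's own distribution, as used in the quickstart
from relaxit.distributions import InvertibleGaussian

# assumption: relaxed categorical distributions originating from Pyro or
# PyTorch are re-exported from the same relaxit.distributions module, so
# user code never imports them from pyro.distributions or
# torch.distributions directly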

👥 Contributors

  • Daniil Dorin (Basic code writing, Final demo, Algorithms)
  • Igor Ignashin (Project wrapping, Documentation writing, Algorithms)
  • Nikita Kiselev (Project planning, Blog post, Algorithms)
  • Andrey Veprikov (Tests writing, Documentation writing, Algorithms)
  • You are welcome to contribute to our project!

🔗 Useful links

Footnotes

  1. We also implement the REINFORCE algorithm as a score-function estimator alternative to our relaxation methods, which are inherently pathwise derivative estimators. It is implemented only for the demo experiments and is not included in the package source code.
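For comparison, a minimal sketch of a score-function (REINFORCE) gradient estimate using only torch.distributions; this is a generic illustration, not the demo code from this repository (the reward here is an arbitrary placeholder).

import torch
from torch.distributions import Categorical

logits = torch.randn(4, requires_grad=True)
dist = Categorical(logits=logits)

action = dist.sample()            # non-differentiable discrete sample
reward = (action == 2).float()    # placeholder reward signal

# score-function (REINFORCE) estimator: grad E[r] ≈ r * grad log p(action)
loss = -reward * dist.log_prob(action)
loss.backward()
print(logits.grad)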