
A fast & flexible implementation of Variational Autoencoders using PyTorch


rVSaxena/VAE


VAE

This is a generic, fast PyTorch implementation of variational autoencoders in single-precision (SP) float32. GTX- and RTX-series Nvidia GPUs have significantly higher SP throughput than double-precision (DP), so float32 is used throughout.
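The core of any such implementation is the reparameterization trick and the ELBO loss. The sketch below is a minimal float32 MLP VAE in PyTorch; the architecture and names here are illustrative, not this repository's actual code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Minimal MLP VAE; PyTorch parameters default to float32."""
    def __init__(self, x_dim=784, h_dim=400, z_dim=2):
        super().__init__()
        self.enc = nn.Linear(x_dim, h_dim)
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec1 = nn.Linear(z_dim, h_dim)
        self.dec2 = nn.Linear(h_dim, x_dim)

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps, eps ~ N(0, I), keeps sampling differentiable
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def decode(self, z):
        return torch.sigmoid(self.dec2(F.relu(self.dec1(z))))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def elbo_loss(recon, x, mu, logvar):
    # Negative ELBO: reconstruction term + KL(q(z|x) || N(0, I))
    bce = F.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld
```

The closed-form KL term above holds because both the approximate posterior and the prior are diagonal Gaussians.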

To train on a different dataset, only a new constructs file needs to be written.
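As an illustration of what such a file might bundle, the hypothetical sketch below provides a dataloader and encoder/decoder builders for a new dataset. The function names (`get_dataloader`, `get_encoder`, `get_decoder`) and shapes are assumptions for this example; mirror the interface of the repository's MNIST constructs file instead:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical constructs file for a new dataset.
X_DIM, H_DIM, Z_DIM = 64, 32, 4

def get_dataloader(batch_size=128):
    # Stand-in dataset: random float32 vectors in [0, 1);
    # replace with your real data loading.
    data = torch.rand(1024, X_DIM)
    return DataLoader(TensorDataset(data), batch_size=batch_size, shuffle=True)

def get_encoder():
    # Maps x to (mu, logvar) stacked along the last dimension.
    return nn.Sequential(
        nn.Linear(X_DIM, H_DIM), nn.ReLU(), nn.Linear(H_DIM, 2 * Z_DIM)
    )

def get_decoder():
    # Maps z back to data space, squashed into [0, 1].
    return nn.Sequential(
        nn.Linear(Z_DIM, H_DIM), nn.ReLU(), nn.Linear(H_DIM, X_DIM), nn.Sigmoid()
    )
```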


An example usage on the MNIST dataset is provided.

2-D latent space:

49-D latent space:
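A 2-D latent space can be visualized directly by decoding a regular grid of latent points and tiling the outputs. The sketch below assumes a trained decoder mapping `(N, 2)` latents to `(N, 784)` MNIST-shaped images; the `decode` stand-in here is a placeholder, not this repository's model:

```python
import torch

def decode(z):
    # Placeholder decoder: substitute a trained decoder that maps
    # an (N, 2) latent batch to (N, 784) images.
    return torch.sigmoid(z @ torch.randn(2, 784))

# 20x20 grid spanning roughly the bulk of a standard-normal prior.
lin = torch.linspace(-3, 3, 20)
zz = torch.cartesian_prod(lin, lin)        # (400, 2) latent points
imgs = decode(zz).reshape(20, 20, 28, 28)  # one 28x28 image per grid cell
```

Plotting `imgs` as a 20x20 mosaic (e.g. with matplotlib) shows how digit classes tile the latent plane.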
