An implementation of the variational auto-encoder (VAE) for MNIST, as presented in the paper:
- Auto-Encoding Variational Bayes, Kingma and Welling (2013)
- This implementation is adapted from hwalsuklee's tutorial [5].
Unlike hwalsuklee's program, this tutorial does not require any separate class files or utility files.
All VAE operations are performed within this single IPython notebook.
Given a training set, the VAE architecture can generate similar images without using any labels.
In this example, we use a 2-dimensional latent space, sketched below.
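The core graph can be sketched as follows. This is a minimal sketch rather than the exact notebook code; the hidden-layer size (500 units) and the tanh activation are assumptions for illustration:

```python
import tensorflow as tf

dim_z = 2          # 2-dimensional latent space
n_hidden = 500     # hidden units (assumed size, for illustration)

x = tf.placeholder(tf.float32, [None, 784], name='x')  # flattened 28x28 MNIST images

# Encoder: q(z|x), parameterized as a diagonal Gaussian.
h_enc = tf.layers.dense(x, n_hidden, activation=tf.nn.tanh)
mu = tf.layers.dense(h_enc, dim_z)        # mean of q(z|x)
log_var = tf.layers.dense(h_enc, dim_z)   # log variance of q(z|x)

# Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I).
eps = tf.random_normal(tf.shape(mu))
z = mu + tf.exp(0.5 * log_var) * eps

# Decoder: p(x|z), a Bernoulli distribution over pixels.
h_dec = tf.layers.dense(z, n_hidden, activation=tf.nn.tanh)
x_hat = tf.layers.dense(h_dec, 784, activation=tf.nn.sigmoid)

# Negative ELBO = reconstruction loss + KL divergence to the N(0, I) prior.
recon = -tf.reduce_sum(x * tf.log(1e-8 + x_hat)
                       + (1 - x) * tf.log(1e-8 + 1 - x_hat), axis=1)
kl = 0.5 * tf.reduce_sum(tf.exp(log_var) + tf.square(mu) - 1 - log_var, axis=1)
loss = tf.reduce_mean(recon + kl)
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)
```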
To explore the latent space learned by the VAE, we generate a synthetic mesh grid over (-4, 4) x (-4, 4) and feed it into the model as the latent variable z, in place of encodings of the training data.
Visualizations of the learned data manifold for generative models with a 2-dimensional latent space are given in Figure 4 of the paper.
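A minimal sketch of this mesh-grid exploration, assuming the tensors `x_hat` and `z` from the sketch above and a trained session `sess`; the 20x20 grid resolution is an assumption:

```python
import numpy as np
import matplotlib.pyplot as plt

n = 20  # images per side of the grid (assumed)
grid = np.linspace(-4, 4, n)
z_mesh = np.array([[zx, zy] for zy in grid for zx in grid])  # shape (n*n, 2)

# Feed the grid points directly as z; TensorFlow then skips the encoder
# subgraph and runs only the decoder, producing one image per grid point.
images = sess.run(x_hat, feed_dict={z: z_mesh})

# Tile the n*n decoded images into one large canvas and display it.
canvas = images.reshape(n, n, 28, 28).transpose(0, 2, 1, 3).reshape(n * 28, n * 28)
plt.imshow(canvas, cmap='gray')
plt.show()
```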
The implementation is based on the projects:
[1] https://github.com/oduerr/dl_tutorial/tree/master/tensorflow/vae
[2] https://github.com/fastforwardlabs/vae-tf/tree/master
[3] https://github.com/kvfrans/variational-autoencoder
[4] https://github.com/altosaar/vae
[5] https://github.com/hwalsuklee/tensorflow-mnist-VAE
This implementation was tested with TensorFlow 1.7 on Ubuntu 16.04.