
Training-Triplet-Networks-With-GAN

This repository contains a PyTorch implementation of the paper Training Triplet Networks with GAN (Triplet-GAN) on the MNIST dataset.

Hyperparameters

  • Batch Size: 100
  • Pre-train learning rate: 0.0003
  • Train learning rate: 0.0003
  • Pre-train epochs: 100
  • Training epochs: 30
  • Generator input (noise) size: 100
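
For reference, these settings could be gathered into a single config object. This is a minimal sketch; the variable names are illustrative assumptions, not the repository's actual identifiers.

```python
# Hyperparameters listed above; names are illustrative, not the repo's own.
hparams = {
    "batch_size": 100,       # batch size
    "pretrain_lr": 3e-4,     # pre-train learning rate
    "train_lr": 3e-4,        # train learning rate
    "pretrain_epochs": 100,  # pre-train epochs
    "train_epochs": 30,      # training epochs
    "latent_dim": 100,       # input (noise) size of the generator
}
```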

Important techniques used for training

  1. Weight initialization from a zero-mean normal distribution, with variance 0.05 for convolutional layers and 0.02 for fully connected layers.
  2. Weight normalization.
  3. Batch normalization in the initial layers of the generator.
  4. A Sigmoid non-linearity on the generator's output layer.
  5. Feature matching to compute the generator's loss (see the sketch after this list).
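
The following is a minimal PyTorch sketch of techniques 1-5, assuming a simple MLP generator and a discriminator that exposes an intermediate feature layer. Module layout, layer sizes, and names are illustrative assumptions, not the repository's actual code; interpreting the listed initialization values as the spread (standard deviation) of the normal distribution is also an assumption.

```python
import torch
import torch.nn as nn
from torch.nn.utils import weight_norm

def init_weights(m):
    # (1) Zero-mean normal initialization, using the scales quoted above.
    #     Apply this BEFORE wrapping layers in weight_norm, since weight_norm
    #     re-derives `weight` from its own parameters.
    if isinstance(m, nn.Conv2d):
        nn.init.normal_(m.weight, mean=0.0, std=0.05)
        if m.bias is not None:
            nn.init.zeros_(m.bias)
    elif isinstance(m, nn.Linear):
        nn.init.normal_(m.weight, mean=0.0, std=0.02)
        if m.bias is not None:
            nn.init.zeros_(m.bias)

class Generator(nn.Module):
    """Toy MLP generator for 28x28 MNIST images (sizes are illustrative)."""
    def __init__(self, latent_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            weight_norm(nn.Linear(latent_dim, 512)),  # (2) weight normalization
            nn.BatchNorm1d(512),                      # (3) batch norm in early layers
            nn.ReLU(),
            weight_norm(nn.Linear(512, 28 * 28)),
            nn.Sigmoid(),                             # (4) sigmoid output non-linearity
        )

    def forward(self, z):
        return self.net(z).view(-1, 1, 28, 28)

def feature_matching_loss(real_features, fake_features):
    # (5) Match the mean of intermediate discriminator features on real vs.
    #     generated batches instead of the discriminator's final output.
    return torch.mean((real_features.mean(dim=0) - fake_features.mean(dim=0)) ** 2)

# Usage (shapes only): a plain convolutional block initialized with (1);
# weight_norm (2) would be applied after initialization if desired.
disc_head = nn.Sequential(nn.Conv2d(1, 32, 3), nn.ReLU(), nn.Flatten())
disc_head.apply(init_weights)

gen = Generator(latent_dim=100)
fake = gen(torch.randn(8, 100))  # -> (8, 1, 28, 28) images in [0, 1]
```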

Results

Accuracy

  • N=100, M=16: 0.9806
  • N=100, M=32: 0.9773
  • N=200, M=16: 0.9817

Plots

Pre-train Loss Curve

Generated Images after Pre-training

Training Loss Curve

Generated Images after Training

TSNE-Plot after Training

References

  1. Salimans et al., Improved Techniques for Training GANs (NeurIPS, 2016).
  2. Official code repository (Lasagne): https://github.com/maciejzieba/tripletGAN