Training examples with reproducible performance.
Reproducing a method is usually easy, but you can't tell whether you've made mistakes, because wrong code will often still appear to work. Reproducible performance results are what really matter. See Unawareness of Deep Learning Mistakes.
- An illustrative MNIST example with an explanation of the framework
- The same MNIST example written with tf-slim, plus weight visualizations
- A tiny CIFAR ConvNet and an SVHN ConvNet
- A boilerplate file to start from for your own tasks
- If you've used Keras, check out the Keras examples.
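To give a sense of the kind of task the MNIST examples above cover, here is a minimal training sketch in plain tf.keras (not this framework's own API); the model architecture and hyperparameters are illustrative assumptions, not taken from any example listed here.

```python
# Minimal MNIST-style ConvNet sketch in tf.keras -- an assumed,
# simplified stand-in for the full examples referenced above.
import numpy as np
import tensorflow as tf

def build_model():
    # Tiny ConvNet: one conv block followed by a dense 10-way classifier.
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(28, 28, 1)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

model = build_model()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train on a tiny random batch just to show the call; a real run would use
# tf.keras.datasets.mnist.load_data() and many epochs.
x = np.random.rand(32, 28, 28, 1).astype("float32")
y = np.random.randint(0, 10, size=(32,))
model.fit(x, y, epochs=1, verbose=0)
```

Getting a script like this to *run* is easy; the point of the examples in this repository is that each one also comes with a performance number you can check your reproduction against.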
Name | Performance |
---|---|
Train ResNet and ShuffleNet on ImageNet | reproduce paper |
Train Faster-RCNN / Mask-RCNN on COCO | reproduce paper |
DoReFa-Net: training binary / low-bitwidth CNNs on ImageNet | reproduce paper |
Generative Adversarial Network (GAN) variants, including DCGAN, InfoGAN, Conditional GAN, WGAN, BEGAN, DiscoGAN, Image-to-Image, CycleGAN | visually reproduce |
Inception-BN | reproduce reference code |
Fully-convolutional Network for Holistically-Nested Edge Detection (HED) | visually reproduce |
Spatial Transformer Networks on MNIST addition | reproduce paper |
Visualize CNN saliency maps | visually reproduce |
Similarity learning on MNIST | |
Single-image super-resolution using EnhanceNet | visually reproduce |
Learn steering filters with Dynamic Filter Networks | visually reproduce |
Load a pre-trained AlexNet, VGG16, or Convolutional Pose Machines | |
Name | Performance |
---|---|
Deep Q-Network (DQN) variants on Atari games, including DQN, DoubleDQN, DuelingDQN | reproduce paper |
Asynchronous Advantage Actor-Critic (A3C) on Atari games | reproduce paper |
Name | Performance |
---|---|
LSTM-CTC for speech recognition | reproduce paper |
char-rnn for fun | fun |
LSTM language model on PennTreebank | reproduce reference code |