In this video (Pt. 13 of the Karpathy et al. lecture), Justin Johnson starts talking about the FCNN (Roni's blueprint) paper at minute 17:00, and at minute 20:00 he starts to describe in detail how the "Deconvolution" works.
https://www.youtube.com/watch?v=ByjaPdWXKJ4
TensorFlow already has a "Deconvolution" operation: tf.nn.conv2d_transpose(value, filter, output_shape, strides, padding='SAME', name=None)
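To make the call concrete, here is a minimal sketch of upsampling with tf.nn.conv2d_transpose, assuming the TF 1.x-era API quoted above; all shapes and filter sizes are made up for illustration:

```python
import numpy as np
import tensorflow as tf

batch, h, w, in_ch, out_ch = 1, 7, 7, 8, 3

value = tf.placeholder(tf.float32, [batch, h, w, in_ch])
# Note the filter layout: [height, width, OUTPUT channels, INPUT channels],
# i.e. the reverse of tf.nn.conv2d.
filt = tf.Variable(tf.truncated_normal([5, 5, out_ch, in_ch], stddev=0.1))

# Stride 2 with 'SAME' padding doubles the spatial size: 7x7 -> 14x14.
deconv = tf.nn.conv2d_transpose(value, filt,
                                output_shape=[batch, 2 * h, 2 * w, out_ch],
                                strides=[1, 2, 2, 1],
                                padding='SAME')

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(deconv, {value: np.zeros([batch, h, w, in_ch], np.float32)})
    print(out.shape)  # (1, 14, 14, 3)
```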
See also a simple usage example of the conv2d_transpose function in TensorFlow, run on MNIST: https://github.com/loliverhennigh/All-Convnet-Autoencoder-Example
There is an open issue about implementing conv2d_transpose in Theano:
https://github.com/Theano/Theano/issues/3989
Its last entry (from @meanmee): "this is an open issue, meaning it is not done yet. Someone has been assigned to it, though, so it should become available soon."
1) mnist_autoencoder.py: a small autoencoder to encode the MNIST dataset. It compiles and runs, but is not working yet; it needs more MaxPool and more conv2d_transpose layers (see the sketch after this list). Inspired by loliverhennigh, see above.
2) convolution_transpose.py: needs to go into keras/layers/. Passing the weight and bias dimensions as inputs is not Keras-like (see the layer skeleton after this list). A Conv1D version is needed once Conv2D is confirmed to be working.
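For item 1, a minimal sketch of the structure mnist_autoencoder.py is aiming for: two conv/max-pool stages down to 7x7, then two conv2d_transpose stages back up to 28x28. This assumes the TF 1.x API; the layer sizes and filter counts are illustrative guesses, not the actual file contents:

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 28, 28, 1])  # MNIST images, NHWC

def weight(shape):
    return tf.Variable(tf.truncated_normal(shape, stddev=0.1))

# Encoder: two conv + max-pool stages, 28x28 -> 14x14 -> 7x7.
h1 = tf.nn.relu(tf.nn.conv2d(x, weight([3, 3, 1, 16]),
                             strides=[1, 1, 1, 1], padding='SAME'))
p1 = tf.nn.max_pool(h1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1],
                    padding='SAME')                       # 14x14x16
h2 = tf.nn.relu(tf.nn.conv2d(p1, weight([3, 3, 16, 8]),
                             strides=[1, 1, 1, 1], padding='SAME'))
code = tf.nn.max_pool(h2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1],
                      padding='SAME')                     # 7x7x8

# Decoder: two conv2d_transpose stages, 7x7 -> 14x14 -> 28x28.
batch = tf.shape(x)[0]
d1 = tf.nn.relu(tf.nn.conv2d_transpose(
    code, weight([3, 3, 16, 8]),                  # [h, w, out_ch, in_ch]
    output_shape=tf.stack([batch, 14, 14, 16]),
    strides=[1, 2, 2, 1], padding='SAME'))
recon = tf.nn.sigmoid(tf.nn.conv2d_transpose(
    d1, weight([3, 3, 1, 16]),
    output_shape=tf.stack([batch, 28, 28, 1]),
    strides=[1, 2, 2, 1], padding='SAME'))

loss = tf.reduce_mean(tf.square(recon - x))
train_step = tf.train.AdamOptimizer(1e-3).minimize(loss)
```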
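For item 2, a hypothetical skeleton of what a Keras-like interface could look like: the weight and bias shapes are inferred from the input in build() rather than passed in by the user. The class name, constructor arguments, and shapes here are illustrative assumptions (following Keras 1.x Convolution2D conventions), not the actual contents of convolution_transpose.py; it assumes the TensorFlow backend and dim_ordering='tf':

```python
import tensorflow as tf
from keras import initializations
from keras import backend as K
from keras.engine.topology import Layer

class Conv2DTranspose(Layer):
    """Illustrative transposed-convolution layer (TF backend, NHWC only)."""

    def __init__(self, nb_filter, nb_row, nb_col, subsample=(2, 2), **kwargs):
        self.nb_filter = nb_filter   # number of output channels
        self.nb_row = nb_row         # filter height
        self.nb_col = nb_col         # filter width
        self.subsample = subsample   # upsampling stride
        super(Conv2DTranspose, self).__init__(**kwargs)

    def build(self, input_shape):
        # Keras-like: infer the weight and bias shapes from the input,
        # instead of asking the caller to pass them in explicitly.
        in_ch = input_shape[3]
        self.W = initializations.glorot_uniform(
            (self.nb_row, self.nb_col, self.nb_filter, in_ch),
            name='{}_W'.format(self.name))
        self.b = K.zeros((self.nb_filter,), name='{}_b'.format(self.name))
        self.trainable_weights = [self.W, self.b]
        super(Conv2DTranspose, self).build(input_shape)

    def call(self, x, mask=None):
        sh, sw = self.subsample
        in_shape = tf.shape(x)
        out_shape = tf.stack([in_shape[0], in_shape[1] * sh,
                              in_shape[2] * sw, self.nb_filter])
        out = tf.nn.conv2d_transpose(x, self.W, out_shape,
                                     strides=[1, sh, sw, 1], padding='SAME')
        return out + self.b  # broadcasts over the channel axis

    def get_output_shape_for(self, input_shape):
        sh, sw = self.subsample
        rows = input_shape[1] * sh if input_shape[1] else None
        cols = input_shape[2] * sw if input_shape[2] else None
        return (input_shape[0], rows, cols, self.nb_filter)
```

It would then be used like any other layer, e.g. model.add(Conv2DTranspose(16, 3, 3, subsample=(2, 2))).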
"While previous GAN work has used momentum to accelerate training, we used the Adam optimizer (Kingma & Ba, 2014) with tuned hyperparameters. We found the suggested learning rate of 0.001, to be too high, using 0.0002 instead. Additionally, we found leaving the momentum term β 1 at the suggested value of 0.9 resulted in training oscillation and instability while reducing it to 0.5 helped stabilize training." (UNSUPERVISED REPRESENTATION LEARNING WITH DEEP CONVOLUTIONAL GENERATIVE ADVERSARIAL N ETWORKS) Alec Radford & Luke Metz