• IntroVAE: Introspective Variational Autoencoders for Photographic Image Synthesis

    WHY? VAEs can learn useful representations, while GANs can sample sharp images. WHAT? The Introspective Variational Autoencoder (IVAE) combines the advantages of the VAE and the GAN into a single model that both learns useful representations and outputs sharp images. IVAE uses its encoder to introspectively estimate the generated samples and the training data as a...


  • Deep AutoRegressive Networks

    WHY? Learning directed generative models is difficult. WHAT? The Deep AutoRegressive Network (DARN) models images with hierarchical, autoregressive hidden layers. DARN has three components: an encoder q(H|X), a decoder prior p(H), and a decoder conditional p(X|H). All latent variables (h) in this model are assumed to be binary. The decoder prior is an...
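An autoregressive prior over binary latents can be sketched as a chain of logistic conditionals. This is a minimal illustration of the idea, not DARN's exact parameterization; the random weights W and biases b are stand-ins:

```python
import itertools
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def log_prior(h, weights, biases):
    """log p(H) = sum_i log p(h_i | h_<i), with logistic conditionals:
    p(h_i = 1 | h_<i) = sigmoid(b_i + sum_{j<i} W[i][j] * h_j)."""
    logp = 0.0
    for i, hi in enumerate(h):
        p1 = sigmoid(biases[i] + sum(weights[i][j] * h[j] for j in range(i)))
        logp += math.log(p1 if hi == 1 else 1.0 - p1)
    return logp

n = 4
random.seed(0)
W = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]
b = [random.gauss(0, 1) for _ in range(n)]

# Because each factor is a proper conditional, the probabilities of all
# 2^n binary vectors sum to one.
total = sum(math.exp(log_prior(list(h), W, b))
            for h in itertools.product([0, 1], repeat=n))
print(round(total, 6))  # → 1.0
```

Sampling from such a prior proceeds bit by bit: draw h_1, then h_2 conditioned on h_1, and so on.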


  • Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks

    WHY? GANs had trouble modeling entire images at once. Note: the Laplacian pyramid framework is used for restoring downsampled images. When an image is downsampled to a smaller size, it loses high-resolution information, so simply enlarging the image again is not enough to restore the original data. The Laplacian pyramid framework...
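The pyramid idea is easiest to see in one dimension: repeatedly downsample, and at each level store the residual between the signal and its upsampled coarse version. This is a simplified sketch (nearest-neighbour up/downsampling rather than the proper image filters the paper uses):

```python
def downsample(x):
    # Halve resolution by averaging adjacent pairs.
    return [(x[2 * i] + x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]

def upsample(x):
    # Double resolution by repeating each sample (nearest neighbour).
    out = []
    for v in x:
        out += [v, v]
    return out

def build_pyramid(x, levels):
    """Return (coarsest signal, list of per-level residuals)."""
    residuals = []
    for _ in range(levels):
        low = downsample(x)
        residuals.append([a - b for a, b in zip(x, upsample(low))])
        x = low
    return x, residuals

def reconstruct(coarse, residuals):
    # Invert the pyramid: upsample and add back each residual.
    x = coarse
    for res in reversed(residuals):
        x = [a + b for a, b in zip(upsample(x), res)]
    return x

signal = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
coarse, residuals = build_pyramid(signal, levels=2)
print(reconstruct(coarse, residuals) == signal)  # → True
```

In LAPGAN, a separate conditional GAN generates the residual at each level instead of storing it, so sampling starts from a tiny image and adds detail level by level.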


  • Breaking the Softmax Bottleneck: A High-Rank RNN Language Model

    WHY? This paper first proves that the expressiveness of a language model is restricted by the softmax, and then suggests a way to overcome this limit. WHAT? The last part of a language model usually consists of a softmax layer applied to the product of a context vector (h) and a word embedding (w). This paper...
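The rank restriction can be checked numerically: the matrix of log-probabilities A[c][w] = h_c · w_w − log Z_c has rank at most d + 1 (where d is the embedding dimension), no matter how many contexts or words there are. A pure-Python sketch with assumed random embeddings:

```python
import math
import random

def matrix_rank(A, tol=1e-8):
    """Numerical rank via Gaussian elimination with partial pivoting."""
    A = [row[:] for row in A]
    rows, cols = len(A), len(A[0])
    r = 0
    for c in range(cols):
        pivot = max(range(r, rows), key=lambda i: abs(A[i][c]))
        if abs(A[pivot][c]) < tol:
            continue
        A[r], A[pivot] = A[pivot], A[r]
        for i in range(r + 1, rows):
            f = A[i][c] / A[r][c]
            A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
        if r == rows:
            break
    return r

random.seed(0)
d, n_ctx, vocab = 3, 10, 12  # embedding dim much smaller than contexts/vocab
H = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n_ctx)]
W = [[random.gauss(0, 1) for _ in range(d)] for _ in range(vocab)]

# Log-softmax matrix: A[c][w] = h_c . w_w - log Z_c
A = []
for h in H:
    logits = [sum(hi * wi for hi, wi in zip(h, w)) for w in W]
    logZ = math.log(sum(math.exp(z) for z in logits))
    A.append([z - logZ for z in logits])

print(matrix_rank(A))  # at most d + 1 = 4, far below min(n_ctx, vocab) = 10
```

Intuitively, A = HWᵀ minus a column-constant normalizer, so its rank is bounded by rank(HWᵀ) + 1 ≤ d + 1; a "high-rank" language model must break this factorization.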


  • Eigenvalues of the Hessian in Deep Learning: Singularity and Beyond

    WHY? Gradient descent methods depend on the first-order gradient of a loss function with respect to the parameters; the second-order information (the Hessian) is often neglected. WHAT? This paper explored the exact Hessian of neural networks (after convergence) and discovered that the eigenvalues of the Hessian separate into two groups: 0s and...
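A toy two-parameter example (an assumed illustration, not the paper's setup) shows where zero eigenvalues come from: an overparameterized model a·b fit to a single target is flat along the direction that keeps a·b constant, so at the minimum the Hessian has one zero and one positive eigenvalue:

```python
import math

def loss(a, b, target=1.0):
    # Overparameterized model: prediction = a * b, squared-error loss.
    return 0.5 * (a * b - target) ** 2

def hessian(f, a, b, eps=1e-5):
    """2x2 Hessian of f at (a, b) via central finite differences."""
    def d2(da1, db1, da2, db2):
        return (f(a + da1 + da2, b + db1 + db2)
                - f(a + da1 - da2, b + db1 - db2)
                - f(a - da1 + da2, b - db1 + db2)
                + f(a - da1 - da2, b - db1 - db2)) / (4 * eps * eps)
    return [[d2(eps, 0, eps, 0), d2(eps, 0, 0, eps)],
            [d2(0, eps, eps, 0), d2(0, eps, 0, eps)]]

# At the minimum a = b = 1 the loss is zero and the Hessian is [[1, 1], [1, 1]].
H = hessian(loss, 1.0, 1.0)
tr = H[0][0] + H[1][1]
det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
eigs = sorted([(tr - disc) / 2, (tr + disc) / 2])
print(eigs)  # one eigenvalue near 0 (the flat direction), one near 2
```

Deep networks have many such redundant parameterizations, which is one intuition for the large cluster of near-zero Hessian eigenvalues the paper reports.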