• Markov Chain Monte Carlo and Variational Inference: Bridging the Gap

    WHY? Two approximation methods, variational inference and MCMC, have complementary advantages: variational inference is usually fast, while MCMC is more accurate. Note that Markov chain Monte Carlo (MCMC) is an approximation method for estimating a distribution. MCMC first samples a random initial draw and then draws a chain of variables from a stochastic...
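
    As a refresher on the "chain of draws" idea (an illustrative sketch, not the paper's bridging method), here is a minimal random-walk Metropolis-Hastings sampler in NumPy; the target log-density, step size, and chain length are arbitrary choices for the example.

    ```python
    import numpy as np

    def metropolis_hastings(log_p, x0, n_steps=5000, step=0.5, seed=0):
        """Draw a chain of samples from a density known only up to a constant."""
        rng = np.random.default_rng(seed)
        x, chain = x0, []
        for _ in range(n_steps):
            proposal = x + step * rng.normal()  # symmetric random-walk proposal
            # Accept with probability min(1, p(proposal) / p(x)).
            if np.log(rng.uniform()) < log_p(proposal) - log_p(x):
                x = proposal
            chain.append(x)
        return np.array(chain)

    # Illustrative target: unnormalized standard normal log-density.
    samples = metropolis_hastings(lambda x: -0.5 * x**2, x0=0.0)
    print(samples.mean(), samples.std())  # roughly 0 and 1 after burn-in
    ```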


  • Neural Process

    WHY? Gaussian processes (GPs) have several advantages. Built on robust statistical assumptions, a GP does not require an expensive training phase and can represent uncertainty in unobserved regions. However, GP inference is computationally expensive. The neural process tries to combine the best of Gaussian processes and neural networks. WHAT? A neural process satisfies two conditions of...
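
    A minimal sketch of the encode-aggregate-decode pattern in PyTorch (assuming the deterministic path only; layer sizes, the toy data, and all names here are illustrative, not the paper's architecture): context pairs are encoded, mean-pooled into an order-invariant representation, and decoded into a predictive mean and standard deviation for each target input.

    ```python
    import torch
    import torch.nn as nn

    class TinyNeuralProcess(nn.Module):
        """Encode (x, y) context pairs, aggregate with a mean, decode targets."""
        def __init__(self, r_dim=32):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, r_dim))
            self.decoder = nn.Sequential(nn.Linear(r_dim + 1, 64), nn.ReLU(), nn.Linear(64, 2))

        def forward(self, x_ctx, y_ctx, x_tgt):
            # Mean aggregation makes the representation invariant to context order.
            r = self.encoder(torch.cat([x_ctx, y_ctx], dim=-1)).mean(dim=0)
            r = r.expand(x_tgt.shape[0], -1)
            mu, raw_sigma = self.decoder(torch.cat([r, x_tgt], dim=-1)).chunk(2, dim=-1)
            sigma = 0.1 + 0.9 * torch.nn.functional.softplus(raw_sigma)  # keep std positive
            return mu, sigma

    x_c = torch.randn(10, 1); y_c = torch.sin(x_c)  # toy context set
    mu, sigma = TinyNeuralProcess()(x_c, y_c, torch.linspace(-2, 2, 5).unsqueeze(-1))
    print(mu.shape, sigma.shape)  # torch.Size([5, 1]) torch.Size([5, 1])
    ```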


  • Grammar Variational Autoencoder

    WHY? Generative models of discrete data with a particular structure (a grammar) often produce invalid outputs. The Grammar Variational Autoencoder (GVAE) forces the decoder of a VAE to produce only valid outputs. WHAT? The structure of such data can be formulated with a context-free grammar (CFG). Data with a defined CFG can be represented as a parse tree. The encoder...
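
    To make the CFG/parse-tree idea concrete, here is a small example with NLTK (the toy arithmetic grammar below is illustrative, not the grammar used in the paper): a string is parsed into a tree, and the sequence of production rules from that tree is what a GVAE would encode, so that decoding rule-by-rule can only yield grammatically valid strings.

    ```python
    import nltk

    # Toy CFG in the spirit of GVAE's arithmetic-expression experiments.
    grammar = nltk.CFG.fromstring("""
      S -> S '+' T | T
      T -> '(' S ')' | 'x'
    """)

    parser = nltk.ChartParser(grammar)
    tokens = ['x', '+', '(', 'x', '+', 'x', ')']
    tree = next(parser.parse(tokens))
    # The ordered list of productions is the discrete representation a GVAE
    # would map to and from the latent space.
    for production in tree.productions():
        print(production)
    ```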


  • IntroVAE: Introspective Variational Autoencoders for Photographic Image Synthesis

    WHY? A VAE can learn useful representations, while a GAN can sample sharp images. WHAT? The Introspective Variational Autoencoder (IntroVAE) combines the advantages of VAEs and GANs into a single model that learns useful representations and outputs sharp images. IntroVAE uses the encoder to introspectively assess the generated samples and the training data as a...
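
    A sketch of the introspective (adversarial) KL terms in PyTorch, assuming a hinge formulation with margin m; it omits the reconstruction loss and weighting coefficients, and the tensors below are placeholders, so this is the gist rather than the paper's exact training code. The encoder keeps the KL of real data small while pushing the KL of generated samples above the margin; the generator pulls the KL of its samples back down.

    ```python
    import torch

    def kl_to_prior(mu, logvar):
        """KL( N(mu, sigma^2) || N(0, I) ), summed over latent dimensions."""
        return 0.5 * torch.sum(mu**2 + logvar.exp() - logvar - 1, dim=-1)

    def intro_losses(kl_real, kl_fake, m=10.0):
        """Hinge-style introspective terms: the encoder acts as a discriminator
        in KL space, and the generator tries to fool it."""
        enc_adv = kl_real + torch.clamp(m - kl_fake, min=0.0)
        gen_adv = kl_fake
        return enc_adv.mean(), gen_adv.mean()

    mu_r, lv_r = torch.zeros(4, 8), torch.zeros(4, 8)  # encoder stats on real images
    mu_f, lv_f = torch.randn(4, 8), torch.randn(4, 8)  # encoder stats on generated images
    print(intro_losses(kl_to_prior(mu_r, lv_r), kl_to_prior(mu_f, lv_f)))
    ```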


  • Deep AutoRegressive Networks

    WHY? Learning a directed generative model is difficult. WHAT? The Deep AutoRegressive Network (DARN) models images with hierarchical, autoregressive hidden layers. DARN has three components: an encoder q(H|X), a decoder prior p(H), and a decoder conditional p(X|H). All latent variables (h) in this model are assumed to be binary. The decoder prior is an...
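
    To illustrate an autoregressive prior over binary latents (a minimal NumPy sketch, assuming each conditional is a logistic function of the earlier units; the weights here are random placeholders, not DARN's learned parameters), the prior factorizes as p(H) = prod_i p(h_i | h_1, ..., h_{i-1}) and can be sampled one unit at a time:

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def sample_autoregressive_prior(W, b, rng):
        """Sample binary h with p(h_i = 1 | h_{<i}) = sigmoid(W[i, :i] @ h[:i] + b[i]).
        W is strictly lower-triangular, so each unit depends only on earlier units."""
        n = len(b)
        h, log_p = np.zeros(n), 0.0
        for i in range(n):
            p_i = sigmoid(W[i, :i] @ h[:i] + b[i])
            h[i] = rng.uniform() < p_i
            log_p += np.log(p_i if h[i] else 1.0 - p_i)
        return h, log_p

    rng = np.random.default_rng(0)
    n = 6
    W = np.tril(rng.normal(size=(n, n)), k=-1)  # illustrative random weights
    h, log_p = sample_autoregressive_prior(W, rng.normal(size=n), rng)
    print(h, log_p)
    ```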