• VAE with a VampPrior

    WHY? Choosing an appropriate prior is important for a VAE. This paper suggests a two-layered VAE with a flexible VampPrior. WHAT? The original variational lower bound of the VAE can be decomposed as follows: the first component is the negative reconstruction error, the second component is the expectation of the entropy of the variational posterior,...
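    The decomposition referenced above can be sketched in the VampPrior paper's notation (a reconstruction, an entropy term, and a cross-entropy between the aggregated posterior and the prior; $q(x)$ is the empirical data distribution):

    ```latex
    \mathcal{L}(\phi,\theta,\lambda)
    = \mathbb{E}_{q(x)}\!\Big[\mathbb{E}_{q_\phi(z|x)}\log p_\theta(x|z)\Big]
    + \mathbb{E}_{q(x)}\!\Big[\mathbb{H}\big[q_\phi(z|x)\big]\Big]
    - \mathbb{E}_{q(x)}\!\Big[-\mathbb{E}_{q_\phi(z|x)}\log p_\lambda(z)\Big]
    ```

    The VampPrior then makes the prior itself flexible by tying it to the encoder through a mixture over K learned pseudo-inputs $u_k$: $p_\lambda(z) = \frac{1}{K}\sum_{k=1}^{K} q_\phi(z|u_k)$.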


  • Large Scale GAN Training for High Fidelity Natural Image Synthesis

    WHY? Generating high-resolution images with GANs remains difficult despite recent advances. This paper suggests BigGAN, which adds a few tricks on top of previous models to generate large-scale images without progressively growing the network. WHAT? BigGAN is built from a series of tricks over a baseline model. Self-Attention GAN (SA-GAN), which...
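    The SA-GAN baseline mentioned above adds self-attention over spatial locations of a feature map. A minimal NumPy sketch (projection matrices are random stand-ins for the learned 1x1 convolutions, purely to show the mechanics):

    ```python
    import numpy as np

    def self_attention(x, wq, wk, wv):
        """SAGAN-style self-attention over a flattened feature map (sketch).

        x: (n, c) array of n spatial locations with c channels.
        wq, wk, wv: (c, d) projections standing in for learned 1x1 convs.
        Returns an (n, d) output where every location attends to all others.
        """
        q, k, v = x @ wq, x @ wk, x @ wv             # queries, keys, values
        scores = q @ k.T                             # (n, n) pairwise similarities
        scores -= scores.max(axis=1, keepdims=True)  # numerical stability
        attn = np.exp(scores)
        attn /= attn.sum(axis=1, keepdims=True)      # softmax over all locations
        return attn @ v                              # attention-weighted values

    rng = np.random.default_rng(0)
    n, c, d = 16, 8, 8                               # 4x4 map flattened, 8 channels
    x = rng.standard_normal((n, c))
    wq, wk, wv = (rng.standard_normal((c, d)) * 0.1 for _ in range(3))
    out = self_attention(x, wq, wk, wv)
    print(out.shape)  # (16, 8)
    ```

    The point of the trick is that, unlike a convolution, each location can use information from the entire image in one step, which helps with globally coherent structure.
    
    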


  • Hybrid computing using a neural network with dynamic external memory

    WHY? Using external memory, as in a modern computer, enables a neural net to use extensible memory. This paper suggests the Differentiable Neural Computer (DNC), an advanced version of the Neural Turing Machine. WHAT? Reading and writing in the DNC are implemented with a differentiable attention mechanism. The controller of the DNC is a variant of...
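    A sketch of the differentiable attention idea, using content-based addressing only (the real DNC combines this with usage and temporal-link mechanisms; names here are illustrative):

    ```python
    import numpy as np

    def content_read(memory, key, beta):
        """Content-based read: soft attention over memory slots (sketch).

        memory: (N, W) matrix of N slots with word size W.
        key:    (W,) lookup key emitted by the controller.
        beta:   scalar key strength sharpening the focus.
        Returns (weights, read_vector); both are differentiable in all inputs.
        """
        eps = 1e-8
        # cosine similarity between the key and every memory slot
        sim = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + eps)
        logits = beta * sim
        logits -= logits.max()
        w = np.exp(logits)
        w /= w.sum()              # softmax attention weights over slots
        return w, w @ memory      # read vector: weighted sum of slots

    mem = np.eye(4)               # 4 one-hot memory slots
    w, r = content_read(mem, key=np.array([0.0, 1.0, 0.0, 0.0]), beta=10.0)
    # with a high key strength the read concentrates on the matching slot 1
    ```

    Because reads and writes are weighted sums rather than discrete lookups, gradients flow through the addressing, so the whole system trains end to end.
    
    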


  • SSD: Single Shot MultiBox Detector

    WHY? The object box proposal process is complicated and slow in object detection pipelines. This paper proposes the Single Shot Detector (SSD) to detect objects with a single neural network. WHAT? SSD produces a fixed-size collection of bounding boxes and scores the presence of class objects in those boxes. The front of SSD is a standard...
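    The "fixed-size collection" comes from predicting, at every cell of a feature map, class scores and box offsets for k default boxes. A shape-only sketch of one such prediction head (the matmul stands in for the paper's small conv filters; weights are untrained):

    ```python
    import numpy as np

    def ssd_head(feature_map, n_classes, k):
        """One SSD prediction head (sketch): per cell, k default boxes,
        each with n_classes scores plus 4 box offsets.

        feature_map: (h, w, c) activations from one scale of the backbone.
        Returns (h*w*k, n_classes + 4) predictions.
        """
        h, w, c = feature_map.shape
        rng = np.random.default_rng(0)
        weights = rng.standard_normal((c, k * (n_classes + 4))) * 0.01  # untrained
        out = feature_map.reshape(h * w, c) @ weights
        return out.reshape(h * w * k, n_classes + 4)

    fm = np.zeros((38, 38, 512))           # conv4_3-sized map in SSD300
    preds = ssd_head(fm, n_classes=21, k=4)
    print(preds.shape)  # (5776, 25): 38*38*4 default boxes, 21 scores + 4 offsets
    ```

    Running heads like this over several feature-map scales, then applying non-maximum suppression, gives detections at multiple object sizes in one forward pass.
    
    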


  • Progressive Growing of GANs for Improved Quality, Stability, and Variation

    WHY? Training GANs on high-resolution images is known to be difficult. WHAT? This paper suggests a new method of training a GAN progressively, from coarse to fine scales. A pair of generator and discriminator is first trained with low-scale real and fake images. As the input image size grows,...
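    When a new, higher-resolution block is added, the paper fades it in smoothly rather than switching abruptly. A minimal sketch of that fade-in (function names are illustrative):

    ```python
    import numpy as np

    def fade_in(low_res_path, new_path, alpha):
        """Blend the upsampled old branch with the new block's output.
        alpha ramps from 0 to 1 during training, gradually handing over.
        """
        return (1.0 - alpha) * low_res_path + alpha * new_path

    def nearest_upsample_2x(img):
        """Nearest-neighbour 2x upsampling of an (h, w) image."""
        return img.repeat(2, axis=0).repeat(2, axis=1)

    old = nearest_upsample_2x(np.ones((4, 4)))   # 8x8 from the stable 4x4 branch
    new = np.zeros((8, 8))                       # freshly added, untrained 8x8 block
    blended = fade_in(old, new, alpha=0.25)
    print(blended[0, 0])  # 0.75: output still dominated by the stable branch
    ```

    Early in each growth phase the network behaves like the already-trained low-resolution model, which is what keeps training stable as resolution increases.
    
    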