• Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks

    WHY? Batch normalization is known as a good method to stabilize the optimization of neural networks by reducing internal covariate shift. However, batch normalization inherently depends on the minibatch, which impedes its use in recurrent models. WHAT? The core idea of weight normalization is to reparameterize the weight to decompose it...
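    As a minimal sketch of the idea (not the paper's reference code): the weight of a linear layer is decomposed into a direction v and a per-unit scalar magnitude g, so that w = g * v / ||v||. PyTorch also ships this as torch.nn.utils.weight_norm; the layer sizes and initialization below are illustrative assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class WeightNormLinear(nn.Module):
        # Linear layer reparameterized as w = g * v / ||v||:
        # v fixes the direction of each weight vector, g its magnitude,
        # so gradient descent can adjust the two independently.
        def __init__(self, in_features, out_features):
            super().__init__()
            self.v = nn.Parameter(torch.randn(out_features, in_features) * 0.05)
            self.g = nn.Parameter(torch.ones(out_features))
            self.b = nn.Parameter(torch.zeros(out_features))

        def forward(self, x):
            w = self.g.unsqueeze(1) * self.v / self.v.norm(dim=1, keepdim=True)
            return F.linear(x, w, self.b)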


  • [Pytorch] Alphachu (Ape-X DQN)

    Alphachu: an Ape-X DQN implementation for Pikachu Volleyball [Demo] [Paper]. Training agents to learn how to play Pikachu Volleyball. The architecture is based on the Ape-X DQN from the paper. The game ships as an exe file, which makes the whole problem much more complicated than other Atari games. I built a Python environment to...
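    Since the excerpt is cut off, here is only a rough sketch of what wrapping a standalone exe game in a gym-style environment typically looks like; capture_screen, press_key, and the score-parsing step are hypothetical placeholders, not functions from the actual repository.

    import numpy as np

    class PikachuVolleyballEnv:
        # Gym-style interface around an external game process (sketch).
        # capture_screen() / press_key() stand in for whatever
        # screen-grabbing and input-injection the real project uses.
        ACTIONS = ["left", "right", "up", "attack", "noop"]

        def reset(self):
            press_key("restart")               # hypothetical input injection
            return self._observe()

        def step(self, action):
            press_key(self.ACTIONS[action])
            obs = self._observe()
            reward, done = self._score_from_frame(obs)  # hypothetical: parse score pixels
            return obs, reward, done, {}

        def _observe(self):
            frame = capture_screen()           # hypothetical screen grab
            return np.asarray(frame)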


  • [Pytorch] VAE-NF

    Pytorch implementation of Variational Inference with Normalizing Flows. Code: https://github.com/Lyusungwon/generative_models_pytorch. Reference: https://github.com/ex4sperans/variational-inference-with-normalizing-flows
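    The central object in that paper is the planar flow, f(z) = z + u * tanh(w^T z + b). A minimal sketch of one flow step follows; dimensions and initialization are my own assumptions, and the invertibility constraint u^T w >= -1 from the paper is omitted for brevity.

    import torch
    import torch.nn as nn

    class PlanarFlow(nn.Module):
        # One planar flow step f(z) = z + u * tanh(w^T z + b)
        # (Rezende & Mohamed, 2015).
        def __init__(self, dim):
            super().__init__()
            self.u = nn.Parameter(torch.randn(dim) * 0.01)
            self.w = nn.Parameter(torch.randn(dim) * 0.01)
            self.b = nn.Parameter(torch.zeros(1))

        def forward(self, z):
            # z: (batch, dim)
            lin = z @ self.w + self.b                       # (batch,)
            f = z + self.u * torch.tanh(lin).unsqueeze(1)
            # log|det df/dz| = log|1 + u^T psi(z)|, psi(z) = (1 - tanh^2) * w
            psi = (1 - torch.tanh(lin) ** 2).unsqueeze(1) * self.w
            log_det = torch.log(torch.abs(1 + psi @ self.u) + 1e-8)
            return f, log_det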


  • Spectral Normalization for Generative Adversarial Networks

    WHY? The largest drawback of training a Generative Adversarial Network (GAN) is its instability. In particular, the power of the discriminator greatly affects the performance of the GAN. This paper suggests weakening the discriminator by restricting its functional space in order to stabilize training. Note: a matrix norm can be defined in various...
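    Concretely, spectral normalization divides each weight matrix by its largest singular value, estimated cheaply with a step of power iteration per update. A minimal sketch of the estimate follows (PyTorch provides a ready-made version as torch.nn.utils.spectral_norm); the matrix shapes here are illustrative.

    import torch
    import torch.nn.functional as F

    def spectral_norm(W, u, n_iters=1):
        # Power iteration to estimate the largest singular value sigma of W;
        # the vector u is carried over between training steps so one
        # iteration per update suffices in practice.
        for _ in range(n_iters):
            v = F.normalize(W.t() @ u, dim=0)
            u = F.normalize(W @ v, dim=0)
        sigma = torch.dot(u, W @ v)
        return sigma, u

    # Usage: the discriminator layer uses W / sigma in place of W,
    # capping its Lipschitz constant at (roughly) 1.
    W = torch.randn(64, 128)
    u = torch.randn(64)
    sigma, u = spectral_norm(W, u)
    W_sn = W / sigma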


  • A Two-Step Disentanglement Method

    WHY? This paper aims to disentangle the label-related and label-unrelated information in data. The model in this paper is simpler and more effective than that of Disentangling Factors of Variation in Deep Representations Using Adversarial Training. WHAT? Let S represent the labeled information and Z represent the rest. The...
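    The excerpt stops before the method, so the following is only a rough sketch of a generic two-branch setup consistent with the S/Z split above, not the paper's exact architecture: one encoder produces S via a supervised classifier, a second encoder produces Z, a decoder reconstructs the input from (S, Z), and an adversarial classifier penalizes any label information left in Z. All module sizes are assumptions.

    import torch
    import torch.nn as nn

    class TwoBranchDisentangler(nn.Module):
        # Sketch: S carries the label, Z carries everything else.
        def __init__(self, x_dim=784, s_dim=10, z_dim=32):
            super().__init__()
            self.enc_s = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(),
                                       nn.Linear(256, s_dim))   # trained as a classifier
            self.enc_z = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(),
                                       nn.Linear(256, z_dim))
            self.dec = nn.Sequential(nn.Linear(s_dim + z_dim, 256), nn.ReLU(),
                                     nn.Linear(256, x_dim))     # reconstructs x from (S, Z)
            self.adv = nn.Linear(z_dim, s_dim)  # adversary: tries to read the label from Z

        def forward(self, x):
            s, z = self.enc_s(x), self.enc_z(x)
            x_hat = self.dec(torch.cat([s, z], dim=1))
            return s, z, x_hat, self.adv(z)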