• Adversarial Variational Bayes: Unifying Variational Autoencoder and Generative Adversarial Networks

    WHY? In the VAE framework, the quality of generation relies on the expressiveness of the inference model. Restricting the latent variables to a Gaussian distribution via the KL divergence term limits the model's expressiveness. WHAT? Adversarial Variational Bayes applies a GAN loss to the VAE framework. AVB differs from a standard VAE in 2 ways:...
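    The core trick in AVB is replacing the intractable term log q(z|x) - log p(z) in the ELBO with a discriminator T(x, z), whose optimum equals that log density ratio. A minimal numpy sketch of this identity, using 1-D Gaussians (illustrative parameters, not from the paper) where the optimal discriminator is known in closed form:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Inference model q(z|x): a Gaussian with illustrative mean/std.
    mu_q, sd_q = 0.5, 0.8
    # Prior p(z): a standard normal.
    mu_p, sd_p = 0.0, 1.0

    def log_normal(z, mu, sd):
        return -0.5 * np.log(2 * np.pi * sd**2) - (z - mu)**2 / (2 * sd**2)

    def T_star(z):
        # Optimal discriminator output: the log density ratio
        # log q(z|x) - log p(z).
        return log_normal(z, mu_q, sd_q) - log_normal(z, mu_p, sd_p)

    # Monte Carlo estimate of the KL term via the "discriminator".
    z = rng.normal(mu_q, sd_q, size=200_000)
    kl_mc = T_star(z).mean()

    # Closed-form KL(q || p) between two Gaussians, for comparison.
    kl_exact = (np.log(sd_p / sd_q)
                + (sd_q**2 + (mu_q - mu_p)**2) / (2 * sd_p**2) - 0.5)

    print(kl_mc, kl_exact)  # the two estimates agree closely
    ```

    In the actual method, T is a neural network trained with a logistic (GAN) loss to separate pairs (x, z~q(z|x)) from (x, z~p(z)); the closed-form T* above is just what that training converges to.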


  • Hidden Dragon (2000)

    Rating: 4. A good example showing that cutting-edge technology isn't the only thing needed to make a beautiful film. Even wire-action can avoid looking tacky.


  • April Story (1998)

    Rating: 4. The excitement and confusion of a new beginning. But given the running time, why is the classic-film screening scene in the middle so long? Maybe it means everyone spends April fumbling around.


  • Massively Parallel Methods for Deep Reinforcement Learning

    WHY? A single agent usually takes too long to train. WHAT? Stacking experience from multiple agents speeds up training. In addition, parallel gradient calculation can be used if multiple GPUs are available. The paper calls this framework Gorila (General Reinforcement Learning Architecture). Gorila consists of 4 parts: Actors,...
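    The experience-stacking idea can be sketched in a few lines: several actors push transitions into a shared replay memory, from which a learner samples mini-batches. A toy stdlib-only sketch (the 1-D random-walk environment and all names are illustrative; the paper distributes actors across machines rather than running them in one process):

    ```python
    import random
    from collections import deque

    random.seed(0)

    replay = deque(maxlen=10_000)  # shared replay memory

    def actor(steps):
        """A toy actor: acts randomly in a 1-D random-walk 'environment'."""
        s = 0
        for _ in range(steps):
            a = random.choice([-1, +1])
            s2 = s + a
            r = 1.0 if s2 == 0 else 0.0  # toy reward for returning to origin
            replay.append((s, a, r, s2))  # stack experience into shared memory
            s = s2

    NUM_ACTORS = 4
    for _ in range(NUM_ACTORS):  # 4 actors pool their experience
        actor(steps=250)

    # The learner samples a mini-batch from the pooled experience.
    batch = random.sample(replay, 32)
    print(len(replay), len(batch))  # 1000 32
    ```

    The point is that replay fills NUM_ACTORS times faster than with a single actor; in Gorila the learners additionally compute gradients in parallel against a central parameter server.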


  • Spatial Transformer Networks

    WHY? Standard CNNs are not spatially invariant. WHAT? The Spatial Transformer module can make an existing CNN model invariant to cropping, translation, scale, rotation, and skew without greatly increasing the computational cost. The ST module consists of 3 parts: a localisation network, a grid generator, and a sampler. The localisation network takes an input feature map (...
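    The grid generator and sampler can be sketched in numpy (the localisation network that predicts the affine parameters theta is omitted; theta is given directly). The grid generator maps each output pixel's normalised coordinates through the affine transform, and the sampler bilinearly interpolates the input at those source locations; an identity theta should reproduce the input:

    ```python
    import numpy as np

    def affine_grid(theta, H, W):
        # Normalised target coords in [-1, 1], transformed by theta (2x3).
        ys, xs = np.meshgrid(np.linspace(-1, 1, H),
                             np.linspace(-1, 1, W), indexing="ij")
        coords = np.stack([xs.ravel(), ys.ravel(), np.ones(H * W)])  # (3, HW)
        src = theta @ coords                      # (2, HW) source coords
        return src[0].reshape(H, W), src[1].reshape(H, W)

    def bilinear_sample(img, xs, ys):
        H, W = img.shape
        # Map normalised coords back to pixel indices.
        x = (xs + 1) * (W - 1) / 2
        y = (ys + 1) * (H - 1) / 2
        x0 = np.clip(np.floor(x).astype(int), 0, W - 2)
        y0 = np.clip(np.floor(y).astype(int), 0, H - 2)
        wx, wy = x - x0, y - y0
        # Weighted sum of the four neighbouring pixels.
        return ((1 - wy) * (1 - wx) * img[y0, x0]
                + (1 - wy) * wx * img[y0, x0 + 1]
                + wy * (1 - wx) * img[y0 + 1, x0]
                + wy * wx * img[y0 + 1, x0 + 1])

    img = np.arange(16, dtype=float).reshape(4, 4)
    identity = np.array([[1.0, 0.0, 0.0],
                         [0.0, 1.0, 0.0]])
    xs, ys = affine_grid(identity, 4, 4)
    out = bilinear_sample(img, xs, ys)
    print(np.allclose(out, img))  # identity theta reproduces the input
    ```

    Because bilinear sampling is (sub)differentiable in both theta and the input, gradients flow back through the sampler to the localisation network, which is what lets the whole module train end-to-end.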