Deep Learning Travels
Don't panic

How Does Batch Normalization Help Optimization? (No, It Is Not About Internal Covariate Shift)
WHY? While the effect of batch normalization has been widely demonstrated empirically, its exact mechanism is not yet understood. The commonly cited explanation was internal covariate shift (ICS), meaning the change in the distribution of layer inputs caused by updates to the preceding layers. WHAT? Critic So? Ha, David,...
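For reference, the operation under discussion can be sketched in a few lines: a minimal batch-normalization layer (inference-time scale/shift omitted from training bookkeeping; `gamma`, `beta`, and the toy data here are illustrative, not from the paper).

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the batch dimension, then scale and shift."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=3.0, size=(64, 10))  # a badly-scaled batch
y = batch_norm(x, gamma=np.ones(10), beta=np.zeros(10))
print(y.mean(axis=0).round(6))  # ~0 per feature
print(y.std(axis=0).round(2))   # ~1 per feature
```

Whatever the true mechanism turns out to be, the operation itself is just this per-feature standardization followed by a learned affine transform.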

World Models
WHY? Instead of instantly responding to incoming stimuli, having a model of the environment to make some level of prediction would help performance in reinforcement learning. WHAT? The agent model of this paper consists of three parts: Vision (V), Memory (M), and Controller (C). Since simulating every pixel of the environment is inefficient, a VAE model...
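The wiring of the three parts can be sketched with toy stand-ins (random linear maps here are purely illustrative; the paper's V is a VAE and M is an MDN-RNN, and all dimensions below are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions, chosen for illustration only.
OBS, Z, H, A = 64, 8, 16, 2

Wv = rng.normal(size=(Z, OBS)) * 0.1        # stand-in for V (a VAE in the paper)
Wm = rng.normal(size=(H, Z + H + A)) * 0.1  # stand-in for M (an MDN-RNN in the paper)
Wc = rng.normal(size=(A, Z + H)) * 0.1      # C: a simple linear controller

def step(obs, h, a):
    z = np.tanh(Wv @ obs)                        # V: compress observation to latent z
    h = np.tanh(Wm @ np.concatenate([z, h, a]))  # M: update memory from (z, h, a)
    a = np.tanh(Wc @ np.concatenate([z, h]))     # C: act from (z, h)
    return h, a

h, a = np.zeros(H), np.zeros(A)
for _ in range(5):
    obs = rng.normal(size=OBS)  # fake pixels; the real V encodes rendered frames
    h, a = step(obs, h, a)
print(a.shape)  # (2,)
```

The point of the split is that V and M carry almost all of the parameters, leaving C small enough to train with simple evolution strategies.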

Gradient Estimation Using Stochastic Computation Graphs
WHY? Many machine learning problems involve loss functions that contain random variables. To perform backpropagation, estimating the gradient of the loss function is required. WHAT? This paper formalizes the computation of the gradient of a loss function with computation graphs. Assume we want to compute $\nabla_\theta \, \mathbb{E}_{x \sim p(x;\theta)}[f(x)]$. There are two different ways...
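The two standard estimators can be compared numerically on a case with a known answer: for $f(x)=x^2$ with $x \sim \mathcal{N}(\mu, 1)$, the true gradient is $2\mu$ (this toy example is mine, not the paper's).

```python
import numpy as np

rng = np.random.default_rng(0)
mu, n = 1.5, 200_000
# Goal: estimate d/dmu E_{x~N(mu,1)}[x^2]; the closed form is 2*mu = 3.0.

# Score-function (REINFORCE) estimator: E[f(x) * d/dmu log p(x; mu)],
# where d/dmu log N(x; mu, 1) = x - mu.
x = rng.normal(mu, 1.0, size=n)
score_grad = np.mean(x**2 * (x - mu))

# Pathwise (reparameterization) estimator: write x = mu + eps with eps ~ N(0,1),
# then differentiate through the sample: d(x^2)/dmu = 2*(mu + eps).
eps = rng.normal(size=n)
path_grad = np.mean(2 * (mu + eps))

print(round(score_grad, 2), round(path_grad, 2))  # both near 3.0
```

Both are unbiased, but the pathwise estimator typically has much lower variance when it is applicable (i.e., when the sampling can be reparameterized and $f$ is differentiable).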

Amortized Inference in Probabilistic Reasoning
WHY? Former studies on probabilistic reasoning assume that reasoning is memoryless, meaning all inferences occur independently without the reuse of previous computation. WHAT? This paper argues that the probabilistic reasoning process of humans is amortized inference. When some queries are parts of complex queries, the brain memorizes...
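The computational idea of amortization can be illustrated with plain memoization (a sketch of mine, not the paper's model): an expensive inference runs once per distinct sub-query, and later complex queries reuse the stored answer.

```python
from functools import lru_cache

# Toy "reasoner": answering a query requires expensive work, but sub-queries
# repeat across complex queries, so caching amortizes the cost over queries.
calls = {"count": 0}

@lru_cache(maxsize=None)
def infer(query):
    calls["count"] += 1       # track how often real inference actually runs
    return hash(query) % 100  # stand-in for an expensive inference result

# A complex query decomposes into sub-queries, some shared with earlier ones.
for q in ["A", "B", "A and B", "A", "B"]:
    infer(q)
print(calls["count"])  # 3: repeated "A" and "B" were answered from memory
```

A memoryless reasoner would pay the inference cost five times here; an amortized one pays it only three times, which is the kind of reuse signature the paper looks for in human responses.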

Vanilla Sky (2001)
Rating: 4. If the ending had been left recklessly open it would have been quite a headache, so I was grateful that it settled things with a considerate explanation. Even so, the ending where reality is chosen is a bit disappointing. The value of reality comes from the uneasiness one feels toward what may be false, and from the high plausibility that one day reality must be faced. If neither of these is presumed, is there really a reason to choose a painful reality?