Deep Learning Travels
Don't panic

Learning to Reconstruct Shapes from Unseen Classes
WHY? Reconstructing a 3D shape from a single image is challenging. Training end-to-end to predict 3D shape from a 2D image often ends up overfitting and fails to generalize to unseen shapes. WHAT? This paper proposes the Generalizable Reconstruction (GenRe) algorithm, which reconstructs 3D shape with a class-agnostic shape prior. Instead of training a neural network...

Isolating Sources of Disentanglement in VAEs
WHY? The VAE is known to disentangle latent variables. WHAT? This paper further decomposes the KL divergence term of the β-VAE objective (TC-Decomposition). The first term on the right-hand side is referred to as the index-code mutual information (MI), the mutual information between the data and the latent variable. The second term is...
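For reference, the TC-Decomposition splits the averaged KL term into three parts, writing q(z) for the aggregate posterior E_{p(x)}[q(z|x)] and z_j for the j-th latent dimension (a sketch of the standard β-TC-VAE decomposition, not quoted from the paper):

```latex
\mathbb{E}_{p(x)}\!\left[ D_{\mathrm{KL}}\big(q(z \mid x)\,\|\,p(z)\big) \right]
= \underbrace{I_q(x; z)}_{\text{index-code MI}}
+ \underbrace{D_{\mathrm{KL}}\Big(q(z)\,\Big\|\,\textstyle\prod_j q(z_j)\Big)}_{\text{total correlation}}
+ \underbrace{\sum_j D_{\mathrm{KL}}\big(q(z_j)\,\|\,p(z_j)\big)}_{\text{dimension-wise KL}}
```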

Relationships from Entity Stream
WHY? The Relational Network (RN) showed great performance on relational reasoning, but its computation and memory consumption grow quadratically with the number of objects due to the fully connected pairing process. WHAT? Using the features of the last CNN layer as objects is the same as in the RN. Instead of pairing these objects, an RNN is used...
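The cost contrast can be sketched as follows. This is a minimal illustration, not the paper's architecture: the object vectors, the plain tanh RNN cell, and all sizes are toy assumptions.

```python
import numpy as np

def rn_pairs(objects):
    """Relational Network style: form all ordered pairs -> O(n^2) work."""
    return [(o_i, o_j) for o_i in objects for o_j in objects]

def rnn_over_stream(objects, W_h, W_x):
    """Entity-stream style (sketch): one RNN step per object -> O(n) work."""
    h = np.zeros(W_h.shape[0])
    for x in objects:
        h = np.tanh(W_h @ h + W_x @ x)  # plain tanh RNN cell, toy stand-in
    return h

# toy setup: n objects of dimension d, hidden size k
n, d, k = 8, 4, 6
rng = np.random.default_rng(0)
objects = [rng.standard_normal(d) for _ in range(n)]
W_h, W_x = rng.standard_normal((k, k)), rng.standard_normal((k, d))

print(len(rn_pairs(objects)))                     # 64 pairs, quadratic in n
print(rnn_over_stream(objects, W_h, W_x).shape)   # (6,), after n sequential steps
```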

Linguistic Regularities in Sparse and Explicit Word Representations
WHY? The vector offset method is used for the word analogy task. WHAT? The objective function of the vector offset method can be interpreted as similarity in direction (PairDirection). Alternatively, this objective can be reinterpreted as the addition of three cosine similarities (3CosAdd). These two objectives show different performance. Since PairDirection does not take into account...
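A toy sketch of the two objectives for the analogy "a is to a* as b is to __", using made-up 3-d vectors rather than real embeddings:

```python
import numpy as np

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# hypothetical toy embeddings for the analogy "man : king :: woman : ?"
vocab = {
    "man":   np.array([1.0, 0.0, 0.1]),
    "king":  np.array([1.0, 1.0, 0.1]),
    "woman": np.array([0.0, 0.0, 1.0]),
    "queen": np.array([0.0, 1.0, 1.0]),
    "apple": np.array([0.5, 0.2, 0.3]),
}

def three_cos_add(a, a_star, b):
    # 3CosAdd: argmax_d  cos(d, b) - cos(d, a) + cos(d, a*)
    cands = {w: v for w, v in vocab.items() if w not in (a, a_star, b)}
    return max(cands, key=lambda w: cos(cands[w], vocab[b])
                                    - cos(cands[w], vocab[a])
                                    + cos(cands[w], vocab[a_star]))

def pair_direction(a, a_star, b):
    # PairDirection: argmax_d  cos(d - b, a* - a), compares offset directions only
    offset = vocab[a_star] - vocab[a]
    cands = {w: v for w, v in vocab.items() if w not in (a, a_star, b)}
    return max(cands, key=lambda w: cos(cands[w] - vocab[b], offset))

print(three_cos_add("man", "king", "woman"))   # queen
print(pair_direction("man", "king", "woman"))  # queen
```

On this toy vocabulary the two objectives happen to agree; the point of the comparison is that they can diverge on real embeddings, since PairDirection scores only the direction of the offset.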

Linguistic Regularities in Continuous Space Word Representations
WHY? Vector space word representations capture syntactic and semantic regularities in language well. WHAT? To test how well continuous word representations capture regularities, this paper introduces a relation-specific vector offset method. All analogy tasks can be formulated as “a is to b as c is to __”. The syntactic test asks grammatical...
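The basic vector offset test can be sketched like this (toy 3-d vectors, purely illustrative): form y = x_b - x_a + x_c and return the vocabulary word closest to y by cosine similarity.

```python
import numpy as np

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# hypothetical toy embeddings; "man is to king as woman is to __"
vocab = {
    "man":   np.array([1.0, 0.0, 0.1]),
    "king":  np.array([1.0, 1.0, 0.1]),
    "woman": np.array([0.0, 0.0, 1.0]),
    "queen": np.array([0.0, 1.0, 1.0]),
    "apple": np.array([0.5, 0.2, 0.3]),
}

def vector_offset(a, b, c):
    # y = x_b - x_a + x_c; the answer is the nearest remaining word by cosine
    y = vocab[b] - vocab[a] + vocab[c]
    return max((w for w in vocab if w not in (a, b, c)),
               key=lambda w: cos(y, vocab[w]))

print(vector_offset("man", "king", "woman"))  # queen
```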