  • Learning to Reconstruct Shapes from Unseen Classes

    WHY? Reconstructing a 3D shape from a single image is challenging: networks trained end-to-end to map a 2D image to a 3D shape tend to overfit to the training classes and generalize poorly to unseen ones. WHAT? This paper proposes the Generalizable Reconstruction (GenRe) algorithm, which reconstructs 3D shape using a class-agnostic shape prior. Instead of training a neural network...
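
    As a rough sketch of how the pipeline factors reconstruction into generalizable steps (2.5D depth estimation, spherical-map inpainting, voxel refinement), here is a minimal PyTorch skeleton; the stub modules, shapes, and projection helpers are illustrative assumptions, not the authors' code:

```python
import torch
import torch.nn as nn

class ConvStub(nn.Module):
    """Placeholder encoder-decoder; GenRe uses deeper U-Net-style networks."""
    def __init__(self, c_in, c_out, conv=nn.Conv2d):
        super().__init__()
        self.net = nn.Sequential(conv(c_in, 16, 3, padding=1), nn.ReLU(),
                                 conv(16, c_out, 3, padding=1))

    def forward(self, x):
        return self.net(x)

depth_net   = ConvStub(3, 1)                  # RGB image -> 2.5D depth sketch
inpaint_net = ConvStub(1, 1)                  # partial -> completed spherical map
refine_net  = ConvStub(1, 1, conv=nn.Conv3d)  # coarse voxels -> refined voxels

def reconstruct(rgb, depth_to_sphere, sphere_to_voxel):
    """rgb: (B, 3, H, W). The two projection arguments stand for fixed
    geometric reprojections (camera/spherical); they are not learned, which
    is what keeps the learned shape prior class-agnostic."""
    depth = depth_net(rgb)                    # per-pixel depth (2.5D sketch)
    partial = depth_to_sphere(depth)          # geometric projection
    full = inpaint_net(partial)               # learned shape completion
    coarse = sphere_to_voxel(full)            # geometric projection
    return refine_net(coarse)                 # final 3D occupancy grid
```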


  • Isolating Sources of Disentanglement in VAEs

    WHY? β-VAE is known to disentangle latent variables. WHAT? This paper further decomposes the KL divergence term of the β-VAE objective (the TC-decomposition). The first term on the right-hand side is the index-code mutual information (MI), the mutual information between the data and the latent variables. The second term is...
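
    For reference, the decomposition the summary walks through can be written as follows (transcribed from the paper, with n an index over data points and z_j the j-th latent dimension):

```latex
\mathbb{E}_{p(n)}\!\left[\operatorname{KL}\!\left(q(z \mid n) \,\|\, p(z)\right)\right]
  = \underbrace{\operatorname{KL}\!\left(q(z, n) \,\|\, q(z)\,p(n)\right)}_{\text{(i) index-code MI } I_q(z;n)}
  + \underbrace{\operatorname{KL}\!\Big(q(z) \,\Big\|\, \textstyle\prod_j q(z_j)\Big)}_{\text{(ii) total correlation } \operatorname{TC}(z)}
  + \underbrace{\textstyle\sum_j \operatorname{KL}\!\left(q(z_j) \,\|\, p(z_j)\right)}_{\text{(iii) dimension-wise KL}}
```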


  • Relationships from Entity Stream

    WHY? The Relation Network (RN) showed great performance on relational reasoning, but its computation and memory consumption grow quadratically with the number of objects due to the fully connected pairing process. WHAT? As in RN, the cells of the last CNN feature map are treated as objects. Instead of pairing these objects, an RNN is used...
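
    To make the complexity contrast concrete, here is a minimal PyTorch sketch (feature-map sizes and hidden dimensions are illustrative assumptions, not the paper's configuration):

```python
import torch
import torch.nn as nn

B, C, H, W = 8, 64, 8, 8                     # CNN feature map: N = H*W objects
feats = torch.randn(B, C, H, W)
objects = feats.flatten(2).transpose(1, 2)   # (B, N, C)
N = objects.size(1)

# Relation Network style: score every ordered pair -> O(N^2) g-network calls.
g = nn.Sequential(nn.Linear(2 * C, 128), nn.ReLU(), nn.Linear(128, 128))
pairs = torch.cat([objects.unsqueeze(2).expand(B, N, N, C),
                   objects.unsqueeze(1).expand(B, N, N, C)], dim=-1)
rn_out = g(pairs).sum(dim=(1, 2))            # (B, 128); N^2 pair terms

# Entity-stream style: feed the same objects sequentially to an RNN -> O(N).
rnn = nn.LSTM(input_size=C, hidden_size=128, batch_first=True)
_, (h, _) = rnn(objects)
stream_out = h[-1]                           # (B, 128); one pass over N objects
```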


  • Linguistic Regularities in Sparse and Explicit Word Representations

    WHY? The vector offset method is used for the word analogy task. WHAT? The objective function of the vector offset method can be interpreted as similarity in direction (PairDirection). Alternatively, this objective can be re-interpreted as the sum of three cosine similarities (3CosAdd). These two objectives show different performance. Since PairDirection does not take into account...
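
    A small NumPy sketch of the two objectives for an analogy "a is to a* as b is to __" (the vocabulary dictionary and helper names are placeholders of mine):

```python
import numpy as np

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def pair_direction(a, a_star, b, b_star):
    # Only the direction of the offset matters, not where the pairs sit.
    return cos(a_star - a, b_star - b)

def cos_add_3(a, a_star, b, b_star):
    # 3CosAdd: cos(b*, b - a + a*) expanded into three cosine terms.
    return cos(b_star, b) - cos(b_star, a) + cos(b_star, a_star)

def solve(objective, vocab_vecs, a_w, a_star_w, b_w):
    """Return the word maximizing the objective, excluding the query words."""
    a, a_star, b = vocab_vecs[a_w], vocab_vecs[a_star_w], vocab_vecs[b_w]
    candidates = {w: objective(a, a_star, b, v)
                  for w, v in vocab_vecs.items()
                  if w not in (a_w, a_star_w, b_w)}
    return max(candidates, key=candidates.get)
```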


  • Linguistic Regularities in Continuous Space Word Representations

    WHY? Vector-space word representations capture syntactic and semantic regularities in language well. WHAT? To test how well continuous word representations capture these regularities, this paper introduces a relation-specific vector offset method. All the analogy tasks can be formulated as "a is to b as c is to __". The syntactic test asks grammatical...
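
    In this framing the offset method reduces to a single nearest-neighbor query in embedding space (writing x_w for the vector of word w, a notation of mine):

```latex
w^{*} = \arg\max_{w \in V \setminus \{a, b, c\}} \cos\!\left(x_w,\; x_b - x_a + x_c\right)
```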