WHY?
Conventional CNN models fully activate (fully distributed features) for a single input, which leads to poor performance on invariant relational reasoning.
Continue reading
WHY?
Hard attention is relatively less explored than soft attention.
Continue reading
WHY?
Relational information is important in some reinforcement learning tasks.
Continue reading
WHY?
Most VQA algorithms are neither transparent in their reasoning nor robust on complex reasoning tasks.
Continue reading
WHY?
Constructing a 3D shape from a single image is challenging. Models trained end-to-end to predict 3D shape from a 2D image often overfit and fail to generalize to other shapes.
Continue reading
WHY?
β-VAE is known to disentangle latent variables.
Continue reading
WHY?
Relational Networks showed great performance in relational reasoning, but computation and memory consumption grow quadratically with the number of objects due to the fully connected pairing process.
Continue reading
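The quadratic cost above comes from scoring every ordered pair of objects. A minimal sketch (the pair function `g` is a stand-in here; in the actual Relational Network it is an MLP):

```python
import numpy as np

def relational_sum(objects, g):
    """Relational Network pairing step: apply g to every ordered
    pair (o_i, o_j) and sum the results -> O(n^2) g-evaluations."""
    total = 0.0
    pairs = 0
    for o_i in objects:
        for o_j in objects:
            total += g(o_i, o_j)
            pairs += 1
    return total, pairs

# Stand-in for the pair MLP g (illustrative, not the paper's network).
g = lambda a, b: np.concatenate([a, b]).sum()

objects = [np.ones(3) for _ in range(8)]
agg, pairs = relational_sum(objects, g)
print(pairs)  # 64: 8 objects already require 8*8 pair evaluations
```

Doubling the number of objects quadruples `pairs`, which is exactly the scaling problem the follow-up work tries to avoid.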
WHY?
The vector offset method is used for the word analogy task.
Continue reading
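The vector offset method answers "a is to b as c is to ?" by computing b − a + c and returning the nearest embedding. A toy sketch with made-up 2-D vectors (real systems use trained embeddings such as word2vec):

```python
import numpy as np

# Illustrative 2-D embeddings; values are hand-picked, not trained.
emb = {
    "king":  np.array([0.9, 0.8]),
    "man":   np.array([0.5, 0.2]),
    "woman": np.array([0.5, 0.9]),
    "queen": np.array([0.9, 1.5]),
}

def analogy(a, b, c, emb):
    """Solve a : b :: c : ? via the offset emb[b] - emb[a] + emb[c]."""
    target = emb[b] - emb[a] + emb[c]
    cos = lambda u, v: u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    # Exclude the query words themselves, as is standard practice.
    candidates = {w: v for w, v in emb.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cos(candidates[w], target))

print(analogy("man", "king", "woman", emb))  # -> queen
```

Here king − man + woman lands exactly on the "queen" vector, the classic example of the method.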