## WHY?

While conventional deep learning models perform well at inference over individual entities, few models focus on inference over the relations among entities. The Relation Network aimed to capture relational information from images. In this paper, the relational recurrent neural network improves on earlier memory-augmented neural network models by capturing relations among memories.

## WHAT?

Assume that we have a memory matrix M. To capture interactions among these memories, Multi-Head Dot Product Attention (MHDPA), also known as self-attention, is used.

Linear projections produce queries (Q), keys (K), and values (V) for each memory. The dot products of queries and keys are scaled and passed through a softmax to obtain attention weights, and the weighted average of the values becomes the next memory.
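A minimal single-head sketch of this attention step in NumPy; the memory matrix `M` and the projection weights `Wq`, `Wk`, `Wv` are illustrative stand-ins, not the paper's parameters:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(M, Wq, Wk, Wv):
    Q, K, V = M @ Wq, M @ Wk, M @ Wv            # linear projections per memory
    scores = Q @ K.T / np.sqrt(K.shape[-1])     # scaled dot products of queries and keys
    A = softmax(scores, axis=-1)                # attention weights, each row sums to 1
    return A @ V                                # weighted average of values = next memory

rng = np.random.default_rng(0)
N, F = 4, 8                                     # N memories, F features each (toy sizes)
M = rng.normal(size=(N, F))
Wq, Wk, Wv = (rng.normal(size=(F, F)) for _ in range(3))
M_next = attend(M, Wq, Wk, Wv)                  # same shape as M
```

Each row of the attention matrix is a distribution over all memories, so every memory's update can draw on every other memory.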

This constitutes one head of MHDPA. With h heads, the F-dimensional memories are projected to F/h dimensions for each head, and the head outputs are concatenated to form the next memory. New inputs can be incorporated into memory simply by appending them as extra rows, without adding parameters.
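A hedged NumPy sketch of the multi-head version: each of `h` heads works in an F/h-dimensional slice of the projections, and the head outputs are concatenated back to F dimensions. The weight layout (one F×F matrix sliced per head) is an assumption for brevity:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mhdpa(M, Wq, Wk, Wv, h):
    N, F = M.shape
    d = F // h                                   # per-head dimension F/h
    heads = []
    for i in range(h):
        s = slice(i * d, (i + 1) * d)            # this head's slice of each projection
        Q, K, V = M @ Wq[:, s], M @ Wk[:, s], M @ Wv[:, s]
        A = softmax(Q @ K.T / np.sqrt(d), axis=-1)
        heads.append(A @ V)                      # (N, d) output per head
    return np.concatenate(heads, axis=-1)        # concatenate heads back to (N, F)

rng = np.random.default_rng(0)
N, F, h = 4, 8, 2
M = rng.normal(size=(N, F))
Wq, Wk, Wv = (rng.normal(size=(F, F)) for _ in range(3))
M_next = mhdpa(M, Wq, Wk, Wv, h)                 # same shape as M
```

Because the update is defined over rows of M, appending a new input as an extra row lets it participate in attention with no additional parameters.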

The resulting memory can be used as the memory cell of an LSTM. A row-wise MLP with layer normalization is used for $g_{\psi}$.
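A hedged sketch of plugging the attended memory into an LSTM-style cell: $g_{\psi}$ is a row-wise two-layer MLP with layer normalization, and input/forget gates blend the old memory with the update. The weight names and the exact gate wiring here are illustrative assumptions, not the paper's code:

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    mu = x.mean(axis=-1, keepdims=True)
    sd = x.std(axis=-1, keepdims=True)
    return (x - mu) / (sd + eps)

def g_psi(x, W1, W2):
    # row-wise MLP with layer norm: each memory row is transformed independently
    return layer_norm(np.maximum(x @ W1, 0.0) @ W2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def memory_update(M, M_attended, W1, W2, Wf, Wi):
    M_tilde = g_psi(M_attended, W1, W2)          # candidate next memory
    f = sigmoid(M @ Wf)                          # forget gate, computed per row
    i = sigmoid(M @ Wi)                          # input gate, computed per row
    return f * M + i * np.tanh(M_tilde)          # LSTM-style gated blend

rng = np.random.default_rng(0)
N, F = 4, 8
M = rng.normal(size=(N, F))
M_att = rng.normal(size=(N, F))                  # stand-in for the MHDPA output
W1, W2, Wf, Wi = (rng.normal(size=(F, F)) for _ in range(4))
M_next = memory_update(M, M_att, W1, W2, Wf, Wi)
```

Applying $g_{\psi}$ and the gates row-wise keeps the cell independent of the number of memories.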

## SO?

The RRN achieved strong performance on the Nth-farthest vector task, program evaluation, reinforcement learning tasks, and language modeling.

## Critique

Elegant, but it is not clear why it works so well. More experiments and analysis are needed.

Santoro, Adam, et al. “Relational recurrent neural networks.” arXiv preprint arXiv:1806.01822 (2018).