This paper first proves that the expressiveness of a language model is restricted by the softmax layer, and then suggests a way to overcome this limit.
The last part of a language model usually consists of a softmax layer applied to the product of a context vector (h) and a word embedding (w).
This paper formulates language modeling as a matrix factorization problem. To do this, three matrices are defined: the context vectors (H), the word embeddings (W), and the log probabilities of the true data distribution (A).
Also, we can define F(A), the set of matrices formed by applying an arbitrary row-wise shift to A, i.e., adding a constant to each row.
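Why row-wise shifts? Softmax is invariant to adding a constant within each row, so every matrix in F(A) recovers the same distribution. A minimal numpy sketch (the matrices here are made-up toy values, not from the paper):

```python
import numpy as np

def softmax(x):
    # numerically stable row-wise softmax
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# toy "true" log-probability matrix A: 2 contexts, 3 words
A = np.log(np.array([[0.2, 0.3, 0.5],
                     [0.6, 0.1, 0.3]]))
shift = np.array([[1.0], [-2.0]])  # arbitrary constant per row
A_shifted = A + shift              # another member of F(A)

# Both matrices yield exactly the same distribution under softmax.
assert np.allclose(softmax(A), softmax(A_shifted))
```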
We can derive two properties of this set: F(A) contains exactly the logit matrices that recover the true data distribution, and all matrices in F(A) have nearly the same rank, with the maximum rank difference being 1. If we want HW to be in F(A), HW must have rank as large as that of A. However, the rank of HW is strictly upper-bounded by the embedding size d.
This proves the softmax bottleneck: the softmax layer does not have the capacity to express the true data distribution when the dimension d is smaller than the rank of A.
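The rank bound is easy to check numerically. In this sketch (sizes are illustrative, not the paper's), the logit matrix is N contexts by M vocabulary words, but its rank can never exceed the inner dimension d:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, d = 50, 1000, 32            # contexts, vocab size, embedding dim
H = rng.standard_normal((N, d))   # context vectors
W = rng.standard_normal((M, d))   # word embeddings

logits = H @ W.T                  # N x M logit matrix
# rank(HW^T) <= d: a product's rank is bounded by the inner dimension,
# far below min(N, M) when d is small.
print(np.linalg.matrix_rank(logits), "<=", d)
```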
To solve this problem, the paper suggests Mixture of Softmaxes (MoS), which has improved expressiveness. Since the resulting log-probability matrix is a nonlinear function of the context vectors and word embeddings, its rank is no longer restricted to d.
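A minimal sketch of the idea: each context produces K softmax distributions plus mixture weights, and the final probability is their weighted sum. The log of this mixture is what breaks the low-rank structure. All sizes and parameter names below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
N, M, d, K = 50, 300, 8, 4                 # contexts, vocab, embed dim, mixtures

W = rng.standard_normal((M, d))            # shared word embeddings
H = rng.standard_normal((K, N, d))         # K mixture-specific context vectors
pi = softmax(rng.standard_normal((N, K)))  # per-context mixture weights

# MoS: weighted sum of K softmaxes, then take the log.
probs = np.einsum('nk,knm->nm', pi, softmax(H @ W.T))
log_probs = np.log(probs)

# log_probs is a nonlinear function of H and W, so its rank is not
# bounded by d the way a single logit product HW^T would be.
print(np.linalg.matrix_rank(log_probs), "> d =", d)
```

Each row of `probs` still sums to 1, since it is a convex combination of valid distributions.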
On Penn Treebank, WikiText, and the 1B Word dataset, MoS showed clearly better perplexity than a standard softmax. Even though MoS is 2-3 times slower to compute, it was better at making context-dependent predictions.
Mixture of Softmaxes seems impressive, but there may be a more computationally efficient way of achieving the same effect.