Learning to Screen for Fast Softmax Inference on Large Vocabulary Neural Networks

by Patrick H. Chen, et al.

Neural language models have been widely used in various NLP tasks, including machine translation, next-word prediction, and conversational agents. However, it is challenging to deploy these models on mobile devices because of their slow prediction speed; the bottleneck is computing the top candidates in the softmax layer. In this paper, we introduce a novel softmax layer approximation algorithm that exploits the clustering structure of context vectors. Our algorithm uses a light-weight screening model to predict a much smaller set of candidate words from the given context, and then conducts an exact softmax only within that subset. Training such a procedure end-to-end is challenging because traditional clustering methods are discrete and non-differentiable, and therefore cannot be used with back-propagation. Using the Gumbel softmax, we are able to train the screening model end-to-end on the training set to exploit the data distribution. The algorithm achieves an order-of-magnitude faster inference than the original softmax layer when predicting top-k words in tasks such as beam search in machine translation or next-word prediction. For example, on a German-to-English machine translation task with a vocabulary of around 25K words, we achieve a 20.4x speedup with 98.9% precision@1 and 99.3% precision@5 relative to the original softmax layer's predictions, while the state-of-the-art MSRprediction achieves only a 6.7x speedup with 98.7% precision@1 and 98.1% precision@5 on the same task.
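The inference-time procedure the abstract describes can be sketched in a few lines: screen the context vector to a cluster, look up that cluster's candidate word set, and run an exact softmax only over those candidates. The following NumPy sketch is illustrative only; it assumes the cluster centers and per-cluster candidate sets have already been learned (end-to-end with the Gumbel softmax in the paper), and names such as `candidate_sets` and `screened_topk` are our own, not the authors'.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a 1-D array of logits.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def screened_topk(h, W, b, centers, candidate_sets, k=5):
    """Approximate top-k of softmax(W @ h + b) via screening.

    h:              context vector, shape (d,)
    W, b:           full softmax weights (V, d) and bias (V,)
    centers:        learned cluster centers for context vectors, (n_clusters, d)
    candidate_sets: list of word-index arrays, one per cluster (assumed learned)
    """
    # Screening step: assign the context to its nearest cluster center.
    c = np.argmin(((centers - h) ** 2).sum(axis=1))
    cand = candidate_sets[c]
    # Exact softmax, but only over the screened candidate words.
    logits = W[cand] @ h + b[cand]
    probs = softmax(logits)
    order = np.argsort(-probs)[:k]
    return cand[order], probs[order]

# Toy usage: vocabulary of 20 words, 8-dim contexts, 3 clusters.
rng = np.random.default_rng(0)
W = rng.standard_normal((20, 8))
b = rng.standard_normal(20)
centers = rng.standard_normal((3, 8))
candidate_sets = [np.arange(0, 8), np.arange(8, 14), np.arange(14, 20)]
words, probs = screened_topk(rng.standard_normal(8), W, b, centers, candidate_sets, k=3)
```

The speedup comes from the score computation touching only |candidate set| rows of W instead of all V rows; precision@k is then determined by how often the true top-k words actually fall inside the screened candidate set.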


