Transformer-based language modeling and decoding for conversational speech recognition

01/04/2020
by Kareem Nassar, et al.

We propose a way to use a transformer-based language model in conversational speech recognition. Specifically, we focus on decoding efficiently in a weighted finite-state transducer framework. We showcase an approach to lattice re-scoring that allows a longer-range history to be captured by a transformer-based language model and exploits the transformer's ability to score positions in parallel rather than sequentially.
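As a rough illustration of the parallel scoring the abstract alludes to, the sketch below rescores a list of hypotheses drawn from a decoded lattice with a pretrained GPT-2 as a stand-in transformer LM. This is not the paper's method, only a minimal sketch under stated assumptions: the `hypotheses` list, the first-pass scores, and the interpolation weight `lm_weight` are all hypothetical.

```python
# Minimal sketch of n-best rescoring with a transformer LM (assumed setup:
# Hugging Face GPT-2 as the LM; hypotheses already extracted from a lattice).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def transformer_lm_scores(hypotheses):
    """Return one LM log-probability per hypothesis, in a single forward pass.

    Because a transformer attends over the whole sequence at once, every
    position in every hypothesis is scored in parallel, unlike an RNN LM
    that must walk each sequence token by token.
    """
    enc = tokenizer(hypotheses, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**enc).logits                # (batch, seq_len, vocab)
    # Logits at position t predict the token at position t+1.
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = enc.input_ids[:, 1:]
    token_scores = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    mask = enc.attention_mask[:, 1:].float()        # ignore padding positions
    return (token_scores * mask).sum(dim=-1)

# Usage: combine the transformer LM score with each path's first-pass score
# (the scores and weight below are illustrative, not from the paper).
hypotheses = ["how are you doing today", "how are you doing to day"]
first_pass = torch.tensor([-12.3, -11.9])  # hypothetical acoustic + LM scores
lm_weight = 0.5                            # hypothetical interpolation weight
total = first_pass + lm_weight * transformer_lm_scores(hypotheses)
print(hypotheses[int(total.argmax())])
```

The batched call is the point: all hypotheses and all token positions are evaluated in one matrix multiply-heavy pass, which is what makes transformer rescoring attractive compared to step-by-step recurrent scoring.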
