Exploiting Large-scale Teacher-Student Training for On-device Acoustic Models
We present results from Alexa speech teams on semi-supervised learning (SSL) of acoustic models (AM), with experiments spanning over 3,000 hours of GPU time, making our study one of the largest of its kind. We discuss SSL for AMs in a small-footprint setting, showing that a smaller-capacity model trained with 1 million hours of unsupervised data can outperform a baseline supervised system by 14.3% relative word error rate reduction (WERR). When the supervised data is increased seven-fold, our gains diminish to 7.1% WERR; to improve SSL efficiency at larger supervised data regimes, we employ a step-wise distillation into a smaller model, obtaining a WERR of 14.4%. We then turn to larger student models in low data regimes; while learning efficiency with unsupervised data is higher, student models may outperform teacher models in such a setting. We develop a theoretical sketch to explain this behavior.
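To make the teacher-student setup concrete, below is a minimal sketch of a generic knowledge-distillation objective of the kind used in such training, written in PyTorch. The function name, the `temperature` and `alpha` hyperparameters, and the blending of a hard-label term with a soft-target term are illustrative assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits, targets,
                      temperature=2.0, alpha=0.5):
    """Generic teacher-student loss: blend a hard-label cross-entropy
    term with a KL term against the teacher's softened posteriors.

    Hypothetical hyperparameters: `temperature` softens both
    distributions, `alpha` trades off the two terms.
    """
    # Soft-target term: KL divergence between temperature-softened
    # student and teacher distributions (scaled by T^2, as is standard).
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)

    # Hard-label term, applied only where supervised transcripts exist.
    hard_loss = F.cross_entropy(student_logits, targets)

    return alpha * soft_loss + (1.0 - alpha) * hard_loss


# Example usage on random per-frame senone logits.
student_logits = torch.randn(8, 512)   # student AM outputs
teacher_logits = torch.randn(8, 512)   # teacher AM outputs (frozen)
targets = torch.randint(0, 512, (8,))  # supervised frame labels
loss = distillation_loss(student_logits, teacher_logits, targets)
```

In a large-scale SSL setting, the soft-target term can be computed on unsupervised audio (where only teacher posteriors are available), while the hard-label term is restricted to the transcribed subset.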