Large-scale generative models such as GPT and DALL-E have revolutionized...
Expanding the language coverage of speech technology has the potential t...
Squeeze and Efficient Wav2vec (SEW) is a recently proposed architecture ...
In this work, we investigate if the wav2vec 2.0 self-supervised pretrain...
In this work, we propose lattice-free MMI (LFMMI) for supervised adaptat...
We present a simple wrapper that is useful to train acoustic models in P...
Transformers have been proven a successful model for a variety of tasks ...
Transformers achieve remarkable performance in several tasks but due to ...
As deep learning methods form a critical part in commercially important ...