AV-data2vec: Self-supervised Learning of Audio-Visual Speech Representations with Contextualized Target Representations

02/10/2023
by   Jiachen Lian, et al.

Self-supervision has shown great potential for audio-visual speech recognition by vastly reducing the amount of labeled data required to build good systems. However, existing methods are either not entirely end-to-end or do not train joint representations of both modalities. In this paper, we introduce AV-data2vec, which addresses these challenges and builds audio-visual representations by predicting contextualized target representations, an approach that has proven successful in the uni-modal case. The model uses a shared transformer encoder for both audio and video and can combine both modalities to improve speech recognition. Results on LRS3 show that AV-data2vec consistently outperforms existing methods in most settings.
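To make the training scheme concrete, below is a minimal sketch of data2vec-style audio-visual pretraining: a shared transformer encoder consumes fused audio and video features, an EMA teacher produces contextualized targets by averaging its top-K layer outputs over the unmasked input, and the student regresses those targets at masked positions. All module names, feature dimensions, and the summation-based fusion are illustrative assumptions, not the paper's actual implementation.

```python
import copy
import torch
import torch.nn as nn


class SharedAVEncoder(nn.Module):
    """Shared transformer encoder over fused audio-visual features (sketch)."""

    def __init__(self, dim=768, depth=12, heads=12):
        super().__init__()
        # Hypothetical modality-specific front-ends producing frame-level features;
        # the input sizes (104 filterbank dims, 512 visual dims) are assumptions.
        self.audio_proj = nn.Linear(104, dim)
        self.video_proj = nn.Linear(512, dim)
        layer = nn.TransformerEncoderLayer(dim, heads, 4 * dim, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, depth)

    def forward(self, audio, video, return_layers=False):
        # Fuse modalities by summation (one common choice; an assumption here).
        x = self.audio_proj(audio) + self.video_proj(video)
        if not return_layers:
            return self.blocks(x)
        layer_outputs = []
        for blk in self.blocks.layers:
            x = blk(x)
            layer_outputs.append(x)
        return layer_outputs


def ema_update(teacher, student, decay=0.999):
    """Update teacher weights as an exponential moving average of the student."""
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(decay).add_(s, alpha=1 - decay)


def training_step(student, teacher, audio, video, mask, top_k=8):
    """One pretraining step; mask is a (batch, time) boolean tensor."""
    # Teacher sees the unmasked input and yields contextualized targets:
    # the average of its top-K transformer layer outputs.
    with torch.no_grad():
        layers = teacher(audio, video, return_layers=True)
        targets = torch.stack(layers[-top_k:]).mean(0)
    # Student sees masked input (masked frames zeroed here for simplicity;
    # learned mask embeddings would be another common choice).
    masked_audio = audio * ~mask.unsqueeze(-1)
    masked_video = video * ~mask.unsqueeze(-1)
    preds = student(masked_audio, masked_video)
    # Regress teacher targets at masked positions only.
    return nn.functional.mse_loss(preds[mask], targets[mask])


# Usage sketch: the teacher starts as a copy of the student and is updated
# by EMA after each optimizer step rather than by gradients.
student = SharedAVEncoder()
teacher = copy.deepcopy(student).requires_grad_(False)
```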
