Visual Speech Enhancement using Noise-Invariant Training

11/23/2017
by Aviv Gabbay, et al.

Visual speech enhancement is used on videos shot in noisy environments to enhance the voice of a visible speaker and to reduce background noise. While most existing methods use audio-only inputs, we propose an audio-visual neural network model for this purpose. The visible mouth movements are used to separate the speaker's voice from the background sounds. Instead of training our speech enhancement model on a wide range of possible noise types, we train the model on videos where other speech samples of the target speaker serve as background noise. A model trained using this paradigm generalizes well to various noise types, while also substantially reducing training time. The proposed model outperforms prior audio-visual methods on two public lipreading datasets. It is also the first to be demonstrated on a general dataset not designed for lipreading, composed of the weekly addresses of Barack Obama.
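To illustrate the training paradigm described in the abstract, the following is a minimal sketch of how one might synthesize a noisy training example by mixing a target utterance with another utterance from the same speaker. The function name make_noise_invariant_example and the snr_db parameter are hypothetical; the paper's actual data pipeline may differ.

```python
import numpy as np

def make_noise_invariant_example(target_audio, speaker_pool, snr_db=0.0, rng=None):
    """Mix a target utterance with another utterance from the SAME speaker,
    which acts as the 'background noise' during training.

    target_audio: 1-D array, the clean speech the model should recover.
    speaker_pool: list of other 1-D arrays spoken by the same speaker.
    snr_db: desired signal-to-noise ratio of the mixture (hypothetical knob).
    """
    rng = rng or np.random.default_rng()
    noise = speaker_pool[rng.integers(len(speaker_pool))]

    # Tile/trim the interfering utterance to the target length.
    if len(noise) < len(target_audio):
        reps = int(np.ceil(len(target_audio) / len(noise)))
        noise = np.tile(noise, reps)
    noise = noise[: len(target_audio)]

    # Scale the interference so the mixture has the requested SNR.
    target_power = np.mean(target_audio ** 2) + 1e-12
    noise_power = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(target_power / (noise_power * 10 ** (snr_db / 10)))
    mixture = target_audio + scale * noise

    # The network is trained to map (mixture, mouth frames) -> target_audio.
    return mixture, target_audio
```

Because the interference is speech from the same speaker, the audio alone cannot distinguish target from noise, so the model must rely on the visual mouth movements; this is what the abstract credits for the generalization to unseen noise types.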
