Visual Speech Enhancement

11/23/2017
by Aviv Gabbay, et al.

When video is shot in a noisy environment, the voice of a speaker seen in the video can be enhanced using the visible mouth movements, reducing background noise. While most existing methods use audio-only inputs, improved performance is obtained with our visual speech enhancement, based on an audio-visual neural network. We add to the training data videos with synthetic background noise taken from the voice of the target speaker. Since the audio input alone is not sufficient to separate the voice of a speaker from his own voice, the trained model better exploits the visual input and generalizes well to different noise types. The proposed model outperforms prior audio-visual methods on two public lipreading datasets. It is also the first to be demonstrated on a dataset not designed for lipreading, such as the weekly addresses of Barack Obama.
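One way to picture the training-data construction described above is mixing a clean utterance with a second utterance by the same speaker at a chosen signal-to-noise ratio, so the network cannot rely on speaker identity in the audio alone. The sketch below is a minimal illustration of such self-voice mixing in Python with NumPy; the function name, SNR handling, and parameters are illustrative assumptions, not code from the paper.

```python
import numpy as np

def mix_self_noise(clean, interfering, snr_db=0.0):
    """Mix an interfering waveform (e.g. another utterance by the same
    speaker) into a clean waveform at a target SNR in dB.

    Both inputs are 1-D float arrays at the same sample rate; the
    interfering signal is tiled or trimmed to the clean signal's length.
    (Hypothetical helper, for illustration only.)
    """
    # Repeat or trim the interference to match the clean segment length.
    reps = int(np.ceil(len(clean) / len(interfering)))
    noise = np.tile(interfering, reps)[: len(clean)]

    # Scale the noise so the mixture reaches the requested SNR.
    clean_power = np.mean(clean ** 2) + 1e-12
    noise_power = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(clean_power / (noise_power * 10 ** (snr_db / 10)))
    return clean + scale * noise

# Example: corrupt a clean segment with the speaker's own voice at 0 dB SNR.
rng = np.random.default_rng(0)
clean = rng.standard_normal(16000).astype(np.float32)       # stand-in for a clean utterance
other = rng.standard_normal(12000).astype(np.float32)       # stand-in for another utterance by the same speaker
noisy = mix_self_noise(clean, other, snr_db=0.0)
```

In training, the noisy waveform would serve as the audio input while the clean waveform is the target, alongside the corresponding video frames of the speaker's mouth.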
