Perceptual-based deep-learning denoiser as a defense against adversarial attacks on ASR systems

07/12/2021
by Anirudh Sreeram, et al.

In this paper, we investigate speech denoising as a defense against adversarial attacks on automatic speech recognition (ASR) systems. Adversarial attacks attempt to force misclassification by adding small perturbations to the original speech signal. We propose to counteract this by employing a neural-network-based denoiser as a pre-processor in the ASR pipeline. The denoiser is independent of the downstream ASR model, and thus can be rapidly deployed in existing systems. We found that training the denoiser with a perceptually motivated loss function increased adversarial robustness without compromising ASR performance on benign samples. Our defense was evaluated (as part of the DARPA GARD program) on the 'Kenansville' attack strategy across a range of attack strengths and speech samples. We observed an average improvement in Word Error Rate (WER) of about 7.7% over the undefended model at a 20 dB signal-to-noise-ratio (SNR) attack strength.
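The key architectural point of the defense is that the denoiser sits in front of an unmodified ASR model. A minimal sketch of that pipeline shape is below; the function names (`denoise`, `transcribe`, `defended_asr`) and the trivial moving-average "denoiser" are illustrative placeholders only, not the paper's actual neural denoiser or ASR system.

```python
def denoise(signal):
    """Placeholder for the neural denoiser (trained with a perceptual loss
    in the paper). Here: a trivial 3-tap moving average stands in for it."""
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - 1): i + 2]  # up to 3 neighboring samples
        out.append(sum(window) / len(window))
    return out

def transcribe(signal):
    """Placeholder for the downstream ASR model, which the defense
    leaves completely unchanged."""
    return "<transcript>"

def defended_asr(signal):
    # The defense: suppress (adversarial) perturbations first,
    # then run the unmodified ASR model on the cleaned signal.
    return transcribe(denoise(signal))
```

Because the denoiser only touches the waveform, it can be dropped into an existing ASR pipeline without retraining the recognizer, which is what makes the defense model-agnostic and quick to deploy.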
