Compensating Removed Frequency Components: Thwarting Voice Spectrum Reduction Attacks

by Shu Wang, et al.
George Mason University
Tsinghua University

Automatic speech recognition (ASR) provides diverse audio-to-text services that let humans communicate with machines. However, recent research reveals that ASR systems are vulnerable to various malicious audio attacks. In particular, by removing non-essential frequency components, a new spectrum reduction attack can generate adversarial audio that humans can still perceive but ASR systems cannot correctly interpret. This raises a new challenge for content moderation solutions that must detect harmful content in audio and video on social media platforms. In this paper, we propose an acoustic compensation system named ACE to counter spectrum reduction attacks on ASR systems. Our system design is based on two observations, namely, frequency component dependencies and perturbation sensitivity. First, because the Discrete Fourier Transform computation inevitably introduces spectral leakage and aliasing effects into the audio frequency spectrum, frequency components with similar frequencies are highly correlated. Thus, by exploiting the intrinsic dependencies between neighboring frequency components, it is possible to recover more of the original audio by compensating for the removed components based on the remaining ones. Second, because the components removed by a spectrum reduction attack can be regarded as an inverse of adversarial noise, the attack success rate decreases when the adversarial audio is replayed in an over-the-air scenario. Hence, we can model the acoustic propagation process to add over-the-air perturbations to the attacked audio. We implement a prototype of ACE, and the experiments show that ACE can effectively reduce up to 87.9% of attacks. Also, by analyzing the residual errors, we summarize six general types of ASR inference errors and investigate the error causes and potential mitigation solutions.
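The first observation, that neighboring DFT bins are correlated and so removed bins can be estimated from the surviving ones, can be sketched as follows. This is only an illustrative reconstruction by linear interpolation, not ACE's actual compensation algorithm, and the toy frame is built with a smooth spectrum so that the neighbor-bin correlation (which leakage and aliasing induce in real speech frames) holds by construction.

```python
import numpy as np

def compensate_removed_bins(spectrum, removed):
    """Estimate zeroed-out DFT bins by interpolating the real and
    imaginary parts from the nearest surviving bins, exploiting the
    correlation between neighboring frequency components."""
    kept = np.flatnonzero(~removed)
    gaps = np.flatnonzero(removed)
    out = spectrum.copy()
    out[gaps] = (np.interp(gaps, kept, spectrum[kept].real)
                 + 1j * np.interp(gaps, kept, spectrum[kept].imag))
    return out

# Toy frame: a real signal built from a smooth, conjugate-symmetric
# spectrum, so neighboring bins carry information about each other.
n = 64
k = np.arange(n)
half = np.exp(-((k - 16) ** 2) / 72.0)          # smooth magnitude profile
X = np.where(k <= n // 2, half, half[(n - k) % n])
x = np.fft.ifft(X).real                          # clean "audio" frame

removed = np.zeros(n, dtype=bool)
removed[[10, 20, n - 10, n - 20]] = True         # attack: drop these bins
attacked = np.where(removed, 0, X)

restored = np.fft.ifft(compensate_removed_bins(attacked, removed)).real
err_attacked = np.linalg.norm(x - np.fft.ifft(attacked).real)
err_restored = np.linalg.norm(x - restored)
print(err_restored < err_attacked)               # compensation shrinks the error
```

On a real attacked utterance the removed set is unknown and the spectrum is far less smooth, which is why ACE combines this dependency with the second observation (over-the-air perturbation) rather than relying on interpolation alone.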

