Black-box Adversarial Attacks on Commercial Speech Platforms with Minimal Information

by Baolin Zheng, et al.
City University of Hong Kong
Xi'an Jiaotong University
Tsinghua University
Wuhan University

Adversarial attacks against commercial black-box speech platforms, including cloud speech APIs and voice control devices, have received little attention until recent years. Existing black-box attacks all rely heavily on prediction/confidence scores to craft effective adversarial examples (AEs), and service providers can intuitively defend against them simply by withholding these scores. In this paper, we propose two novel adversarial attacks under more practical and rigorous scenarios. For commercial cloud speech APIs, we propose Occam, a decision-only black-box adversarial attack in which only the final decisions are available to the adversary. In Occam, we formulate decision-only AE generation as a discontinuous large-scale global optimization problem, and solve it by adaptively decomposing this complicated problem into a set of sub-problems and cooperatively optimizing each one. Occam is a one-size-fits-all approach: it achieves a 100% success rate of attacks (SRoA) with an average SNR of 14.23 dB on a wide range of popular speech and speaker recognition APIs, including Google, Alibaba, Microsoft, Tencent, iFlytek, and Jingdong, outperforming state-of-the-art black-box attacks. For commercial voice control devices, we propose NI-Occam, the first non-interactive physical adversarial attack, where the adversary neither queries the oracle nor has access to its internal information and training data. We combine adversarial attacks with model inversion attacks to generate physically effective audio AEs with high transferability, without any interaction with the target devices. Our experimental results show that NI-Occam can successfully fool Apple Siri, Microsoft Cortana, Google Assistant, iFlytek, and Amazon Echo with an average SRoA of 52%, shedding light on non-interactive physical attacks against voice control devices.
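To make the decision-only setting concrete, the sketch below illustrates the general idea of cooperative decomposition described in the abstract: the perturbation is split into index groups, and each group is refined by random search, accepting a candidate only if the oracle's final decision is unchanged and the perturbation shrinks. This is a minimal illustrative toy, not the authors' Occam algorithm; the oracle, grouping scheme, and step size are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def decision_oracle(audio):
    # Toy stand-in for a commercial API that returns ONLY a final label
    # (no scores): here, "target" iff the mean signal exceeds 0.5.
    return "target" if audio.mean() > 0.5 else "benign"

def cooperative_attack(x, oracle, target="target",
                       n_groups=4, iters=200, step=0.05):
    """Decision-only attack sketch: decompose the perturbation into index
    groups and optimize each group in turn, accepting a candidate only if
    the oracle still outputs the target decision and the norm decreases."""
    d = x.size
    delta = np.ones(d)  # start from a large perturbation that succeeds
    assert oracle(x + delta) == target
    groups = np.array_split(rng.permutation(d), n_groups)
    for _ in range(iters):
        for g in groups:
            cand = delta.copy()
            cand[g] -= step * rng.random(g.size)  # try to shrink this group
            if (np.linalg.norm(cand) < np.linalg.norm(delta)
                    and oracle(x + cand) == target):
                delta = cand
    return delta

x = np.zeros(16)
delta = cooperative_attack(x, decision_oracle)
print(np.linalg.norm(delta))  # smaller than the initial norm of 4.0
```

In Occam itself, the decomposition is adaptive and the sub-problems are optimized with a far more sample-efficient strategy; this toy only conveys the query-and-decision loop structure.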


