Adversarial Example Detection by Classification for Deep Speech Recognition

10/22/2019
by Saeid Samizade, et al.

Machine learning systems are vulnerable to adversarial attacks and are highly likely to produce incorrect outputs under them. Attacks are classified as white-box or black-box depending on the adversary's level of access to the victim learning algorithm. To defend learning systems against these attacks, existing methods in the speech domain focus on modifying input signals and testing the behaviour of speech recognizers. We, however, formulate the defense as a classification problem and present a strategy for systematically generating adversarial example datasets: one for white-box attacks and one for black-box attacks, each containing both adversarial and normal examples. The white-box attack is a gradient-based method applied to Baidu DeepSpeech with the Mozilla Common Voice dataset, while the black-box attack is a gradient-free method applied to a deep model-based keyword spotting system with the Google Speech Commands dataset. The generated datasets are used to train a proposed Convolutional Neural Network (CNN) on cepstral features to detect adversarial examples. Experimental results show that it is possible to accurately distinguish between adversarial and normal examples for known attacks, in both single-condition and multi-condition training settings, while performance degrades dramatically for unknown attacks. The adversarial datasets and the source code are made publicly available.
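As a rough illustration of the detection pipeline described above, the sketch below extracts cepstral (MFCC) features with librosa and feeds them to a small binary CNN in PyTorch. The feature dimensions, layer sizes, and the AdvExampleDetector name are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch, assuming librosa for MFCC extraction and PyTorch for the
# classifier; sizes and layer choices are illustrative, not the paper's model.
import librosa
import numpy as np
import torch
import torch.nn as nn

N_MFCC = 40      # number of cepstral coefficients per frame (assumed)
N_FRAMES = 100   # fixed number of frames per clip (assumed)

def cepstral_features(path, sr=16000):
    """Load one audio clip and return a (1, N_MFCC, N_FRAMES) MFCC map."""
    y, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=N_MFCC)
    # Pad or truncate along the time axis to a fixed length.
    if mfcc.shape[1] < N_FRAMES:
        mfcc = np.pad(mfcc, ((0, 0), (0, N_FRAMES - mfcc.shape[1])))
    return torch.from_numpy(mfcc[:, :N_FRAMES]).float().unsqueeze(0)

class AdvExampleDetector(nn.Module):
    """Small 2-D CNN that labels a clip as normal (0) or adversarial (1)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * (N_MFCC // 4) * (N_FRAMES // 4), 2),
        )

    def forward(self, x):  # x: (batch, 1, N_MFCC, N_FRAMES)
        return self.net(x)

# Training pairs features with labels from the generated white-box /
# black-box datasets and minimizes cross-entropy, e.g.:
#   logits = model(cepstral_features("clip.wav").unsqueeze(0))
#   loss = nn.functional.cross_entropy(logits, labels)
```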


