AdvEst: Adversarial Perturbation Estimation to Classify and Detect Adversarial Attacks against Speaker Identification

04/08/2022
by Sonal Joshi, et al.

Adversarial attacks pose a severe security threat to state-of-the-art speaker identification systems, making countermeasures against them vital. Building on our previous work that used representation learning to classify and detect adversarial attacks, we propose an improvement using AdvEst, a method to estimate the adversarial perturbation. First, we show that training the representation learning network on adversarial perturbations, rather than on adversarial examples (which combine the clean signal with the adversarial perturbation), is beneficial because it eliminates nuisance information. At inference time, we use a time-domain denoiser to estimate the adversarial perturbation from the adversarial example. Using our improved representation learning approach to obtain attack embeddings (signatures), we evaluate their performance for three applications: known attack classification, attack verification, and unknown attack detection. We show that common attacks in the literature (Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), and Carlini-Wagner (CW) with different Lp threat models) can be classified with an accuracy of ~96.9%.
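To make the core idea concrete, here is a minimal sketch (not the paper's implementation) of the quantity AdvEst targets: the perturbation delta = x_adv − x_clean, which FGSM constructs as the sign of the loss gradient scaled by epsilon. The toy linear "classifier" and the function name `fgsm_perturbation` are illustrative assumptions; in the paper the perturbation is not known at inference time and must instead be estimated by a time-domain denoiser.

```python
import numpy as np

def fgsm_perturbation(grad, eps):
    """L_inf FGSM: the perturbation is eps * sign(dLoss/dx), so |delta|_inf <= eps."""
    return eps * np.sign(grad)

# Toy stand-in for a speaker classifier: score = w . x, so dscore/dx = w.
rng = np.random.default_rng(0)
x_clean = rng.standard_normal(16000)   # ~1 s of 16 kHz audio (illustrative)
w = rng.standard_normal(16000)         # gradient of the toy model w.r.t. the input

eps = 0.01
delta = fgsm_perturbation(grad=w, eps=eps)
x_adv = x_clean + delta                # adversarial example = clean signal + perturbation

# The attack "signature" fed to the representation network is the perturbation
# itself; here we recover it exactly because the clean signal is known.
recovered = x_adv - x_clean
```

The key property motivating AdvEst is visible here: `delta` carries only attack-specific structure (its L-infinity norm is bounded by `eps`), while `x_adv` mixes that structure with the speaker's speech, which acts as nuisance information for attack classification.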

