On the adversarial robustness of DNNs based on error correcting output codes

03/26/2020
by   Bowen Zhang, et al.

Adversarial examples represent a serious security threat for deep learning systems, pushing researchers to develop suitable defense mechanisms. The use of networks adopting error-correcting output codes (ECOC) has recently been proposed to deal with white-box attacks. In this paper, we carry out an in-depth investigation of the security achieved by the ECOC approach. In contrast to previous findings, our analysis reveals that, when the attack in the white-box framework is carried out properly, the ECOC scheme can be attacked by introducing a rather small perturbation. We do so by considering both the popular adversarial attack proposed by Carlini and Wagner (C&W) and a new variant of the C&W attack specifically designed for multi-label classification architectures, like the ECOC-based structure. Experimental results on different classification tasks demonstrate that ECOC networks can be successfully attacked by both the original C&W attack and the new attack.
