Using an ensemble color space model to tackle adversarial examples

03/10/2020
by Shreyank N Gowda, et al.

Minute pixel changes in an image can drastically alter the prediction a deep learning model makes. Autonomous driving is one domain where this fragility could have severe consequences. Many methods have been proposed to combat this, with varying degrees of success. We propose a three-step method for defending against such attacks. First, we denoise the image using statistical methods. Second, we show that adopting multiple color spaces within the same model helps fight these adversarial attacks further, as each color space detects features specific to itself. Finally, the generated feature maps are enlarged and fed back as input to capture even finer features. We show that the proposed model does not need to be trained to defend against any particular type of attack and is inherently more robust to black-box, white-box, and grey-box adversarial attack techniques. In particular, the model is 56.12 percent more robust than the compared models against white-box attacks when none of the models are subjected to adversarial example training.
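To make the pipeline concrete, the sketch below illustrates the first two steps in Python. The median filter as the statistical denoiser, the particular set of color spaces, the uniform probability averaging, and the `predict_proba` classifier interface are all illustrative assumptions; the abstract does not pin these down, and step 3 (the feature-map feedback loop) would sit inside each per-color-space network.

```python
import cv2
import numpy as np

# Color spaces used by the ensemble; the exact set is an assumption.
COLOR_SPACES = {
    "bgr": None,                       # keep the (denoised) input as-is
    "hsv": cv2.COLOR_BGR2HSV,
    "lab": cv2.COLOR_BGR2Lab,
    "ycrcb": cv2.COLOR_BGR2YCrCb,
}

def denoise(image: np.ndarray) -> np.ndarray:
    """Step 1: statistical denoising; a 3x3 median filter is one common choice."""
    return cv2.medianBlur(image, 3)

def ensemble_predict(image: np.ndarray, models: dict) -> np.ndarray:
    """Step 2: classify the denoised image in several color spaces and average
    the class probabilities, so each color space contributes the features it
    captures best. `models` maps a color-space name to a classifier exposing
    a hypothetical predict_proba(image) -> np.ndarray method."""
    clean = denoise(image)
    probs = [
        models[name].predict_proba(
            clean if code is None else cv2.cvtColor(clean, code)
        )
        for name, code in COLOR_SPACES.items()
    ]
    # Step 3 (enlarging feature maps and feeding them back to the network to
    # pick up finer features) would live inside each classifier; omitted here.
    return np.mean(probs, axis=0)      # uniform ensemble over color spaces
```

Averaging softmax outputs is one simple way to fuse the color-space branches; the paper's actual fusion strategy may differ.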


Related research

04/19/2021
Direction-Aggregated Attack for Transferable Adversarial Examples
Deep neural networks are vulnerable to adversarial examples that are cra...

10/31/2022
Scoring Black-Box Models for Adversarial Robustness
Deep neural networks are susceptible to adversarial inputs and various m...

11/12/2020
Adversarial Robustness Against Image Color Transformation within Parametric Filter Space
We propose Adversarial Color Enhancement (ACE), a novel approach to gene...

07/15/2020
AdvFlow: Inconspicuous Black-box Adversarial Attacks using Normalizing Flows
Deep learning classifiers are susceptible to well-crafted, imperceptible...

09/09/2018
Towards Query Efficient Black-box Attacks: An Input-free Perspective
Recent studies have highlighted that deep neural networks (DNNs) are vul...

05/31/2018
PeerNets: Exploiting Peer Wisdom Against Adversarial Attacks
Deep learning systems have become ubiquitous in many aspects of our live...

11/25/2020
Adversarial Evaluation of Multimodal Models under Realistic Gray Box Assumption
This work examines the vulnerability of multimodal (image + text) models...
