Adversarial Examples from Dimensional Invariance

04/13/2023
by Benjamin L. Badger et al.

Adversarial examples have been found for deep as well as shallow learning models, and have at various times been suggested to be fixable model-specific bugs, inherent dataset features, or both. We present theoretical and empirical results showing that adversarial examples are approximate discontinuities resulting from models that specify approximately bijective maps f: R^n → R^m, n ≠ m, over their inputs; the discontinuity follows from the topological invariance of dimension.
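The argument leans on Brouwer's invariance of domain: no continuous bijection exists between R^n and R^m when n ≠ m, so a model that approximates such a map must be approximately discontinuous somewhere, and adversarial examples sit near those approximate discontinuities. A minimal sketch of how such near-discontinuities are probed in practice, using a one-step gradient-sign perturbation (FGSM) in PyTorch; the toy MLP, dimensions, and step size here are illustrative assumptions, not the paper's experimental setup:

import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-in for a classifier f: R^n -> R^m with n != m.
# Architecture, dimensions, and epsilon are assumptions for
# illustration only.
n, m = 784, 10
model = nn.Sequential(nn.Linear(n, 300), nn.ReLU(), nn.Linear(300, m))

x = torch.rand(1, n, requires_grad=True)  # an arbitrary input point
pred = model(x).argmax(dim=1)             # the model's label at x

# One gradient-sign step (FGSM, Goodfellow et al. 2015): nudge x a
# small L-infinity distance in the direction that most increases the
# loss at the model's own prediction. Near an approximate
# discontinuity of f, a tiny input change like this produces a large
# change in the output.
loss = F.cross_entropy(model(x), pred)
loss.backward()
eps = 0.05
x_adv = (x + eps * x.grad.sign()).detach()

print("max input change:", (x_adv - x).abs().max().item())
print("label at x:      ", pred.item())
print("label at x_adv:  ", model(x_adv).argmax(dim=1).item())

With random weights the perturbed label may or may not flip; for trained overparameterized classifiers, perturbations of this size routinely change the prediction while remaining visually negligible.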

Related research

On the Effect of Adversarial Training Against Invariance-based Adversarial Examples (02/16/2023)
Adversarial examples are carefully crafted attack points that are suppos...

Exploiting Excessive Invariance caused by Norm-Bounded Adversarial Robustness (03/25/2019)
Adversarial examples are malicious inputs crafted to cause a model to mi...

Adversarial Examples for Good: Adversarial Examples Guided Imbalanced Learning (01/28/2022)
Adversarial examples are inputs for machine learning models that have be...

Adversarial Examples Are Not Bugs, They Are Features (05/06/2019)
Adversarial examples have attracted significant attention in machine lea...

A Theoretical Framework for Robustness of (Deep) Classifiers against Adversarial Examples (12/01/2016)
Most machine learning classifiers, including deep neural networks, are v...

Isometric 3D Adversarial Examples in the Physical World (10/27/2022)
3D deep learning models are shown to be as vulnerable to adversarial exa...

Predicting Adversarial Examples with High Confidence (02/13/2018)
It has been suggested that adversarial examples cause deep learning mode...
