Data Augmentation via Structured Adversarial Perturbations

by Calvin Luo, et al.

Data augmentation is a major component of many machine learning methods with state-of-the-art performance. Common augmentation strategies work by drawing random samples from a space of transformations. Unfortunately, such sampling approaches are limited in expressivity, as they are unable to scale to rich transformations that depend on numerous parameters due to the curse of dimensionality. Adversarial examples can be considered an alternative scheme for data augmentation: models trained on the most difficult modifications of the inputs should then be able to handle other, presumably easier, modifications as well. The advantage of adversarial augmentation is that it replaces sampling with a single, calculated perturbation that maximally increases the loss. The downside, however, is that these raw adversarial perturbations appear rather unstructured; applying them often does not produce a natural transformation, as a desirable data augmentation technique would. To address this, we propose a method to generate adversarial examples that maintain some desired natural structure. We first construct a subspace that contains only perturbations with the desired structure. We then project the raw adversarial gradient onto this space to select a structured transformation that would maximally increase the loss when applied. We demonstrate this approach through two types of image transformations: photometric and geometric. Furthermore, we show that training on such structured adversarial images improves generalization.
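The projection step described above can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the function name, the toy 4-pixel input, the gradient values, and the choice of a uniform brightness shift as the structured subspace are all hypothetical, assumed here only to show how projecting a raw gradient onto an orthonormal subspace basis yields a structured perturbation direction.

```python
import numpy as np

def structured_adversarial_perturbation(x, grad, basis, eps=0.1):
    """Project a raw adversarial gradient onto a structured subspace.

    x     : flattened input, shape (n,)
    grad  : gradient of the loss w.r.t. x, shape (n,)
    basis : orthonormal columns spanning the structured subspace, shape (n, k)
    eps   : perturbation magnitude
    """
    # Coordinates of the raw gradient in the structured subspace.
    coords = basis.T @ grad
    # Orthogonal projection onto the subspace: the structured direction
    # that maximally increases the loss to first order.
    proj = basis @ coords
    norm = np.linalg.norm(proj)
    if norm == 0:
        return x  # gradient is orthogonal to every structured direction
    return x + eps * proj / norm

# Toy example: a 4-pixel "image" whose structured subspace contains only
# uniform brightness shifts (a simple photometric transformation).
x = np.array([0.2, 0.5, 0.7, 0.9])
grad = np.array([1.0, -0.5, 2.0, 0.5])   # hypothetical loss gradient
brightness = np.ones((4, 1)) / 2.0       # unit-norm basis for a uniform shift
x_adv = structured_adversarial_perturbation(x, grad, brightness, eps=0.1)
# Every pixel moves by the same amount, i.e. a pure brightness change.
```

However unstructured the raw gradient is, the projected perturbation is constrained to the chosen subspace, so the modified input remains a natural transformation of the original.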




