Beneficial Perturbations Network for Defending Adversarial Examples
Adversarial training, in which a network is trained on both adversarial and clean examples, is one of the most trusted defense methods against adversarial attacks. However, there are three major practical difficulties in implementing and deploying this method: it is expensive in running memory and computation; it trades off accuracy between clean and adversarial examples; and it cannot foresee all adversarial attacks at training time. Here, we present a new solution that eases these three difficulties: Beneficial Perturbation Networks (BPN). BPN generates and leverages beneficial perturbations (conceptually the opposite of the well-known adversarial perturbations) as biases within the parameter space of the network, to neutralize the effects of adversarial perturbations on data samples. Thus, BPN can effectively defend against adversarial examples. Compared to adversarial training, we demonstrate that BPN significantly reduces the required running memory and computation costs by generating beneficial perturbations through recycling of the gradients already computed while training on clean examples. In addition, BPN alleviates both the accuracy trade-off and the difficulty of foreseeing multiple attacks by improving the generalization of the network, thanks to the increased diversity of the training set achieved through the neutralization between adversarial and beneficial perturbations.
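The abstract does not give implementation details, but the core idea it describes, storing beneficial perturbations as extra bias parameters and updating them by recycling the gradients from the clean backward pass (stepping in the direction opposite to an FGSM-style adversarial step), can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' actual code: the `BPLinear` class, the `epsilon` step size, and the training loop are all hypothetical names and choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BPLinear(nn.Module):
    """Linear layer augmented with a 'beneficial perturbation' bias (illustrative).

    The extra bias lives in parameter space and is updated with a
    sign-gradient step that DESCENDS the loss (the opposite direction of
    an FGSM adversarial step), recycling the gradient already computed
    during the clean backward pass.
    """

    def __init__(self, in_features, out_features, epsilon=0.01):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # Beneficial perturbation stored as an additive bias term.
        self.bp_bias = nn.Parameter(torch.zeros(out_features))
        self.epsilon = epsilon  # assumed step size, not from the paper

    def forward(self, x):
        return self.linear(x) + self.bp_bias

    @torch.no_grad()
    def update_beneficial_perturbation(self):
        # Recycle the gradient of the clean loss w.r.t. the bias: step
        # against it, so the bias counteracts loss-increasing (adversarial)
        # directions. No extra forward/backward pass is required.
        if self.bp_bias.grad is not None:
            self.bp_bias -= self.epsilon * self.bp_bias.grad.sign()
            self.bp_bias.grad = None

model = nn.Sequential(BPLinear(784, 256), nn.ReLU(), BPLinear(256, 10))

# Keep the beneficial biases out of the optimizer; they are updated by the
# sign step above instead of by SGD.
optimizer = torch.optim.SGD(
    [p for n, p in model.named_parameters() if "bp_bias" not in n], lr=0.1)

def train_step(x, y):
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()     # one clean backward pass yields ALL gradients
    optimizer.step()    # usual update of weights and ordinary biases
    for m in model.modules():
        if isinstance(m, BPLinear):
            m.update_beneficial_perturbation()  # gradient recycling
    return loss.item()
```

In this reading, the claimed memory and compute savings come from the single clean forward/backward pass serving double duty: adversarial training would additionally generate adversarial examples and run extra passes on them, whereas the sign step here reuses gradients that training on clean examples produces anyway.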