Towards Certifying ℓ_∞ Robustness using Neural Networks with ℓ_∞-dist Neurons

02/10/2021
by Bohang Zhang, et al.

It is well known that standard neural networks, even with high classification accuracy, are vulnerable to small ℓ_∞-norm-bounded adversarial perturbations. Although many attempts have been made, most previous works can either only verify a defense empirically against a particular attack method, or only provide a certified robustness guarantee in limited scenarios. In this paper, we seek a new approach: developing a theoretically principled neural network that inherently resists ℓ_∞ perturbations. In particular, we design a novel neuron that uses the ℓ_∞-distance as its basic operation (which we call the ℓ_∞-dist neuron), and show that any neural network constructed with ℓ_∞-dist neurons (called an ℓ_∞-dist net) is naturally a 1-Lipschitz function with respect to the ℓ_∞-norm. This directly yields a rigorous certified-robustness guarantee based on the margin of the prediction outputs. We also prove that such networks have enough expressive power to approximate any 1-Lipschitz function, with a robust generalization guarantee. Our experimental results show that the proposed network is promising. Using ℓ_∞-dist nets as the basic building blocks, we consistently achieve state-of-the-art performance on commonly used datasets: 93.09 (ϵ=0.1) and 35.10
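To make the mechanism concrete, below is a minimal sketch of an ℓ_∞-dist layer and the margin-based certificate the abstract describes. It assumes the common formulation u(x) = ‖x − w‖_∞ + b for each neuron; the class and function names (`LInfDistNeuronLayer`, `certified_radius`) are illustrative, not from the paper, and the paper's actual training procedure and prediction rule may differ.

```python
import torch
import torch.nn as nn

class LInfDistNeuronLayer(nn.Module):
    """A layer of l_inf-dist neurons: u_k(x) = ||x - w_k||_inf + b_k.

    Each neuron is 1-Lipschitz w.r.t. the l_inf norm, since by the
    triangle inequality
        | ||x - w||_inf - ||y - w||_inf | <= ||x - y||_inf,
    and a composition of 1-Lipschitz maps stays 1-Lipschitz.
    """
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_features) -> output: (batch, out_features)
        diff = x.unsqueeze(1) - self.weight.unsqueeze(0)  # (batch, out, in)
        return diff.abs().amax(dim=-1) + self.bias


def certified_radius(logits: torch.Tensor) -> torch.Tensor:
    """Margin certificate for a classifier that is 1-Lipschitz in l_inf:
    if top1 - top2 > 2 * eps, then for any ||delta||_inf <= eps each
    output moves by at most eps, so the argmax cannot change.
    Hence the certified radius is half the margin."""
    top2 = logits.topk(2, dim=-1).values  # (batch, 2), sorted descending
    return (top2[..., 0] - top2[..., 1]) / 2
```

Stacking such layers, e.g. `nn.Sequential(LInfDistNeuronLayer(784, 128), LInfDistNeuronLayer(128, 10))`, keeps the end-to-end map 1-Lipschitz, so `certified_radius` applied to the final outputs gives a sound lower bound on the ℓ_∞ perturbation the prediction can tolerate; this sketch only illustrates the Lipschitz/margin mechanics, not the full method of the paper.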
