Adversarial Training Using Feedback Loops

Deep neural networks (DNNs) have found wide applicability in numerous fields due to their ability to accurately learn very complex input-output relations. Despite their accuracy and extensive use, DNNs are highly susceptible to adversarial attacks due to limited generalizability. For future progress in the field, it is essential to build DNNs that are robust to any kind of perturbation of the data points. In the past, many techniques have been proposed to robustify DNNs using first-order derivative information of the network. This paper proposes a new robustification approach based on control theory. A neural network architecture that incorporates feedback control, named Feedback Neural Networks, is proposed. The controller is itself a neural network, which is trained using regular and adversarial data so as to stabilize the system outputs. The novel adversarial training approach based on the feedback control architecture is called Feedback Looped Adversarial Training (FLAT). Numerical results on standard test problems empirically show that our FLAT method is more effective than state-of-the-art methods at guarding against adversarial attacks.
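The abstract describes a closed-loop architecture in which a learned controller feeds the network's output back into the computation to stabilize it. The paper's exact formulation is not given here, so the following is only a minimal numpy sketch of the general idea: a hypothetical forward network `W_f` produces an open-loop output, and a hypothetical controller matrix `W_g` repeatedly maps that output back into a correction term. All weights, names, and the update rule are illustrative assumptions, not the authors' method.

```python
import numpy as np

def relu(z):
    # Simple elementwise nonlinearity for the sketch.
    return np.maximum(z, 0.0)

rng = np.random.default_rng(0)

# Hypothetical forward ("plant") network: input dim 10 -> output dim 4.
W_f = rng.normal(size=(10, 4))

# Hypothetical controller network: maps the current output back to a
# correction on the pre-activation (small scale to keep the loop stable).
W_g = rng.normal(size=(4, 4)) * 0.1

def feedback_forward(x, n_iters=5):
    """Closed-loop forward pass: start from the open-loop prediction,
    then iteratively refine it with the controller's feedback signal."""
    y = relu(x @ W_f)                    # open-loop prediction
    for _ in range(n_iters):
        correction = y @ W_g             # controller acts on current output
        y = relu(x @ W_f + correction)   # closed-loop update
    return y

x = rng.normal(size=(2, 10))             # batch of 2 inputs
y = feedback_forward(x)
print(y.shape)                           # (2, 4)
```

In an actual FLAT-style training loop, both `W_f` and `W_g` would be trained jointly on clean and adversarially perturbed inputs; the sketch above only illustrates the feedback structure of the inference pass.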


