Guarantees on learning depth-2 neural networks under a data-poisoning attack

05/04/2020
by Anirbit Mukherjee, et al.

In recent times, many state-of-the-art machine learning models have been shown to be vulnerable to adversarial attacks. In this work we aim to build a theoretical understanding of adversarially robust learning with neural nets. We consider a specific class of finite-size neural networks and a non-gradient stochastic algorithm that attempts to recover the weights of the net generating the realizable true labels, in the presence of an oracle applying a bounded amount of malicious additive distortion to those labels. We prove nearly optimal trade-offs between the magnitude of the adversarial attack and the accuracy and confidence achieved by the proposed algorithm.
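To make the setting concrete, here is a minimal sketch of label poisoning with bounded additive distortion. It is not the paper's construction: a single ReLU gate stands in for the depth-2 networks studied, a GLM-Tron-style iterative update stands in for the non-gradient stochastic algorithm, and the dimensions, step size, and attack budget `theta` are all hypothetical choices made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 20, 5000
w_star = rng.normal(size=d)            # ground-truth weights of the net
X = rng.normal(size=(n, d))
clean_y = np.maximum(X @ w_star, 0.0)  # realizable true labels (ReLU gate)

# Oracle applies bounded additive distortion: |corruption| <= theta.
# Random signs keep the sketch simple; a worst-case oracle could
# instead correlate the distortion with the inputs.
theta = 0.1
y = clean_y + theta * np.sign(rng.normal(size=n))

# Tron-style stochastic update: no gradient of a loss w.r.t. w is
# computed; the label residual is fed back through the raw inputs.
w = np.zeros(d)
eta, batch = 0.5, 64
for t in range(3000):
    idx = rng.integers(0, n, size=batch)
    resid = y[idx] - np.maximum(X[idx] @ w, 0.0)
    w += eta * (X[idx].T @ resid) / batch

# Recovery error is bounded away from zero by the attack budget theta.
print("parameter error:", np.linalg.norm(w - w_star))
```

Because the distortion is bounded rather than vanishing, the iterates settle into a neighborhood of `w_star` whose radius scales with `theta`, which is the kind of attack-magnitude-versus-accuracy trade-off the abstract refers to.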
