Finding classifiers robust to adversarial examples is critical for their...
Deep neural networks are known to be vulnerable to adversarially perturb...
We focus on the use of proxy distributions, i.e., approximations of the...
Neural networks are vulnerable to input perturbations such as additive n...
Out-of-distribution (OoD) detection is a natural downstream task for dee...