Spectral Norm Regularization for Improving the Generalizability of Deep Learning

05/31/2017
by Yuichi Yoshida et al.

We investigate the generalizability of deep learning through the lens of sensitivity to input perturbation. We hypothesize that models that are highly sensitive to perturbations of their inputs generalize poorly to unseen data. To reduce this sensitivity, we propose a simple and effective regularization method, referred to as spectral norm regularization, which penalizes large spectral norms of the weight matrices in a neural network. We provide supporting evidence for this hypothesis by experimentally confirming that models trained with spectral norm regularization generalize better than models trained with baseline regularization methods.
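Concretely, the paper adds a penalty of (λ/2) Σ_l σ(W^l)² to the empirical loss, where σ(W) is the spectral norm (largest singular value) of weight matrix W and λ is a regularization coefficient, with σ estimated cheaply by power iteration. The following is a minimal PyTorch sketch of this idea, not the authors' reference implementation: the two-layer model, the value of `lam`, and the single power-iteration step per update are illustrative assumptions.

```python
# Minimal sketch of spectral norm regularization, assuming PyTorch.
# The architecture and the coefficient `lam` are illustrative, not from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

def spectral_norm(W, u, n_iters=1):
    """Approximate the largest singular value of a 2-D matrix W by power iteration.

    u is a persistent estimate of the left singular vector; reusing it across
    training steps makes one iteration per step sufficient in practice.
    """
    with torch.no_grad():
        for _ in range(n_iters):
            v = F.normalize(W.t() @ u, dim=0)  # right singular vector estimate
            u = F.normalize(W @ v, dim=0)      # left singular vector estimate
    # u and v are treated as constants here, so the gradient of sigma w.r.t. W
    # is u v^T, matching the paper's gradient approximation.
    sigma = u @ W @ v
    return sigma, u

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
lam = 0.01  # regularization coefficient (assumed value)

# One persistent power-iteration vector per weight matrix.
us = {name: torch.randn(p.shape[0]) for name, p in model.named_parameters()
      if p.ndim == 2}

def training_step(x, y):
    loss = F.cross_entropy(model(x), y)
    penalty = 0.0
    for name, p in model.named_parameters():
        if p.ndim == 2:  # penalize weight matrices, not bias vectors
            sigma, us[name] = spectral_norm(p, us[name])
            penalty = penalty + sigma ** 2
    total = loss + 0.5 * lam * penalty  # L + (lambda/2) * sum of sigma^2
    opt.zero_grad()
    total.backward()
    opt.step()
    return total.item()
```

Note that PyTorch's built-in torch.nn.utils.spectral_norm implements the related spectral normalization technique, which rescales each weight matrix by its spectral norm rather than adding a penalty to the loss; the two should not be confused. For convolutional layers, the kernel tensor can be reshaped into a matrix before computing σ, as the paper does.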

Related research

06/27/2022
Exact Spectral Norm Regularization for Neural Networks
We pursue a line of research that seeks to regularize the spectral norm ...

11/04/2022
Spectral Regularization: an Inductive Bias for Sequence Modeling
Various forms of regularization in learning tasks strive for different n...

09/27/2022
Why neural networks find simple solutions: the many regularizers of geometric complexity
In many contexts, simpler models are preferable to more complex models a...

12/27/2015
New Perspectives on k-Support and Cluster Norms
We study a regularizer which is defined as a parameterized infimum of qu...

03/13/2023
Domain Generalization via Nuclear Norm Regularization
The ability to generalize to unseen domains is crucial for machine learn...

12/01/2022
Generalizing and Improving Jacobian and Hessian Regularization
Jacobian and Hessian regularization aim to reduce the magnitude of the f...

07/15/2020
Fast Differentiable Clipping-Aware Normalization and Rescaling
Rescaling a vector δ⃗∈ℝ^n to a desired length is a common operation in m...
