Characterizing Implicit Bias in Terms of Optimization Geometry

02/22/2018
by Suriya Gunasekar et al.

We study the implicit bias of generic optimization methods, including Mirror Descent, Natural Gradient Descent, and Steepest Descent with respect to different potentials and norms, when optimizing underdetermined linear regression or separable linear classification problems. We ask whether the specific global minimum (among the many possible global minima) reached by an optimization algorithm can be characterized in terms of the potential or norm of the optimization geometry, independently of hyperparameter choices such as step size and momentum.
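As a concrete illustration of the phenomenon the abstract describes, the sketch below shows the simplest case: gradient descent, i.e. mirror descent with the squared Euclidean potential, on an underdetermined linear regression problem converges to the minimum-l2-norm interpolating solution, regardless of the step size (within the stable range). This is a minimal sketch under assumed dimensions and parameters, not the paper's code; all names and values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 10, 50                       # underdetermined: fewer equations than unknowns
A = rng.standard_normal((n, d))
y = rng.standard_normal(n)

def mirror_descent_l2(A, y, step=0.01, iters=5000):
    """Mirror descent with potential psi(w) = ||w||^2 / 2 (plain gradient descent)."""
    w = np.zeros(A.shape[1])        # start at the origin
    for _ in range(iters):
        grad = A.T @ (A @ w - y)    # gradient of 0.5 * ||A w - y||^2
        w -= step * grad
    return w

w_md = mirror_descent_l2(A, y)
w_min_norm = np.linalg.pinv(A) @ y  # minimum-l2-norm solution of A w = y

print("residual:", np.linalg.norm(A @ w_md - y))                 # ~ 0: a global minimum
print("gap to min-norm solution:", np.linalg.norm(w_md - w_min_norm))  # ~ 0: the l2-biased one
```

The reason is that every gradient lies in the row space of A, so the iterates never leave it, and the unique interpolating solution in that subspace is the minimum-norm one; changing the step size changes the path but not this limit. With a different potential or norm (e.g. entropic mirror descent, or steepest descent with respect to the l1 norm), one would expect a different global minimum to be selected, which is exactly the characterization question the paper studies.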

