Agnostic Learning of Halfspaces with Gradient Descent via Soft Margins
We analyze the properties of gradient descent on convex surrogates for the zero-one loss for the agnostic learning of linear halfspaces. If 𝖮𝖯𝖳 is the best classification error achieved by a halfspace, then by appealing to the notion of soft margins we are able to show that gradient descent finds halfspaces with classification error Õ(𝖮𝖯𝖳^{1/2}) + ε in poly(d, 1/ε) time and sample complexity for a broad class of distributions that includes log-concave isotropic distributions as a subclass. Along the way we answer a question recently posed by Ji et al. (2020) on how the tail behavior of a loss function can affect sample complexity and runtime guarantees for gradient descent.
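To make the setup concrete, here is a minimal sketch of gradient descent on a convex surrogate of the zero-one loss for learning a halfspace. The choice of the logistic surrogate, the step size, the iteration count, and the synthetic noisy data are illustrative assumptions, not the paper's exact procedure or analysis.

```python
import numpy as np

def gd_halfspace(X, y, lr=0.1, n_iters=1000):
    """Learn a halfspace sign(<w, x>) by gradient descent on the logistic surrogate.

    X : (n, d) array of examples; y : (n,) array of labels in {-1, +1}.
    The logistic loss is one convex surrogate of the zero-one loss; the paper
    studies how the tail behavior of such surrogates affects the guarantees.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iters):
        margins = y * (X @ w)  # y_i <w, x_i>
        # Gradient of (1/n) * sum_i log(1 + exp(-margins_i)) with respect to w.
        grad = -(X * (y / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)
        w -= lr * grad
    return w

def zero_one_error(w, X, y):
    """Classification (zero-one) error of the halfspace sign(<w, x>)."""
    return np.mean(np.sign(X @ w) != y)

if __name__ == "__main__":
    # Hypothetical example: isotropic Gaussian marginal (a log-concave isotropic
    # distribution) with a small fraction of flipped labels, so OPT is about 5%.
    rng = np.random.default_rng(0)
    d, n = 10, 5000
    w_star = rng.normal(size=d)
    w_star /= np.linalg.norm(w_star)
    X = rng.normal(size=(n, d))
    y = np.sign(X @ w_star)
    y[rng.random(n) < 0.05] *= -1
    w_hat = gd_halfspace(X, y)
    print("zero-one error of learned halfspace:", zero_one_error(w_hat, X, y))
```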