Improved Classification Rates for Localized SVMs

05/04/2019
by Ingrid Blaschzyk, et al.

One of the main characteristics of localized support vector machines, which solve SVMs on many spatially defined small chunks, is, besides the computational benefit compared to global SVMs, the freedom to choose arbitrary kernel and regularization parameters on each cell. We take advantage of this observation to derive global learning rates for localized SVMs with Gaussian kernels and hinge loss. Under certain assumptions, the rates we obtain outperform known classification rates for localized SVMs, for global SVMs, and for other learning algorithms based on, e.g., plug-in rules, trees, or DNNs. These rates are achieved under a set of margin conditions that describe the behavior of the data-generating distribution, without assuming the existence of a density. We observe that a crucial assumption is a margin condition that relates the distance to the decision boundary to the amount of noise. Our analysis relies on a careful decomposition of the excess risk, which involves splitting the input space into a subset that is close to the decision boundary and a subset that is sufficiently far away. Moreover, we show that our rates are obtained adaptively, that is, without knowing the parameters that result from the margin conditions.
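To make the setup concrete, below is a minimal sketch of a localized SVM in Python with scikit-learn. The k-means partition, the cell count, and the per-cell grid search over gamma and C are illustrative assumptions of this sketch, not the construction analyzed in the paper; the point is only that every cell trains its own hinge-loss SVM with a Gaussian (RBF) kernel and its own kernel width and regularization parameter.

```python
# Illustrative sketch of a localized SVM (not the paper's code): partition
# the input space into spatial cells, then fit an independent Gaussian-kernel
# SVM with its own gamma and C on each cell. Names such as n_cells and
# param_grid are hypothetical choices for this sketch.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC


class LocalizedSVM:
    def __init__(self, n_cells=5, random_state=0):
        self.n_cells = n_cells
        self.random_state = random_state
        # Per-cell hyperparameter grid; cross-validation selects gamma and C
        # separately on every cell, which is the per-cell freedom exploited above.
        self.param_grid = {"gamma": np.logspace(-2, 2, 5),
                           "C": np.logspace(-1, 3, 5)}

    def fit(self, X, y):
        # Spatially defined chunks: each point belongs to its nearest center.
        self.partition_ = KMeans(n_clusters=self.n_cells,
                                 random_state=self.random_state).fit(X)
        self.models_ = {}
        for cell in range(self.n_cells):
            mask = self.partition_.labels_ == cell
            Xc, yc = X[mask], y[mask]
            if len(np.unique(yc)) < 2:
                # Degenerate cell with one class: store the constant label.
                self.models_[cell] = int(yc[0])
                continue
            # Hinge-loss SVM with Gaussian kernel, tuned on this cell only.
            search = GridSearchCV(SVC(kernel="rbf"), self.param_grid, cv=3)
            self.models_[cell] = search.fit(Xc, yc).best_estimator_
        return self

    def predict(self, X):
        cells = self.partition_.predict(X)
        y_pred = np.empty(len(X), dtype=int)
        for cell in range(self.n_cells):
            mask = cells == cell
            if not mask.any():
                continue
            model = self.models_[cell]
            y_pred[mask] = model if isinstance(model, int) else model.predict(X[mask])
        return y_pred
```

Since scikit-learn's SVC solves the standard hinge-loss SVM, tuning gamma and C independently per cell mirrors the per-cell choice of kernel and regularization parameters described in the abstract, while the union of the local decision rules yields the global classifier.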
