SVM via Saddle Point Optimization: New Bounds and Distributed Algorithms

05/20/2017
by Yifei Jin, et al.

The Support Vector Machine (SVM) is one of the most classical approaches to classification and regression. Despite being studied for decades, obtaining practical algorithms for SVM remains an active research problem in machine learning. In this paper, we propose a new perspective on SVM via saddle point optimization. We provide an algorithm that achieves a (1-ϵ)-approximation with running time Õ(nd + n√(d/ϵ)) for both the separable case (hard-margin SVM) and the non-separable case (ν-SVM), where n is the number of points and d is the dimensionality. To the best of our knowledge, the previous best algorithm for hard-margin SVM, the Gilbert algorithm (Gärtner and Jaggi, 2009), requires O(nd/ϵ) time; our algorithm improves the running time by a factor of √(d)/√(ϵ). For ν-SVM, apart from the well-known quadratic programming approach, which requires Ω(n^2 d) time (Joachims, 1998; Platt, 1999), no faster algorithm was known; we give the first nearly linear time algorithm for ν-SVM. We also consider distributed settings and provide distributed algorithms with low communication cost via saddle point optimization. Our algorithms require Õ(k(d + √(d/ϵ))) communication, where k is the number of clients, almost matching the theoretical lower bound.
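For context, a minimal sketch of the standard bilinear saddle point view of hard-margin SVM that this kind of approach builds on (our notation; the paper's exact formulation may differ) is

\[
\max_{\|w\|_2 \le 1} \; \min_{p \in \Delta_n} \; \sum_{i=1}^{n} p_i \, y_i \langle a_i, w \rangle ,
\]

where a_1, …, a_n ∈ ℝ^d are the data points, y_i ∈ {−1, +1} their labels, and Δ_n the probability simplex. For separable data the optimal value equals the margin, since the inner minimum over the simplex is attained at the worst-case point, and bilinear saddle point problems over a ball and a simplex are amenable to first-order primal-dual methods (e.g., mirror prox style updates), the general kind of machinery behind running-time bounds of this flavor.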
