Iterative Refinement for ℓ_p-norm Regression

01/21/2019
by Deeksha Adil, et al.

We give improved algorithms for the ℓ_p-regression problem, min_x ‖x‖_p such that A x = b, for all p ∈ (1,2) ∪ (2,∞). Our algorithms obtain a high-accuracy solution in Õ_p(m^{|p-2|/(2p+|p-2|)}) ≤ Õ_p(m^{1/3}) iterations, where each iteration requires solving an m × m linear system, m being the dimension of the ambient space.

By maintaining an approximate inverse of the linear systems that we solve in each iteration, we give algorithms for solving ℓ_p-regression to 1/poly(n) accuracy that run in time Õ_p(m^{max(ω, 7/3)}), where ω is the matrix multiplication constant. For the current best value of ω ≈ 2.37, we can thus solve ℓ_p-regression as fast as ℓ_2-regression, for all constant p bounded away from 1. Our algorithms can be combined with fast graph Laplacian linear-equation solvers to give minimum ℓ_p-norm flow / voltage solutions to 1/poly(n) accuracy on an undirected graph with m edges in Õ_p(m^{1 + |p-2|/(2p+|p-2|)}) ≤ Õ_p(m^{4/3}) time. For sparse graphs and for matrices of similar dimensions, our iteration counts and running times improve on the p-norm regression algorithm of [Bubeck-Cohen-Lee-Li, STOC '18] and on general-purpose convex optimization algorithms.

At the core of our algorithms is an iterative refinement scheme for ℓ_p-norms, using the smoothed ℓ_p-norms introduced in the work of Bubeck et al. Given an initial solution, we construct a problem that seeks to minimize a quadratically smoothed ℓ_p-norm over a subspace, such that a crude solution to this problem allows us to improve the initial solution by a constant factor, leading to algorithms with fast convergence.
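The key structural feature above is that each iteration reduces to solving a single linear system. As a rough illustration of that structure only, the sketch below uses classical iteratively reweighted least squares (IRLS) for ℓ_p-regression, where each step solves one weighted least-squares system; it is not the paper's smoothed-norm refinement scheme, and the function name, damping, and parameters are illustrative assumptions.

```python
import numpy as np

def lp_regression_irls(A, b, p=2.5, iters=50, eps=1e-8):
    """Crude IRLS sketch for min_x ||A x - b||_p (illustrative only).

    Each iteration solves one weighted least-squares linear system,
    mirroring the one-linear-system-per-iteration structure, but
    without the paper's iterative-refinement guarantees. Plain IRLS
    is only known to converge for p roughly in (1, 3).
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # l_2 warm start
    for _ in range(iters):
        r = A @ x - b
        w = np.abs(r) ** (p - 2) + eps         # IRLS weights |r|^{p-2}
        AW = A * w[:, None]                    # rows of A scaled by w
        # Normal equations of the weighted problem: A^T W A x = A^T W b
        x = np.linalg.solve(A.T @ AW, AW.T @ b)
    return x
```

In contrast, the paper's scheme solves a quadratically smoothed ℓ_p-norm problem at each step, which guarantees a constant-factor improvement per (crudely solved) iteration and hence the fast convergence stated above.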
