A Richer Theory of Convex Constrained Optimization with Reduced Projections and Improved Rates

by Tianbao Yang, et al.

This paper focuses on convex constrained optimization problems in which the solution is subject to a convex inequality constraint. In particular, we aim at challenging problems for which both projection onto the constrained domain and linear optimization under the inequality constraint are time-consuming, rendering both projected gradient methods and conditional gradient methods (a.k.a. the Frank-Wolfe algorithm) expensive. We develop projection-reduced optimization algorithms for both smooth and non-smooth optimization with improved convergence rates under a certain regularity condition on the constraint function. We first present a general theory of optimization with only one projection. Applied to smooth optimization, it yields O(1/ϵ) iteration complexity with only one projection, which improves over the O(1/ϵ^2) iteration complexity previously established for non-smooth optimization with one projection and can be further reduced under strong convexity. We then introduce a local error bound condition and develop faster algorithms for non-strongly convex optimization at the price of a logarithmic number of projections. In particular, we achieve an iteration complexity of O(1/ϵ^{2(1-θ)}) for non-smooth optimization and O(1/ϵ^{1-θ}) for smooth optimization, where θ∈(0,1], appearing in the local error bound condition, characterizes the local growth rate of the objective function around the optimal solutions. Novel applications to the constrained ℓ_1 minimization problem and a positive semi-definite constrained distance metric learning problem demonstrate that the proposed algorithms achieve significant speed-up compared with previous algorithms.
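The one-projection idea can be illustrated with a minimal sketch, not the paper's exact algorithm: the penalty parameter, step size, and toy problem below are assumptions chosen for illustration only. The sketch runs subgradient steps on a penalized objective in the full space and calls the (potentially expensive) projection exactly once, on the final iterate.

```python
import numpy as np

# Hypothetical sketch of a "one projection" scheme (illustrative, not the
# paper's algorithm): take subgradient steps on f plus a penalty on the
# constraint violation c(x) > 0, and project only once at the very end.
def one_projection_subgradient(grad_f, grad_c, c, project,
                               x0, step=0.01, penalty=10.0, iters=1000):
    x = x0
    for _ in range(iters):
        g = grad_f(x)
        if c(x) > 0:                  # infeasible: add a penalty subgradient
            g = g + penalty * grad_c(x)
        x = x - step * g              # unconstrained subgradient step
    return project(x)                 # the single projection

# Toy instance: minimize ||x - b||^2 subject to ||x||_2 <= 1.  Here the
# projection is cheap; it stands in for an expensive one (e.g. onto the
# PSD cone in distance metric learning).
b = np.array([2.0, 0.0])
grad_f = lambda x: 2.0 * (x - b)
c = lambda x: np.linalg.norm(x) - 1.0
grad_c = lambda x: x / (np.linalg.norm(x) + 1e-12)
project = lambda x: x / max(1.0, np.linalg.norm(x))

x_star = one_projection_subgradient(grad_f, grad_c, c, project, np.zeros(2))
# x_star approaches the constrained minimizer b/||b|| = [1, 0]
```

The iterates oscillate in a small neighborhood of the constraint boundary (the oscillation width scales with the step size), so the final projection lands close to the true constrained minimizer without any per-iteration projections.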


