Zeroth-Order Regularized Optimization (ZORO): Approximately Sparse Gradients and Adaptive Sampling

03/29/2020
by HanQin Cai, et al.

We consider the problem of minimizing a high-dimensional objective function, which may include a regularization term, using only (possibly noisy) evaluations of the function. Such optimization is also called derivative-free, zeroth-order, or black-box optimization. We propose a new Zeroth-Order Regularized Optimization method, dubbed ZORO. When the underlying gradient is approximately sparse at an iterate, ZORO needs very few objective function evaluations to obtain a new iterate that decreases the objective. We achieve this with an adaptive, randomized gradient estimator, followed by an inexact proximal-gradient scheme. Under a novel approximately-sparse-gradient assumption and in various convex settings, we show that the convergence rate of ZORO, both theoretical and empirical, depends only logarithmically on the problem dimension. Numerical experiments show that ZORO outperforms existing methods that make similar assumptions, on both synthetic and real datasets.
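The core idea of the gradient estimator can be sketched as follows: finite differences of f along random Rademacher directions are (noisy) linear measurements of the gradient, so an approximately sparse gradient can be reconstructed from far fewer measurements than the ambient dimension via compressed sensing. The sketch below is illustrative only and makes several substitutions: the paper's estimator uses CoSaMP for sparse recovery, while here we use plain iterative hard thresholding with a least-squares debias step; the function name `zoro_grad_estimate` and all parameter defaults are our own.

```python
import numpy as np

def zoro_grad_estimate(f, x, s, m, delta=1e-4, iht_iters=50, rng=None):
    """Estimate an approximately s-sparse gradient of f at x from m function
    evaluations (plus one at x), in the spirit of ZORO's estimator.
    Illustrative sketch: the paper uses CoSaMP; here, iterative hard
    thresholding (IHT) followed by a least-squares debias step."""
    rng = np.random.default_rng() if rng is None else rng
    d = x.size
    Z = rng.choice([-1.0, 1.0], size=(m, d))  # random Rademacher directions
    fx = f(x)
    # Each y_i = (f(x + delta*z_i) - f(x)) / delta  ~  z_i . grad f(x),
    # i.e. a noisy linear measurement of the gradient.
    y = np.array([(f(x + delta * z) - fx) / delta for z in Z])
    # IHT: g <- H_s(g + (1/m) Z^T (y - Z g)), where H_s keeps the s
    # largest-magnitude entries (E[Z^T Z] = m I motivates the 1/m step).
    g = np.zeros(d)
    for _ in range(iht_iters):
        step = g + Z.T @ (y - Z @ g) / m
        keep = np.argpartition(np.abs(step), -s)[-s:]
        g = np.zeros(d)
        g[keep] = step[keep]
    # Debias: refit the surviving entries by least squares on the support.
    support = np.flatnonzero(g)
    if support.size:
        g[support] = np.linalg.lstsq(Z[:, support], y, rcond=None)[0]
    return g
```

For instance, with a 200-dimensional objective whose gradient has only two nonzero entries, m = 80 evaluations typically suffice to recover both the support and the values, which is the regime where the abstract's logarithmic dimension dependence pays off. A full ZORO step would then feed this estimate into an inexact proximal-gradient update.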

Related research

12/21/2020 · Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box Optimization Framework
In this work, we focus on the study of stochastic zeroth-order (ZO) opti...

06/02/2020 · Sparse Perturbations for Improved Convergence in Stochastic Zeroth-Order Optimization
Interest in stochastic zeroth-order (SZO) methods has recently been revi...

02/08/2023 · Adaptive State-Dependent Diffusion for Derivative-Free Optimization
This paper develops and analyzes a stochastic derivative-free optimizati...

10/11/2022 · Zeroth-Order Hard-Thresholding: Gradient Error vs. Expansivity
ℓ_0 constrained optimization is prevalent in machine learning, particula...

07/19/2021 · High-Dimensional Simulation Optimization via Brownian Fields and Sparse Grids
High-dimensional simulation optimization is notoriously challenging. We ...

11/14/2019 · Gradientless Descent: High-Dimensional Zeroth-Order Optimization
Zeroth-order optimization is the process of minimizing an objective f(x)...

10/29/2017 · Stochastic Zeroth-order Optimization in High Dimensions
We consider the problem of optimizing a high-dimensional convex function...
