Communication Efficient Distributed Optimization using an Approximate Newton-type Method

12/30/2013
by Ohad Shamir, et al.

We present a novel Newton-type method for distributed optimization, which is particularly well suited for stochastic optimization and learning problems. For quadratic objectives, the method enjoys a linear rate of convergence which provably improves with the data size, requiring an essentially constant number of iterations under reasonable assumptions. We provide theoretical and empirical evidence of the advantages of our method compared to other approaches, such as one-shot parameter averaging and ADMM.
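The abstract describes the method only at a high level: each machine takes a locally preconditioned, Newton-like step using the globally averaged gradient, and the resulting iterates are averaged. As a rough illustration of that kind of scheme, here is a minimal single-process sketch for the quadratic case. The function name dane_quadratic, the step size eta, and the regularizer mu are illustrative assumptions, not the paper's implementation.

```python
# A minimal, single-process simulation of a distributed approximate-Newton
# iteration on a quadratic objective. Names (eta, mu, dane_quadratic) are
# illustrative assumptions, not code from the paper.
import numpy as np

def dane_quadratic(A_list, b_list, eta=1.0, mu=0.0, iters=20):
    """Iterate on phi(w) = (1/m) * sum_i [ 0.5 * w^T A_i w - b_i^T w ]."""
    m = len(A_list)
    d = A_list[0].shape[0]
    w = np.zeros(d)
    for _ in range(iters):
        # Communication round 1: average the local gradients.
        grad = sum(A @ w - b for A, b in zip(A_list, b_list)) / m
        # Each machine solves its local subproblem; for quadratics this
        # reduces to a locally preconditioned step
        #   w - eta * (A_i + mu * I)^{-1} * grad.
        local_steps = [w - eta * np.linalg.solve(A + mu * np.eye(d), grad)
                       for A in A_list]
        # Communication round 2: average the local solutions.
        w = sum(local_steps) / m
    return w

# Tiny usage example with synthetic data split across 4 "machines".
rng = np.random.default_rng(0)
X = rng.standard_normal((4 * 200, 5))
y = X @ np.ones(5) + 0.01 * rng.standard_normal(4 * 200)
A_list = [Xi.T @ Xi / len(Xi) for Xi in np.split(X, 4)]
b_list = [Xi.T @ yi / len(yi) for Xi, yi in zip(np.split(X, 4), np.split(y, 4))]
w_hat = dane_quadratic(A_list, b_list, eta=1.0, mu=0.1)
```

In this quadratic setting the averaged update uses each machine's local curvature as a preconditioner for the exact global gradient, which is consistent with the abstract's claim that convergence improves as the local datasets (and hence the local curvature estimates) become more similar to the global one.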

Related research

06/21/2017 · Improved Optimization of Finite Sums with Minibatch Stochastic Variance Reduced Proximal Iterations
We present novel minibatch stochastic optimization methods for empirical...

01/16/2019 · DINGO: Distributed Newton-Type Method for Gradient-Norm Optimization
For optimization of a sum of functions in a distributed computing enviro...

07/13/2022 · Communication-efficient Distributed Newton-like Optimization with Gradients and M-estimators
In modern data science, it is common that large-scale data are stored an...

12/03/2021 · Regularized Newton Method with Global O(1/k^2) Convergence
We present a Newton-type method that converges fast from any initializat...

09/11/2017 · GIANT: Globally Improved Approximate Newton Method for Distributed Optimization
For distributed computing environments, we consider the canonical machin...

03/22/2017 · Weight Design of Distributed Approximate Newton Algorithms for Constrained Optimization
Motivated by economic dispatch and linearly-constrained resource allocat...

11/28/2022 · Stochastic Steffensen method
Is it possible for a first-order method, i.e., only first derivatives al...
