Gradient-Consensus Method for Distributed Optimization in Directed Multi-Agent Networks

09/22/2019
by Vivek Khatana, et al.

In this article, a distributed optimization problem for minimizing a sum, ∑_{i=1}^n f_i, of convex objective functions f_i is addressed. Here each function f_i is a function of n variables, is private to agent i, and defines that agent's objective. Agents can communicate only locally, with neighbors determined by the communication network topology. Each f_i is assumed to be a convex function with a Lipschitz-continuous gradient. To solve this optimization problem, we develop a novel distributed algorithm, which we term the gradient-consensus method. The gradient-consensus scheme uses a finite-time terminated consensus protocol called ρ-consensus, which guarantees that the local estimates are ρ-close to one another at every iteration. The parameter ρ is a fixed constant that can be chosen independently of the network size and topology. It is shown that the estimate of the optimal solution at each local agent i converges geometrically to within O(ρ) of the optimal solution, where ρ can be chosen to be arbitrarily small.
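To illustrate the flavor of the scheme, the following is a minimal sketch (not the paper's exact protocol or analysis): each agent takes a gradient step on its own objective and then runs repeated averaging rounds until all local estimates are within ρ of each other, standing in for the ρ-consensus step. The quadratic objectives f_i(x) = 0.5(x - a_i)^2, the weight matrix W, the step size alpha, and the max-min spread test are illustrative assumptions, not taken from the abstract.

```python
import numpy as np

def rho_consensus(x, W, rho, max_rounds=1000):
    """Averaging rounds until all local estimates are within rho (finite-time termination)."""
    for _ in range(max_rounds):
        if x.max() - x.min() <= rho:   # rho-closeness check (illustrative test)
            break
        x = W @ x                      # one local-communication consensus round
    return x

def gradient_consensus(a, W, rho=1e-3, alpha=0.1, iters=200):
    """Sketch: minimize sum_i 0.5*(x - a_i)^2 with agent-local gradients plus rho-consensus."""
    x = np.zeros(len(a))               # local estimates x_i, one per agent
    for _ in range(iters):
        grad = x - a                    # each agent's local gradient f_i'(x_i)
        x = x - alpha * grad            # local gradient step
        x = rho_consensus(x, W, rho)    # bring estimates rho-close before the next step
    return x

if __name__ == "__main__":
    # 4 agents on a directed ring with self-loops; W is doubly stochastic,
    # so the consensus rounds preserve the network-wide average.
    W = np.array([[0.5, 0.5, 0.0, 0.0],
                  [0.0, 0.5, 0.5, 0.0],
                  [0.0, 0.0, 0.5, 0.5],
                  [0.5, 0.0, 0.0, 0.5]])
    a = np.array([1.0, 2.0, 3.0, 4.0])  # optimum of the summed objective is mean(a) = 2.5
    print(gradient_consensus(a, W))     # every local estimate ends up near 2.5, within O(rho)
```

In this toy setting, the consensus rounds keep the agents' estimates within ρ of their average while the gradient steps drive that average toward the minimizer of the sum, which mirrors the geometric convergence to an O(ρ) neighborhood described in the abstract.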
