Anytime Minibatch with Delayed Gradients
Distributed optimization is widely deployed in practice to solve a broad range of problems. In a typical asynchronous scheme, workers calculate gradients with respect to out-of-date optimization parameters while the master uses stale (i.e., delayed) gradients to update the parameters. While using stale gradients can slow convergence, asynchronous methods speed up the overall optimization in wall clock time by allowing more frequent updates and reducing idling times. In this paper, we present a variable per-epoch minibatch scheme called Anytime Minibatch with Delayed Gradients (AMB-DG). In AMB-DG, workers compute gradients in epochs of fixed duration while the master uses stale gradients to update the optimization parameters. We analyze AMB-DG in terms of its regret bound and convergence rate. We prove that for convex smooth objective functions, AMB-DG achieves the optimal regret bound and convergence rate. We compare the performance of AMB-DG with that of Anytime Minibatch (AMB), which is similar to AMB-DG but does not use stale gradients. In AMB, workers stay idle after each gradient transmission until they receive the updated parameters from the master, while in AMB-DG workers never idle. We also extend AMB-DG to the fully distributed setting. We compare AMB-DG with AMB when the communication delay is long and observe that AMB-DG converges faster than AMB in wall clock time. We also compare the performance of AMB-DG with a state-of-the-art fixed minibatch approach that uses delayed gradients. We run our experiments on a real distributed system and observe that AMB-DG converges more than two times faster.
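To illustrate the update pattern described above, the following is a minimal single-process sketch, not the authors' implementation: it assumes a least-squares objective, models the variable per-epoch minibatch size by drawing a random batch size each round (standing in for however many samples a worker finishes within a fixed-time epoch), and models delayed gradients with a fixed staleness of `delay` rounds. The step size, staleness, and data are all hypothetical choices for illustration.

```python
import numpy as np

# Hypothetical least-squares problem standing in for the true objective.
rng = np.random.default_rng(0)
d = 10
A, b = rng.standard_normal((1000, d)), rng.standard_normal(1000)

w = np.zeros(d)          # optimization parameters held by the master
lr, delay = 0.01, 2      # illustrative step size and gradient staleness (in rounds)

def minibatch_grad(w, batch):
    """Gradient of 0.5*||A_batch w - b_batch||^2 averaged over the minibatch."""
    Ab, bb = A[batch], b[batch]
    return Ab.T @ (Ab @ w - bb) / len(batch)

pending = []             # gradients in flight, each computed against an older w
for t in range(200):
    # Workers compute for a fixed time, so the minibatch size they finish
    # varies per epoch; here we simply draw a random size to mimic that.
    batch_size = rng.integers(20, 60)
    batch = rng.integers(0, len(b), size=batch_size)
    pending.append(minibatch_grad(w, batch))

    # The master applies a gradient computed `delay` rounds earlier, i.e.,
    # with respect to out-of-date parameters (a delayed/stale gradient).
    if t >= delay:
        w -= lr * pending[t - delay]

print("final objective:", 0.5 * np.mean((A @ w - b) ** 2))
```

Because gradients are appended as soon as they are computed and applied only `delay` rounds later, the workers in this sketch never wait for updated parameters, which is the idle-free behavior that distinguishes AMB-DG from AMB.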