Tracking Performance of Online Stochastic Learners

04/04/2020
by Stefan Vlaski, et al.

Online stochastic algorithms are popular in large-scale learning settings due to their ability to compute updates on the fly, without the need to store and process data in large batches. When a constant step-size is used, these algorithms also have the ability to adapt to drifts in problem parameters, such as data or model properties, and track the optimal solution with reasonable accuracy. Building on analogies with the study of adaptive filters, we establish a link between steady-state performance derived under stationarity assumptions and the tracking performance of online learners under random walk models. The link allows us to infer the tracking performance from steady-state expressions directly and almost by inspection.
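To illustrate the setting described in the abstract, the sketch below simulates a constant step-size stochastic-gradient (LMS-type) learner tracking a parameter vector that drifts according to a random walk. This is a minimal, assumed setup for illustration only; the variable names (mu, sigma_q, w_opt) and the specific noise levels are hypothetical choices and not taken from the paper's analysis.

```python
import numpy as np

# Illustrative sketch (not the paper's exact setup): a constant step-size
# LMS-type learner tracking a parameter vector w_opt that drifts according
# to a random walk w_opt(t+1) = w_opt(t) + q(t), with q(t) ~ N(0, sigma_q^2 I).

rng = np.random.default_rng(0)
d = 10            # parameter dimension
mu = 0.05         # constant step-size (what enables tracking under drift)
sigma_v = 0.1     # observation noise standard deviation
sigma_q = 0.01    # random-walk drift standard deviation
T = 5000          # number of streaming samples

w_opt = rng.standard_normal(d)   # drifting "true" model
w = np.zeros(d)                  # online estimate
msd = np.zeros(T)                # squared deviation ||w - w_opt||^2 per step

for t in range(T):
    # Streaming data: regressor u(t) and noisy observation y(t) = u(t)^T w_opt + v(t)
    u = rng.standard_normal(d)
    y = u @ w_opt + sigma_v * rng.standard_normal()

    # Stochastic-gradient (LMS) update with constant step-size
    w += mu * u * (y - u @ w)

    # Record tracking error, then let the optimum drift (random walk model)
    msd[t] = np.sum((w - w_opt) ** 2)
    w_opt += sigma_q * rng.standard_normal(d)

# Empirical steady-state tracking error, averaged over the second half of the run
print("Empirical steady-state MSD:", msd[T // 2:].mean())
```

Running the sketch shows the estimate settling into a nonzero steady-state error determined jointly by the step-size, the gradient noise, and the drift magnitude, which is the kind of tracking behavior the paper quantifies analytically.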


Related research

04/18/2018 · A Communication-Efficient Random-Walk Algorithm for Decentralized Optimization
This paper addresses consensus optimization problem in a multi-agent net...

04/06/2020 · Adaptive Social Learning
This work proposes a novel strategy for social learning by introducing t...

02/03/2019 · Finite-Time Error Bounds For Linear Stochastic Approximation and TD Learning
We consider the dynamics of a linear stochastic approximation algorithm ...

10/05/2015 · On the Online Frank-Wolfe Algorithms for Convex and Non-convex Optimizations
In this paper, the online variants of the classical Frank-Wolfe algorith...

03/04/2020 · Adaptation in Online Social Learning
This work studies social learning under non-stationary conditions. Altho...

01/23/2019 · Analysis of the (μ/μ_I,λ)-CSA-ES with Repair by Projection Applied to a Conically Constrained Problem
Theoretical analyses of evolution strategies are indispensable for gaini...
