Gradient Temporal Difference with Momentum: Stability and Convergence

11/22/2021
by Rohan Deb, et al.

Gradient temporal difference (Gradient TD) algorithms are a popular class of stochastic approximation (SA) algorithms used for policy evaluation in reinforcement learning. Here, we consider Gradient TD algorithms augmented with a heavy ball momentum term and provide choices of step size and momentum parameter that ensure almost sure asymptotic convergence of these algorithms. In doing so, we decompose the heavy ball Gradient TD iterates into three separate iterates with different step sizes. We first analyze these iterates in the one-timescale SA setting using results from the current literature. However, the one-timescale case is restrictive, and a more general analysis can be obtained by viewing the iterates through a three-timescale decomposition. In the process, we provide the first conditions for the stability and convergence of general three-timescale SA. We then prove that the heavy ball Gradient TD algorithm is convergent using our three-timescale SA analysis. Finally, we evaluate these algorithms on standard RL problems and report improvements in performance over the vanilla algorithms.
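To make the idea concrete, below is a minimal illustrative sketch of a Gradient TD method (GTD2 with linear features) augmented with a heavy ball momentum term, run on a simple 5-state random-walk policy-evaluation task. The environment, the specific step sizes `alpha`, `beta`, and the momentum parameter `mu` are assumptions chosen for illustration only; the paper's contribution is precisely the conditions on such parameters that guarantee almost sure convergence.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 5                      # non-terminal states of the random walk
gamma = 1.0
phi = np.eye(n)            # one-hot features (tabular special case)

# Hypothetical parameter choices for illustration only; the paper derives
# the step-size / momentum conditions under which convergence holds.
alpha, beta, mu = 0.05, 0.1, 0.3

theta = np.zeros(n)        # value-function weights
theta_prev = theta.copy()  # previous iterate, needed for the momentum term
w = np.zeros(n)            # auxiliary GTD2 weights

for episode in range(5000):
    s = 2                  # start in the middle state
    while True:
        # random policy: step left or right with equal probability
        s_next = s + (1 if rng.random() < 0.5 else -1)
        terminal = s_next < 0 or s_next >= n
        r = 1.0 if s_next >= n else 0.0
        f = phi[s]
        f_next = np.zeros(n) if terminal else phi[s_next]

        # GTD2 updates, with a heavy ball term mu * (theta_t - theta_{t-1})
        delta = r + gamma * f_next @ theta - f @ theta
        w += beta * (delta - f @ w) * f
        theta_new = (theta
                     + alpha * (f - gamma * f_next) * (f @ w)
                     + mu * (theta - theta_prev))
        theta_prev, theta = theta, theta_new

        if terminal:
            break
        s = s_next

true_v = np.arange(1, n + 1) / (n + 1)   # closed-form values 1/6, ..., 5/6
print("max abs error:", np.max(np.abs(theta - true_v)))
```

With one-hot features the fixed point is the true value function, so the learned weights should approach 1/6, ..., 5/6; note that the update keeps three coupled iterates (`w`, `theta`, and the implicit difference `theta - theta_prev`), which is exactly the decomposition the three-timescale analysis formalizes.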


Related research

- 10/08/2021: Heavy Ball Momentum for Conditional Gradient
- 10/29/2021: Does Momentum Help? A Sample Complexity Analysis
- 11/05/2018: Non-ergodic Convergence Analysis of Heavy-Ball Algorithms
- 06/07/2021: Correcting Momentum in Temporal Difference Learning
- 11/11/2022: Online Signal Recovery via Heavy Ball Kaczmarz
- 06/14/2020: On the convergence of the Stochastic Heavy Ball Method
- 04/07/2015: From Averaging to Acceleration, There is Only a Step-size
