Learning and Planning in Average-Reward Markov Decision Processes

06/29/2020
by Yi Wan, et al.

We introduce improved learning and planning algorithms for average-reward MDPs, including 1) the first general proven-convergent off-policy model-free control algorithm without reference states, 2) the first proven-convergent off-policy model-free prediction algorithm, and 3) the first learning algorithms that converge to the actual value function rather than to the value function plus an offset. All of our algorithms are based on using the temporal-difference error rather than the conventional error when updating the estimate of the average reward. Our proof techniques are based on those of Abounadi, Bertsekas, and Borkar (2001). Empirically, we show that the use of the temporal-difference error generally results in faster learning, and that reliance on a reference state generally results in slower learning and risks divergence. All of our learning algorithms are fully online, and all of our planning algorithms are fully incremental.
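The paper's central idea, updating the average-reward estimate with the temporal-difference error rather than with the conventional error, can be sketched in tabular form. The snippet below is a minimal illustration of such a differential Q-learning update on a toy two-state MDP; the environment, step sizes, and variable names are illustrative assumptions, not taken from the paper:

```python
import random

random.seed(0)

# Toy two-state MDP (an assumption for illustration):
# action 0 = "move" around a ring (reward 1 when leaving state 0,
# reward 0 when leaving state 1); action 1 = "stay" (reward 0.2).
# Always moving earns average reward 0.5; always staying earns 0.2,
# so the optimal average reward is 0.5.
def step(s, a):
    if a == 0:                       # move
        return 1 - s, (1.0 if s == 0 else 0.0)
    return s, 0.2                    # stay

Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
r_bar = 0.0                          # average-reward estimate
alpha, eta = 0.1, 0.5                # step sizes (illustrative values)

s = 0
for _ in range(20000):
    a = random.randrange(2)          # uniformly random behavior policy (off-policy)
    s2, r = step(s, a)
    # TD error in the differential (average-reward) setting
    delta = r - r_bar + max(Q[(s2, b)] for b in (0, 1)) - Q[(s, a)]
    Q[(s, a)] += alpha * delta
    r_bar += eta * alpha * delta     # update r_bar with the TD error itself
    s = s2

print(round(r_bar, 2))               # should approach the optimal average reward, 0.5
```

The line `r_bar += eta * alpha * delta` is the point of contrast with the conventional update `r_bar += eta * alpha * (r - r_bar)`; the abstract reports that the TD-error form generally learns faster and, unlike reference-state methods, needs no designated reference state.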


Related research

06/28/2023  Sharper Model-free Reinforcement Learning for Average-reward Markov Decision Processes
10/11/2022  Factors of Influence of the Overestimation Bias of Q-Learning
02/08/2020  Provably Efficient Adaptive Approximate Policy Iteration
01/08/2021  Average-Reward Off-Policy Policy Evaluation with Function Approximation
08/24/2023  Intentionally-underestimated Value Function at Terminal State for Temporal-difference Learning with Mis-designed Reward
12/28/2018  Differential Temporal Difference Learning
10/27/2020  γ-Models: Generative Temporal Difference Learning for Infinite-Horizon Prediction
