Maxmin Q-learning: Controlling the Estimation Bias of Q-learning

02/16/2020
by Qingfeng Lan, et al.

Q-learning suffers from overestimation bias, because it approximates the maximum action value using the maximum estimated action value. Algorithms have been proposed to reduce overestimation bias, but we lack an understanding of how bias interacts with performance, and the extent to which existing algorithms mitigate bias. In this paper, we 1) highlight that the effect of overestimation bias on learning efficiency is environment-dependent; 2) propose a generalization of Q-learning, called Maxmin Q-learning, which provides a parameter to flexibly control bias; 3) show theoretically that there exists a parameter choice for Maxmin Q-learning that leads to unbiased estimation with a lower approximation variance than Q-learning; and 4) prove the convergence of our algorithm in the tabular case, as well as convergence of several previous Q-learning variants, using a novel Generalized Q-learning framework. We empirically verify that our algorithm better controls estimation bias in toy environments, and that it achieves superior performance on several benchmark problems.
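The abstract describes Maxmin Q-learning at a high level: keep N action-value estimates, bootstrap from the elementwise minimum over them, and tune N to trade off over- and underestimation. A minimal tabular sketch of one such update, based only on that description (the function name, table shapes, and hyperparameters here are illustrative, not the paper's reference implementation):

```python
import numpy as np

def maxmin_q_update(Qs, s, a, r, s_next, alpha=0.1, gamma=0.99, rng=None):
    """One tabular Maxmin Q-learning step (sketch based on the abstract).

    Qs: list of N action-value tables, each of shape (n_states, n_actions).
    The target bootstraps from the elementwise minimum over the N estimates,
    which is the mechanism that controls overestimation bias; increasing N
    pushes the estimate lower.
    """
    if rng is None:
        rng = np.random.default_rng()
    q_min = np.min(np.stack(Qs), axis=0)        # min over the N estimates
    target = r + gamma * np.max(q_min[s_next])  # greedy action under Q_min
    i = rng.integers(len(Qs))                   # update one randomly chosen estimate
    Qs[i][s, a] += alpha * (target - Qs[i][s, a])
    return Qs
```

With N = 1 this reduces to ordinary Q-learning, which is the sense in which the paper calls it a generalization.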

Related research

- Action Candidate Based Clipped Double Q-learning for Discrete and Continuous Action Tasks (05/03/2021). Double Q-learning is a popular reinforcement learning algorithm in Marko...
- Ensemble Bootstrapping for Q-Learning (02/28/2021). Q-learning (QL), a common reinforcement learning algorithm, suffers from...
- On the Estimation Bias in Double Q-Learning (09/29/2021). Double Q-learning is a classical method for reducing overestimation bias...
- Action Candidate Driven Clipped Double Q-learning for Discrete and Continuous Action Tasks (03/22/2022). Double Q-learning is a popular reinforcement learning algorithm in Marko...
- Adaptive Ensemble Q-learning: Minimizing Estimation Bias via Error Feedback (06/20/2023). The ensemble method is a promising way to mitigate the overestimation is...
- Bias-Corrected Peaks-Over-Threshold Estimation of the CVaR (03/08/2021). The conditional value-at-risk (CVaR) is a useful risk measure in fields ...
- Reducing Variance in Temporal-Difference Value Estimation via Ensemble of Deep Networks (09/16/2022). In temporal-difference reinforcement learning algorithms, variance in va...
