The Effect of Q-function Reuse on the Total Regret of Tabular, Model-Free, Reinforcement Learning

03/07/2021
by   Volodymyr Tkachuk, et al.

Some reinforcement learning methods suffer from high sample complexity, making them impractical in real-world situations. Q-function reuse, a transfer learning method, is one way to reduce the sample complexity of learning, potentially improving the usefulness of existing algorithms. Prior work has shown the empirical effectiveness of Q-function reuse for various environments when applied to model-free algorithms. To the best of our knowledge, there has been no theoretical work showing the regret of Q-function reuse when applied to the tabular, model-free setting. We aim to bridge the gap between theoretical and empirical work on Q-function reuse by providing theoretical insights on its effectiveness when applied to the Q-learning with UCB-Hoeffding algorithm. Our main contribution is showing that, in a specific case, applying Q-function reuse to the Q-learning with UCB-Hoeffding algorithm yields a regret that is independent of the size of the state and action spaces. We also provide empirical results supporting our theoretical findings.
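To make the setting concrete, below is a minimal sketch (not the authors' implementation) of tabular, episodic Q-learning with a Hoeffding-style UCB bonus in the spirit of Jin et al. (2018), where Q-function reuse amounts to initializing the Q-table from a previously learned source-task Q-function instead of the optimistic constant H. The RandomMDP environment, constants, and all function names are illustrative assumptions.

```python
import numpy as np


class RandomMDP:
    """Toy episodic MDP used only to exercise the sketch below (hypothetical)."""

    def __init__(self, S, A, H, seed=0):
        rng = np.random.default_rng(seed)
        self.S, self.A, self.H = S, A, H
        # Random transition kernel P[h, s, a] over next states and rewards in [0, 1].
        self.P = rng.dirichlet(np.ones(S), size=(H, S, A))
        self.R = rng.uniform(size=(H, S, A))
        self.rng = rng

    def reset(self):
        return 0

    def step(self, h, s, a):
        s_next = self.rng.choice(self.S, p=self.P[h, s, a])
        return s_next, self.R[h, s, a]


def q_learning_ucb_hoeffding(env, S, A, H, K, c=1.0, p=0.05, Q_init=None):
    """Tabular Q-learning with a Hoeffding-style UCB bonus (after Jin et al., 2018).

    Q-function reuse enters only through Q_init: if a source-task Q-function is
    supplied, it replaces the usual optimistic initialization Q = H.
    """
    iota = np.log(S * A * H * K / p)               # log factor inside the bonus
    Q = np.full((H, S, A), float(H)) if Q_init is None else Q_init.astype(float).copy()
    V = np.zeros((H + 1, S))                        # V[H] = 0 at the horizon
    N = np.zeros((H, S, A), dtype=int)              # per-(h, s, a) visit counts

    for _ in range(K):                              # K episodes
        s = env.reset()
        for h in range(H):
            a = int(np.argmax(Q[h, s]))             # act greedily w.r.t. optimistic Q
            s_next, r = env.step(h, s, a)
            N[h, s, a] += 1
            t = N[h, s, a]
            alpha = (H + 1) / (H + t)               # learning rate alpha_t
            bonus = c * np.sqrt(H ** 3 * iota / t)  # Hoeffding-style exploration bonus
            Q[h, s, a] = (1 - alpha) * Q[h, s, a] + alpha * (r + V[h + 1, s_next] + bonus)
            V[h, s] = min(H, Q[h, s].max())
            s = s_next
    return Q


if __name__ == "__main__":
    S, A, H, K = 5, 3, 4, 2000
    source_env, target_env = RandomMDP(S, A, H, seed=1), RandomMDP(S, A, H, seed=1)
    Q_source = q_learning_ucb_hoeffding(source_env, S, A, H, K)
    # Q-function reuse: warm-start learning on the target task with Q_source.
    Q_target = q_learning_ucb_hoeffding(target_env, S, A, H, K, Q_init=Q_source)
```

The only difference between the baseline and the transfer variant in this sketch is the initialization of the Q-table, which is the sense in which Q-function reuse is applied here; how this affects the total regret is the subject of the paper.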
