A Deeper Look at Experience Replay
Experience replay plays an important role in the success of deep reinforcement learning (RL) by helping to stabilize the training of neural networks, and it has become a standard component of deep RL algorithms. In this paper, however, we show that varying the size of the experience replay buffer can hurt performance even in very simple tasks: the buffer size is in fact a hyper-parameter that needs careful tuning. Moreover, our study of experience replay leads to the formulation of the Combined DQN algorithm, which can significantly outperform vanilla DQN in some tasks.
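To make the objects of study concrete, the sketch below shows a minimal fixed-capacity replay buffer with two sampling modes: standard uniform sampling as in DQN, and a "combined" variant in which every sampled batch also includes the most recent transition. The class name, method names, and the reading of "Combined" as always replaying the latest transition are illustrative assumptions, not the paper's reference implementation.

```python
import random
from collections import deque


class ReplayBuffer:
    """Fixed-capacity FIFO experience replay buffer.

    The capacity is the hyper-parameter the abstract argues needs careful
    tuning: too small discards transitions quickly, too large keeps stale ones.
    """

    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def add(self, transition):
        # A transition is typically a (state, action, reward, next_state) tuple.
        self.buffer.append(transition)

    def sample(self, batch_size):
        # Uniform random sampling, as in standard DQN.
        return random.sample(self.buffer, batch_size)

    def sample_combined(self, batch_size):
        # Assumed "combined" scheme: draw batch_size - 1 transitions uniformly,
        # then always append the most recently added transition, so the newest
        # experience is guaranteed to be replayed at least once.
        batch = random.sample(self.buffer, batch_size - 1)
        batch.append(self.buffer[-1])
        return batch


# Usage sketch with dummy transitions (state, action, reward, next_state).
buf = ReplayBuffer(capacity=100)
for t in range(10):
    buf.add((t, 0, 0.0, t + 1))
batch = buf.sample_combined(4)
```

Under this scheme, the buffer size only controls how long old transitions linger, while the combined sampling decouples "learn from the newest data" from the buffer-size hyper-parameter.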