Optimizing the Long-Term Behaviour of Deep Reinforcement Learning for Pushing and Grasping

by Rodrigo Chau, et al.

We investigate the "Visual Pushing for Grasping" (VPG) system by Zeng et al. and the "Hourglass" system by Ewerton et al., an evolution of the former. The focus of our work is the capability of both systems to learn long-term rewards and policies. Zeng et al.'s original task requires only a limited amount of foresight, and Ewerton et al. attain their best performance with an agent that considers only the most immediate action. We are interested in the ability of their models and training algorithms to accurately predict long-term Q-Values. To evaluate this ability, we design a new bin sorting task and reward function. Our task requires agents to accurately estimate future rewards and therefore to use high discount factors in their Q-Value calculation. We investigate the behaviour of an adaptation of the VPG training algorithm on our task and show that this adaptation cannot accurately predict the required long-term action sequences: in addition to the limitations identified by Ewerton et al., it suffers from the well-known Deep Q-Learning problem of overestimated Q-Values. In an effort to solve our task, we turn to the Hourglass models and combine them with the Double Q-Learning approach. We show that this combination enables the models to accurately predict long-term action sequences when trained with large discount factors. Our results show that the Double Q-Learning technique is essential for training with very high discount factors, as the models' Q-Value predictions diverge otherwise. We also experiment with different approaches to discount factor scheduling, loss calculation and exploration procedures, and find that these factors do not visibly influence the models' performance on our task.
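The overestimation problem and its Double Q-Learning remedy mentioned above can be sketched in a few lines. The idea (following van Hasselt et al.) is to decouple action selection from action evaluation: the online network picks the greedy next action, while a separate target network supplies its value for the bootstrapped target. The function below is a minimal illustration of that target computation, not the authors' actual implementation; the array values are made up for the example.

```python
import numpy as np

def double_q_target(q_online, q_target, reward, gamma, done):
    """Double Q-Learning bootstrap target.

    q_online, q_target: next-state action values from the online and
    target networks, shape (n_actions,). The online net selects the
    greedy action; the target net evaluates it, which curbs the
    overestimation bias of standard Q-Learning (where one network
    does both).
    """
    a_star = int(np.argmax(q_online))   # action selection (online net)
    bootstrap = q_target[a_star]        # action evaluation (target net)
    return reward + (0.0 if done else gamma * bootstrap)

# Toy illustration with a high discount factor: the online net rates
# action 1 highest (5.0), but the bootstrapped value comes from the
# target net's more conservative estimate (3.0).
q_online = np.array([1.0, 5.0, 2.0])
q_target = np.array([1.2, 3.0, 2.5])
print(double_q_target(q_online, q_target, reward=0.5, gamma=0.99, done=False))
# → 3.47  (0.5 + 0.99 * 3.0)
```

With large discount factors the bootstrapped term dominates the target, so any systematic overestimation is amplified at every update; decoupling selection from evaluation is what keeps the predicted Q-Values from diverging.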

