Fast Efficient Hyperparameter Tuning for Policy Gradient Methods

by Supratik Paul, et al.

The performance of policy gradient methods is sensitive to hyperparameter settings that must be tuned for any new application. Widely used grid search methods for tuning hyperparameters are sample inefficient and computationally expensive. More advanced methods like Population Based Training that learn optimal schedules for hyperparameters instead of fixed settings can yield better results, but are also sample inefficient and computationally expensive. In this paper, we propose Hyperparameter Optimisation on the Fly (HOOF), a gradient-free meta-learning algorithm that can automatically learn an optimal schedule for hyperparameters that affect the policy update directly through the gradient. The main idea is to use existing trajectories sampled by the policy gradient method to optimise a one-step improvement objective, yielding a sample and computationally efficient algorithm that is easy to implement. Our experimental results across multiple domains and algorithms show that using HOOF to learn these hyperparameter schedules leads to faster learning with improved performance.
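The core mechanism described in the abstract — reusing the trajectories already sampled by the policy gradient method to score candidate hyperparameters via a one-step improvement objective — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy linear-Gaussian policy, the surrogate gradient, the return signal, and the candidate learning rates are all invented for the example; the recognisable HOOF ingredient is the weighted importance-sampling evaluation of each candidate update on the same batch of trajectories, with no additional environment interaction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-Gaussian policy for illustration: a ~ N(theta . s, 1).
def logp(theta, s, a):
    return -0.5 * (a - s @ theta) ** 2

# One batch of trajectories sampled with the current policy theta.
theta = np.zeros(3)
trajs = []
for _ in range(20):
    s = rng.normal(size=(10, 3))            # states
    a = s @ theta + rng.normal(size=10)     # actions from current policy
    ret = float(np.sum(a * (s @ np.ones(3))))  # toy return signal
    trajs.append((s, a, ret))

# A surrogate policy-gradient direction (stand-in for the real PG estimate).
grad = sum(np.sum((a - s @ theta)[:, None] * s, axis=0) * r
           for s, a, r in trajs) / len(trajs)

def wis_value(theta_new):
    """Weighted importance-sampling estimate of the candidate policy's
    expected return, using only the existing batch of trajectories."""
    logw = np.array([np.sum(logp(theta_new, s, a) - logp(theta, s, a))
                     for s, a, _ in trajs])
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return float(np.dot(w, [r for _, _, r in trajs]))

# The one-step search: score each candidate learning rate against the
# same trajectories and keep the best -- no extra sampling required.
candidates = [1e-4, 1e-3, 1e-2, 1e-1]
best_lr = max(candidates, key=lambda lr: wis_value(theta + lr * grad))
```

Because every candidate is evaluated off-policy from one shared batch, the per-update cost of the hyperparameter search is a handful of cheap re-weightings rather than fresh rollouts, which is what makes the schedule learning sample efficient.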




Episodic Policy Gradient Training

We introduce a novel training procedure for policy gradient methods wher...

SoftTreeMax: Policy Gradient with Tree Search

Policy-gradient methods are widely used for learning control policies. T...

A Framework for History-Aware Hyperparameter Optimisation in Reinforcement Learning

A Reinforcement Learning (RL) system depends on a set of initial conditi...

Automatic hyperparameter selection in Autodock

Autodock is a widely used molecular modeling tool which predicts how sma...

Genealogical Population-Based Training for Hyperparameter Optimization

Hyperparameter optimization aims at finding more rapidly and efficiently...

Faster Improvement Rate Population Based Training

The successful training of neural networks typically involves careful an...

Guided Hyperparameter Tuning Through Visualization and Inference

For deep learning practitioners, hyperparameter tuning for optimizing mo...

Code Repositories


Implementation of the Fast Efficient Hyperparameter Tuning for Policy Gradient Methods

view repo



