Improving Exploration in Evolution Strategies for Deep Reinforcement Learning via a Population of Novelty-Seeking Agents

by Edoardo Conti, et al.

Evolution strategies (ES) are a family of black-box optimization algorithms able to train deep neural networks roughly as well as Q-learning and policy gradient methods on challenging deep reinforcement learning (RL) problems, but are much faster (e.g. hours vs. days) because they parallelize better. However, many RL problems require directed exploration because they have reward functions that are sparse or deceptive (i.e. contain local optima), and it is not known how to encourage such exploration with ES. Here we show that algorithms that have been invented to promote directed exploration in small-scale evolved neural networks via populations of exploring agents, specifically novelty search (NS) and quality diversity (QD) algorithms, can be hybridized with ES to improve its performance on sparse or deceptive deep RL tasks, while retaining scalability. Our experiments confirm that the resultant new algorithms, NS-ES and a version of QD we call NSR-ES, avoid local optima encountered by ES to achieve higher performance on tasks ranging from playing Atari to simulated robots learning to walk around a deceptive trap. This paper thus introduces a family of fast, scalable algorithms for reinforcement learning that are capable of directed exploration. It also adds this new family of exploration algorithms to the RL toolbox and raises the interesting possibility that analogous algorithms with multiple simultaneous paths of exploration might also combine well with existing RL algorithms outside ES.
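The core idea of NS-ES described above can be sketched compactly: replace the reward signal in the ES gradient estimate with a novelty score, defined as the mean distance from a policy's behavior characterization to its k nearest neighbors in an archive of past behaviors. The sketch below is a minimal illustration under simplifying assumptions, not the paper's implementation: the toy "behavior characterization" is just the parameter vector itself, and the hyperparameters (population size, sigma, learning rate, k) are arbitrary.

```python
import numpy as np

def novelty(bc, archive, k=3):
    """Mean distance to the k nearest behavior characterizations in the archive."""
    dists = np.sort([np.linalg.norm(bc - a) for a in archive])
    return dists[:k].mean()

def ns_es_step(theta, behavior_fn, archive, pop_size=50, sigma=0.1, lr=0.05, rng=None):
    """One NS-ES update: estimate the gradient of expected *novelty*
    (rather than reward) with an ES perturbation population, then ascend it."""
    rng = rng if rng is not None else np.random.default_rng(0)
    eps = rng.standard_normal((pop_size, theta.size))
    nov = np.array([novelty(behavior_fn(theta + sigma * e), archive) for e in eps])
    scores = (nov - nov.mean()) / (nov.std() + 1e-8)  # normalize novelty scores
    grad = (scores[:, None] * eps).mean(axis=0) / sigma
    return theta + lr * grad

# Toy domain (hypothetical): the behavior characterization of a "policy" is
# its parameter vector, and the archive starts with behaviors near the
# origin, so maximizing novelty should push theta away from the origin.
rng = np.random.default_rng(0)
archive = [np.zeros(2), np.array([0.1, 0.0]), np.array([0.0, 0.1])]
theta = np.zeros(2)
behavior = lambda th: th
for _ in range(20):
    theta = ns_es_step(theta, behavior, archive, rng=rng)
    archive.append(behavior(theta))
print(np.linalg.norm(theta))  # the search has drifted away from the archive
```

NSR-ES, the quality-diversity variant, differs only in the score it optimizes: instead of novelty alone, it averages the normalized reward and normalized novelty of each perturbation, trading exploration against task performance.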




Related papers:

- Instance Weighted Incremental Evolution Strategies for Reinforcement Learning in Dynamic Environments
- Deep Neuroevolution: Genetic Algorithms Are a Competitive Alternative for Training Deep Neural Networks for Reinforcement Learning
- Improving the Diversity of Bootstrapped DQN via Noisy Priors
- Deep Curiosity Search: Intra-Life Exploration Improves Performance on Challenging Deep Reinforcement Learning Problems
- Novelty Search for Deep Reinforcement Learning Policy Network Weights by Action Sequence Edit Metric Distance
- Adaptive Combination of a Genetic Algorithm and Novelty Search for Deep Neuroevolution
- Back to Basics: Benchmarking Canonical Evolution Strategies for Playing Atari
