Deep Neuroevolution: Genetic Algorithms Are a Competitive Alternative for Training Deep Neural Networks for Reinforcement Learning

by Felipe Petroski Such, et al.

Deep artificial neural networks (DNNs) are typically trained via gradient-based learning algorithms, namely backpropagation. Evolution strategies (ES) can rival backprop-based algorithms such as Q-learning and policy gradients on challenging deep reinforcement learning (RL) problems. However, ES can be considered a gradient-based algorithm because it performs stochastic gradient descent via an operation similar to a finite-difference approximation of the gradient. That raises the question of whether non-gradient-based evolutionary algorithms can work at DNN scales. Here we demonstrate they can: we evolve the weights of a DNN with a simple, gradient-free, population-based genetic algorithm (GA) and it performs well on hard deep RL problems, including Atari and humanoid locomotion. The Deep GA successfully evolves networks with over four million free parameters, the largest neural networks ever evolved with a traditional evolutionary algorithm. These results (1) expand our sense of the scale at which GAs can operate, (2) suggest intriguingly that in some cases following the gradient is not the best choice for optimizing performance, and (3) make immediately available the multitude of techniques that have been developed in the neuroevolution community to improve performance on RL problems. To demonstrate the latter, we show that combining DNNs with novelty search, which was designed to encourage exploration on tasks with deceptive or sparse reward functions, can solve a high-dimensional problem on which reward-maximizing algorithms (e.g. DQN, A3C, ES, and the GA) fail. Additionally, the Deep GA parallelizes better than ES, A3C, and DQN, and enables a state-of-the-art compact encoding technique that can represent million-parameter DNNs in thousands of bytes.
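The compact encoding mentioned at the end of the abstract stores a lineage of mutations rather than the weights themselves: a genotype is just a list of random seeds, and the million-parameter phenotype is reconstructed deterministically by replaying seeded mutations. Below is a minimal sketch of that idea; the mutation scale `sigma`, the Gaussian initialization, and the flat parameter vector are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def decode(seeds, n_params, sigma=0.005):
    """Reconstruct a parameter vector from a list of random seeds.

    The genotype is only the seed list: the first seed produces the
    initial weights, and each later seed replays one Gaussian mutation.
    (Sketch only: init scheme and sigma are assumptions for illustration.)
    """
    rng = np.random.RandomState(seeds[0])
    theta = rng.randn(n_params).astype(np.float32)  # seeded initialization
    for seed in seeds[1:]:
        rng = np.random.RandomState(seed)
        theta += sigma * rng.randn(n_params).astype(np.float32)  # replay mutation
    return theta

# A lineage of three mutations on a 4M-parameter network is
# represented by just four integers:
genotype = [123, 7, 42, 99]
theta = decode(genotype, n_params=4_000_000)
```

Because decoding is deterministic, workers in a distributed run can exchange only the seed lists (a few bytes per generation) and rebuild identical networks locally, which is what makes the encoding and the GA's parallelization so cheap.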


Novelty Search for Deep Reinforcement Learning Policy Network Weights by Action Sequence Edit Metric Distance

Reinforcement learning (RL) problems often feature deceptive local optim...

Adaptive Combination of a Genetic Algorithm and Novelty Search for Deep Neuroevolution

Evolutionary Computation (EC) has been shown to be able to quickly train...

Proximal Distilled Evolutionary Reinforcement Learning

Reinforcement Learning (RL) has recently achieved tremendous success due...

Deep Neuroevolution of Recurrent and Discrete World Models

Neural architectures inspired by our own human cognitive system, such as...

ES Is More Than Just a Traditional Finite-Difference Approximator

An evolution strategy (ES) variant recently attracted significant attent...

Improving Exploration in Evolution Strategies for Deep Reinforcement Learning via a Population of Novelty-Seeking Agents

Evolution strategies (ES) are a family of black-box optimization algorit...

Recombination of Artificial Neural Networks

We propose a genetic algorithm (GA) for hyperparameter optimization of a...
