Same State, Different Task: Continual Reinforcement Learning without Interference

by Samuel Kessler, et al.

Continual Learning (CL) considers the problem of training an agent sequentially on a set of tasks while seeking to retain performance on all previous tasks. A key challenge in CL is catastrophic forgetting, which arises when performance on a previously mastered task degrades while learning a new task. While a variety of methods exist to combat forgetting, in some cases tasks are fundamentally incompatible with each other and thus cannot be learnt by a single policy. This can occur in reinforcement learning (RL), when an agent may be rewarded for achieving different goals from the same observation. In this paper we formalize this “interference” as distinct from the problem of forgetting. We show that existing CL methods based on single neural network predictors with shared replay buffers fail in the presence of interference. Instead, we propose a simple method, OWL, to address this challenge. OWL learns a factorized policy, using shared feature extraction layers but separate heads, each specializing on a new task. The separate heads in OWL are used to prevent interference. At test time, we formulate policy selection as a multi-armed bandit problem, and show it is possible to select the best policy for an unknown task using feedback from the environment. The use of bandit algorithms allows the OWL agent to constructively re-use different continually learnt policies at different times during an episode. We show in multiple RL environments that existing replay-based CL methods fail, while OWL is able to achieve close to optimal performance when training sequentially.
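The abstract describes two ideas: a factorized policy (shared feature layers with one head per task) and bandit-based head selection at test time. A minimal sketch of both, assuming a simple tanh feature extractor, linear heads, and a UCB1-style rule driven by returns from the environment; all class and function names here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

class OWLStylePolicy:
    """Sketch of a factorized policy: one shared feature extractor
    plus a separate linear head per task (names are assumptions)."""

    def __init__(self, obs_dim, n_actions, hidden=32):
        self.W_shared = rng.normal(size=(obs_dim, hidden)) * 0.1
        self.heads = []  # one (hidden x n_actions) weight matrix per task
        self.n_actions = n_actions
        self.hidden = hidden

    def add_head(self):
        """Grow a fresh head when a new task arrives, avoiding
        interference between tasks that share observations."""
        self.heads.append(rng.normal(size=(self.hidden, self.n_actions)) * 0.1)

    def logits(self, obs, head_idx):
        feats = np.tanh(obs @ self.W_shared)  # shared features across tasks
        return feats @ self.heads[head_idx]   # task-specific action scores

def ucb_select(counts, values, c=2.0):
    """UCB1-style choice of which head to deploy on an unknown task,
    using per-head pull counts and mean observed returns."""
    t = counts.sum() + 1
    bonus = np.sqrt(c * np.log(t) / np.maximum(counts, 1e-9))
    ucb = values + bonus
    ucb[counts == 0] = np.inf  # try every head at least once
    return int(np.argmax(ucb))
```

A test-time loop would call `ucb_select` at the start of (or during) an episode, act with the chosen head's logits, and update that head's count and mean return, letting the agent switch heads as environment feedback accumulates.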

