Towards Interpretable Reinforcement Learning Using Attention Augmented Agents

06/06/2019
by Alex Mott, et al.

Inspired by recent work in attention models for image captioning and question answering, we present a soft attention model for the reinforcement learning domain. This model uses a soft, top-down attention mechanism to create a bottleneck in the agent, forcing it to focus on task-relevant information by sequentially querying its view of the environment. The output of the attention mechanism allows direct observation of the information used by the agent to select its actions, enabling easier interpretation of this model than of traditional models. We analyze different strategies that the agents learn and show that a handful of strategies arise repeatedly across different games. We also show that the model learns to query separately about space and content ('where' vs. 'what'). We demonstrate that an agent using this mechanism can achieve performance competitive with state-of-the-art models on ATARI tasks while still being interpretable.
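As a rough illustration of the kind of attention read described in the abstract, the sketch below shows a single soft, top-down query over a spatial feature map: keys combine content features with a spatial encoding, a softmax over positions produces an attention map that can be inspected, and the read-out is an attention-weighted sum. This is a minimal NumPy sketch, not the authors' implementation; the spatial encoding, dimensions, and names such as make_spatial_basis and soft_attention_read are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def make_spatial_basis(height, width, channels):
    # Simple stand-in spatial features: normalized (y, x) coordinates
    # tiled over `channels` dimensions.
    ys = np.linspace(-1.0, 1.0, height)[:, None, None]
    xs = np.linspace(-1.0, 1.0, width)[None, :, None]
    ys = np.broadcast_to(ys, (height, width, channels // 2))
    xs = np.broadcast_to(xs, (height, width, channels // 2))
    return np.concatenate([ys, xs], axis=-1)           # (H, W, channels)

def soft_attention_read(features, query, spatial_channels=4):
    """One top-down soft attention read over a spatial feature map.

    features: (H, W, C) map, e.g. from a convolutional vision core.
    query:    (C + spatial_channels,) vector produced top-down by the policy.
    Returns the attended read-out vector and the (H, W) attention map,
    which is the quantity one would visualize for interpretability.
    """
    h, w, c = features.shape
    basis = make_spatial_basis(h, w, spatial_channels)
    # Keys carry both content ('what') and location ('where') information.
    keys = np.concatenate([features, basis], axis=-1)      # (H, W, C + S)
    logits = np.tensordot(keys, query, axes=([-1], [0]))   # (H, W)
    attn = softmax(logits.reshape(-1)).reshape(h, w)       # sums to 1 over positions
    # Read-out: attention-weighted sum of the keys (used here as values too).
    answer = (keys * attn[..., None]).sum(axis=(0, 1))     # (C + S,)
    return answer, attn

# Toy usage: a random 8x8x16 feature map and a random query vector.
rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 8, 16))
q = rng.standard_normal(16 + 4)
answer, attn_map = soft_attention_read(feats, q)
print(answer.shape, attn_map.shape, attn_map.sum())        # (20,) (8, 8) ~1.0
```

Because the attention map is an explicit, normalized distribution over spatial positions, it can be rendered directly over the observation to show what the agent attended to when selecting an action.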

Related research

01/31/2022
Interpretable and Generalizable Graph Learning via Stochastic Attention Mechanism
Interpretable graph learning is in need as many scientific applications ...

01/10/2023
Learning to Perceive in Deep Model-Free Reinforcement Learning
This work proposes a novel model-free Reinforcement Learning (RL) agent ...

05/24/2017
State Space Decomposition and Subgoal Creation for Transfer in Deep Reinforcement Learning
Typical reinforcement learning (RL) agents learn to complete tasks speci...

01/24/2022
The Paradox of Choice: Using Attention in Hierarchical Reinforcement Learning
Decision-making AI agents are often faced with two important challenges:...

09/27/2018
Learning to Coordinate Multiple Reinforcement Learning Agents for Diverse Query Reformulation
We propose a method to efficiently learn diverse strategies in reinforce...

08/01/2018
Learning Visual Question Answering by Bootstrapping Hard Attention
Attention mechanisms in biological perception are thought to select subs...

03/02/2020
Scaling Up Multiagent Reinforcement Learning for Robotic Systems: Learn an Adaptive Sparse Communication Graph
The complexity of multiagent reinforcement learning (MARL) in multiagent...