Crawling in Rogue's dungeons with (partitioned) A3C

04/23/2018
by Andrea Asperti, et al.

Rogue is a famous dungeon-crawling video game of the 1980s, the ancestor of its genre. Rogue-like games are known for requiring the exploration of partially observable, randomly generated labyrinths that differ on every run, preventing any form of level replay. As such, they serve as a very natural and challenging task for reinforcement learning, requiring the acquisition of complex, non-reactive behaviors involving memory and planning. In this article we show how, exploiting a version of A3C partitioned over different situations, the agent is able to reach the stairs and descend to the next level in 98% of cases.
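The abstract does not detail the partitioning scheme, but one plausible reading is that a separate A3C actor-critic is maintained per game situation, with the active network selected by the agent's current context. The sketch below illustrates only that routing idea; the situation labels, class names, and trivial "networks" are all illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of "partitioned" policy selection: one independent
# learner per situation, chosen by the agent's current context.
# All names here are hypothetical.
import random

SITUATIONS = ["corridor", "room", "stairs_visible"]  # assumed partition

class TinyActorCritic:
    """Stand-in for one A3C network; here just fixed action preferences."""
    def __init__(self, n_actions: int, seed: int):
        rng = random.Random(seed)
        self.prefs = [rng.random() for _ in range(n_actions)]

    def act(self) -> int:
        # A real A3C head would sample from a learned policy pi(a|s);
        # here we just take the argmax of the fixed preferences.
        return max(range(len(self.prefs)), key=self.prefs.__getitem__)

# One independent learner per situation -- the "partitioned" part.
partition = {s: TinyActorCritic(n_actions=4, seed=i)
             for i, s in enumerate(SITUATIONS)}

def select_action(situation: str) -> int:
    """Route the decision to the network owned by the current situation."""
    return partition[situation].act()

print(select_action("corridor"))
```

Each per-situation network trains only on the transitions collected while its situation is active, which is what lets simpler, specialized behaviors emerge instead of one monolithic policy.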

