
Robust Dual View Deep Agent

04/13/2018
by Ibrahim M. Sobh, et al.

Motivated by recent advances in machine learning using Deep Reinforcement Learning, this paper proposes a modified architecture that produces more robust agents and speeds up the training process. Our architecture is based on the Asynchronous Advantage Actor-Critic (A3C) algorithm, where the total input dimensionality is halved by dividing the input into two independent streams. We use ViZDoom, a 3D research platform based on the classic first-person shooter video game Doom, as a test case. The experiments show that, in comparison to single-input agents, the proposed architecture achieves the same playing performance, exhibits more robust behavior, and reduces the number of training parameters by almost 30%.
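To make the dual-stream idea concrete, below is a minimal sketch of an actor-critic network whose input is split into two independently processed views, with features merged before the policy and value heads. The layer sizes, the 84x84 input resolution, and the class and argument names (DualViewActorCritic, view_a, view_b) are illustrative assumptions, not the authors' exact architecture or hyperparameters.

import torch
import torch.nn as nn

class DualViewActorCritic(nn.Module):
    """Illustrative dual-stream actor-critic network (assumed layout)."""

    def __init__(self, in_channels: int, num_actions: int):
        super().__init__()

        # One independent convolutional stream per view (illustrative sizes).
        def make_stream():
            return nn.Sequential(
                nn.Conv2d(in_channels, 16, kernel_size=8, stride=4), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
                nn.Flatten(),
            )

        self.stream_a = make_stream()
        self.stream_b = make_stream()

        # Feature size assumes 84x84 inputs with the layers above.
        feat = 32 * 9 * 9
        self.fc = nn.Sequential(nn.Linear(2 * feat, 256), nn.ReLU())
        self.policy = nn.Linear(256, num_actions)  # actor head (action logits)
        self.value = nn.Linear(256, 1)             # critic head (state value)

    def forward(self, view_a: torch.Tensor, view_b: torch.Tensor):
        # Process each view separately, then fuse before the shared heads.
        h = torch.cat([self.stream_a(view_a), self.stream_b(view_b)], dim=1)
        h = self.fc(h)
        return self.policy(h), self.value(h)

Because each stream only sees half of the total input dimensionality, the per-stream convolutional stacks can stay small, which is consistent with the parameter reduction described in the abstract; the A3C training loop itself (asynchronous workers, advantage estimation) is unchanged and omitted here.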

