
Robust Dual View Deep Agent

by Ibrahim M. Sobh, et al.

Motivated by recent advances in machine learning using Deep Reinforcement Learning, this paper proposes a modified architecture that produces more robust agents and speeds up the training process. Our architecture is based on the Asynchronous Advantage Actor-Critic (A3C) algorithm, where the total input dimensionality is halved by dividing the input into two independent streams. We use ViZDoom, a 3D research platform based on the classic first-person shooter video game Doom, as a test case. The experiments show that, in comparison to single-input agents, the proposed architecture matches their playing performance, exhibits more robust behavior, and achieves a significant reduction of almost 30% in the number of training parameters.
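To see how splitting the input into two independent streams can cut the parameter count, the back-of-the-envelope calculation below compares a single-stream A3C-style convolutional network against a dual-stream variant in which each stream sees one half of the frame. The layer sizes (84x84 input, 16/32 conv filters, a 256-unit fully connected layer) follow the classic A3C setup and are assumptions for illustration, not the paper's exact architecture.

```python
# Illustrative parameter count: single-stream vs. dual-stream
# A3C-style conv net. All layer sizes are assumed, not taken
# from the paper.

def conv_out(size, kernel, stride):
    """Spatial output size of a valid (no-padding) convolution."""
    return (size - kernel) // stride + 1

def stream_params(h, w, in_ch):
    """Conv weights+biases and flattened feature size for one stream."""
    # conv1: 16 filters of 8x8, stride 4
    p1 = 16 * in_ch * 8 * 8 + 16
    h, w = conv_out(h, 8, 4), conv_out(w, 8, 4)
    # conv2: 32 filters of 4x4, stride 2
    p2 = 32 * 16 * 4 * 4 + 32
    h, w = conv_out(h, 4, 2), conv_out(w, 4, 2)
    return p1 + p2, h * w * 32

FC = 256  # hidden units in the fully connected layer after the convs

# Single stream over the full 84x84 frame (4 stacked grey frames).
conv_p, flat = stream_params(84, 84, 4)
single = conv_p + flat * FC + FC

# Two independent streams, each seeing one 84x42 half of the frame.
# Conv weights are duplicated, but the flattened input to the FC
# layer shrinks, and the FC layer dominates the parameter count.
conv_p, flat = stream_params(84, 42, 4)
dual = 2 * conv_p + 2 * flat * FC + FC

print(single, dual, round(1 - dual / single, 2))
# With these assumed sizes the dual-stream net has ~31% fewer
# parameters, in the same ballpark as the paper's ~30% claim.
```

The savings come almost entirely from the fully connected layer: halving each stream's width more than halves its flattened conv output, which outweighs duplicating the (comparatively small) convolutional weights.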




