Adversarial Environment Generation for Learning to Navigate the Web

by Izzeddin Gür, et al.

Learning to autonomously navigate the web is a difficult sequential decision-making task. The state and action spaces are large and combinatorial in nature, and websites are dynamic environments consisting of several pages. One of the bottlenecks in training web navigation agents is providing a learnable curriculum of training environments that covers the large variety of real-world websites. We therefore propose using Adversarial Environment Generation (AEG) to generate challenging web environments in which to train reinforcement learning (RL) agents. We provide a new benchmarking environment, gMiniWoB, which enables an RL adversary to use compositional primitives to learn to generate arbitrarily complex websites. To train the adversary, we propose a new technique for maximizing regret using the difference in the scores obtained by a pair of navigator agents. Our results show that our approach significantly outperforms prior methods for minimax-regret AEG. The regret objective trains the adversary to design a curriculum of environments that are "just the right challenge" for the navigator agents; our results show that, over time, the adversary learns to generate increasingly complex web navigation tasks. Navigator agents trained with our technique learn to complete challenging, high-dimensional web navigation tasks, such as form filling and booking a flight. We show that the navigator agent trained with our proposed Flexible b-PAIRED technique significantly outperforms competitive automatic curriculum generation baselines, including a state-of-the-art RL web navigation approach, on a set of challenging unseen test environments, and achieves more than 80
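The core idea in the abstract — rewarding the adversary with the score gap between a pair of navigator agents — can be illustrated with a minimal sketch. This is not the paper's implementation; the stub `rollout_return` and the toy difficulty-based navigators are invented here purely to show how a regret estimate could drive the adversary's choice of environment parameters:

```python
import random

def rollout_return(navigator, env_params, episodes=5):
    """Average episodic return of a navigator policy on an
    adversary-generated environment (illustrative stub)."""
    return sum(navigator(env_params) for _ in range(episodes)) / episodes

def paired_regret(antagonist, protagonist, env_params):
    """Regret estimate used to reward the adversary: the gap between
    the stronger navigator (antagonist) and the weaker one (protagonist).
    High regret signals an environment one agent can solve but the other
    cannot yet, i.e. 'just the right challenge'."""
    return (rollout_return(antagonist, env_params)
            - rollout_return(protagonist, env_params))

# Toy navigators: success probability falls off with a scalar "difficulty"
# that stands in for the compositional website parameters the real
# adversary would emit.
antagonist = lambda d: 1.0 if random.random() < max(0.0, 1.0 - 0.5 * d) else 0.0
protagonist = lambda d: 1.0 if random.random() < max(0.0, 1.0 - 0.9 * d) else 0.0

random.seed(0)
# The adversary would select environment parameters that maximize regret.
candidates = [0.2 * k for k in range(6)]
best = max(candidates, key=lambda d: paired_regret(antagonist, protagonist, d))
```

Trivially easy environments (both agents always succeed) and impossibly hard ones (both always fail) both yield zero regret, so maximizing this objective naturally steers the adversary toward intermediate-difficulty environments — the curriculum behavior the abstract describes.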




