Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization

by Weiran Yao, et al.

Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective-oriented, multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized with environment-specific rewards. Although some agents support iterative refinement through verbal feedback, they do not reason and plan in ways compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent's prompts from environment feedback through policy gradient. Specifically, the proposed agent architecture learns from rewards across multiple environments and tasks in order to fine-tune a pre-trained language model that refines the agent prompt by summarizing the root causes of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. We believe this work is among the first to use policy gradient optimization to improve language agents, and the same approach can be applied to optimize other models in the agent architecture to enhance agent performance over time.
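The core mechanism the abstract describes — reward-weighted updates to a retrospective model that proposes prompt refinements — can be illustrated with a minimal REINFORCE-style sketch. Everything below is a toy stand-in, not the paper's implementation: the "retrospective model" is reduced to a softmax policy over a few hypothetical reflection templates, and the environment reward is synthetic, whereas in the actual system rewards would come from task outcomes after the agent retries with the refined prompt.

```python
import numpy as np

# Toy REINFORCE sketch of reward-driven retrospection (illustrative only).
# The policy chooses among hypothetical reflection templates; the reward
# distribution below is made up to stand in for environment feedback.

rng = np.random.default_rng(0)
TEMPLATES = ["blame-plan", "blame-action", "blame-observation"]  # hypothetical
theta = np.zeros(len(TEMPLATES))          # policy logits
TRUE_REWARD = np.array([0.2, 0.9, 0.4])   # synthetic expected reward per choice

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

lr, baseline = 0.5, 0.0
for step in range(500):
    probs = softmax(theta)
    a = rng.choice(len(TEMPLATES), p=probs)   # sample a reflection template
    r = rng.normal(TRUE_REWARD[a], 0.1)       # noisy environment reward
    baseline = 0.9 * baseline + 0.1 * r       # moving-average baseline
    grad = -probs
    grad[a] += 1.0                            # grad of log pi(a | theta)
    theta += lr * (r - baseline) * grad       # policy gradient update

best = TEMPLATES[int(np.argmax(theta))]
print(best)
```

Under these synthetic rewards, the policy concentrates on the template with the highest expected reward, mirroring how the retrospective model is steered toward reflections that lead to task success.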




