Neuro-Symbolic Causal Language Planning with Commonsense Prompting

by Yujie Lu et al.

Language planning aims to implement complex high-level goals by decomposing them into sequences of simpler low-level steps. Such procedural reasoning ability is essential for applications such as household robots and virtual assistants. Although language planning is a basic skill for humans in daily life, it remains a challenge for large language models (LLMs) that lack deep commonsense knowledge of the real world. Previous methods require either manual exemplars or annotated programs to acquire such ability from LLMs. In contrast, this paper proposes the Neuro-Symbolic Causal Language Planner (CLAP), which elicits procedural knowledge from LLMs with commonsense-infused prompting. Pre-trained knowledge in LLMs is essentially an unobserved confounder that causes spurious correlations between tasks and action plans. Through the lens of a Structural Causal Model (SCM), we propose an effective strategy in CLAP to construct prompts as a causal intervention on our SCM. Using graph sampling techniques and symbolic program executors, our strategy formalizes structured causal prompts from commonsense knowledge bases. CLAP obtains state-of-the-art performance on WikiHow and RobotHow, achieving a relative improvement of 5.28%. This indicates the superiority of CLAP in causal language planning both semantically and sequentially.
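The abstract's core mechanism, sampling a subgraph from a commonsense knowledge base and serializing it into a structured prompt, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the toy knowledge graph, the breadth-first sampling policy, and the prompt template are all assumptions chosen for clarity.

```python
# Sketch of commonsense-infused prompt construction in the spirit of CLAP:
# sample triples reachable from the task node, then serialize them as a
# structured prompt prefix for an LLM planner. The KG below is a toy stand-in
# for a real commonsense knowledge base such as ConceptNet.

# Toy knowledge base: head concept -> list of (relation, tail) edges.
KG = {
    "make coffee": [("HasSubevent", "boil water"), ("HasSubevent", "grind beans")],
    "boil water": [("Requires", "kettle")],
    "grind beans": [("Requires", "coffee grinder")],
}

def sample_subgraph(task, depth=2):
    """Breadth-first sample of all edges reachable from the task node."""
    edges, frontier = [], [task]
    for _ in range(depth):
        next_frontier = []
        for head in frontier:
            for rel, tail in KG.get(head, []):
                edges.append((head, rel, tail))
                next_frontier.append(tail)
        frontier = next_frontier
    return edges

def build_prompt(task):
    """Serialize sampled triples into a commonsense-infused planning prompt."""
    facts = "\n".join(f"- {h} {r} {t}" for h, r, t in sample_subgraph(task))
    return f"Task: {task}\nCommonsense facts:\n{facts}\nStep-by-step plan:"

print(build_prompt("make coffee"))
```

In the paper's framing, prepending knowledge-graph facts retrieved independently of the LLM acts as the causal intervention: the plan is conditioned on external commonsense rather than on spurious correlations latent in pre-training.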



TaskLAMA: Probing the Complex Task Understanding of Language Models

Structured Complex Task Decomposition (SCTD) is the problem of breaking ...

Task and Motion Planning with Large Language Models for Object Rearrangement

Multi-object rearrangement is a crucial skill for service robots, and co...

Symbolic Knowledge Distillation: from General Language Models to Commonsense Models

The common practice for training commonsense models has gone from-human-...

Bridging Commonsense Reasoning and Probabilistic Planning via a Probabilistic Action Language

To be responsive to dynamically changing real-world environments, an int...

TANGO: Commonsense Generalization in Predicting Tool Interactions for Mobile Manipulators

Robots assisting us in factories or homes must learn to make use of obje...

Preserving Commonsense Knowledge from Pre-trained Language Models via Causal Inference

Fine-tuning has been proven to be a simple and effective technique to tr...

Automated Action Model Acquisition from Narrative Texts

Action models, which take the form of precondition/effect axioms, facili...
