Toward Grounded Social Reasoning

by Minae Kwon, et al.

Consider a robot tasked with tidying a desk that holds a meticulously constructed Lego sports car. A human would recognize that it is not socially appropriate to disassemble the sports car and put it away as part of the "tidying". How can a robot reach that conclusion? Although large language models (LLMs) have recently been used to enable social reasoning, grounding this reasoning in the real world has been challenging. To reason in the real world, robots must go beyond passively querying LLMs and *actively gather information from the environment* that is required to make the right decision. For instance, after detecting that there is an occluded car, the robot may need to actively perceive the car to know whether it is an advanced model car made out of Legos or a toy car built by a toddler. We propose an approach that leverages an LLM and a vision-language model (VLM) to help a robot actively perceive its environment and perform grounded social reasoning. To evaluate our framework at scale, we release the MessySurfaces dataset, which contains images of 70 real-world surfaces that need to be cleaned. We additionally illustrate our approach with a robot on 2 carefully designed surfaces. We find an average 12.9% improvement on the robot experiments over baselines that do not use active perception. The dataset, code, and videos of our approach can be found at
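The active-perception loop the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `llm` and `vlm` helpers are hypothetical stand-ins for real model calls (e.g., an LLM API and a vision-language model answering questions about a close-up image), and the canned responses exist only so the sketch runs end to end.

```python
# Hypothetical sketch of the LLM + VLM active-perception loop:
# (1) the LLM proposes what information is missing about an object,
# (2) the robot actively perceives (close-up image) and the VLM answers,
# (3) the LLM chooses a socially appropriate action given the grounded answer.

def llm(prompt: str) -> str:
    # Stand-in for a real LLM call; returns canned text for this example.
    if "What should the robot ask" in prompt:
        return "Is the car a carefully built Lego model or a loose toy?"
    return "leave the Lego car in place"

def vlm(image, question: str) -> str:
    # Stand-in for a real VLM answering a question about a close-up image.
    return "a carefully built Lego model"

def grounded_social_reasoning(scene_image, obj: str) -> str:
    # Step 1: LLM identifies the missing, decision-relevant information.
    question = llm(f"What should the robot ask about the {obj}?")
    # Step 2: active perception -- capture a close-up and query the VLM.
    answer = vlm(scene_image, question)
    # Step 3: LLM picks a socially appropriate action from the grounded answer.
    return llm(f"The {obj} is {answer}. What should the robot do?")

print(grounded_social_reasoning(None, "car"))  # -> "leave the Lego car in place"
```

The key design point is the middle step: rather than deciding from the initial scene alone, the robot gathers exactly the observation the LLM says it needs before committing to an action.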
