DualAfford: Learning Collaborative Visual Affordance for Dual-gripper Object Manipulation

07/05/2022
by   Yan Zhao, et al.

It is essential yet challenging for future home-assistant robots to understand and manipulate diverse 3D objects in daily human environments. Towards building scalable systems that can perform diverse manipulation tasks over various 3D shapes, recent works have advocated and demonstrated promising results in learning visual actionable affordance, which labels every point over the input 3D geometry with the likelihood that acting there accomplishes the downstream task (e.g., pushing or picking up). However, these works only studied single-gripper manipulation tasks, yet many real-world tasks require two hands working collaboratively. In this work, we propose a novel learning framework, DualAfford, to learn collaborative affordance for dual-gripper manipulation tasks. The core design of the approach is to reduce the quadratic action space of two grippers into two disentangled yet interconnected subtasks for efficient learning. Using the large-scale PartNet-Mobility and ShapeNet datasets, we set up four benchmark tasks for dual-gripper manipulation. Experiments demonstrate the effectiveness and superiority of our method over three baselines. Additional results and videos can be found at https://hyperplane-lab.github.io/DualAfford .
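The core idea above, replacing a joint search over all pairs of gripper actions with two sequential, conditioned predictions, can be illustrated with a minimal sketch. This is not the authors' architecture; the point clouds, scoring heuristics, and module names below are hypothetical stand-ins for the paper's learned affordance networks, used only to show how conditioning the second module on the first module's choice avoids scoring all N² action pairs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy point cloud: N points in 3D standing in for the object's geometry.
points = rng.normal(size=(128, 3))

def first_affordance(pts):
    """Hypothetical per-point score for the first gripper."""
    # Placeholder heuristic: prefer points high on the object (large z).
    return pts[:, 2]

def second_affordance(pts, first_contact):
    """Hypothetical per-point score for the second gripper, conditioned
    on the contact point already chosen for the first gripper."""
    # Placeholder heuristic: prefer points far from the first contact,
    # so the two grippers act on different parts of the shape.
    return np.linalg.norm(pts - first_contact, axis=1)

# Sequential, disentangled inference: pick the first contact, then score
# the second module conditioned on it -- O(2N) evaluations instead of O(N^2).
i = int(np.argmax(first_affordance(points)))
j = int(np.argmax(second_affordance(points, points[i])))
```

In the paper's actual framework the two modules are learned networks trained with interaction feedback, but the control flow is the same: the second gripper's affordance is predicted given the first gripper's action, keeping the two subtasks disentangled yet interconnected.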


