DTF-Net: Category-Level Pose Estimation and Shape Reconstruction via Deformable Template Field

by Haowen Wang et al.

Estimating 6D poses and reconstructing 3D shapes of objects in open-world scenes from RGB-D image pairs is challenging. Many existing methods rely on learning geometric features that correspond to specific templates while disregarding shape variations and pose differences among objects in the same category. As a result, these methods underperform when handling unseen object instances in complex environments. Other approaches instead pursue category-level estimation and reconstruction by leveraging normalized geometric structure priors, but such static prior-based reconstruction struggles with substantial intra-class variations. To address these problems, we propose DTF-Net, a novel framework for pose estimation and shape reconstruction based on implicit neural fields of object categories. In DTF-Net, we design a deformable template field to represent the general category-wise shape latent features and intra-category geometric deformation features. The field establishes continuous shape correspondences, deforming the category template into arbitrary observed instances to accomplish shape reconstruction. We introduce a pose regression module that shares the deformation features and template codes from the fields to estimate the accurate 6D pose of each object in the scene. We integrate a multi-modal representation extraction module to extract object features and semantic masks, enabling end-to-end inference. Moreover, during training, we implement a shape-invariant training strategy and a viewpoint sampling method to further enhance the model's capability to extract object pose features. Extensive experiments on the REAL275 and CAMERA25 datasets demonstrate the superiority of DTF-Net in both synthetic and real scenes. Furthermore, we show that DTF-Net effectively supports grasping tasks with a real robot arm.
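The core idea of a deformable template field can be illustrated with a minimal sketch: one network predicts a per-point offset conditioned on an instance-specific latent code, deforming query points into a canonical template space, and a second network evaluates the shared category template SDF there. The layer sizes, latent dimension, and network names below are illustrative assumptions, not the authors' implementation (which is a trained implicit neural field, not random weights):

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Random-weight MLP (illustrative stand-in for a trained network)."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:
            x = np.maximum(x, 0.0)  # ReLU on hidden layers
    return x

LATENT = 8  # per-instance deformation code size (assumed for this sketch)

# Deform-Net: (query point, instance code) -> 3D offset into template space
deform_net = mlp([3 + LATENT, 32, 32, 3])
# Template-Net: canonical SDF shared by the whole category
template_net = mlp([3, 32, 32, 1])

def instance_sdf(points, code):
    """SDF of one instance: deform queries into template space, query template."""
    z = np.broadcast_to(code, (points.shape[0], LATENT))
    offsets = forward(deform_net, np.concatenate([points, z], axis=1))
    return forward(template_net, points + offsets)[:, 0]

pts = rng.standard_normal((5, 3))   # 5 query points in instance space
code = rng.standard_normal(LATENT)  # latent code for one observed instance
sdf = instance_sdf(pts, code)       # one SDF value per query point
print(sdf.shape)
```

Because the offsets define a continuous map from every instance into the same template space, points on different instances that deform to the same template location are in dense correspondence, which is what lets a single category template be warped into arbitrary observed shapes.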




