Multi-modal Multi-level Fusion for 3D Single Object Tracking

by Zhiheng Li et al.

3D single object tracking plays a crucial role in computer vision. Mainstream methods mainly rely on point clouds to achieve geometry matching between the target template and the search area. However, textureless and incomplete point clouds make it difficult for single-modal trackers to distinguish objects with similar structures. To overcome the limitations of geometry matching, we propose a Multi-modal Multi-level Fusion Tracker (MMF-Track), which exploits image texture and the geometric characteristics of point clouds to track the 3D target. Specifically, we first propose a Space Alignment Module (SAM) to align RGB images with point clouds in 3D space, which is the prerequisite for constructing inter-modal associations. Then, at the feature interaction level, we design a Feature Interaction Module (FIM) based on a dual-stream structure, which enhances intra-modal features in parallel and constructs inter-modal semantic associations. Meanwhile, to refine each modal feature, we introduce a Coarse-to-Fine Interaction Module (CFIM) to realize hierarchical feature interaction at different scales. Finally, at the similarity fusion level, we propose a Similarity Fusion Module (SFM) to aggregate geometry and texture clues from the target. Experiments show that our method achieves state-of-the-art performance on KITTI (a 39% gain over the previous multi-modal method) and is also competitive on NuScenes.
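The space-alignment step described above rests on a standard operation: projecting LiDAR points into the image plane so each 3D point can be associated with an RGB value. As a rough illustration of that prerequisite (not the paper's actual SAM implementation), the sketch below assumes known camera intrinsics `K` and a LiDAR-to-camera extrinsic transform; all function and parameter names are hypothetical.

```python
import numpy as np

def project_points_to_image(points, K, T_cam_from_lidar, image):
    """Project LiDAR points into an image and sample per-point RGB values.

    points: (N, 3) LiDAR coordinates; K: (3, 3) camera intrinsics;
    T_cam_from_lidar: (4, 4) extrinsic transform; image: (H, W, 3) RGB array.
    Returns sampled colors (N, 3) and a boolean validity mask (N,).
    """
    n = points.shape[0]
    pts_h = np.hstack([points, np.ones((n, 1))])        # homogeneous coordinates
    cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]         # points in the camera frame
    in_front = cam[:, 2] > 1e-6                         # keep points ahead of the camera
    uv = (K @ cam.T).T
    uv = uv[:, :2] / np.clip(uv[:, 2:3], 1e-6, None)    # perspective divide
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    h, w = image.shape[:2]
    valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors = np.zeros((n, 3), dtype=image.dtype)
    colors[valid] = image[v[valid], u[valid]]           # gather RGB per projected point
    return colors, valid
```

Points behind the camera or falling outside the image are masked out; downstream fusion would use only the valid point-pixel pairs.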




STTracker: Spatio-Temporal Tracker for 3D Single Object Tracking

3D single object tracking with point clouds is a critical task in 3D com...

A Generalized Multi-Modal Fusion Detection Framework

LiDAR point clouds have become the most common data source in autonomous...

PointSee: Image Enhances Point Cloud

There is a trend to fuse multi-modal information for 3D object detection...

CAT-Det: Contrastively Augmented Transformer for Multi-modal 3D Object Detection

In autonomous driving, LiDAR point-clouds and RGB images are two major d...

4DRVO-Net: Deep 4D Radar-Visual Odometry Using Multi-Modal and Multi-Scale Adaptive Fusion

Four-dimensional (4D) radar–visual odometry (4DRVO) integrates complemen...

Juggling With Representations: On the Information Transfer Between Imagery, Point Clouds, and Meshes for Multi-Modal Semantics

The automatic semantic segmentation of the huge amount of acquired remot...

Towards Class-agnostic Tracking Using Feature Decorrelation in Point Clouds

Single object tracking in point clouds has been attracting more and more...
