ClusterFusion: Leveraging Radar Spatial Features for Radar-Camera 3D Object Detection in Autonomous Vehicles

09/07/2023
by   Irfan Tito Kurniawan, et al.

Thanks to the complementary nature of millimeter-wave radar and camera, deep learning-based radar-camera 3D object detection methods can reliably produce accurate detections even in low-visibility conditions. This makes them well-suited for autonomous vehicles' perception systems, especially since the combined cost of both sensors is lower than the cost of a lidar. Recent radar-camera methods commonly perform feature-level fusion, which often involves projecting the radar points onto the same plane as the image features and fusing the extracted features from both modalities. While performing fusion on the image plane is generally simpler and faster, projecting radar points onto the image plane flattens the depth dimension of the point cloud, which can lead to information loss and makes it harder to extract the spatial features of the point cloud. We propose ClusterFusion, an architecture that leverages the local spatial features of the radar point cloud by clustering the point cloud and performing feature extraction directly on the point cloud clusters before projecting the features onto the image plane. ClusterFusion achieved state-of-the-art performance among all radar-monocular camera methods on the test slice of the nuScenes dataset, with a 48.7% nuScenes detection score (NDS). We also investigated the performance of three radar feature extraction strategies on point cloud clusters: a handcrafted strategy, a learning-based strategy, and a combination of both, and found that the handcrafted strategy yielded the best performance. The main goal of this work is to explore the use of radar's local spatial and point-wise features by extracting them directly from radar point cloud clusters for a radar-monocular camera 3D object detection method that performs cross-modal feature fusion on the image plane.
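The pipeline the abstract describes (cluster the radar point cloud in 3D, extract handcrafted per-cluster features, then project the features onto the image plane) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the point values, camera intrinsics, greedy single-linkage clustering rule, and feature set are all assumptions made for the example.

```python
import math

# Illustrative radar points in the camera's optical frame
# (x right, y down, z forward/depth), each with a radial
# velocity (m/s) and radar cross-section (RCS). Values are made up.
POINTS = [
    (0.5, 0.1, 10.2, 4.1, 12.0),
    (0.7, 0.0, 10.4, 4.0, 10.5),
    (0.4, 0.2, 10.3, 4.2, 11.0),
    (-3.1, 0.1, 25.0, -1.0, 8.0),
    (-3.0, 0.0, 25.3, -1.1, 7.5),
]

def cluster_points(points, eps=1.0):
    """Greedy single-linkage clustering: a point joins a cluster if it
    lies within `eps` metres of any member (a simple stand-in for the
    density-based clustering a real pipeline might use)."""
    clusters, unassigned = [], list(points)
    while unassigned:
        cluster = [unassigned.pop(0)]
        changed = True
        while changed:
            changed = False
            for p in list(unassigned):
                if any(math.dist(p[:3], q[:3]) <= eps for q in cluster):
                    cluster.append(p)
                    unassigned.remove(p)
                    changed = True
        clusters.append(cluster)
    return clusters

def handcrafted_features(cluster):
    """Handcrafted per-cluster descriptor: centroid, spatial extent,
    mean radial velocity, mean RCS, and point count."""
    n = len(cluster)
    centroid = tuple(sum(p[i] for p in cluster) / n for i in range(3))
    extent = tuple(max(p[i] for p in cluster) - min(p[i] for p in cluster)
                   for i in range(3))
    return {
        "centroid": centroid,
        "extent": extent,
        "mean_velocity": sum(p[3] for p in cluster) / n,
        "mean_rcs": sum(p[4] for p in cluster) / n,
        "count": n,
    }

def project_to_image(xyz, fx=1266.0, fy=1266.0, cx=800.0, cy=450.0):
    """Pinhole projection of a 3D point onto the image plane; the
    intrinsics here are illustrative, not calibrated values."""
    x, y, z = xyz
    return fx * x / z + cx, fy * y / z + cy

# Cluster the point cloud, extract per-cluster features in 3D, then
# attach each feature vector to an image location via its centroid.
clusters = cluster_points(POINTS)
features = [handcrafted_features(c) for c in clusters]
pixels = [project_to_image(f["centroid"]) for f in features]
```

The key point of the approach: because the descriptor is computed in 3D before projection, quantities that a per-point image-plane projection would flatten away, such as the cluster's depth extent and velocity statistics, survive as part of the projected feature vector.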

Related research

- RC-BEVFusion: A Plug-In Module for Radar-Camera Bird's Eye View Feature Fusion (05/25/2023)
  Radars and cameras belong to the most frequently used sensors for advanc...
- Cross-modal Learning of Graph Representations using Radar Point Cloud for Long-Range Gesture Recognition (03/31/2022)
  Gesture recognition is one of the most intuitive ways of interaction and...
- SMURF: Spatial Multi-Representation Fusion for 3D Object Detection with 4D Imaging Radar (07/20/2023)
  The 4D Millimeter wave (mmWave) radar is a promising technology for vehi...
- LAPTNet: LiDAR-Aided Perspective Transform Network (11/14/2022)
  Semantic grids are a useful representation of the environment around a r...
- CRN: Camera Radar Net for Accurate, Robust, Efficient 3D Perception (04/03/2023)
  Autonomous driving requires an accurate and fast 3D perception system th...
- Learning to Catch Piglets in Flight (01/28/2020)
  Catching objects in-flight is an outstanding challenge in robotics. In t...
- Achelous: A Fast Unified Water-surface Panoptic Perception Framework based on Fusion of Monocular Camera and 4D mmWave Radar (07/14/2023)
  Current perception models for different tasks usually exist in modular f...
