V2V4Real: A Real-world Large-scale Dataset for Vehicle-to-Vehicle Cooperative Perception

by Runsheng Xu, et al.

Modern perception systems of autonomous vehicles are known to be sensitive to occlusions and to lack long-range perceiving capability, which has been one of the key bottlenecks preventing Level 5 autonomy. Recent research has demonstrated that Vehicle-to-Vehicle (V2V) cooperative perception systems have great potential to revolutionize the autonomous driving industry. However, the lack of a real-world dataset hinders progress in this field. To facilitate the development of cooperative perception, we present V2V4Real, the first large-scale real-world multi-modal dataset for V2V perception. The data is collected by two vehicles equipped with multi-modal sensors driving together through diverse scenarios. Our V2V4Real dataset covers a driving area of 410 km, comprising 20K LiDAR frames, 40K RGB frames, 240K annotated 3D bounding boxes for 5 classes, and HD maps covering all of the driving routes. V2V4Real introduces three perception tasks: cooperative 3D object detection, cooperative 3D object tracking, and Sim2Real domain adaptation for cooperative perception. We provide comprehensive benchmarks of recent cooperative perception algorithms on all three tasks. The V2V4Real dataset and codebase can be found at https://github.com/ucla-mobility/V2V4Real.
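Cooperative 3D object detection of the kind benchmarked here typically requires aligning the two vehicles' detections in a shared coordinate frame before fusing them. The snippet below is a minimal late-fusion sketch in plain NumPy, not the V2V4Real codebase API; the function names, the pose matrix `T_b_to_a`, and the 2 m distance threshold are illustrative assumptions. Boxes from the second vehicle are transformed into the first vehicle's frame, and near-duplicate detections are then suppressed greedily in score order.

```python
import numpy as np

def transform_centers(centers, T):
    """Apply a 4x4 homogeneous transform T to an (N, 3) array of box centers."""
    homo = np.hstack([centers, np.ones((len(centers), 1))])
    return (homo @ T.T)[:, :3]

def late_fuse(boxes_a, scores_a, boxes_b, scores_b, T_b_to_a, dist_thresh=2.0):
    """Late fusion of two vehicles' detections (illustrative sketch).

    boxes_*: (N, 3) box centers in each vehicle's own frame.
    T_b_to_a: 4x4 pose transform from vehicle B's frame to vehicle A's frame.
    Detections whose BEV centers fall within dist_thresh meters of an
    already-kept, higher-scoring detection are treated as duplicates.
    """
    boxes_b = transform_centers(boxes_b, T_b_to_a)
    boxes = np.vstack([boxes_a, boxes_b])
    scores = np.concatenate([scores_a, scores_b])
    keep = []
    for i in np.argsort(-scores):  # highest score first
        if all(np.linalg.norm(boxes[i, :2] - boxes[j, :2]) > dist_thresh
               for j in keep):
            keep.append(i)
    return boxes[keep], scores[keep]
```

Real pipelines fuse oriented boxes with IoU-based NMS (or share intermediate features before detection), but the frame alignment and duplicate suppression above are the core of any late-fusion baseline.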


DAIR-V2X: A Large-Scale Dataset for Vehicle-Infrastructure Cooperative 3D Object Detection

Autonomous driving faces great safety challenges for a lack of global pe...

IDD-3D: Indian Driving Dataset for 3D Unstructured Road Scenes

Autonomous driving and assistance systems rely on annotated data from tr...

OPV2V: An Open Benchmark Dataset and Fusion Pipeline for Perception with Vehicle-to-Vehicle Communication

Employing Vehicle-to-Vehicle communication to enhance perception perform...

Towards Autonomous Driving: a Multi-Modal 360° Perception Proposal

In this paper, a multi-modal 360° framework for 3D object detection and...

AmodalSynthDrive: A Synthetic Amodal Perception Dataset for Autonomous Driving

Unlike humans, who can effortlessly estimate the entirety of objects eve...

A9-Dataset: Multi-Sensor Infrastructure-Based Dataset for Mobility Research

Data-intensive machine learning based techniques increasingly play a pro...

IoT System for Real-Time Near-Crash Detection for Automated Vehicle Testing

Our world is moving towards the goal of fully autonomous driving at a fa...
