Learning Spatial and Temporal Variations for 4D Point Cloud Segmentation

07/11/2022
by Shi Hanyu, et al.

LiDAR-based 3D scene perception is a fundamental and important task for autonomous driving. Most state-of-the-art methods for LiDAR-based 3D recognition operate on single-frame point clouds and ignore the temporal information across frames. We argue that this temporal information provides crucial knowledge for 3D scene perception, especially in driving scenarios. In this paper, we focus on spatial and temporal variations to better exploit the temporal information across 3D frames. We design a temporal variation-aware interpolation module and a temporal voxel-point refiner to capture temporal variation in the 4D point cloud. The temporal variation-aware interpolation generates local features from the previous and current frames by capturing spatial coherence and temporal variation. The temporal voxel-point refiner builds a temporal graph on the 3D point cloud sequence and captures temporal variation with a graph convolution module; it also transforms coarse voxel-level predictions into fine point-level predictions. With the proposed modules, our new network TVSN achieves state-of-the-art performance on SemanticKITTI and SemanticPOSS. Specifically, our method achieves 52.5% mIoU on SemanticKITTI (+5.5% over previous best approaches) and 63.0% on SemanticPOSS.
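
The abstract gives no implementation details, but the kind of temporal graph convolution it describes can be illustrated with a minimal sketch. The PyTorch snippet below connects each point in the current frame to its k nearest neighbors in the previous frame and aggregates per-edge feature differences, a simple proxy for temporal variation, with a shared MLP. All class, method, and parameter names here are hypothetical assumptions for illustration, not the authors' actual code.

# Illustrative sketch only, not the TVSN implementation: a temporal graph
# convolution that links each current-frame point to its k nearest
# previous-frame points and aggregates feature differences across time.
import torch
import torch.nn as nn


class TemporalGraphConv(nn.Module):  # hypothetical module name
    def __init__(self, in_dim: int, out_dim: int, k: int = 8):
        super().__init__()
        self.k = k
        # Shared MLP applied to each edge [current feature, temporal difference].
        self.mlp = nn.Sequential(
            nn.Linear(2 * in_dim, out_dim),
            nn.ReLU(inplace=True),
            nn.Linear(out_dim, out_dim),
        )

    def forward(self, xyz_cur, feat_cur, xyz_prev, feat_prev):
        # xyz_*: (N, 3) point coordinates; feat_*: (N, C) point features.
        # Find k nearest previous-frame neighbors for each current point.
        dists = torch.cdist(xyz_cur, xyz_prev)           # (N_cur, N_prev)
        idx = dists.topk(self.k, largest=False).indices  # (N_cur, k)
        neigh = feat_prev[idx]                           # (N_cur, k, C)
        # Temporal variation: feature difference between each point and
        # its previous-frame neighbors.
        diff = neigh - feat_cur.unsqueeze(1)             # (N_cur, k, C)
        edge = torch.cat(
            [feat_cur.unsqueeze(1).expand_as(diff), diff], dim=-1
        )                                                # (N_cur, k, 2C)
        # Max-pool over the temporal neighborhood of each point.
        return self.mlp(edge).max(dim=1).values          # (N_cur, out_dim)


# Example usage with random data:
conv = TemporalGraphConv(in_dim=32, out_dim=64)
out = conv(torch.rand(1024, 3), torch.rand(1024, 32),
           torch.rand(1024, 3), torch.rand(1024, 32))

The edge-conv-style max-pooling here is purely one plausible choice; per the abstract, the paper's temporal voxel-point refiner additionally maps coarse voxel-level predictions back to fine point-level predictions, which this sketch does not attempt.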
