Drive&Segment: Unsupervised Semantic Segmentation of Urban Scenes via Cross-modal Distillation

by Antonin Vobecky et al.

This work investigates learning pixel-wise semantic image segmentation of urban scenes without any manual annotation, using only the raw, non-curated data collected by cars that drive around a city equipped with cameras and LiDAR sensors. Our contributions are threefold. First, we propose a novel method for cross-modal unsupervised learning of semantic image segmentation that leverages synchronized LiDAR and image data. The key ingredient of our method is an object proposal module that analyzes the LiDAR point cloud to obtain proposals for spatially consistent objects. Second, we show that these 3D object proposals can be aligned with the input images and reliably clustered into semantically meaningful pseudo-classes. Finally, we develop a cross-modal distillation approach that uses image data partially annotated with the resulting pseudo-classes to train a transformer-based model for image semantic segmentation. We demonstrate the generalization capabilities of our method by testing on four datasets (Cityscapes, Dark Zurich, Nighttime Driving and ACDC) without any fine-tuning, and show significant improvements over the current state of the art on this problem. See the project webpage for the code and more.
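The pipeline described above can be sketched in a minimal, self-contained form. This is an illustrative assumption of how the pieces fit together, not the authors' implementation: LiDAR-derived object segments are clustered into pseudo-classes (here with a simple k-means over per-segment features), and each segment's 3D points are projected into the image with a pinhole camera model to produce a sparse pseudo-label map that could supervise a segmentation network. All function names, the camera intrinsics, and the feature vectors are hypothetical.

```python
import numpy as np

def kmeans(feats, k, iters=20, seed=0):
    """Cluster per-segment feature vectors into k pseudo-classes (toy k-means)."""
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), k, replace=False)].astype(float)
    for _ in range(iters):
        # squared Euclidean distance of every feature to every center
        dists = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = feats[labels == j].mean(0)
    return labels

def project_to_image(points, cam_K):
    """Pinhole projection of Nx3 camera-frame points to integer pixel coords."""
    uvw = points @ cam_K.T
    return (uvw[:, :2] / uvw[:, 2:3]).astype(int)

def pseudo_label_map(segments, seg_feats, cam_K, hw, n_classes):
    """Build a sparse HxW pseudo-label map (-1 = unlabeled) from LiDAR segments."""
    labels = kmeans(seg_feats, n_classes)
    pmap = np.full(hw, -1, dtype=int)
    for seg_pts, lab in zip(segments, labels):
        uv = project_to_image(seg_pts, cam_K)
        # keep only projections that land inside the image
        valid = (uv[:, 0] >= 0) & (uv[:, 0] < hw[1]) & \
                (uv[:, 1] >= 0) & (uv[:, 1] < hw[0])
        pmap[uv[valid, 1], uv[valid, 0]] = lab
    return pmap
```

In the paper's setting, the resulting partially labeled images would then serve as targets for distilling a transformer-based segmentation model; the sketch stops at pseudo-label generation.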




Related research:

- Cross-modal Cross-domain Learning for Unsupervised LiDAR Semantic Segmentation
- Learning 3D Semantic Segmentation with only 2D Image Supervision
- Unsupervised Semantic Segmentation of 3D Point Clouds via Cross-modal Distillation and Super-Voxel Clustering
- 3D Guided Weakly Supervised Semantic Segmentation
- BEV-DG: Cross-Modal Learning under Bird's-Eye View for Domain Generalization of 3D Semantic Segmentation
- Boosting LiDAR-based Semantic Labeling by Cross-Modal Training Data Generation
- "Just Drive": Colour Bias Mitigation for Semantic Segmentation in the Context of Urban Driving
