Universal Semantic Segmentation for Fisheye Urban Driving Images

01/31/2020
by Yaozu Ye, et al.

Semantic segmentation is a critical component of autonomous driving. When performing semantic image segmentation, a wider field of view (FoV), such as that offered by fisheye cameras, provides more information about the surrounding environment and makes automated driving safer and more reliable. However, large public fisheye datasets are not available, and the images captured by large-FoV fisheye cameras come with strong distortion, so commonly used semantic segmentation models cannot be applied directly. In this paper, a seven-degrees-of-freedom (DoF) augmentation method is proposed to transform rectilinear images into fisheye images in a comprehensive way. During training, rectilinear images are transformed into fisheye images across seven DoF, simulating fisheye images taken by cameras at different positions, orientations and focal lengths. The results show that training with the seven-DoF augmentation evidently improves the model's accuracy and robustness on fisheye data with different distortions. This seven-DoF augmentation provides a universal semantic segmentation solution for fisheye cameras in different autonomous driving applications. We also provide specific parameter settings of the augmentation for autonomous driving. Finally, we tested our universal semantic segmentation model on real fisheye images and obtained satisfactory results. The code and configurations are released at <https://github.com/Yaozhuwa/FisheyeSeg>.
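To make the augmentation idea concrete, the sketch below warps a rectilinear (pinhole) image into an equidistant fisheye image. It varies only one of the degrees of freedom, the virtual fisheye focal length; the paper's full seven-DoF augmentation also randomizes the virtual camera's position and orientation, which is not shown here. The function name rectilinear_to_fisheye and the parameters f_fisheye and f_pinhole are illustrative assumptions, not taken from the released code.

```python
import cv2
import numpy as np

def rectilinear_to_fisheye(img, f_fisheye, f_pinhole=None):
    """Warp a rectilinear image into an equidistant fisheye image.

    Minimal sketch: only the fisheye focal length (one DoF) is exposed;
    camera position/orientation changes from the paper are omitted.
    """
    h, w = img.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    if f_pinhole is None:
        # Assume the source image spans roughly a 90-degree horizontal FoV.
        f_pinhole = w / 2.0

    # Pixel grid of the target fisheye image, centred on the principal point.
    u, v = np.meshgrid(np.arange(w, dtype=np.float32),
                       np.arange(h, dtype=np.float32))
    x, y = u - cx, v - cy
    r_fish = np.sqrt(x ** 2 + y ** 2)

    # Equidistant fisheye model: r_fish = f_fisheye * theta.
    theta = np.clip(r_fish / f_fisheye, 0.0, np.pi / 2 - 1e-3)
    # Pinhole model: r_rect = f_pinhole * tan(theta).
    r_rect = f_pinhole * np.tan(theta)

    # For each fisheye pixel, look up the corresponding rectilinear pixel.
    scale = np.where(r_fish > 1e-6, r_rect / np.maximum(r_fish, 1e-6), 1.0)
    map_x = (cx + x * scale).astype(np.float32)
    map_y = (cy + y * scale).astype(np.float32)

    # The same maps can be applied to the label image with nearest-neighbour
    # interpolation so that class indices are preserved.
    return cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT)

# During training, f_fisheye could be sampled randomly (e.g. uniformly over a
# range of fractions of the image width) so the network sees varied distortion.
```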

Related research

01/02/2018: Restricted Deformable Convolution based Road Scene Semantic Segmentation Using Surround View Cameras
Understanding the surrounding environment of the vehicle is still one of...

04/17/2020: IDDA: a large-scale multi-domain dataset for autonomous driving
Semantic segmentation is key in autonomous driving. Using deep visual le...

03/15/2023: MSeg3D: Multi-modal 3D Semantic Segmentation for Autonomous Driving
LiDAR and camera are two modalities available for 3D semantic segmentati...

07/28/2023: OAFuser: Towards Omni-Aperture Fusion for Light Field Semantic Segmentation of Road Scenes
Light field cameras can provide rich angular and spatial information to ...

03/09/2021: Capturing Omni-Range Context for Omnidirectional Segmentation
Convolutional Networks (ConvNets) excel at semantic segmentation and hav...

09/15/2019: Brno Urban Dataset -- The New Data for Self-Driving Agents and Mapping Tasks
Autonomous driving is a dynamically growing field of research, where qua...

02/19/2021: Adaptable Deformable Convolutions for Semantic Segmentation of Fisheye Images in Autonomous Driving Systems
Advanced Driver-Assistance Systems rely heavily on perception tasks such...
