Robust Fusion for Bayesian Semantic Mapping

by David Morilla-Cabello et al.

The integration of semantic information into a map allows robots to better understand their environment and make high-level decisions. In the last few years, neural networks have shown enormous progress in their perception capabilities. However, when fusing multiple observations from a neural network into a semantic map, the network's inherent overconfidence on unknown data gives too much weight to outliers and decreases the robustness of the resulting map. In this work, we propose a novel robust fusion method to combine multiple Bayesian semantic predictions. Our method uses the uncertainty estimates provided by a Bayesian neural network to calibrate how the measurements are fused: the observations are regularized to mitigate overconfident outlier predictions, and the epistemic uncertainty is used to weigh their influence in the fusion, resulting in a different formulation of the probability distributions. We validate our robust fusion strategy through experiments on photo-realistic simulated environments and real scenes. In both cases, we use a network trained on different data to expose the model to varying data distributions. The results show that accounting for the model's uncertainty and regularizing the probability distributions of the observations yields better semantic segmentation performance and more robustness to outliers, compared with other methods.
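The abstract's two ideas (regularizing each observation's class distribution and down-weighting uncertain observations in the fusion) can be illustrated with a minimal sketch. This is a hypothetical implementation, not the authors' exact formulation: the function `robust_fuse`, the uniform-blend regularizer, and the inverse-uncertainty weights are all illustrative assumptions.

```python
import numpy as np

def robust_fuse(probs, epistemic_unc, eps=1e-9):
    """Fuse N per-observation class distributions (N x C) into one posterior.

    Illustrative sketch only: each observation is blended toward the
    uniform distribution in proportion to its epistemic uncertainty
    (regularization), then the fused posterior is a weighted product of
    distributions, where weights are inverse-uncertainty so that
    overconfident outliers with high uncertainty contribute less.
    """
    probs = np.asarray(probs, dtype=float)
    unc = np.asarray(epistemic_unc, dtype=float)
    n, c = probs.shape
    uniform = np.full(c, 1.0 / c)

    # Regularize: the more uncertain an observation, the closer it is
    # pulled to the uniform distribution.
    alpha = unc / (unc.max() + eps)                      # in [0, 1]
    reg = (1 - alpha)[:, None] * probs + alpha[:, None] * uniform

    # Weighted log-linear (product-of-experts) fusion.
    w = 1.0 / (unc + eps)
    w = w / w.sum()
    log_post = (w[:, None] * np.log(reg + eps)).sum(axis=0)
    post = np.exp(log_post - log_post.max())
    return post / post.sum()
```

For example, two confident observations of class 0 combined with one highly uncertain outlier predicting class 1 would still fuse to class 0, because the outlier is both regularized toward uniform and given a small weight.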




UNO: Uncertainty-aware Noisy-Or Multimodal Fusion for Unanticipated Input Degradation

The fusion of multiple sensor modalities, especially through deep learni...

Attention-based fusion of semantic boundary and non-boundary information to improve semantic segmentation

This paper introduces a method for image semantic segmentation grounded ...

Dynamic Feature Fusion for Semantic Edge Detection

Features from multiple scales can greatly benefit the semantic edge dete...

Learning to Explore Informative Trajectories and Samples for Embodied Perception

We are witnessing significant progress on perception models, specificall...

Photo-zSNthesis: Converting Type Ia Supernova Lightcurves to Redshift Estimates via Deep Learning

Upcoming photometric surveys will discover tens of thousands of Type Ia ...

Robustness to Out-of-Distribution Inputs via Task-Aware Generative Uncertainty

Deep learning provides a powerful tool for machine perception when the o...

A Bayesian Convolutional Neural Network for Robust Galaxy Ellipticity Regression

Cosmic shear estimation is an essential scientific goal for large galaxy...
