CbwLoss: Constrained Bidirectional Weighted Loss for Self-supervised Learning of Depth and Pose

by Fei Wang, et al.

Photometric differences are widely used as supervision signals to train neural networks that estimate depth and camera pose from unlabeled monocular videos. However, occlusions and moving objects in a scene violate the underlying static-scene assumption, so the photometric difference can mislead model optimization. In addition, pixels in textureless regions, and less discriminative pixels generally, provide weak supervision and hinder training. To address these problems, we first handle moving objects and occlusions using the differences between the flow fields and depth structures generated by affine transformation and by view synthesis, respectively. Second, we mitigate the effect of textureless regions by measuring differences between features that carry richer semantic and contextual information, without adding any networks. Moreover, although a bidirectional component is used in each sub-objective function, each image pair is reasoned about only once, which helps reduce overhead. Extensive experiments and visual analysis demonstrate the effectiveness of the proposed method, which outperforms existing state-of-the-art self-supervised methods under the same conditions and without introducing additional auxiliary information.
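The core supervision idea above can be sketched in a few lines. The snippet below is an illustrative stand-in, not the paper's CbwLoss: it shows a generic per-pixel photometric error and an occlusion-aware bidirectional aggregation that takes the per-pixel minimum over forward- and backward-synthesized views, so a pixel occluded in one direction can still be supervised by the other. The `alpha` weight and the simplified structural term are assumptions chosen to keep the sketch dependency-free.

```python
import numpy as np

def photometric_error(target, synthesized, alpha=0.85):
    """Per-pixel photometric error as a weighted mix of a structural term
    and an L1 term. alpha=0.85 is a hypothetical weight, mirroring the
    common SSIM/L1 split in self-supervised depth estimation.
    Inputs are H x W x C arrays in [0, 1]."""
    l1 = np.abs(target - synthesized).mean(axis=-1)
    # Crude local-structure term: difference of per-pixel channel means.
    # A real implementation would use SSIM here; this keeps the sketch
    # free of extra dependencies.
    struct = np.abs(target.mean(axis=-1) - synthesized.mean(axis=-1))
    return alpha * struct + (1 - alpha) * l1

def bidirectional_min_reprojection(target, candidates):
    """Occlusion-aware aggregation: per-pixel minimum of the photometric
    error over all synthesized candidate views (e.g. warps from the
    previous and next frames), averaged into a scalar loss. A pixel only
    needs to be visible in one candidate to receive low error."""
    errors = np.stack([photometric_error(target, c) for c in candidates])
    return errors.min(axis=0).mean()
```

In practice the candidate views would come from differentiable view synthesis (warping source frames through the predicted depth and pose); the min-aggregation is one simple way to keep occluded or moving pixels from dominating the gradient, in the spirit of the flow- and depth-based weighting the abstract describes.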




