Crafting Monocular Cues and Velocity Guidance for Self-Supervised Multi-Frame Depth Learning

08/19/2022
by Xiaofeng Wang, et al.

Self-supervised monocular methods can efficiently learn depth information for weakly textured surfaces and reflective objects. However, their depth accuracy is limited by the inherent ambiguity of monocular geometric modeling. In contrast, multi-frame depth estimation methods improve depth accuracy thanks to the success of Multi-View Stereo (MVS), which directly exploits geometric constraints. Unfortunately, MVS often suffers from texture-less regions, non-Lambertian surfaces, and moving objects, especially in real-world video sequences without known camera motion or depth supervision. Therefore, we propose MOVEDepth, which exploits MOnocular cues and VElocity guidance to improve multi-frame Depth learning. Unlike existing methods that enforce consistency between MVS depth and monocular depth, MOVEDepth boosts multi-frame depth learning by directly addressing the inherent problems of MVS. The key to our approach is to use monocular depth as a geometric prior to construct the MVS cost volume, and to adjust the depth candidates of the cost volume under the guidance of the predicted camera velocity. We further fuse monocular depth and MVS depth by learning uncertainty in the cost volume, yielding depth estimates that are robust to ambiguity in multi-view geometry. Extensive experiments show that MOVEDepth achieves state-of-the-art performance: compared with Monodepth2 and PackNet, our method relatively improves depth accuracy by 20% and 19.8% on the KITTI benchmark. MOVEDepth also generalizes to the more challenging DDAD benchmark, relatively outperforming ManyDepth by 7.2%. The code is available at https://github.com/JeffWang987/MOVEDepth.
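The two ideas in the abstract, a velocity-scaled depth search range centered on the monocular prior, and an uncertainty-based fusion of MVS and monocular depth, can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the function names, the linear velocity scaling, the entropy-based weighting, and all constants are assumptions.

```python
import numpy as np

def build_depth_candidates(mono_depth, velocity, n_candidates=8,
                           base_range=0.1, v_ref=1.0):
    """Sample per-pixel depth hypotheses around the monocular prior.

    Hypothetical sketch: monocular depth acts as a geometric prior for
    the cost volume, and the search range around it widens with the
    predicted camera velocity (larger motion -> larger parallax, so a
    wider range stays informative). All constants are assumptions.
    """
    half_range = base_range * (velocity / v_ref)
    lo = mono_depth * (1.0 - half_range)
    hi = mono_depth * (1.0 + half_range)
    # Linearly spaced candidates per pixel: shape (n_candidates, H, W).
    steps = np.linspace(0.0, 1.0, n_candidates).reshape(-1, 1, 1)
    return lo[None] + (hi - lo)[None] * steps

def fuse_depths(mvs_depth, mono_depth, cost_probs):
    """Fuse MVS and monocular depth using cost-volume uncertainty.

    cost_probs: softmax over depth candidates, shape (n_candidates, H, W).
    A peaked (low-entropy) matching distribution trusts the MVS depth;
    an ambiguous (high-entropy) one falls back to the monocular prior.
    """
    eps = 1e-8
    entropy = -np.sum(cost_probs * np.log(cost_probs + eps), axis=0)
    max_entropy = np.log(cost_probs.shape[0])
    w_mvs = 1.0 - entropy / max_entropy  # 1 = confident MVS, 0 = ambiguous
    return w_mvs * mvs_depth + (1.0 - w_mvs) * mono_depth
```

For example, with a monocular depth of 10 m everywhere and unit velocity, the candidates span 9 m to 11 m; a perfectly uniform cost distribution makes the fused output fall back entirely to the monocular prior.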

Related research

- 05/10/2023: FusionDepth: Complement Self-Supervised Monocular Depth Estimation with Cost Volume
- 04/18/2023: Learning to Fuse Monocular and Multi-view Cues for Multi-frame Depth Estimation in Dynamic Scenes
- 12/15/2021: Multi-View Depth Estimation by Fusing Single-View Depth Probability with Multi-View Geometry
- 11/24/2020: MonoRec: Semi-Supervised Dense Reconstruction in Dynamic Environments from a Single Moving Camera
- 04/15/2022: MVSTER: Epipolar Transformer for Efficient Multi-View Stereo
- 09/14/2022: DevNet: Self-supervised Monocular Depth Learning via Density Volume Construction
- 05/30/2023: DäRF: Boosting Radiance Fields from Sparse Inputs with Monocular Depth Adaptation
