MetaComp: Learning to Adapt for Online Depth Completion

by Yang Chen et al.

Relying on deep supervised or self-supervised learning, previous methods for depth completion from paired RGB images and sparse depth data have achieved impressive performance in recent years. However, when facing a new environment where the test data arrives online and differs from the training data in RGB image content and depth sparsity, the trained model may suffer a severe performance drop. For the trained model to work well under such conditions, it should be able to adapt to the new environment continuously and effectively. To achieve this, we propose MetaComp, which uses meta-learning to simulate adaptation policies during training and then adapts the model to new environments in a self-supervised manner at test time. Because the input is multi-modal, adapting a model to variations in both modalities simultaneously is challenging, owing to significant differences in the structure and form of the two modalities. We therefore disentangle the adaptation procedure in the basic meta-learning training into two steps: the first focuses on depth sparsity, while the second attends to image content. During testing, we follow the same strategy to adapt the model online to new multi-modal data. Experimental results and comprehensive ablations show that MetaComp adapts effectively to depth completion in new environments and is robust to changes in either modality.



