360SD-Net: 360° Stereo Depth Estimation with Learnable Cost Volume

11/11/2019
by Ning-Hsu Wang, et al.

Recently, end-to-end trainable deep neural networks have significantly improved stereo depth estimation for perspective images. However, 360° images captured under equirectangular projection cannot directly benefit from existing methods because of the distortion this projection introduces (i.e., lines in 3D are not projected onto lines in 2D). To tackle this issue, we present a novel architecture specifically designed for spherical disparity, using a top-bottom 360° camera pair. Moreover, we propose to mitigate the distortion issue with (1) an additional input branch that captures the position and relation of each pixel in spherical coordinates, and (2) a cost volume built upon a learnable shifting filter. Owing to the lack of 360° stereo data, we collect two 360° stereo datasets from Matterport3D and Stanford3D for training and evaluation. Extensive experiments and ablation studies validate our method against existing algorithms. Finally, we show promising results in real-world environments, capturing images with two consumer-level cameras.
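As a rough illustration of the "additional input branch" idea, the sketch below builds a per-pixel polar-angle map for an equirectangular image, which could be concatenated to the RGB input as an extra coordinate channel. The abstract does not give the exact formulation, so the linear latitude mapping and the channel-concatenation use are assumptions for illustration only.

```python
import numpy as np

def polar_angle_map(height, width):
    """Per-pixel polar (latitude) angle for an equirectangular image.

    Rows map linearly from +pi/2 (top of the sphere) to -pi/2 (bottom),
    and the column dimension is constant, mirroring how equirectangular
    projection assigns latitude to image rows. This is a hypothetical
    stand-in for the paper's spherical-coordinate input branch.
    """
    phi = np.linspace(np.pi / 2, -np.pi / 2, height)  # latitude per row
    return np.repeat(phi[:, None], width, axis=1)     # (H, W) angle map

# Example: a tiny 4x8 map; each row holds one latitude value.
angles = polar_angle_map(4, 8)
```

Such a coordinate map gives the network explicit knowledge of where each pixel sits on the sphere, which is one plausible way to compensate for latitude-dependent distortion.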
