Depth Assisted Full Resolution Network for Single Image-based View Synthesis

11/17/2017
by   Xiaodong Cun, et al.

Research on novel viewpoint synthesis has mainly focused on interpolation from multi-view input images. In this paper, we address a more challenging and ill-posed problem: synthesizing novel viewpoints from a single input image. To achieve this goal, we propose a novel deep learning-based technique. We design a full resolution network that extracts local image features at the same resolution as the input, which helps produce high-resolution output and prevents blurry artifacts in the final synthesized images. We also incorporate a pre-trained depth estimation network into our system, so that 3D information can be used to infer the flow field between the input and the target image. Since the depth network is trained on depth ordering between arbitrary pairs of points in the scene, global image features are also brought into our system. Finally, a synthesis layer not only warps the observed pixels to their desired positions but also hallucinates the missing pixels from the recorded ones. Experiments show that our technique performs well on images of various scenes and outperforms state-of-the-art techniques.
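
The abstract does not give implementation details, but the core operation of such a synthesis layer is backward warping of the input image along a predicted flow field. The following is a minimal sketch, assuming a PyTorch-style implementation; the function name warp_by_flow and the dummy inputs are illustrative, not the authors' code.

# Minimal sketch (not the authors' implementation): flow-based backward
# warping with bilinear sampling, the operation a synthesis layer can use
# to move observed input pixels to their positions in the target view.
import torch
import torch.nn.functional as F

def warp_by_flow(image, flow):
    """Backward-warp `image` (N, C, H, W) with a per-pixel flow field
    `flow` (N, 2, H, W) given in pixels as (dx, dy)."""
    n, _, h, w = image.shape
    # Base sampling grid of target-pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=image.dtype, device=image.device),
        torch.arange(w, dtype=image.dtype, device=image.device),
        indexing="ij",
    )
    base = torch.stack((xs, ys), dim=0).unsqueeze(0)   # (1, 2, H, W)
    coords = base + flow                               # source coordinates
    # Normalize to [-1, 1] as required by grid_sample.
    coords_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((coords_x, coords_y), dim=-1)   # (N, H, W, 2)
    # Bilinear sampling; out-of-view samples are zero-padded here and would
    # be filled by the hallucination step described in the abstract.
    return F.grid_sample(image, grid, mode="bilinear",
                         padding_mode="zeros", align_corners=True)

# Example: warp a dummy image with a uniform horizontal flow of 3 pixels.
img = torch.rand(1, 3, 64, 64)
flow = torch.zeros(1, 2, 64, 64)
flow[:, 0] = 3.0
out = warp_by_flow(img, flow)

In the paper's pipeline the flow field would be predicted from the full resolution features and the estimated depth rather than supplied by hand as above.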
