Deep scene-scale material estimation from multi-view indoor captures

11/15/2022
by Siddhant Prakash, et al.

The movie and video game industries have adopted photogrammetry as a way to create digital 3D assets from multiple photographs of a real-world scene. However, photogrammetry algorithms typically output an RGB texture atlas of the scene that only serves as visual guidance for skilled artists to create material maps suitable for physically-based rendering. We present a learning-based approach that automatically produces digital assets ready for physically-based rendering, by estimating approximate material maps from multi-view captures of indoor scenes; these maps are used together with retopologized geometry. We base our approach on a material estimation Convolutional Neural Network (CNN) that we execute on each input image. We leverage the view-dependent visual cues provided by the multiple observations of the scene by gathering, for each pixel of a given image, the color of the corresponding point in the other images. This image-space CNN provides us with an ensemble of predictions, which we merge in texture space as the last step of our approach. Our results demonstrate that the recovered assets can be directly used for physically-based rendering and editing of real indoor scenes from any viewpoint and under novel lighting. Our method generates approximate material maps in a fraction of the time required by the closest previous solutions.
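
To make the multi-view gathering and texture-space merging steps concrete, below is a minimal NumPy sketch, assuming pinhole cameras and per-view depth maps available from the photogrammetry stage. All function and parameter names (gather_multiview_colors, ref_K, cam_from_world, merge_predictions_texture_space, and so on) are illustrative assumptions, not the authors' actual code; occlusion testing and sub-pixel interpolation are omitted for brevity.

```python
# Hypothetical sketch of (1) per-pixel multi-view colour gathering and
# (2) weighted merging of per-view predictions. Not the paper's code.
import numpy as np

def gather_multiview_colors(ref_depth, ref_K, ref_cam_from_world,
                            src_images, src_Ks, src_cams_from_world):
    """For each pixel of a reference view, look up the colour of the
    corresponding 3D point in every source view.

    ref_depth:           (H, W) depth map of the reference view
    ref_K:               (3, 3) reference intrinsics
    ref_cam_from_world:  (4, 4) reference extrinsics
    src_images:          list of (H, W, 3) source images
    Returns an (N, H, W, 3) stack of reprojected colours.
    """
    H, W = ref_depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Back-project reference pixels to camera space, then to world space.
    rays = np.linalg.inv(ref_K) @ np.stack([u, v, np.ones_like(u)], 0).reshape(3, -1)
    cam_pts = rays * ref_depth.reshape(1, -1)
    world_pts = np.linalg.inv(ref_cam_from_world) @ np.vstack(
        [cam_pts, np.ones((1, cam_pts.shape[1]))])

    gathered = []
    for img, K, cam_from_world in zip(src_images, src_Ks, src_cams_from_world):
        # Project the world points into the source view.
        proj = K @ (cam_from_world @ world_pts)[:3]
        uv = (proj[:2] / np.clip(proj[2:3], 1e-6, None)).round().astype(int)
        # Keep only points in front of the camera and inside the image.
        valid = (proj[2] > 0) & (uv[0] >= 0) & (uv[0] < img.shape[1]) \
                              & (uv[1] >= 0) & (uv[1] < img.shape[0])
        colors = np.zeros((H * W, 3), dtype=img.dtype)
        colors[valid] = img[uv[1, valid], uv[0, valid]]
        gathered.append(colors.reshape(H, W, 3))
    return np.stack(gathered)

def merge_predictions_texture_space(per_view_maps, per_view_weights):
    """Merge an ensemble of per-view material predictions with per-texel
    confidence weights: a simple stand-in for texture-space aggregation.

    per_view_maps:    (N, H, W, C) material maps projected into the atlas
    per_view_weights: (N, H, W) per-texel confidences
    """
    w = np.clip(per_view_weights, 1e-6, None)[..., None]
    return (per_view_maps * w).sum(0) / w.sum(0)
```

In this sketch, the stack of reprojected colours would be the view-dependent input channels for the per-image CNN, and the weighted average stands in for whatever texture-space merging scheme the method actually uses.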

Related research

11/18/2022
Multi-view Inverse Rendering for Large-scale Real-world Indoor Scenes
We present a multi-view inverse rendering method for large-scale real-wo...

03/14/2022
NeILF: Neural Incident Light Field for Physically-based Material Estimation
We present a differentiable rendering framework for material and lightin...

03/12/2016
Towards Building an RGBD-M Scanner
We present a portable device to capture both shape and reflectance of an...

05/27/2021
Passing Multi-Channel Material Textures to a 3-Channel Loss
Our objective is to compute a textural loss that can be used to train te...

11/04/2020
Hypersim: A Photorealistic Synthetic Dataset for Holistic Indoor Scene Understanding
For many fundamental scene understanding tasks, it is difficult or impos...

07/25/2020
OpenRooms: An End-to-End Open Framework for Photorealistic Indoor Scene Datasets
Large-scale photorealistic datasets of indoor scenes, with ground truth ...

02/23/2021
Generative Modelling of BRDF Textures from Flash Images
We learn a latent space for easy capture, semantic editing, consistent i...
