Edit-DiffNeRF: Editing 3D Neural Radiance Fields using 2D Diffusion Model

06/15/2023
by   Lu Yu, et al.

Recent research has demonstrated that combining pretrained diffusion models with neural radiance fields (NeRFs) is a promising approach for text-to-3D generation. However, simply coupling NeRF with diffusion models results in cross-view inconsistency and degraded stylized view syntheses. To address this challenge, we propose the Edit-DiffNeRF framework, which is composed of a frozen diffusion model, a proposed delta module that edits the latent semantic space of the diffusion model, and a NeRF. Instead of training the entire diffusion model for each scene, our method uses the delta module to edit the latent semantic space of the frozen pretrained diffusion model. This fundamental change to the standard diffusion framework enables us to make fine-grained modifications to the rendered views and to effectively consolidate these instructions in a 3D scene via NeRF training. As a result, we are able to produce an edited 3D scene that faithfully aligns with the input text instructions. Furthermore, to ensure semantic consistency across different viewpoints, we propose a novel multi-view semantic consistency loss that extracts a latent semantic embedding from the input view as a prior and aims to reconstruct it in different views. Our proposed method has been shown to effectively edit real-world 3D scenes, resulting in a 25% improvement in the alignment of the performed 3D edits with text instructions compared to prior work.
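The abstract names two mechanisms without implementation detail: a delta module that perturbs the frozen diffusion model's latent semantic space, and a multi-view semantic consistency loss that treats the input view's latent embedding as a prior to be reconstructed from other views. The sketch below is a minimal, hypothetical illustration of those two ideas only; the module layout, layer sizes, and the choice of a cosine-distance reconstruction term are assumptions, not the paper's actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DeltaModule(nn.Module):
    """Hypothetical delta module: predicts an additive offset in the frozen
    diffusion model's latent semantic space, conditioned on an edit-text
    embedding. Dimensions and architecture are illustrative assumptions."""

    def __init__(self, latent_dim: int = 768, hidden_dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim * 2, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, latent_dim),
        )

    def forward(self, latent: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        # Edit the latent by adding a learned delta; the diffusion model
        # itself is never fine-tuned.
        delta = self.net(torch.cat([latent, text_emb], dim=-1))
        return latent + delta


def multiview_semantic_consistency_loss(
    view_embeddings: torch.Tensor,  # (N, D): embeddings of N rendered views
    input_embedding: torch.Tensor,  # (D,): embedding of the input view (prior)
) -> torch.Tensor:
    """Hypothetical consistency loss: pull every rendered view's latent
    semantic embedding toward the input-view prior. Cosine distance is an
    assumed metric; the paper may define the reconstruction differently."""
    prior = input_embedding.unsqueeze(0).expand_as(view_embeddings)
    return (1.0 - F.cosine_similarity(view_embeddings, prior, dim=-1)).mean()


# Illustrative usage with dummy tensors.
views = torch.randn(8, 768)   # embeddings of 8 rendered novel views
prior = torch.randn(768)      # embedding of the original input view
loss = multiview_semantic_consistency_loss(views, prior)
```

In this reading, the consistency term is added to the NeRF training objective so that edits introduced by the delta module stay semantically coherent across viewpoints; how it is weighted against the other losses is not specified in the abstract.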


research
03/22/2023

Instruct-NeRF2NeRF: Editing 3D Scenes with Instructions

We propose a method for editing NeRF scenes with text-instructions. Give...
research
03/14/2023

Let 2D Diffusion Model Know 3D-Consistency for Robust Text-to-3D Generation

Text-to-3D generation has shown rapid progress in recent days with the a...
research
08/31/2023

MVDream: Multi-view Diffusion for 3D Generation

We propose MVDream, a multi-view diffusion model that is able to generat...
research
06/09/2023

RePaint-NeRF: NeRF Editting via Semantic Masks and Diffusion Models

The emergence of Neural Radiance Fields (NeRF) has promoted the developm...
research
03/28/2023

Instruct 3D-to-3D: Text Instruction Guided 3D-to-3D conversion

We propose a high-quality 3D-to-3D conversion method, Instruct 3D-to-3D....
research
03/24/2023

CompoNeRF: Text-guided Multi-object Compositional NeRF with Editable 3D Scene Layout

Recent research endeavors have shown that combining neural radiance fiel...
research
04/11/2023

Re-imagine the Negative Prompt Algorithm: Transform 2D Diffusion into 3D, alleviate Janus problem and Beyond

Although text-to-image diffusion models have made significant strides in...
