Incremental Learning for Neural Radiance Field with Uncertainty-Filtered Knowledge Distillation

by Mengqi Guo, et al.

The recent neural radiance field (NeRF) representation has achieved great success in novel view synthesis and 3D reconstruction. However, NeRF models suffer from catastrophic forgetting when continuously learning from streaming data without revisiting previous training data. This limitation prohibits applying existing NeRF models to scenarios where images arrive sequentially. In view of this, we explore incremental learning for the neural radiance field representation in this work. We first propose a student-teacher pipeline to mitigate the catastrophic forgetting problem: at the end of each incremental step, the student becomes the teacher, and the teacher then guides the training of the student in the next step. In this way, the student network learns new information from the streaming data while simultaneously retaining old knowledge from the teacher network. Since not all information from the teacher network is helpful (it is trained only on the old data), we further introduce a random inquirer and an uncertainty-based filter to select useful information. We conduct experiments on the NeRF-synthetic360 and NeRF-real360 datasets, where our approach significantly outperforms the baselines by 7.3%. Furthermore, our approach can be applied to the large-scale, camera-facing-outwards dataset ScanNet, where we surpass the baseline by 60.0%.
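The student-teacher loop with the random inquirer and uncertainty filter can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: a linear model with a sigmoid uncertainty head stands in for the NeRF, and all function and parameter names (`predict`, `distillation_targets`, `incremental_step`, `threshold`, `distill_w`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(weights, rays):
    """Forward pass: per-ray color and an uncertainty score in (0, 1)."""
    color = rays @ weights["color"]
    uncertainty = 1.0 / (1.0 + np.exp(-(rays @ weights["unc"])))
    return color, uncertainty

def distillation_targets(teacher, n_rays, dim, threshold=0.5):
    """Random inquirer + uncertainty filter: sample random rays and keep
    only those the teacher is confident about (low uncertainty)."""
    rays = rng.normal(size=(n_rays, dim))
    color, unc = predict(teacher, rays)
    keep = unc.ravel() < threshold
    return rays[keep], color[keep]

def incremental_step(student, new_rays, new_colors, lr=0.05, distill_w=0.5):
    """One incremental step: the previous student becomes the frozen
    teacher, then the student fits the new data plus the filtered
    distillation targets by gradient descent."""
    teacher = {k: v.copy() for k, v in student.items()}
    d_rays, d_colors = distillation_targets(teacher, 64, new_rays.shape[1])
    for _ in range(200):
        # gradients of the squared-error losses on new and distilled rays
        g_new = new_rays.T @ (new_rays @ student["color"] - new_colors)
        g_old = (d_rays.T @ (d_rays @ student["color"] - d_colors)
                 if len(d_rays) else 0.0)
        student["color"] -= lr / len(new_rays) * (g_new + distill_w * g_old)
    return student
```

The key design point the sketch mirrors is that the distillation loss is computed only on inquired rays that pass the teacher's uncertainty filter, so unreliable teacher predictions never constrain the student.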



