Frosting Weights for Better Continual Training

01/07/2020
by Xiaofeng Zhu, et al.

Training a neural network model can be a lifelong process, and it is a computationally intensive one. A severe adverse effect in deep neural network models is catastrophic forgetting: when retrained on new data, a model can abruptly lose what it previously learned. To avoid such disruptions in continual learning, one appealing property is the additive nature of ensemble models, in which new members can be added without modifying the ones already trained. In this paper, we propose two generic ensemble approaches, gradient boosting and meta-learning, to solve the catastrophic forgetting problem when tuning pre-trained neural network models.
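
The additive idea behind the boosting-style approach can be sketched in a few lines. The snippet below is an illustrative sketch, not the paper's implementation: it keeps a pretrained network frozen and fits a small "booster" network on the new data, summing the two members' logits so retraining cannot overwrite the original weights. The class name BoostedEnsemble, the toy architectures, and all layer sizes are assumptions made for this example; PyTorch is used only for concreteness.

import torch
import torch.nn as nn

class BoostedEnsemble(nn.Module):
    """Additive ensemble: frozen pretrained member plus a trainable booster."""

    def __init__(self, pretrained: nn.Module, booster: nn.Module):
        super().__init__()
        self.pretrained = pretrained
        self.booster = booster
        # Freeze the pretrained member so continual training cannot
        # overwrite it (the source of catastrophic forgetting).
        for p in self.pretrained.parameters():
            p.requires_grad = False

    def forward(self, x):
        with torch.no_grad():
            base_logits = self.pretrained(x)
        # The booster learns an additive correction on the new data.
        return base_logits + self.booster(x)

# Toy usage with made-up dimensions (10 features, 3 classes).
pretrained = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))
booster = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 3))
model = BoostedEnsemble(pretrained, booster)

# Only the booster's parameters are optimized on the new task's data.
optimizer = torch.optim.Adam(model.booster.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
x_new, y_new = torch.randn(64, 10), torch.randint(0, 3, (64,))
for _ in range(5):  # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(x_new), y_new)
    loss.backward()
    optimizer.step()

Because the pretrained member's outputs are unchanged, its behavior on old data is preserved by construction; only the additive correction adapts to the new data.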
