Non-Iterative Knowledge Fusion in Deep Convolutional Neural Networks
Incorporating new knowledge into a neural network while preserving previously acquired knowledge is known to be a nontrivial problem. The problem becomes even more complex when the new knowledge is contained not in new training examples but in the parameters (connection weights) of another neural network. Here we propose and test two methods for combining the knowledge contained in separate networks. The first method is based on a simple summation of the weights of the constituent networks. The second method incorporates new knowledge by modifying only those weights that are nonessential for preserving the information already stored. We show that with these methods knowledge can be transferred from one network to another non-iteratively, without any additional training sessions. The fused network operates efficiently, classifying far above chance level. The efficiency of both methods is quantified on several publicly available data sets in classification tasks for both shallow and deep neural networks.
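To make the first method concrete, the sketch below (a minimal illustration, not the authors' reference implementation) fuses two PyTorch models with identical architectures by combining their corresponding weights; the helper name fuse_by_weight_sum and the mixing coefficient alpha are assumptions introduced here for illustration only.

```python
import copy
import torch
import torch.nn as nn


def fuse_by_weight_sum(net_a: nn.Module, net_b: nn.Module, alpha: float = 0.5) -> nn.Module:
    """Fuse two networks with identical architectures by combining their weights.

    Illustrative sketch: each fused parameter is alpha * w_a + (1 - alpha) * w_b,
    a weighted summation of the corresponding parameters of the two networks.
    No gradient steps or training iterations are involved.
    """
    fused = copy.deepcopy(net_a)  # same architecture as both inputs
    state_a, state_b = net_a.state_dict(), net_b.state_dict()
    fused_state = {
        name: alpha * state_a[name] + (1.0 - alpha) * state_b[name]
        for name in state_a
    }
    fused.load_state_dict(fused_state)
    return fused


# Usage (hypothetical): both networks were trained separately on different data;
# the fused network can be evaluated directly, with no further training.
# fused_net = fuse_by_weight_sum(net_trained_on_task_a, net_trained_on_task_b)
```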