Reverse Back Propagation to Make Full Use of Derivative

02/13/2022
by Weiming Xiong, et al.

The development of the back-propagation algorithm represents a landmark in neural networks. We propose an approach that applies back-propagation a second time, reversing the traditional back-propagation process to optimize a loss defined at the input end of the network, improving results without extra cost at inference time. We further analyze the method's principles, advantages, and disadvantages, and reformulate the weight initialization strategy for our method. Experiments on MNIST, CIFAR10, and CIFAR100 show that our approach tolerates a wider range of learning rates and learns better than vanilla back-propagation.
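The abstract leaves the exact formulation to the full text. As one plausible reading, the sketch below shows how a loss at the input end can be optimized by back-propagating a second time through the input gradient. The model architecture, the squared-norm penalty on the input gradient, and the `input_loss_weight` knob are illustrative assumptions, not the authors' stated method:

```python
import torch
import torch.nn as nn

# Illustrative model and optimizer; not taken from the paper.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

def train_step(x, y, input_loss_weight=0.1):
    x = x.requires_grad_(True)  # track gradients at the input end
    logits = model(x)
    task_loss = criterion(logits, y)

    # First backward pass: gradient of the task loss w.r.t. the input.
    # create_graph=True keeps the graph so this gradient can itself be
    # back-propagated through ("back-propagation again").
    grad_x, = torch.autograd.grad(task_loss, x, create_graph=True)

    # An "input loss" defined on that gradient; penalizing its squared
    # norm is one plausible choice, assumed here for concreteness.
    input_loss = grad_x.pow(2).sum()

    total = task_loss + input_loss_weight * input_loss
    optimizer.zero_grad()
    total.backward()  # second backward pass updates the weights
    optimizer.step()
    return task_loss.item()

# Usage with dummy data:
x = torch.randn(32, 784)
y = torch.randint(0, 10, (32,))
train_step(x, y)
```

Since the second backward pass differentiates through the first, gradients flow to the weights along both the forward path and the input-gradient path, which may explain the reported tolerance to larger learning rates.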
