Post-synaptic potential regularization has potential

07/19/2019
by Enzo Tartaglione, et al.

Improving generalization is one of the main challenges in training deep neural networks on classification tasks. A number of techniques have been proposed to boost performance on unseen data: from standard data augmentation to ℓ_2 regularization, dropout, batch normalization, entropy-driven SGD, and many more. In this work we propose an elegant, simple, and principled approach: post-synaptic potential regularization (PSP). We tested this regularizer in a number of state-of-the-art scenarios. Empirical results show that PSP achieves a classification error comparable to more sophisticated learning strategies on MNIST, while improving generalization over ℓ_2 regularization for deep architectures trained on CIFAR-10.
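The abstract does not spell out the penalty, but the name suggests regularizing the post-synaptic potentials, i.e. the pre-activation values z = Wx + b of each layer. Below is a minimal PyTorch sketch under that assumption: forward hooks collect the pre-activations and a squared-magnitude penalty is added to the loss. The network sizes, the hyperparameter psp_lambda, and the helper names (capture_psp, training_step) are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

# Sketch only: PSP regularization assumed to penalize the squared
# post-synaptic potentials (pre-activations z = Wx + b), collected
# here with forward hooks on the linear layers.

class MLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 300)
        self.fc2 = nn.Linear(300, 10)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

model = MLP()
psp_values = []  # pre-activations captured during the forward pass

def capture_psp(module, inputs, output):
    # The output of a Linear layer is the pre-activation of the
    # following nonlinearity, i.e. the post-synaptic potential.
    psp_values.append(output)

for layer in (model.fc1, model.fc2):
    layer.register_forward_hook(capture_psp)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
psp_lambda = 1e-4  # regularization strength (assumed hyperparameter)

def training_step(x, y):
    psp_values.clear()
    logits = model(x)
    # PSP penalty: mean squared pre-activation over all hooked layers
    psp_penalty = sum(z.pow(2).mean() for z in psp_values)
    loss = criterion(logits, y) + psp_lambda * psp_penalty
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with random MNIST-shaped data
x = torch.randn(64, 784)
y = torch.randint(0, 10, (64,))
print(training_step(x, y))
```

Unlike ℓ_2 weight decay, which constrains the parameters directly, this penalty constrains the activations the data actually produces, which is one plausible reading of why the authors contrast PSP with ℓ_2 regularization.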
