Theoretical Insight into Batch Normalization: Data-Dependent Auto-Tuning of the Regularization Rate

by   Lakshmi Annamalai, et al.

Batch normalization is widely used in deep learning to normalize intermediate activations. Deep networks are notoriously difficult to train, demanding careful weight initialization, lower learning rates, and similar precautions. Batch Normalization (BN) addresses these issues by normalizing the inputs of activation functions to zero mean and unit standard deviation. Making this normalization part of the training process dramatically accelerates the training of very deep networks. A growing line of research seeks a precise theoretical explanation for the success of BN. Most of these theoretical insights attribute the benefits of BN to its influence on optimization, weight-scale invariance, and regularization. Despite BN's undeniable success in accelerating generalization, an analytical account relating the effect of BN to the regularization parameter is still missing. This paper derives, with analytical proofs, the data-dependent auto-tuning of the regularization parameter performed by BN. We pose BN as a constrained optimization imposed on non-BN weights, through which we demonstrate its auto-tuning of the regularization parameter as a function of the data statistics. We also give an analytical proof of its behavior under a noisy input scenario, which reveals how the regularization parameter is tuned according to the signal-to-noise ratio. We substantiate our claims with empirical results from experiments on the MNIST dataset.
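As a minimal sketch of the normalization step described above (this is illustrative NumPy code, not the authors' implementation), BN standardizes each feature over the batch to zero mean and unit variance, then applies a learnable scale and shift:

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Standard BN forward pass over a (batch, features) array."""
    # Per-feature statistics computed over the batch dimension
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    # Normalize to zero mean and unit standard deviation
    x_hat = (x - mu) / np.sqrt(var + eps)
    # Learnable scale (gamma) and shift (beta) restore expressive power
    return gamma * x_hat + beta

# Example: a batch of 32 samples, 4 features, with arbitrary mean/scale
x = np.random.randn(32, 4) * 3.0 + 5.0
y = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
# With gamma=1, beta=0, each feature of y has (near) zero mean and unit variance
```

This makes the output distribution of each unit independent of the scale of the incoming weights, which is the weight-scale invariance the theoretical analyses above build on.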

