Shifting Mean Activation Towards Zero with Bipolar Activation Functions

09/12/2017
by Lars Eidnes, et al.

We propose a simple extension to the ReLU-family of activation functions that allows them to shift the mean activation across a layer towards zero. Combined with proper weight initialization, this alleviates the need for normalization layers. We explore the training of deep vanilla recurrent neural networks (RNNs) with up to 144 layers, and show that bipolar activation functions help learning in this setting. On the Penn Treebank and Text8 language modeling tasks we obtain competitive results, improving on the best reported results for non-gated networks.
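To illustrate the idea, below is a minimal sketch of a bipolar activation, assuming the construction described in the paper: the base activation f is applied to half of the units in a layer and -f(-x) to the other half, so that positive and negative outputs balance and the layer's mean activation is pulled towards zero. The function names and the even/odd split over the last axis are illustrative choices, not the authors' reference implementation.

    import numpy as np

    def relu(x):
        return np.maximum(x, 0.0)

    def bipolar(f, x):
        # Apply f to even-indexed units and -f(-x) to odd-indexed units
        # along the last axis; the flipped half produces non-positive
        # outputs that offset the non-negative outputs of the other half.
        out = np.empty_like(x)
        out[..., 0::2] = f(x[..., 0::2])
        out[..., 1::2] = -f(-x[..., 1::2])
        return out

    # Compare mean activation of plain ReLU vs. its bipolar variant
    # on zero-mean Gaussian preactivations.
    x = np.random.randn(1000, 144)
    print("ReLU mean:        ", relu(x).mean())           # clearly positive
    print("Bipolar ReLU mean:", bipolar(relu, x).mean())  # close to zero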
