PLU: The Piecewise Linear Unit Activation Function

09/03/2018
by   Andrei Nicolae, et al.

Successive linear transforms followed by nonlinear "activation" functions can approximate nonlinear functions to arbitrary precision given sufficient layers. The number of layers required depends, in part, on the nature of the activation function. The hyperbolic tangent (tanh) was a favored choice of activation until networks grew deeper and vanishing gradients became a hindrance to training. For this reason the Rectified Linear Unit (ReLU), defined by max(0, x), has become the prevailing activation function in deep neural networks. Unlike the smooth tanh, the ReLU yields networks that are piecewise linear functions with a limited number of facets. This paper presents a new activation function, the Piecewise Linear Unit (PLU), which is a hybrid of tanh and ReLU and is shown to outperform the ReLU on a variety of tasks while avoiding the vanishing gradient issue of the tanh.
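The abstract does not reproduce the PLU formula itself. As an illustration only, the Python sketch below assumes a piecewise linear hybrid of the form max(alpha*(x + c) - c, min(alpha*(x - c) + c, x)), with illustrative constants alpha = 0.1 and c = 1: the function acts as the identity on [-c, c] (the linear heart of tanh) and keeps a small but nonzero slope alpha in the tails, so its gradient never falls to zero the way ReLU's does for negative inputs or tanh's does in saturation.

    import numpy as np

    def plu(x, alpha=0.1, c=1.0):
        """Assumed PLU form: identity on [-c, c], slope `alpha` outside.

        A piecewise linear stand-in for tanh whose gradient never reaches
        zero, unlike the saturating tails of tanh or ReLU's negative region.
        """
        return np.maximum(alpha * (x + c) - c,
                          np.minimum(alpha * (x - c) + c, x))

    def plu_grad(x, alpha=0.1, c=1.0):
        """Derivative of the sketch above: 1 inside [-c, c], `alpha` outside."""
        return np.where(np.abs(x) <= c, 1.0, alpha)

    if __name__ == "__main__":
        xs = np.array([-5.0, -1.0, 0.0, 1.0, 5.0])
        print(plu(xs))       # [-1.4 -1.  0.  1.  1.4]
        print(plu_grad(xs))  # [0.1 1.  1.  1.  0.1]

With these illustrative constants the activation clips toward the tanh-like range near [-c, c] but, because the tail slope is alpha rather than zero, gradients continue to flow for large-magnitude inputs during backpropagation.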

