Deep supervised feature selection using Stochastic Gates

10/09/2018
by Yutaro Yamada, et al.

In this study, we propose a novel non-parametric embedded feature selection method based on minimizing the ℓ_0 norm of a vector of indicator variables, whose point-wise product with the input selects a subset of features. Our approach relies on a continuous relaxation of Bernoulli distributions, which allows the model to learn the parameters of the approximate Bernoulli distributions via tractable methods. Using these tools, we present a general neural network that simultaneously minimizes a loss function while selecting relevant features. We also provide an information-theoretic justification for incorporating Bernoulli distributions into our approach. Finally, we demonstrate the potential of the approach on synthetic and real-world applications.
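
The abstract does not specify the exact form of the relaxation; as a rough illustration only, the sketch below implements one common Gaussian-based continuous relaxation of Bernoulli gates in PyTorch. The class name `StochasticGates`, the noise scale `sigma`, and the penalty weight `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class StochasticGates(nn.Module):
    """Per-feature stochastic gates (illustrative sketch, not the paper's code).

    Each input feature x_d is multiplied by a gate
        z_d = clamp(mu_d + eps, 0, 1),  eps ~ N(0, sigma^2),
    a continuous relaxation of a Bernoulli indicator variable. The expected
    number of open gates, a relaxation of the l_0 norm of the gate vector,
    is penalized via the Gaussian CDF.
    """
    def __init__(self, n_features, sigma=0.5):
        super().__init__()
        # mu_d parameterizes the approximate Bernoulli for feature d.
        self.mu = nn.Parameter(0.5 * torch.ones(n_features))
        self.sigma = sigma

    def forward(self, x):
        # Inject Gaussian noise during training; use the mean at test time.
        eps = self.sigma * torch.randn_like(self.mu) if self.training else 0.0
        z = torch.clamp(self.mu + eps, 0.0, 1.0)
        # Point-wise product with the input selects a subset of features.
        return x * z

    def l0_penalty(self):
        # P(z_d > 0) = Phi(mu_d / sigma) under the Gaussian noise model;
        # summing over features relaxes the l_0 norm.
        normal = torch.distributions.Normal(0.0, 1.0)
        return normal.cdf(self.mu / self.sigma).sum()
```

In use, the gating layer would be prepended to any network and the regularized objective minimized jointly, e.g. `loss = task_loss + lam * gates.l0_penalty()`, so that feature selection and prediction are trained simultaneously, as the abstract describes.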
