Feature Selection via L1-Penalized Squared-Loss Mutual Information

10/06/2012
by Wittawat Jitkrittum, et al.

Feature selection is a technique to screen out less important features. Many existing supervised feature selection algorithms use redundancy and relevance as the main criteria to select features. However, feature interaction, potentially a key characteristic in real-world problems, has not received much attention. As an attempt to take feature interaction into account, we propose L1-LSMI, an L1-regularization-based algorithm that maximizes a squared-loss variant of mutual information between selected features and outputs. Numerical results show that L1-LSMI performs well in handling redundancy, detecting non-linear dependency, and taking feature interaction into account.
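The core idea lends itself to a compact sketch: estimate squared-loss mutual information (SMI) by least-squares density-ratio fitting (LSMI), then maximize that estimate over non-negative feature weights under an L1 penalty. The Python sketch below illustrates this under simplifying assumptions (fixed Gaussian kernel widths, all samples used as kernel centres, finite-difference gradients, and a plain projected-subgradient update); it is not the authors' reference implementation, and the function names lsmi and l1_lsmi as well as all hyperparameter values are illustrative choices.

```python
# Minimal sketch of the L1-LSMI idea; hyperparameters and the
# optimization scheme are illustrative, not the paper's exact method.
import numpy as np

def lsmi(X, Y, sigma_x=1.0, sigma_y=1.0, lam=1e-3):
    """Least-squares estimate of squared-loss mutual information,
    using all n samples as Gaussian kernel centres:
        phi_l(x, y) = K(x, x_l) * L(y, y_l)."""
    n = X.shape[0]
    # Gaussian kernel matrices on X and on Y (both n x n)
    dx = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    dy = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    K = np.exp(-dx / (2 * sigma_x ** 2))
    L = np.exp(-dy / (2 * sigma_y ** 2))
    # h_l = (1/n) sum_i K(x_i, x_l) L(y_i, y_l)
    h = (K * L).mean(axis=0)
    # H_{ll'} = (1/n^2) [sum_i K_il K_il'] [sum_j L_jl L_jl']
    H = (K.T @ K) * (L.T @ L) / n ** 2
    # Ridge-regularized least-squares fit of the density ratio
    alpha = np.linalg.solve(H + lam * np.eye(n), h)
    # SMI = (1/2) E_{p(x,y)}[r(x,y)] - 1/2, plugged-in estimate
    return 0.5 * h @ alpha - 0.5

def l1_lsmi(X, Y, l1=0.1, lr=0.05, iters=50, eps=1e-3):
    """Projected subgradient ascent on
        w -> LSMI(w * X, Y) - l1 * ||w||_1,  w >= 0.
    Gradients of LSMI are finite differences here for simplicity;
    the paper derives them analytically."""
    d = X.shape[1]
    w = np.ones(d)
    for _ in range(iters):
        base = lsmi(X * w, Y)
        g = np.zeros(d)
        for k in range(d):  # finite-difference gradient of LSMI w.r.t. w_k
            w2 = w.copy()
            w2[k] += eps
            g[k] = (lsmi(X * w2, Y) - base) / eps
        # L1 subgradient plus projection onto the non-negative orthant
        w = np.maximum(0.0, w + lr * (g - l1))
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 200
    X = rng.normal(size=(n, 5))
    # Y depends non-linearly on feature 0 only; the rest are irrelevant
    Y = (X[:, 0] ** 2 + 0.1 * rng.normal(size=n)).reshape(-1, 1)
    w = l1_lsmi(X, Y)
    print("feature weights:", np.round(w, 3))
```

In this toy run only the first feature drives the output, and quadratically, so a relevance measure that captures non-linear dependency is needed; the learned weight vector should concentrate on feature 0 while the L1 penalty pushes the irrelevant weights toward zero.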

