Efficient Wrapper Feature Selection using Autoencoder and Model Based Elimination

05/28/2019
by Sharan Ramjee, et al.

We propose a computationally efficient wrapper feature selection method, called Autoencoder and Model Based Elimination of features using Relevance and Redundancy scores (AMBER), that uses a single ranker model along with autoencoders to perform greedy backward elimination of features. The ranker model is used to prioritize the removal of features that are not critical to the classification task, while the autoencoders are used to prioritize the elimination of correlated features. We demonstrate the superior feature selection ability of AMBER on four well-known datasets from different application domains by comparing classification accuracies against other computationally efficient state-of-the-art feature selection techniques. Interestingly, we find that the ranker model used for feature selection need not be the same as the final classifier trained on the selected features. Finally, we note that a smaller number of features can lead to higher accuracies on some datasets, and hypothesize that overfitting the ranker model on the training set facilitates the selection of more salient features.
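The abstract's core idea, greedy backward elimination driven by a relevance score from a ranker model and a redundancy score for correlated features, can be sketched as follows. The paper's exact scoring is not reproduced here: this toy version uses the absolute weights of a least-squares linear ranker as the relevance proxy and pairwise feature correlation as a cheap stand-in for the autoencoder-based redundancy score, so the function name and both scoring choices are illustrative assumptions.

```python
import numpy as np

def backward_eliminate(X, y, n_keep):
    """Greedy backward elimination in the spirit of AMBER (a sketch).

    Relevance: |weight| from a least-squares linear ranker, refit each round.
    Redundancy: max |correlation| with the other surviving features -- a
    cheap stand-in for the paper's autoencoder-based redundancy scores.
    The feature with the lowest (relevance - redundancy) score is dropped.
    """
    kept = list(range(X.shape[1]))
    while len(kept) > n_keep:
        Xs = X[:, kept]
        w, *_ = np.linalg.lstsq(Xs, y, rcond=None)        # linear "ranker"
        relevance = np.abs(w)
        corr = np.abs(np.corrcoef(Xs, rowvar=False))
        np.fill_diagonal(corr, 0.0)
        redundancy = corr.max(axis=1)                     # most-correlated peer
        kept.pop(int(np.argmin(relevance - redundancy)))  # worst feature out
    return kept

# Toy data: y depends on features 0 and 1; feature 2 duplicates 0; 3 is noise.
rng = np.random.default_rng(0)
x0, x1, x3 = rng.normal(size=(3, 200))
X = np.column_stack([x0, x1, x0, x3])
y = 2 * x0 + 3 * x1 + 0.1 * rng.normal(size=200)
kept = backward_eliminate(X, y, n_keep=2)
```

On this toy data the informative feature 1 survives, while at most one of the duplicated pair (features 0 and 2) is retained, illustrating how the redundancy term breaks ties between correlated features that a relevance-only ranking would treat identically.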

