RF+clust for Leave-One-Problem-Out Performance Prediction

01/23/2023
by Ana Nikolikj, et al.

Per-instance automated algorithm configuration and selection have been gaining significant momentum in evolutionary computation in recent years. Two crucial, sometimes implicit, ingredients for these automated machine learning (AutoML) methods are 1) feature-based representations of the problem instances and 2) performance prediction methods that take the features as input to estimate how well a specific algorithm instance will perform on a given problem instance. Unsurprisingly, common machine learning models fail to make accurate predictions for instances whose feature-based representation is underrepresented or not covered in the training data, resulting in poor generalization to problems not seen during training. In this work, we study leave-one-problem-out (LOPO) performance prediction. We analyze whether standard random forest (RF) predictions can be improved by calibrating them with a weighted average of the performance values obtained by the algorithm on problem instances that are sufficiently close to the problem for which a prediction is sought, where closeness is measured by cosine similarity in feature space. While our RF+clust approach obtains more accurate performance predictions for several problems, its predictive power crucially depends on the chosen similarity threshold as well as on the feature portfolio over which the cosine similarity is measured, thereby opening a new angle for feature selection in a zero-shot learning setting, as LOPO is termed in machine learning.
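The calibration step described in the abstract can be sketched in a few lines of Python. The snippet below is a minimal illustration, not the authors' implementation: the function name rf_clust_predict, the similarity threshold of 0.9, the similarity-proportional weights, and the equal-weight blend of the RF prediction with the neighbourhood estimate are all assumptions made for the example.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics.pairwise import cosine_similarity

def rf_clust_predict(X_train, y_train, x_test, similarity_threshold=0.9):
    """Sketch of RF+clust for LOPO performance prediction.

    Combines a standard random forest prediction with a weighted average of
    the performance values observed on training problems whose feature-based
    representation is sufficiently similar (cosine similarity above the
    threshold) to the held-out problem.
    """
    X_train = np.asarray(X_train)
    y_train = np.asarray(y_train)
    x_test = np.asarray(x_test).reshape(1, -1)

    # Standard RF prediction for the held-out problem instance.
    rf = RandomForestRegressor(n_estimators=100, random_state=0)
    rf.fit(X_train, y_train)
    rf_pred = rf.predict(x_test)[0]

    # Cosine similarity between the held-out problem and all training problems.
    sims = cosine_similarity(x_test, X_train).ravel()
    close = sims >= similarity_threshold

    if not close.any():
        # No sufficiently similar problems: fall back to the plain RF prediction.
        return rf_pred

    # Similarity-weighted average of the performance on the close problems
    # (one plausible weighting choice, assumed for this sketch).
    neighbour_estimate = np.average(y_train[close], weights=sims[close])

    # Calibrate by blending the RF prediction with the neighbourhood estimate.
    return 0.5 * (rf_pred + neighbour_estimate)

Given a feature matrix and performance values for the training problems plus a feature vector for the held-out problem, calling rf_clust_predict(X_train, y_train, x_test) returns the calibrated estimate; lowering the threshold admits more, less similar problems into the calibration set, which is exactly the sensitivity the abstract points to.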

Related research

Sensitivity Analysis of RF+clust for Leave-one-problem-out Performance Prediction (05/30/2023)
Leave-one-problem-out (LOPO) performance prediction requires machine lea...

Assessing the Generalizability of a Performance Predictive Model (05/31/2023)
A key component of automated algorithm selection and configuration, whic...

Interpretable Predictions of Tree-based Ensembles via Actionable Feature Tweaking (06/20/2017)
Machine-learned models are often described as "black boxes". In many rea...

Feature Selection Approaches for Optimising Music Emotion Recognition Methods (12/27/2022)
The high feature dimensionality is a challenge in music emotion recognit...

Synthetic Model Combination: An Instance-wise Approach to Unsupervised Ensemble Learning (10/11/2022)
Consider making a prediction over new test data without any opportunity ...

Exploring the Feature Space of TSP Instances Using Quality Diversity (02/04/2022)
Generating instances of different properties is key to algorithm selecti...

Towards Explainable Exploratory Landscape Analysis: Extreme Feature Selection for Classifying BBOB Functions (02/01/2021)
Facilitated by the recent advances of Machine Learning (ML), the automat...
