Analyzing the Impact of Undersampling on the Benchmarking and Configuration of Evolutionary Algorithms

04/20/2022
by   Diederick Vermetten, et al.

The stochastic nature of iterative optimization heuristics leads to inherently noisy performance measurements. Since these measurements are often gathered once and then used repeatedly, the number of collected samples has a significant impact on the reliability of algorithm comparisons. We show that care should be taken when making decisions based on limited data. In particular, we show that the number of runs used in many benchmarking studies, e.g., the default value of 15 suggested by the COCO environment, can be insufficient to reliably rank algorithms on well-known numerical optimization benchmarks. Additionally, methods for automated algorithm configuration are sensitive to insufficient sample sizes, which may result in the configurator choosing a 'lucky' but poor-performing configuration despite having explored better ones. We show that relying on mean performance values, as many configurators do, can require a large number of runs to provide accurate comparisons between the considered configurations. Common statistical tests can greatly improve the situation in most cases, but not always. We show examples of performance losses of more than 20%, even when the number of runs is adjusted dynamically, as done by irace. Our results underline the importance of appropriately considering the statistical distribution of performance values.
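To make the undersampling effect concrete, the following sketch (not from the paper; the two performance distributions are hypothetical stand-ins) estimates how often the sample mean of 15 runs misranks two stochastic optimizers whose cost distributions overlap:

```python
# Illustrative sketch, not the paper's experiment: two hypothetical noisy
# optimizers A and B, where A is truly better on average, compared via the
# mean of a small number of runs, as in common benchmarking practice.
import random
import statistics

random.seed(0)

N_RUNS = 15        # default number of runs suggested by the COCO environment
N_TRIALS = 10_000  # repeated benchmarking "experiments"

def run_a():
    # hypothetical cost distribution: log-normal, lower typical cost
    return random.lognormvariate(0.0, 0.6)

def run_b():
    # hypothetical cost distribution: log-normal, higher typical cost
    return random.lognormvariate(0.2, 0.6)

misrankings = 0
for _ in range(N_TRIALS):
    mean_a = statistics.fmean(run_a() for _ in range(N_RUNS))
    mean_b = statistics.fmean(run_b() for _ in range(N_RUNS))
    if mean_a > mean_b:  # the sample means rank the worse algorithm first
        misrankings += 1

rate = misrankings / N_TRIALS
print(f"misranking rate with {N_RUNS} runs: {rate:.1%}")
```

With these made-up distributions the misranking rate lands well above zero, illustrating the abstract's point: a single batch of 15 runs can reverse the true ordering of two algorithms, and heavier-tailed performance distributions make the problem worse.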


