Conference Paper, Year: 2024

Generalization Ability of Feature-based Performance Prediction Models: A Statistical Analysis across Benchmarks

Ana Nikolikj
Ana Kostovska
Gjorgjina Cenikj
Carola Doerr
Tome Eftimov

Abstract

This study examines the generalization ability of algorithm performance prediction models across benchmark suites. By comparing the statistical similarity between problem collections with the accuracy of performance prediction models based on exploratory landscape analysis (ELA) features, we observe a positive correlation between the two measures. Specifically, when the high-dimensional feature value distributions of the training and testing suites do not differ in a statistically significant way, the model tends to generalize well, in the sense that the testing errors are in the same range as the training errors. Two experiments validate these findings: one involving the standard BBOB and CEC benchmark collections, and another using five collections of affine combinations of BBOB problem instances.
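The core idea of the abstract — checking whether the feature distributions of a training and a testing benchmark suite differ significantly before trusting a prediction model's transfer — can be sketched as follows. This is an illustrative stand-in, not the paper's exact procedure: the paper compares high-dimensional distributions, while this sketch uses a simpler per-feature two-sample Kolmogorov–Smirnov test with a Bonferroni correction; the function name `suites_differ` and all data are hypothetical.

```python
import numpy as np
from scipy import stats


def suites_differ(train_feats, test_feats, alpha=0.05):
    """Compare two benchmark suites via their ELA feature samples.

    Runs a two-sample Kolmogorov-Smirnov test per feature column and
    applies a Bonferroni correction across features. Returns True if
    any feature's distribution differs significantly between suites
    (a warning sign that the prediction model may not transfer well).
    """
    n_features = train_feats.shape[1]
    corrected_alpha = alpha / n_features  # Bonferroni correction
    for j in range(n_features):
        _, p_value = stats.ks_2samp(train_feats[:, j], test_feats[:, j])
        if p_value < corrected_alpha:
            return True
    return False


# Synthetic stand-ins for ELA feature matrices (rows = problem
# instances, columns = features); real suites would come from an
# ELA feature extractor.
rng = np.random.default_rng(0)
base = rng.normal(0.0, 1.0, size=(200, 5))
similar = rng.normal(0.0, 1.0, size=(200, 5))
shifted = rng.normal(3.0, 1.0, size=(200, 5))  # clearly different suite

print(suites_differ(base, similar))
print(suites_differ(base, shifted))
```

Under the paper's observation, a `False` result (no significant distributional difference) suggests testing errors comparable to training errors, while a `True` result flags a suite pair where the model may generalize poorly.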
Main file: CEC-2024-generalization.pdf (1.26 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04580601, version 1 (20-05-2024)

Identifiers

  • HAL Id: hal-04580601, version 1

Cite

Ana Nikolikj, Ana Kostovska, Gjorgjina Cenikj, Carola Doerr, Tome Eftimov. Generalization Ability of Feature-based Performance Prediction Models: A Statistical Analysis across Benchmarks. 2024 IEEE Congress on Evolutionary Computation (CEC), Jun 2024, Yokohama, Japan. ⟨hal-04580601⟩