Conference paper, 2024

Generalization Ability of Feature-based Performance Prediction Models: A Statistical Analysis across Benchmarks

Ana Nikolikj
Ana Kostovska
Gjorgjina Cenikj
Carola Doerr
Tome Eftimov

Abstract

This study examines the generalization ability of algorithm performance prediction models across different benchmark suites. Comparing the statistical similarity between problem collections with the accuracy of performance prediction models based on exploratory landscape analysis (ELA) features, we observe a positive correlation between these two measures. Specifically, when the high-dimensional feature value distributions of the training and testing suites do not differ in a statistically significant way, the model tends to generalize well, in the sense that the testing errors are in the same range as the training errors. Two experiments validate these findings: one involving the standard BBOB and CEC benchmark collections, and another using five collections of affine combinations of BBOB problem instances.
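The abstract links two quantities: the statistical similarity of ELA feature distributions between a training and a testing suite, and the train/test error gap of a feature-based performance prediction model. The snippet below is a minimal sketch of that idea, not the authors' exact pipeline: it assumes ELA features and performance values have already been computed (the arrays here are random placeholders), uses per-feature Kolmogorov–Smirnov tests as a simple stand-in for the paper's high-dimensional distribution comparison, and a random forest as the prediction model.

```python
# Minimal sketch (not the authors' exact pipeline): given pre-computed ELA feature
# matrices for a training suite (e.g., BBOB) and a testing suite (e.g., CEC) plus
# measured algorithm performance, (1) test whether the two feature distributions
# differ significantly and (2) check whether test errors stay in the training range.
import numpy as np
from scipy.stats import ks_2samp
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Placeholder data: rows = problem instances, columns = ELA features.
X_train = rng.normal(size=(120, 10))   # hypothetical training-suite features
y_train = rng.normal(size=120)         # hypothetical algorithm performance values
X_test = rng.normal(size=(60, 10))     # hypothetical testing-suite features
y_test = rng.normal(size=60)

# (1) Per-feature two-sample Kolmogorov-Smirnov tests as a simple proxy for the
# statistical similarity of the two feature value distributions.
p_values = [ks_2samp(X_train[:, j], X_test[:, j]).pvalue
            for j in range(X_train.shape[1])]
similar = all(p > 0.05 for p in p_values)  # no feature differs significantly

# (2) Feature-based performance prediction model and its train/test errors.
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)
train_mae = mean_absolute_error(y_train, model.predict(X_train))
test_mae = mean_absolute_error(y_test, model.predict(X_test))

print(f"distributions statistically similar: {similar}")
print(f"train MAE = {train_mae:.3f}, test MAE = {test_mae:.3f}")
```

Under the paper's observation, when `similar` is true the test error would be expected to lie in roughly the same range as the training error; the specific tests and models used in the study may differ from this illustration.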
Main file
CEC-2024-generalization.pdf (1.26 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04580601, version 1 (20-05-2024)

Identifiers

  • HAL Id: hal-04580601, version 1

Cite

Ana Nikolikj, Ana Kostovska, Gjorgjina Cenikj, Carola Doerr, Tome Eftimov. Generalization Ability of Feature-based Performance Prediction Models: A Statistical Analysis across Benchmarks. 2024 IEEE Congress on Evolutionary Computation (CEC), Jun 2024, Yokohama, Japan. ⟨hal-04580601⟩