Conference Poster, 2023

Assessing the Generalizability of a Performance Predictive Model

Ana Nikolikj
Gjorgjina Cenikj
Gordana Ispirova
Diederick Vermetten
Ryan Dieter Lang
Andries Petrus Engelbrecht
Carola Doerr
Peter Korošec
Tome Eftimov

Abstract

A good-performing predictive model is a key component of automated algorithm selection and configuration, which in most cases are performed using supervised machine learning (ML) methods. The predictive model takes the feature representation of a set of problem instances as input and predicts the algorithm performance achieved on them. Common ML models struggle to make predictions for instances whose feature representations are not covered by the training data, resulting in poor generalization to unseen problems. In this study, we propose a workflow to estimate how well a predictive model of algorithm performance, trained on one benchmark suite, generalizes to another. The workflow has been tested by training predictive models across benchmark suites, and the results show that generalizability patterns in the landscape feature space are reflected in the performance space.
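The abstract describes two spaces in which generalizability can be measured: the landscape feature space (how well the training suite covers the feature representations of the unseen suite) and the performance space (how large the prediction error is on the unseen suite). The sketch below illustrates that idea on synthetic data; the suite names, feature distributions, performance function, and the k-nearest-neighbour regressor are all invented for illustration and are not the authors' actual workflow or data.

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_predict(train_X, train_y, test_X, k=3):
    """Predict performance as the mean over the k nearest training instances."""
    preds = []
    for x in test_X:
        d = np.linalg.norm(train_X - x, axis=1)
        idx = np.argsort(d)[:k]
        preds.append(train_y[idx].mean())
    return np.array(preds)

# Hypothetical landscape features for two benchmark suites (synthetic):
# suite B is shifted relative to suite A, so parts of it are poorly covered.
suite_A_X = rng.normal(0.0, 1.0, size=(60, 5))   # training suite
suite_B_X = rng.normal(0.5, 1.2, size=(30, 5))   # "unseen" suite

# Synthetic performance: a smooth function of the features plus noise.
def perf(X):
    return X[:, 0] ** 2 + 0.5 * X[:, 1] + rng.normal(0, 0.05, size=len(X))

suite_A_y = perf(suite_A_X)
suite_B_y = perf(suite_B_X)

# Feature-space view: distance from each B instance to its nearest A
# instance; large distances flag regions the training data does not cover.
coverage = np.array([
    np.min(np.linalg.norm(suite_A_X - x, axis=1)) for x in suite_B_X
])

# Performance-space view: error of the model trained on A, applied to B.
pred_B = knn_predict(suite_A_X, suite_A_y, suite_B_X)
errors = np.abs(pred_B - suite_B_y)

print(f"mean nearest-neighbour distance A->B: {coverage.mean():.3f}")
print(f"mean absolute prediction error on B:  {errors.mean():.3f}")
```

Comparing the two printed quantities across suite pairs is one simple way to check whether poor coverage in feature space coincides with large errors in performance space, which is the pattern the poster reports.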
Main file: GECCO_poster_Generalizability_HAL.pdf (912.79 KB)
Origin: files produced by the author(s)

Dates and versions

hal-04180600, version 1 (12-08-2023)

Identifiers

HAL Id: hal-04180600
DOI: 10.1145/3583133.3590617

Cite

Ana Nikolikj, Gjorgjina Cenikj, Gordana Ispirova, Diederick Vermetten, Ryan Dieter Lang, et al.. Assessing the Generalizability of a Performance Predictive Model. GECCO '23 Companion: Companion Conference on Genetic and Evolutionary Computation, Jul 2023, Lisbon, Portugal. ACM, GECCO '23 Companion: Proceedings of the Companion Conference on Genetic and Evolutionary Computation, pp.311-314, 2023, ⟨10.1145/3583133.3590617⟩. ⟨hal-04180600⟩