Large-scale Benchmarking of Metaphor-based Optimization Heuristics - Sorbonne Université
Conference Papers, Year: 2024


Abstract

The number of proposed iterative optimization heuristics is growing steadily, and with this growth have come many points of discussion within the wider community. One particular criticism raised against many new algorithms is their focus on the metaphors used to present the method, rather than on their potential algorithmic contributions. Several studies of popular metaphor-based algorithms have highlighted these problems, even showcasing algorithms that are functionally equivalent to older existing methods. Unfortunately, this detailed approach does not scale to the full set of metaphor-based algorithms. Because of this, we investigate ways in which benchmarking can shed light on these algorithms. To this end, we run a set of 294 algorithm implementations on the BBOB function suite. We investigate how the choice of budget, performance measure, and other aspects of experimental design impact the comparison of these algorithms. Our results emphasize why benchmarking is a key step in expanding our understanding of the algorithm space, and what challenges still need to be overcome to fully gauge the potential improvements to the state of the art hiding behind the metaphors.
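To make the experimental-design choices mentioned in the abstract concrete, the following is a minimal sketch of a fixed-budget benchmarking loop. It is not the paper's actual setup (which uses the 24-function BBOB suite and 294 published implementations): here two simple stand-in heuristics, pure random search and a (1+1)-style hill climber, are compared on a plain sphere function under the same evaluation budget, recording the best-so-far trajectory. The function names and parameters are illustrative assumptions, not taken from the paper.

```python
import random


def sphere(x):
    """Stand-in test function (BBOB's f1 is a shifted variant of this)."""
    return sum(xi * xi for xi in x)


def random_search(f, dim, budget, rng):
    """Pure random search: sample uniformly in [-5, 5]^dim (the BBOB domain)."""
    best = float("inf")
    history = []
    for _ in range(budget):
        x = [rng.uniform(-5, 5) for _ in range(dim)]
        best = min(best, f(x))
        history.append(best)  # best-so-far value after each evaluation
    return history


def hill_climber(f, dim, budget, rng, step=0.5):
    """(1+1)-style local search: keep a Gaussian perturbation if it improves."""
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    best = f(x)
    history = [best]
    for _ in range(budget - 1):
        cand = [xi + rng.gauss(0, step) for xi in x]
        fc = f(cand)
        if fc <= best:
            x, best = cand, fc
        history.append(best)
    return history


if __name__ == "__main__":
    budget = 2000  # the budget choice directly shapes the resulting ranking
    rs = random_search(sphere, 5, budget, random.Random(42))
    hc = hill_climber(sphere, 5, budget, random.Random(42))
    # Fixed-budget view: compare best-so-far values at the full budget.
    # A fixed-target view would instead compare evaluations needed to
    # reach a given function value -- the two can rank algorithms differently.
    print(f"random search: {rs[-1]:.4f}")
    print(f"hill climber : {hc[-1]:.4f}")
```

The `history` lists are the key artifact: slicing them at different budgets, or reading off hitting times for different targets, yields the different performance measures whose impact on algorithm comparison the paper studies.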
Main file: GECCO-2024-LargeScaleBenchmarking.pdf (1.26 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04580572, version 1 (20-05-2024)

Identifiers

HAL Id: hal-04580572
DOI: 10.1145/3638529.3654122
Cite

Diederick Vermetten, Carola Doerr, Hao Wang, Anna Kononova, Thomas Bäck. Large-scale Benchmarking of Metaphor-based Optimization Heuristics. GECCO '24: Proceedings of the Genetic and Evolutionary Computation Conference, Jul 2024, Melbourne, Australia. ⟨10.1145/3638529.3654122⟩. ⟨hal-04580572⟩