Learned Features vs. Classical ELA on Affine BBOB Functions
Abstract
Automated algorithm selection has proven effective at improving optimization performance by using machine learning to select the best-performing algorithm for the particular problem being solved. However, doing so requires the ability to describe the landscape of an optimization problem using numerical features, which is a difficult task.
In this work, we analyze the synergies and complementarity of the recently proposed TransOpt and Deep ELA feature sets, which are based on deep learning, and compare them to the commonly used classical ELA features. We analyze the correlation between the feature sets as well as how well one set can predict another. We show that while the feature sets share some information, each also contains important unique information. Furthermore, we benchmark the different feature sets on the task of automated algorithm selection on the recently proposed affine BBOB problems. We find that while classical ELA is the best-performing feature set on its own, selected features from a combination of all three sets provide superior performance, and each of the three sets individually substantially outperforms the single best solver.