2022 IEEE Congress on Evolutionary Computation (CEC), Journal year: 2024, Issue: unknown, P. 1 - 8
Published: June 30, 2024
Language: English
ACM Computing Surveys, Journal year: 2022, Issue: 55(12), P. 1 - 31
Published: Nov. 30, 2022
Instance Space Analysis (ISA) is a recently developed methodology to (a) support objective testing of algorithms and (b) assess the diversity of test instances. Representing instances as feature vectors, ISA extends Rice's 1976 Algorithm Selection Problem framework to enable visualization of the entire space of possible test instances, and to gain insights into how algorithm performance is affected by instance properties. Rather than reporting performance on average across a chosen set of test problems, as is standard practice, ISA offers a more nuanced understanding of the unique strengths and weaknesses of algorithms across different regions of the instance space that may otherwise be hidden on average. It also facilitates the assessment of any bias in the chosen test instances and provides guidance about the adequacy of benchmark test suites. This article is a comprehensive tutorial on the ISA methodology, which has been evolving over several years, and it includes details of all the software tools that are enabling its worldwide adoption in many disciplines. A case study comparing algorithms for university timetabling is presented to illustrate the methodology and tools.
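To make the instance-space idea concrete, here is a minimal Python sketch. It is not the authors' toolchain: ISA fits a tailored linear projection (PILOT), whereas this stand-in uses PCA, synthetic feature vectors, and a toy performance metric purely for illustration of the region-by-region view.

```python
# Minimal sketch of the instance-space idea: instances become feature
# vectors, a 2-D projection is computed, and per-instance algorithm
# performance is inspected region by region rather than as one average.
# PCA is an illustrative stand-in for ISA's tailored projection.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))                          # 200 instances x 6 features (synthetic)
perf = X[:, 0] ** 2 + rng.normal(scale=0.1, size=200)  # toy performance metric

z = PCA(n_components=2).fit_transform(X)               # 2-D "instance space"

# Footprint-style summary: mean performance per quadrant of the projected
# space, instead of a single average over all instances.
for qx in (z[:, 0] < 0, z[:, 0] >= 0):
    for qy in (z[:, 1] < 0, z[:, 1] >= 0):
        mask = qx & qy
        print(mask.sum(), "instances, mean performance", perf[mask].mean().round(3))
```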
Language: English
Cited: 40
Lecture notes in computer science, Journal year: 2025, Issue: unknown, P. 361 - 376
Published: Jan. 1, 2025
Language: English
Cited: 1
IEEE Access, Journal year: 2022, Issue: 10, P. 8262 - 8278
Published: Jan. 1, 2022
Benchmarking plays a crucial role both in the development of new optimization methods and in conducting proper comparisons between already existing methods, particularly in the field of evolutionary computation. In this paper, we develop benchmark functions for bound-constrained single-objective optimization that are based on a zigzag function. The proposed function has three parameters that control its behaviour and the difficulty of the resulting problems. Utilizing the zigzag function, we introduce four benchmark functions and conduct extensive computational experiments to evaluate their performance as benchmarks. The experiments comprise using the newly proposed functions with 100 different parameter settings for a comparison of eight algorithms, which are a mix of canonical methods and the best-performing methods from the Congress on Evolutionary Computation competitions. Using the results of this comparison, we choose some of the parametrizations and devise an ambiguous benchmark set: each of its problems introduces a statistically significant ranking among the algorithms, but the entire set is ambiguous, with no clear dominating relationship among the algorithms. We also use exploratory landscape analysis to compare the proposed functions with the ones used in the Black-Box-Optimization-Benchmarking suite. The results suggest that the proposed functions are well suited for algorithmic comparisons.
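The exact zigzag definition and its three parameters are given in the paper; the sketch below is a generic stand-in that only illustrates the construction pattern: a piecewise-linear zigzag perturbation added to a smooth base function, with assumed parameters for period, amplitude, and asymmetry.

```python
# Illustrative sketch only: the paper's precise zigzag function should be
# taken from the article itself. This stand-in shows the pattern of a
# piecewise-linear wave modulating a smooth base function, with three
# parameters tuning difficulty.
import numpy as np

def zigzag(x, period=1.0, amplitude=1.0, asymmetry=0.5):
    """Piecewise-linear wave in [0, amplitude]; asymmetry shifts the peak."""
    t = np.mod(x, period) / period            # position within one period
    up = t / asymmetry                        # rising segment
    down = (1.0 - t) / (1.0 - asymmetry)      # falling segment
    return amplitude * np.minimum(up, down)

def benchmark(x, period=1.0, amplitude=1.0, asymmetry=0.5):
    """Sphere base function perturbed by the zigzag (minimum stays at 0)."""
    x = np.asarray(x, dtype=float)
    return np.sum(x ** 2 + zigzag(np.abs(x), period, amplitude, asymmetry))

print(benchmark([0.0, 0.0]))   # 0.0 at the optimum
print(benchmark([0.3, -1.2]))  # perturbed sphere value elsewhere
```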
Language: English
Cited: 24
ACM Transactions on Evolutionary Learning and Optimization, Journal year: 2021, Issue: 1(4), P. 1 - 21
Published: Oct. 13, 2021
Experimental studies are prevalent in Evolutionary Computation (EC), and concerns about the reproducibility and replicability of such studies have increased in recent times, reflecting similar concerns in other scientific fields. In this article, we discuss, within the context of EC, different types of reproducibility and suggest a classification that refines the badge system of the Association for Computing Machinery (ACM) adopted by the ACM Transactions on Evolutionary Learning and Optimization (TELO). We identify cultural and technical obstacles to reproducibility in the EC field. Finally, we provide guidelines and suggest tools that may help to overcome some of these obstacles.
Language: English
Cited: 30
Computers & Operations Research, Journal year: 2020, Issue: 128, P. 105184 - 105184
Published: Dec. 18, 2020
Language: English
Cited: 30
Algorithms, Journal year: 2021, Issue: 14(3), P. 78 - 78
Published: Feb. 27, 2021
The choice of which objective functions, or benchmark problems, should be used to test an optimization algorithm is a crucial part of the algorithm selection framework. Benchmark suites that are often used in the literature have been shown to exhibit poor coverage of the problem space. Exploratory landscape analysis can be used to quantify characteristics of objective functions. However, exploratory landscape analysis measures are based on samples of the function, and there is a lack of work on the appropriate sample size needed to produce reliable measures. This study presents an approach to determine the minimum sample size needed to obtain robust exploratory landscape analysis measures. Based on these measures, a self-organizing feature map is used to cluster a comprehensive set of benchmark functions. From this, a benchmark suite that has better coverage of the single-objective, boundary-constrained problem space is proposed.
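A hedged sketch of the sample-size question described above: estimate one simple landscape measure at growing sample sizes and watch when repeated estimates stabilize. Dispersion is used here as a stand-in for the study's actual exploratory landscape analysis measures; the test function, bounds, and repetition count are illustrative assumptions.

```python
# Estimate a dispersion-style landscape measure (how tightly the best
# samples cluster relative to all samples) at increasing sample sizes;
# a stable mean with shrinking spread suggests the size is sufficient.
import numpy as np
from scipy.spatial.distance import pdist

def dispersion(f, dim, n, rng, frac=0.1):
    X = rng.uniform(-5, 5, size=(n, dim))
    y = np.apply_along_axis(f, 1, X)
    best = X[np.argsort(y)[: max(2, int(frac * n))]]
    return pdist(best).mean() / pdist(X).mean()

sphere = lambda x: float(np.sum(x ** 2))
rng = np.random.default_rng(1)
for n in (50, 100, 200, 400, 800, 1600):
    vals = [dispersion(sphere, 5, n, rng) for _ in range(10)]
    print(n, round(np.mean(vals), 3), "+/-", round(np.std(vals), 3))
```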
Language: English
Cited: 26
Mathematics, Journal year: 2022, Issue: 10(3), P. 432 - 432
Published: Jan. 29, 2022
In optimization, algorithm selection, which is the selection of the most suitable algorithm for a specific problem, is of great importance, as algorithm performance is heavily dependent on the problem being solved. However, when using machine learning for algorithm selection, the performance of the model depends on the data used to train and test it, and existing optimization benchmarks only provide a limited amount of such data. To help with this, artificial problem generation has been shown to be a useful tool for augmenting existing benchmark problems. In this paper, we are interested in the knowledge transfer between artificially generated and handmade problems in the domain of continuous numerical optimization. That is, can an algorithm selection model trained purely on artificially generated problems correctly provide recommendations for handmade problems? We show that such a model produces low-quality results, and we also provide explanations about how the model works and about the differences between the problem sets in order to explain the model's performance.
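The transfer experiment can be sketched as follows. All data here is synthetic: the two feature distributions are deliberately shifted to mimic a gap between generated and handmade problem sets, whereas the paper uses real landscape features and best-algorithm labels.

```python
# Fit an algorithm selector on features of one problem set ("artificial")
# and evaluate it on another ("handmade") to expose the transfer gap.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(2)

# Synthetic stand-ins occupying shifted feature regions.
X_art = rng.normal(0.0, 1.0, size=(300, 8))
y_art = (X_art[:, 0] + X_art[:, 1] > 0).astype(int)   # "best algorithm" label
X_hand = rng.normal(1.5, 1.0, size=(60, 8))           # shifted distribution
y_hand = (X_hand[:, 0] - X_hand[:, 1] > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_art, y_art)
print("in-distribution accuracy:", model.score(X_art, y_art))
print("transfer accuracy:", accuracy_score(y_hand, model.predict(X_hand)))
```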
Language: English
Cited: 19
Proceedings of the Genetic and Evolutionary Computation Conference, Journal year: 2022, Issue: unknown, P. 620 - 629
Published: July 8, 2022
Fair algorithm evaluation is conditioned on the existence of high-quality benchmark datasets that are non-redundant and representative of typical optimization scenarios. In this paper, we evaluate three heuristics for selecting diverse problem instances which should be involved in the comparison of optimization algorithms in order to ensure robust statistical performance analysis. The first approach employs clustering to identify similar groups of problem instances and subsequent sampling from each cluster to construct new benchmarks, while the other two approaches use graph algorithms for identifying dominating sets and maximal independent sets of nodes. We demonstrate the applicability of the proposed heuristics by performing a statistical performance analysis of five portfolios consisting of the most commonly used benchmarks.
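The two graph-based heuristics can be sketched as follows: connect instances whose feature vectors are very similar, then keep either a dominating set (every instance is kept or has a kept neighbour) or a maximal independent set (no two kept instances are similar). The feature vectors and similarity threshold below are illustrative assumptions, not the paper's data.

```python
# Build a similarity graph over problem instances and extract two kinds of
# representative subsets with standard networkx graph algorithms.
import numpy as np
import networkx as nx
from scipy.spatial.distance import squareform, pdist

rng = np.random.default_rng(3)
features = rng.normal(size=(40, 5))          # 40 instances x 5 features
dist = squareform(pdist(features))

G = nx.Graph()
G.add_nodes_from(range(len(features)))
threshold = np.quantile(dist[np.triu_indices_from(dist, k=1)], 0.15)
for i in range(len(features)):
    for j in range(i + 1, len(features)):
        if dist[i, j] < threshold:           # edge = "redundantly similar"
            G.add_edge(i, j)

print("dominating set:", sorted(nx.dominating_set(G)))
print("maximal independent set:", sorted(nx.maximal_independent_set(G, seed=0)))
```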
Language: English
Cited: 17
ACM Transactions on Evolutionary Learning and Optimization, Journal year: 2024, Issue: 5(1), P. 1 - 19
Published: June 21, 2024
Choosing a set of benchmark problems is often a key component of any empirical evaluation of iterative optimization heuristics. In continuous, single-objective optimization, several sets of problems have become widespread, including the well-established BBOB suite. While this suite was designed to enable rigorous benchmarking, it is also commonly used for testing methods such as algorithm selection, which it was never designed around. We present the MA-BBOB function generator, which uses the BBOB functions in an affine combination. In this work, we describe the full procedure to create these affine combinations and highlight the trade-offs of the design decisions, specifically the choice to place the optimum uniformly at random in the domain. We then illustrate how the generator can be used to gain more low-level insight into the function landscapes through the use of exploratory landscape analysis. Finally, we show a potential use-case of MA-BBOB in generating a wide set of training data for algorithm selectors. Using this setup, we show that the basic scheme of using landscape features to predict the best algorithm does not lead to optimal results, and that a selector trained purely on the BBOB functions generalizes poorly to the affine combinations.
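A minimal sketch of the affine-combination idea, using two toy functions instead of the real BBOB suite. MA-BBOB additionally rescales the component functions before combining them and handles optimum placement as part of the generator; the weight `alpha` and the random shift below are the knobs this sketch exposes.

```python
# Combine two component functions with an affine weight and move the
# shared optimum to a uniformly random point in the domain.
import numpy as np

sphere = lambda x: np.sum(x ** 2)
rastrigin = lambda x: 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

def affine_combination(f1, f2, alpha, shift):
    """Weighted mix of f1 and f2, with the optimum relocated to `shift`."""
    def f(x):
        z = np.asarray(x, dtype=float) - shift
        return (1 - alpha) * f1(z) + alpha * f2(z)
    return f

rng = np.random.default_rng(4)
shift = rng.uniform(-5, 5, size=2)           # optimum uniformly at random
f = affine_combination(sphere, rastrigin, alpha=0.3, shift=shift)
print(f(shift))                              # 0.0 at the constructed optimum
print(f(np.zeros(2)))                        # positive value elsewhere
```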
Language: English
Cited: 4
Natural computing series, Journal year: 2025, Issue: unknown, P. 117 - 148
Published: Jan. 1, 2025
Language: English
Cited: 0