Neural Networks as Black-Box Benchmark Functions Optimized for Exploratory Landscape Features
Raphael Patrick Prager, Konstantin Dietrich, Lennart Schneider

et al.

Published: July 31, 2023

Artificial benchmark functions are commonly used in optimization research because of their ability to rapidly evaluate potential solutions, making them a preferred substitute for real-world problems. However, these functions have faced criticism for their limited resemblance to real-world problems. In response, recent research has focused on automatically generating new benchmark functions for areas where established test suites are inadequate. These approaches have their own limitations, such as the difficulty of generating functions that exhibit exploratory landscape analysis (ELA) features beyond those of existing benchmarks.
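As a rough illustration of the black-box idea behind this work (not the authors' pipeline), the sketch below treats a small, randomly initialized multilayer perceptron as an objective function and estimates one simple landscape feature, fitness-distance correlation, from random samples. All names, architectures, and parameter values are illustrative assumptions; only NumPy is used.

```python
import numpy as np

def make_mlp_objective(dim, hidden=32, seed=0):
    """Treat a small, randomly initialized MLP as a black-box objective
    f: R^dim -> R (an illustrative stand-in for the paper's networks)."""
    rng = np.random.default_rng(seed)
    w1 = rng.normal(size=(dim, hidden))
    b1 = rng.normal(size=hidden)
    w2 = rng.normal(size=(hidden, 1))

    def f(x):
        x = np.atleast_2d(x)
        h = np.tanh(x @ w1 + b1)          # single hidden layer
        return (h @ w2).ravel()
    return f

def fitness_distance_correlation(f, dim, n_samples=500, seed=1):
    """Crude ELA-style feature: correlation between fitness values and
    the distance to the best sampled point."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, size=(n_samples, dim))
    y = f(X)
    best = X[np.argmin(y)]
    d = np.linalg.norm(X - best, axis=1)
    return np.corrcoef(y, d)[0, 1]

if __name__ == "__main__":
    f = make_mlp_objective(dim=5)
    print("FDC of MLP objective:", fitness_distance_correlation(f, dim=5))
```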

Language: English

New Benchmark Functions for Single-Objective Optimization Based on a Zigzag Pattern
Jakub Kůdela, Radomil Matoušek

IEEE Access, Journal Year: 2022, Volume and Issue: 10, P. 8262 - 8278

Published: Jan. 1, 2022

Benchmarking plays a crucial role both in the development of new optimization methods and in conducting proper comparisons between already existing ones, particularly in the field of evolutionary computation. In this paper, we develop new benchmark functions for bound-constrained single-objective optimization that are based on a zigzag function. The proposed zigzag function has three parameters that control its behaviour and the difficulty of the resulting problems. Utilizing this function, we introduce four benchmark functions and conduct extensive computational experiments to evaluate their performance as benchmarks. The experiments comprise using the newly proposed functions with 100 different parameter settings for a comparison of eight algorithms, which are a mix of canonical methods and the best performing methods from the Congress on Evolutionary Computation competitions. Using the results of this comparison, we choose some of the parametrizations and devise an ambiguous benchmark set, in which each of the problems introduces a statistically significant ranking among the algorithms, but the entire set is ambiguous, with no clear dominance relationship among the algorithms. We also use exploratory landscape analysis to compare the proposed functions with those used in the Black-Box-Optimization-Benchmarking suite. The results suggest that the proposed functions are well suited for algorithmic comparisons.
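A minimal sketch of a zigzag-style test function with three shape parameters is shown below. It is a generic triangle-wave construction for illustration only; the exact definition used in the paper may differ, and all parameter names and bounds are assumptions (NumPy only).

```python
import numpy as np

def zigzag(x, k=1.0, m=1.0, lam=0.5):
    """Illustrative zigzag term with three shape parameters:
    k   - period of the teeth,
    m   - amplitude of the teeth,
    lam - asymmetry of the rising/falling segments (0 < lam < 1).
    Generic construction, not necessarily the paper's definition."""
    t = np.mod(np.abs(x), k) / k                  # position within one tooth
    rising = t < lam
    return m * np.where(rising, t / lam, (1.0 - t) / (1.0 - lam))

def zigzag_benchmark(x, k=1.0, m=1.0, lam=0.5):
    """Bound-constrained test problem: a smooth bowl perturbed by the
    zigzag term, summed over dimensions."""
    x = np.asarray(x, dtype=float)
    return np.sum(np.abs(x) + zigzag(x, k, m, lam))

if __name__ == "__main__":
    print(zigzag_benchmark(np.zeros(10)))       # near the global optimum
    print(zigzag_benchmark(np.full(10, 2.3)))   # a perturbed point
```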

Language: English

Citations: 24

SELECTOR
Gjorgjina Cenikj, Ryan Dieter Lang, Andries P. Engelbrecht

et al.

Proceedings of the Genetic and Evolutionary Computation Conference, Journal Year: 2022, Volume and Issue: unknown, P. 620 - 629

Published: July 8, 2022

Fair algorithm evaluation is conditioned on the existence of high-quality benchmark datasets that are non-redundant and representative of typical optimization scenarios. In this paper, we evaluate three heuristics for selecting diverse problem instances which should be involved in the comparison of optimization algorithms in order to ensure robust statistical performance analysis. The first approach employs clustering to identify similar groups of problem instances and subsequently samples from each cluster to construct new benchmarks, while the other two approaches use graph algorithms for identifying dominating and maximal independent sets of nodes. We demonstrate the applicability of the proposed heuristics by performing a statistical analysis of five portfolios consisting of the most commonly used benchmarks.
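The clustering-based heuristic can be sketched roughly as follows (the two graph-based variants are omitted). This is an illustrative stand-in using scikit-learn's KMeans, not the SELECTOR implementation; choosing the member closest to each cluster centre as the representative is an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def select_diverse_instances(features, n_select, seed=0):
    """Cluster problem instances by their (e.g., ELA) feature vectors and
    keep one representative per cluster.  `features` is an
    (n_instances, n_features) array; returns indices of selected instances."""
    X = StandardScaler().fit_transform(features)
    km = KMeans(n_clusters=n_select, n_init=10, random_state=seed).fit(X)
    selected = []
    for c in range(n_select):
        members = np.where(km.labels_ == c)[0]
        # pick the member closest to its cluster centre as the representative
        dists = np.linalg.norm(X[members] - km.cluster_centers_[c], axis=1)
        selected.append(int(members[np.argmin(dists)]))
    return sorted(selected)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ela = rng.normal(size=(120, 8))      # placeholder feature matrix
    print(select_diverse_instances(ela, n_select=10))
```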

Language: English

Citations: 17

Belief space-guided approach to self-adaptive particle swarm optimization
Daniel von Eschwege, Andries P. Engelbrecht

Swarm Intelligence, Journal Year: 2024, Volume and Issue: 18(1), P. 31 - 78

Published: Jan. 31, 2024

Particle swarm optimization (PSO) performance is sensitive to the control parameter values used, but tuning of the control parameters for the problem at hand is computationally expensive. Self-adaptive particle swarm optimization (SAPSO) algorithms attempt to adjust the control parameters during the optimization process, ideally without introducing additional parameters to which performance is sensitive. This paper proposes a belief space (BS) approach, borrowed from cultural algorithms (CAs), towards the development of a SAPSO. The resulting BS-SAPSO utilizes a directed search for optimal control parameter values by excluding non-promising configurations from the parameter space. BS-SAPSO achieves an improvement in performance of 3–55% above the various baselines, based on the solution quality of the objective function value achieved on the functions tested.
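A toy sketch of the general belief-space idea follows: per-particle control parameters are sampled from intervals that contract around values which produced personal-best improvements. It is loosely inspired by the abstract and is not the BS-SAPSO algorithm itself; the interval bounds, contraction rate, and update rule are invented for illustration.

```python
import numpy as np

def bs_pso(f, dim, n_particles=30, iters=200, seed=0):
    """Minimal PSO with a belief-space-like mechanism (illustrative only)."""
    rng = np.random.default_rng(seed)
    lo, hi = -5.0, 5.0
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), f(x)
    gbest = pbest[np.argmin(pbest_f)].copy()
    # belief space: admissible interval per control parameter
    belief = {"w": [0.3, 0.9], "c1": [0.5, 2.5], "c2": [0.5, 2.5]}

    for _ in range(iters):
        w = rng.uniform(*belief["w"], n_particles)[:, None]
        c1 = rng.uniform(*belief["c1"], n_particles)[:, None]
        c2 = rng.uniform(*belief["c2"], n_particles)[:, None]
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fx = f(x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
        # contract each interval slightly towards the mean parameter value
        # of the particles that just improved
        if improved.any():
            for key, samples in (("w", w), ("c1", c1), ("c2", c2)):
                centre = float(samples[improved].mean())
                a, b = belief[key]
                belief[key] = [a + 0.05 * (centre - a), b + 0.05 * (centre - b)]
    return gbest, float(pbest_f.min())

if __name__ == "__main__":
    sphere = lambda X: np.sum(np.atleast_2d(X) ** 2, axis=1)
    best, best_f = bs_pso(sphere, dim=10)
    print("best objective:", best_f)
```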

Language: English

Citations: 3

Optimization test function synthesis with generative adversarial networks and adaptive neuro-fuzzy systems
Miguel Melgarejo, Mariana Medina, Juan Lopez

et al.

Information Sciences, Journal Year: 2024, Volume and Issue: 686, P. 121371 - 121371

Published: Aug. 28, 2024

Language: English

Citations: 3

Landscape features in single-objective continuous optimization: Have we hit a wall in algorithm selection generalization?
Gjorgjina Cenikj, Gašper Petelin, Moritz Vinzent Seiler

et al.

Swarm and Evolutionary Computation, Journal Year: 2025, Volume and Issue: 94, P. 101894 - 101894

Published: Feb. 28, 2025

Language: English

Citations: 0

DynamoRep: Trajectory-Based Population Dynamics for Classification of Black-box Optimization Problems
Gjorgjina Cenikj, Gašper Petelin, Carola Doerr

et al.

Proceedings of the Genetic and Evolutionary Computation Conference, Journal Year: 2023, Volume and Issue: unknown, P. 813 - 821

Published: July 12, 2023

The application of machine learning (ML) models to the analysis of optimization algorithms requires the representation of optimization problems using numerical features. These features can be used as input for ML models that are trained to select or to configure a suitable algorithm for the problem at hand. Since in pure black-box optimization, information about the problem instance can only be obtained through function evaluation, a common approach is to dedicate some evaluations to feature extraction, e.g., via random sampling. This approach has two key downsides: (1) it reduces the budget left for the actual optimization phase, and (2) it neglects valuable information that could be obtained from the problem-solver interaction.
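A simplified sketch of a trajectory-based representation in the spirit of DynamoRep follows: per-iteration descriptive statistics of the population and its fitness values are concatenated into a feature vector. The toy optimizer and all constants below are assumptions, not the solvers used in the paper.

```python
import numpy as np

def dynamorep_features(f, dim, pop_size=20, iters=10, seed=0):
    """Record min/mean/max/std of population coordinates and fitness values
    at every iteration of a simple population-based search and concatenate
    them into one feature vector (simplified, illustrative pipeline)."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5, 5, (pop_size, dim))
    fit = f(pop)
    feats = []
    for _ in range(iters):
        feats.append(np.concatenate([
            pop.min(axis=0), pop.mean(axis=0), pop.max(axis=0), pop.std(axis=0),
            [fit.min(), fit.mean(), fit.max(), fit.std()],
        ]))
        best = pop[np.argmin(fit)]
        pop = best + rng.normal(scale=1.0, size=(pop_size, dim))  # toy step
        fit = f(pop)
    return np.concatenate(feats)   # length = iters * (4 * dim + 4)

if __name__ == "__main__":
    sphere = lambda X: np.sum(np.atleast_2d(X) ** 2, axis=1)
    print(dynamorep_features(sphere, dim=5).shape)   # (240,)
```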

Language: English

Citations: 8

An Analysis of Differential Evolution Population Size
A.E.H. Saad, Andries P. Engelbrecht, Salman Khan

et al.

Applied Sciences, Journal Year: 2024, Volume and Issue: 14(21), P. 9976 - 9976

Published: Oct. 31, 2024

The performance of the differential evolution algorithm (DE) is known to be highly sensitive to the values assigned to its control parameters. While numerous studies of DE control parameters do exist, these studies have limitations, particularly in the context of setting the population size regardless of problem-specific characteristics. Moreover, the complex interrelationships between the control parameters are frequently overlooked. This paper addresses these limitations by critically analyzing existing guidelines for setting the population size and assessing their efficacy on problems of various modalities. The relative importance and interrelationship of the control parameters are investigated using the functional analysis of variance (fANOVA) approach. The empirical analysis uses thirty problems of varying complexity from the IEEE Congress on Evolutionary Computation (CEC) 2014 benchmark suite. The results suggest that conventional one-size-fits-all guidelines may overestimate initial population sizes. The study further explores how population sizes impact performance across different fitness landscapes, highlighting important interactions with other control parameters. This research lays the groundwork for subsequent thoughtful selection of optimal population sizes for DE algorithms, facilitating the development of more efficient adaptive strategies.
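A much-simplified, fANOVA-flavoured sketch of parameter-importance analysis is shown below: SciPy's differential_evolution is run over a small grid of population sizes and mutation factors on the Rosenbrock function, and the variance of the final objective values is decomposed into main effects. This is not the paper's fANOVA pipeline; the grid, budget, and test function are arbitrary choices.

```python
import itertools
import numpy as np
from scipy.optimize import differential_evolution, rosen

def main_effect_importance(popsizes=(5, 15, 30), mutations=(0.3, 0.7, 1.2),
                           dim=10, seed=0):
    """Decompose the variance of DE results over a parameter grid into
    main effects of population size and mutation factor (simplified)."""
    results = np.empty((len(popsizes), len(mutations)))
    for (i, ps), (j, mu) in itertools.product(enumerate(popsizes),
                                              enumerate(mutations)):
        res = differential_evolution(rosen, [(-5, 5)] * dim, popsize=ps,
                                     mutation=mu, maxiter=50, seed=seed,
                                     polish=False, tol=0)
        results[i, j] = res.fun
    total_var = results.var()
    return {"popsize": results.mean(axis=1).var() / total_var,
            "mutation": results.mean(axis=0).var() / total_var}

if __name__ == "__main__":
    print(main_effect_importance())
```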

Language: English

Citations: 3

Explainable Landscape Analysis in Automated Algorithm Performance Prediction
Risto Trajanov, Stefan Dimeski, Martin Popovski

et al.

Lecture notes in computer science, Journal Year: 2022, Volume and Issue: unknown, P. 207 - 222

Published: Jan. 1, 2022

Language: English

Citations: 12

Adaptive local landscape feature vector for problem classification and algorithm selection
Yaxin Li, Jing Liang, Kunjie Yu

et al.

Applied Soft Computing, Journal Year: 2022, Volume and Issue: 131, P. 109751 - 109751

Published: Oct. 26, 2022

Language: English

Citations: 12

Algorithm Instance Footprint: Separating Easily Solvable and Challenging Problem Instances
Ana Nikolikj, Sašo Džeroski, Mario Andrés Muñoz

et al.

Proceedings of the Genetic and Evolutionary Computation Conference, Journal Year: 2023, Volume and Issue: unknown, P. 529 - 537

Published: July 12, 2023

In black-box optimization, it is essential to understand why an algorithm instance works on a set of problem instances while failing on others, and to provide explanations of its behavior. We propose a methodology for formulating an algorithm instance footprint that consists of the problem instances that are easy to solve and those that are difficult to solve for the algorithm instance. This behavior of the algorithm instance is further linked to the landscape properties which make some problem instances easy or challenging. The proposed methodology uses meta-representations that embed the landscape properties of the problem instances and the performance of the algorithm instance into the same vector space. These meta-representations are obtained by training a supervised machine learning regression model for performance prediction and applying model explainability techniques to assess the importance of the landscape features to the performance predictions. Next, deterministic clustering of the meta-representations demonstrates that using them captures algorithm performance across the problem space and detects regions of poor and good algorithm performance, together with an explanation of which landscape properties lead to it.
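The meta-representation idea can be sketched in a simplified form: fit a performance-prediction model on landscape features, build per-instance contribution vectors (here a crude linear stand-in for the SHAP-style explanations used in the paper), and cluster them to separate regions of good and poor performance. All data below is synthetic and the pipeline is illustrative only.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

def footprint_clusters(ela_features, performance, n_clusters=3, seed=0):
    """Cluster per-instance feature-contribution vectors obtained from a
    performance-prediction model (simplified footprint sketch)."""
    model = Ridge(alpha=1.0).fit(ela_features, performance)
    # per-instance contribution of each feature to the predicted performance
    contributions = ela_features * model.coef_
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(contributions)
    for c in range(n_clusters):
        mask = labels == c
        print(f"cluster {c}: {mask.sum():3d} instances, "
              f"mean performance {performance[mask].mean():.3f}")
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 6))                        # placeholder ELA features
    y = X[:, 0] * 2.0 - X[:, 3] + rng.normal(scale=0.1, size=200)  # toy performance
    footprint_clusters(X, y)
```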

Language: English

Citations: 6