Artificial benchmark functions are commonly used in optimization research because they allow potential solutions to be evaluated rapidly, making them a preferred substitute for real-world problems. However, these functions have faced criticism for their limited resemblance to real-world problems. In response, recent research has focused on automatically generating new functions for areas where established test suites are inadequate. These approaches still have limitations, such as the difficulty of generating functions that exhibit exploratory landscape analysis (ELA) features beyond those of existing benchmarks.
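ELA features of the kind mentioned above are typically estimated from a random sample of evaluated points. The following Python sketch is illustrative only (real studies use dedicated tooling such as the flacco package); the sampling box and the feature names are simplified assumptions:

```python
import random
import statistics

def y_distribution_features(f, dim, n_samples=200, seed=0):
    """Estimate simple ELA-style y-distribution features from a random
    sample over a BBOB-style box [-5, 5]^dim (names are simplified)."""
    rng = random.Random(seed)
    ys = [f([rng.uniform(-5.0, 5.0) for _ in range(dim)])
          for _ in range(n_samples)]
    mean, std = statistics.fmean(ys), statistics.pstdev(ys)
    # Skewness of the objective-value distribution, a classic
    # y-distribution feature used to characterise a landscape.
    skew = (sum(((y - mean) / std) ** 3 for y in ys) / n_samples
            if std > 0 else 0.0)
    return {"y_mean": mean, "y_std": std, "y_skewness": skew}

sphere = lambda x: sum(v * v for v in x)
feats = y_distribution_features(sphere, dim=2)
```

Generated functions are considered novel when their vectors of such features fall outside the region spanned by existing benchmark suites.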
IEEE Access, Journal Year: 2022, Volume and Issue: 10, P. 8262 - 8278, Published: Jan. 1, 2022
Benchmarking plays a crucial role both in the development of new optimization methods and in conducting proper comparisons between already existing methods, particularly in the field of evolutionary computation. In this paper, we develop benchmark functions for bound-constrained single-objective optimization that are based on a zigzag function. The proposed function has three parameters that control its behaviour and the difficulty of the resulting problems. Utilizing this function, we introduce four benchmark functions and conduct extensive computational experiments to evaluate their performance as benchmarks. The experiments comprise using the newly proposed functions with 100 different parameter settings in a comparison of eight algorithms, which mix canonical methods and the best-performing methods from the Congress on Evolutionary Computation competitions. Using the results of the comparison, we choose some of the parametrizations to devise an ambiguous benchmark set: each of its problems introduces a statistically significant ranking among the algorithms, but the entire set is ambiguous, with no clear dominating relationship between the algorithms. We also use exploratory landscape analysis to compare the proposed functions with the widely used Black-Box-Optimization-Benchmarking suite. The results suggest that the proposed functions are well suited for algorithmic comparisons.
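The paper's exact zigzag definition is not reproduced above, so the sketch below is a purely illustrative stand-in: a piecewise-linear zigzag with three assumed parameters (amplitude k, frequency m, asymmetry lam) added to a sphere base, showing how a small number of parameters can tune the ruggedness and difficulty of the resulting problem:

```python
def zigzag(x, k=1.0, m=1.0, lam=0.5):
    """Hypothetical 1-D zigzag: a piecewise-linear periodic pattern with
    amplitude k, period 1/m, and asymmetry lam in (0, 1).
    (Illustrative only -- not the exact definition from the paper.)"""
    t = (x * m) % 1.0                      # position within the current period
    if t < lam:
        return k * t / lam                 # rising edge
    return k * (1.0 - t) / (1.0 - lam)     # falling edge

def zigzag_benchmark(xs, k=1.0, m=1.0, lam=0.5):
    """Separable test problem: sphere plus a zigzag perturbation, so the
    three parameters control how rugged (difficult) the landscape is."""
    return sum(v * v + zigzag(abs(v), k, m, lam) for v in xs)
```

With k = 0 the problem degenerates to the smooth sphere; increasing k and m adds many local ridges while keeping the global optimum at the origin.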
Proceedings of the Genetic and Evolutionary Computation Conference, Journal Year: 2022, Volume and Issue: unknown, P. 620 - 629, Published: July 8, 2022
Fair algorithm evaluation is conditioned on the existence of high-quality benchmark datasets that are non-redundant and representative of typical optimization scenarios. In this paper, we evaluate three heuristics for selecting diverse problem instances which should be involved in the comparison of algorithms in order to ensure robust statistical performance analysis. The first approach employs clustering to identify similar groups of problem instances, with subsequent sampling from each cluster to construct new benchmarks, while the other two approaches use graph algorithms for identifying dominating and maximal independent sets of nodes. We demonstrate the applicability of the proposed heuristics by performing a statistical performance analysis of five algorithm portfolios across the most commonly used benchmarks.
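One of the graph-based heuristics can be sketched as a greedy dominating-set selection over an instance-similarity graph. This is a hedged illustration, not the paper's implementation; the similarity input format, the threshold, and the greedy strategy are all assumptions:

```python
def greedy_dominating_set(similarity, threshold):
    """Pick a diverse subset of problem instances: connect two instances
    when their feature similarity exceeds `threshold`, then greedily add
    nodes until every instance is selected or adjacent to a selected one
    (a dominating set). `similarity` maps pairs (i, j) to a value."""
    nodes = set()
    for i, j in similarity:
        nodes.update((i, j))
    adj = {n: set() for n in nodes}
    for (i, j), s in similarity.items():
        if s >= threshold:
            adj[i].add(j)
            adj[j].add(i)
    uncovered, chosen = set(nodes), []
    while uncovered:
        # Greedy step: take the node covering the most uncovered instances.
        best = max(nodes, key=lambda n: len((adj[n] | {n}) & uncovered))
        chosen.append(best)
        uncovered -= adj[best] | {best}
    return chosen

sims = {(0, 1): 0.9, (1, 2): 0.9, (3, 4): 0.1}
selected = greedy_dominating_set(sims, threshold=0.5)
```

Each selected instance then represents its similar (dominated) neighbours, so the reduced benchmark stays representative while dropping redundant instances.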
Swarm Intelligence, Journal Year: 2024, Volume and Issue: 18(1), P. 31 - 78, Published: Jan. 31, 2024
Particle swarm optimization (PSO) performance is sensitive to the control parameter values used, but tuning of the parameters for the problem at hand is computationally expensive. Self-adaptive particle swarm optimization (SAPSO) algorithms attempt to adjust the control parameters during the optimization process, ideally without introducing additional parameters to which performance is sensitive. This paper proposes a belief space (BS) approach, borrowed from cultural algorithms (CAs), towards the development of a SAPSO. The resulting BS-SAPSO utilizes a direct search for optimal parameter configurations by excluding non-promising configurations from the search space. The BS-SAPSO achieves an improvement in performance of 3–55% above the various baselines, based on the solution quality of the objective function values achieved on the functions tested.
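The belief-space idea of excluding non-promising control-parameter configurations can be illustrated with a minimal sketch. This is not the paper's BS-SAPSO; the keep fraction and the score-based pruning rule are assumptions introduced here for illustration:

```python
def bs_prune(configs, scores, keep_frac=0.5):
    """Belief-space-style pruning (illustrative): keep the best-scoring
    control-parameter configurations and derive tightened per-parameter
    bounds that exclude the non-promising region of the space.
    Lower score = better; each config is a tuple of parameter values."""
    ranked = sorted(zip(scores, configs))
    kept = [c for _, c in ranked[:max(1, int(len(configs) * keep_frac))]]
    lows = [min(c[d] for c in kept) for d in range(len(kept[0]))]
    highs = [max(c[d] for c in kept) for d in range(len(kept[0]))]
    return kept, list(zip(lows, highs))

configs = [(0.4, 1.0), (0.9, 2.0), (0.5, 1.5), (0.95, 2.5)]
scores = [1.0, 10.0, 2.0, 12.0]
kept, bounds = bs_prune(configs, scores, keep_frac=0.5)
```

In a self-adaptive loop, new parameter values would then be drawn only from within the tightened bounds, so the swarm stops exploring configurations that the belief space has ruled out.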
Proceedings of the Genetic and Evolutionary Computation Conference, Journal Year: 2023, Volume and Issue: unknown, P. 813 - 821, Published: July 12, 2023
The application of machine learning (ML) models to the analysis of optimization algorithms requires a representation of the problems using numerical features. These features can be used as input for ML models that are trained to select or configure a suitable algorithm for the problem at hand. Since in pure black-box optimization, information about the problem instance can only be obtained through function evaluation, a common approach is to dedicate some evaluations to feature extraction, e.g., via random sampling. This has two key downsides: (1) it reduces the budget left for the actual optimization phase, and (2) it neglects valuable information that could be gained from the problem-solver interaction.
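Downside (2) motivates computing features from the solver's own search trajectory rather than from a separate sample, so no extra evaluations are spent. A toy sketch, using random search as a stand-in solver (the specific feature set here is an assumption):

```python
import random

def random_search_with_trajectory(f, dim, budget, seed=0):
    """Toy solver that records every evaluation so features can later be
    computed from the trajectory itself, instead of spending a separate
    sampling budget on feature extraction."""
    rng = random.Random(seed)
    traj, best = [], None
    for _ in range(budget):
        x = [rng.uniform(-5.0, 5.0) for _ in range(dim)]
        y = f(x)
        traj.append((x, y))
        if best is None or y < best:
            best = y
    return best, traj

def trajectory_features(traj):
    """Features derived 'for free' from points the solver evaluated anyway."""
    ys = [y for _, y in traj]
    return {"n_evals": len(ys), "y_best": min(ys),
            "y_range": max(ys) - min(ys)}

best, traj = random_search_with_trajectory(lambda x: sum(v * v for v in x),
                                           dim=2, budget=50)
feats = trajectory_features(traj)
```

The trade-off is that trajectory samples are biased toward promising regions, unlike a uniform random sample.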
Applied Sciences, Journal Year: 2024, Volume and Issue: 14(21), P. 9976 - 9976, Published: Oct. 31, 2024
The performance of the differential evolution algorithm (DE) is known to be highly sensitive to the values assigned to its control parameters. While numerous studies of DE parameters do exist, these have limitations, particularly in the context of setting the population size regardless of problem-specific characteristics. Moreover, the complex interrelationships between the parameters are frequently overlooked. This paper addresses these limitations by critically analyzing existing guidelines for parameter settings and assessing their efficacy on problems of various modalities. The relative importance and interrelationship of the parameters are investigated using a functional analysis of variance (fANOVA) approach. The empirical study uses thirty problems of varying complexities from the IEEE Congress on Evolutionary Computation (CEC) 2014 benchmark suite. The results suggest that conventional one-size-fits-all guidelines risk overestimating initial population sizes. The study further explores how population sizes impact performance across different fitness landscapes, highlighting important interactions with other parameters. This research lays the groundwork for a subsequent, thoughtful selection of optimal parameters for DE algorithms, facilitating the development of more efficient adaptive strategies.
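For reference, the control parameters in question appear explicitly in a minimal DE/rand/1/bin sketch (a textbook variant of DE, not the exact setup analysed in the study):

```python
import random

def de(f, dim, pop_size=20, F=0.5, CR=0.9, gens=100, seed=0):
    """Minimal DE/rand/1/bin exposing the three control parameters whose
    importance and interactions a fANOVA study decomposes:
    pop_size, scale factor F, and crossover rate CR."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5.0, 5.0) for _ in range(dim)]
           for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # Mutation: combine three distinct individuals (rand/1).
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jr = rng.randrange(dim)  # force at least one mutated coordinate
            # Binomial crossover controlled by CR.
            trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                     if (rng.random() < CR or d == jr) else pop[i][d]
                     for d in range(dim)]
            ft = f(trial)
            if ft <= fit[i]:           # greedy selection
                pop[i], fit[i] = trial, ft
    return min(fit)

best = de(lambda x: sum(v * v for v in x), dim=2)
```

A fANOVA-style study would run such an algorithm over a grid of (pop_size, F, CR) settings and attribute the variance in `best` to individual parameters and their interactions.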
Proceedings of the Genetic and Evolutionary Computation Conference, Journal Year: 2023, Volume and Issue: unknown, P. 529 - 537, Published: July 12, 2023
In black-box optimization, it is essential to understand why an algorithm instance works on a set of problem instances while failing on others, and to provide explanations of its behavior. We propose a methodology for formulating an algorithm footprint that consists of the problem instances that are easy and those that are difficult to be solved by the algorithm instance. This behavior of the algorithm instance is further linked to the landscape properties which make some instances easy or challenging for it. The proposed methodology uses meta-representations to embed the problem landscape and the algorithm performance into the same vector space. These meta-representations are obtained by training a supervised machine learning regression model for performance prediction and applying explainability techniques to assess the importance of the landscape features to the predictions. Next, a deterministic clustering of the meta-representations demonstrates that using them captures algorithm behavior across the problem space and detects regions of poor and good algorithm performance, together with an explanation of the landscape properties leading to it.
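The footprint pipeline (regression model, explainability, clustering) can be caricatured in a few lines. This sketch substitutes a simple correlation-based feature importance and a performance-median split for the paper's supervised model and deterministic clustering, so it only conveys the shape of the idea:

```python
def footprint(features, perf):
    """Illustrative footprint sketch: weight each landscape feature by the
    absolute correlation of its values with algorithm performance, embed
    instances as importance-weighted vectors, and split them into 'easy'
    and 'hard' groups at the performance median (lower perf = better)."""
    n, d = len(features), len(features[0])
    def corr(col):
        xs = [f[col] for f in features]
        mx, my = sum(xs) / n, sum(perf) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, perf))
        vx = sum((x - mx) ** 2 for x in xs) ** 0.5
        vy = sum((y - my) ** 2 for y in perf) ** 0.5
        return cov / (vx * vy) if vx and vy else 0.0
    weights = [abs(corr(c)) for c in range(d)]          # stand-in importances
    embed = [[w * f[c] for c, w in enumerate(weights)]  # meta-representations
             for f in features]
    cut = sorted(perf)[n // 2]
    easy = [i for i, p in enumerate(perf) if p <= cut]
    hard = [i for i, p in enumerate(perf) if p > cut]
    return weights, embed, easy, hard
```

Inspecting which weighted features separate the easy group from the hard one then yields the explanation of which landscape properties drive the algorithm's behavior.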