Mathematics
Journal year: 2023, Issue 11(15), pp. 3312 - 3312. Published: July 27, 2023.
Metaheuristic optimization algorithms play a crucial role in solving complex problems. However, traditional identification methods have the following problems: (1) difficulties in nonlinear data processing; (2) high error rates caused by local stagnation; and (3) low classification accuracy resulting from premature convergence. This paper proposed a variant based on the gray wolf algorithm (GWO) with chaotic disturbance, candidate migration, and attacking mechanisms, naming it the enhanced grey wolf optimizer (EGWO), to solve the problem of convergence stagnation. The performance of EGWO was tested on the IEEE CEC 2014 benchmark functions, and the results were compared with three GWO variants, five popular algorithms, and six recent algorithms. In addition, EGWO was used to optimize the weights and biases of a multi-layer perceptron (MLP), yielding an EGWO-MLP disease identification model; the model was verified on UCI datasets including the Tic-Tac-Toe, Heart, XOR, and Balloon datasets. The experimental results demonstrate that EGWO-MLP can effectively avoid the above problems and provide a quasi-optimal solution for the identification problem.
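The abstract does not give EGWO's update equations. As a point of reference, the sketch below shows the canonical GWO position update (guided by the alpha, beta, and delta wolves) with a simple logistic-map disturbance applied to the alpha wolf, as one plausible reading of "chaotic disturbance". The disturbance weight, the map, and all names are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def sphere(x):
    """Toy objective (minimization)."""
    return float(np.sum(x ** 2))

def gwo_with_chaos(obj, dim=10, n_wolves=20, iters=200, lb=-10.0, ub=10.0, seed=0):
    """Canonical GWO update plus a logistic-map disturbance on the alpha wolf.

    This is NOT the EGWO of the cited paper; it only illustrates the kind of
    chaotic perturbation the abstract refers to."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))
    z = rng.uniform(0.01, 0.99)                    # logistic-map state
    for t in range(iters):
        fitness = np.array([obj(x) for x in X])
        order = np.argsort(fitness)
        alpha, beta, delta = X[order[0]].copy(), X[order[1]].copy(), X[order[2]].copy()

        z = 4.0 * z * (1.0 - z)                    # logistic chaotic map
        # small, decaying chaotic kick on the best wolf (assumed form)
        alpha = np.clip(alpha + 0.1 * (2 * z - 1) * (ub - lb) * (1 - t / iters), lb, ub)

        a = 2.0 * (1 - t / iters)                  # linearly decreasing coefficient
        for i in range(n_wolves):
            X_new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - X[i])
                X_new += leader - A * D            # pull toward each leader
            X[i] = np.clip(X_new / 3.0, lb, ub)    # average of the three pulls
    best = min(X, key=obj)
    return best, obj(best)

if __name__ == "__main__":
    x_best, f_best = gwo_with_chaos(sphere)
    print(f_best)
```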
Journal Of Big Data
Journal year: 2024, Issue 11(1). Published: Jan. 2, 2024.
Abstract
Beluga Whale Optimization (BWO) is a new metaheuristic algorithm that simulates the social behaviors of beluga whales: swimming, foraging, and whale falling. Compared with other optimization algorithms, BWO shows certain advantages in solving unimodal and multimodal problems. However, its convergence speed and performance still have some deficiencies on complex multidimensional problems. Therefore, this paper proposes a hybrid method called HBWO, combining quasi-oppositional based learning (QOBL), an adaptive spiral predation strategy, and Nelder-Mead simplex search (NM). Firstly, in the initialization phase, the QOBL strategy is introduced. This strategy reconstructs the initial spatial positions of the population through pairwise comparisons to obtain a higher-quality initial population. Subsequently, an adaptive spiral predation strategy is designed for the exploration and exploitation phases. It first learns from the optimal individual positions across dimensions to avoid losses caused by local optimality; at the same time, a movement motivated by a cosine factor is introduced to maintain the balance between exploration and exploitation. Finally, the NM simplex search is added; it corrects solutions through multiple scaling methods so that the algorithm searches more accurately and efficiently. The performance of HBWO is verified using the CEC2017 and CEC2019 test functions, and its superiority is further demonstrated on six engineering design examples. The experimental results show that HBWO has better feasibility and effectiveness in solving practical problems than the comparison methods.
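Quasi-oppositional based learning is a documented initialization technique; a minimal sketch is given below, assuming box constraints [lb, ub] and keeping the better of each original/quasi-opposite pair. The exact pairing and selection rule used in HBWO may differ.

```python
import numpy as np

def qobl_init(obj, n_pop, dim, lb, ub, rng=None):
    """Quasi-oppositional based learning initialization (generic sketch).

    For each random individual x, its quasi-opposite point is drawn uniformly
    between the interval centre m = (lb + ub) / 2 and the opposite point
    lb + ub - x.  The better of each (x, quasi-opposite) pair is kept, which
    tends to start the search from a higher-quality population."""
    rng = rng or np.random.default_rng()
    X = rng.uniform(lb, ub, (n_pop, dim))
    m = (lb + ub) / 2.0
    Xo = lb + ub - X                               # opposite points
    lo, hi = np.minimum(m, Xo), np.maximum(m, Xo)
    Xq = rng.uniform(lo, hi)                       # quasi-opposite points
    keep = []
    for x, xq in zip(X, Xq):                       # pairwise comparison
        keep.append(x if obj(x) <= obj(xq) else xq)
    return np.array(keep)

if __name__ == "__main__":
    f = lambda x: float(np.sum(x ** 2))
    pop = qobl_init(f, n_pop=30, dim=10, lb=-100.0, ub=100.0)
    print(pop.shape, min(f(p) for p in pop))
```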
Scientific Reports
Journal year: 2024, Issue 14(1). Published: June 20, 2024.
Abstract
As a newly proposed optimization algorithm based on the social hierarchy and hunting behavior of gray wolves, the grey wolf optimizer (GWO) has gradually become a popular method for solving optimization problems in various engineering fields. In order to further improve the convergence speed, solution accuracy, and local-minima escaping ability of the traditional GWO algorithm, this work proposes a multi-strategy fusion improved grey wolf optimization (IGWO) algorithm. First, the initial population is optimized using a lens imaging reverse learning strategy, laying the foundation for the global search. Second, a nonlinear control parameter strategy based on cosine variation is used to coordinate exploration and exploitation. Finally, inspired by the tunicate swarm algorithm (TSA) and particle swarm optimization (PSO), tuning parameters and a correction based on individual historical optimal positions are added to the position update equations to speed up convergence. The proposed IGWO is assessed on 23 benchmark test problems, 15 CEC2014 test problems, and 2 well-known constrained engineering problems. The results show that IGWO achieves a balanced exploration and exploitation (E&P) capability in coping with these problems and, as analyzed with the Wilcoxon rank sum and Friedman tests, has a clear advantage over other state-of-the-art algorithms.
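Lens imaging reverse (opposition-based) learning is a widely described initialization device; a minimal sketch under box constraints follows, using the common mapping x_rev = (lb + ub)/2 + (lb + ub)/(2k) - x/k. The scaling factor k, the keep-the-better selection, and the cosine schedule shown alongside are assumptions; the IGWO paper may define them differently.

```python
import numpy as np

def lens_imaging_init(obj, n_pop, dim, lb, ub, k=2.0, seed=0):
    """Lens imaging reverse learning initialization (generic sketch).

    Each individual x is mapped to a 'reverse' point through a virtual lens:
        x_rev = (lb + ub) / 2 + (lb + ub) / (2 * k) - x / k
    With k = 1 this reduces to plain opposition-based learning.  The better
    of each (x, x_rev) pair is kept to seed the global search."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_pop, dim))
    X_rev = (lb + ub) / 2.0 + (lb + ub) / (2.0 * k) - X / k
    X_rev = np.clip(X_rev, lb, ub)
    return np.array([x if obj(x) <= obj(xr) else xr for x, xr in zip(X, X_rev)])

def cosine_control_parameter(t, T, a_max=2.0):
    """One common cosine-shaped schedule for GWO's control parameter a(t);
    the exact nonlinear form used in IGWO is not given in the abstract."""
    return a_max * np.cos((np.pi / 2.0) * t / T)

if __name__ == "__main__":
    f = lambda x: float(np.sum(np.abs(x)))
    pop = lens_imaging_init(f, n_pop=30, dim=10, lb=-100.0, ub=100.0)
    print(min(f(p) for p in pop), cosine_control_parameter(t=50, T=200))
```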
Electronics
Journal year: 2025, Issue 14(1), pp. 197 - 197. Published: Jan. 5, 2025.
The Dung Beetle Optimization Algorithm (DBO) is characterized by its great convergence accuracy and quick convergence speed. However, like other swarm intelligence optimization algorithms, it also has the disadvantages of an unbalanced ability to explore the search space globally and to exploit local resources, as well as being prone to settling into a locally optimal search in the latter stages of optimization. In order to address these issues, this research suggests a multi-strategy fusion dung beetle optimization method (MSFDBO). To enhance the quality of the initial solutions, a refractive reverse learning technique expands the algorithm's search space in the initialization stage. The algorithm's performance is further increased by adding an adaptive curve to control the population size and prevent it from settling into a local optimum. To improve local exploitation and global exploration, respectively, a triangle wandering strategy and the subtractive averaging optimizer were then added to the rolling and breeding dung beetles. Individual beetles will congregate at the current best position, which is near the optimal value, during the last stage of MSFDBO; however, this value may not be the global optimal value. Thus, to variationally perturb the solution (so that it leaps out of the local optimum in the final stage of MSFDBO) and improve algorithmic performance (generally and, specifically, the effect of optimizing the global search), a Gaussian–Cauchy hybrid variational perturbation factor is introduced. Using the CEC2017 benchmark functions, MSFDBO's performance is verified by comparing it with seven different swarm intelligence algorithms; MSFDBO ranks first in terms of average performance. After testing on two engineering application challenges, it can lower the labor and production expenses associated with welded beam and reducer design. When it comes to lowering manufacturing costs and overall weight, MSFDBO outperforms the competing methods.
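The Gaussian–Cauchy hybrid perturbation mentioned in the abstract is not specified in detail there. The sketch below shows one common form, in which the current best solution is perturbed by a time-weighted mix of Gaussian and Cauchy noise and replaced only if the perturbed point improves the objective (greedy acceptance). The mixing weights are illustrative assumptions, not MSFDBO's definition.

```python
import numpy as np

def gauss_cauchy_perturb(best, obj, lb, ub, t, T, rng=None):
    """Perturb the current best with a Gaussian-Cauchy hybrid mutation.

    A common pattern: early iterations lean on heavy-tailed Cauchy noise to
    escape local optima, later iterations lean on Gaussian noise for fine
    tuning.  The perturbed point is accepted only if it improves the
    objective (greedy selection).  Weights below are illustrative."""
    rng = rng or np.random.default_rng()
    w_cauchy = 1.0 - t / T                         # fades out over time
    w_gauss = t / T                                # fades in over time
    noise = (w_gauss * rng.standard_normal(best.shape)
             + w_cauchy * rng.standard_cauchy(best.shape))
    candidate = np.clip(best * (1.0 + noise), lb, ub)
    return candidate if obj(candidate) < obj(best) else best

if __name__ == "__main__":
    f = lambda x: float(np.sum(x ** 2))
    rng = np.random.default_rng(1)
    x = np.full(5, 3.0)
    for t in range(200):
        x = gauss_cauchy_perturb(x, f, lb=-10.0, ub=10.0, t=t, T=200, rng=rng)
    print(f(x))
```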
Biomimetics
Journal year: 2025, Issue 10(2), pp. 92 - 92. Published: Feb. 6, 2025.
Aiming at the problems that the honey badger algorithm (HBA) easily falls into local convergence, has insufficient global search ability, and has a low convergence speed, this paper proposes a global optimization honey badger algorithm (Global Optimization HBA, GOHBA), which improves the search ability of the population, with a better ability to jump out of local optima, faster convergence, and greater stability. The introduction of Tent chaotic mapping into the initialization enhances the population diversity and the initial population quality of the HBA. Replacing the density factor expands the search range over the entire solution space and avoids premature convergence to a local optimum. The addition of the golden sine strategy enhances the global search capability of the HBA and accelerates its convergence speed. Compared with seven algorithms, GOHBA achieves the optimal mean value on 14 of 23 tested functions. On two real-world engineering design problems, GOHBA was optimal. On three path planning problems, GOHBA had higher accuracy and faster convergence. The above experimental results show that the performance of GOHBA is indeed excellent.
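Tent chaotic mapping for population initialization is a standard ingredient in such improvements; a minimal sketch follows, using the parameterized tent map with a = 0.7 and scaling the chaotic sequence into the search bounds. The map parameter, the per-dimension seeding, and the reseeding of degenerate states are assumptions; GOHBA's exact variant is not given in the abstract.

```python
import numpy as np

def tent_chaos_init(n_pop, dim, lb, ub, a=0.7, seed=0):
    """Initialize a population with a tent chaotic map (generic sketch).

    The tent map  x <- x / a          if x < a
                  x <- (1 - x)/(1 - a) otherwise
    produces a sequence that covers (0, 1) more evenly than plain uniform
    sampling, which is the usual motivation for chaotic initialization.
    The parameter a = 0.7 is an illustrative choice."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.1, 0.9, dim)                 # one chaotic state per dimension
    pop = np.empty((n_pop, dim))
    for i in range(n_pop):
        x = np.where(x < a, x / a, (1.0 - x) / (1.0 - a))
        # reseed states that collapse to the map's fixed points
        x = np.where((x <= 1e-12) | (x >= 1 - 1e-12), rng.uniform(0.1, 0.9, dim), x)
        pop[i] = lb + x * (ub - lb)                # scale chaos values into bounds
    return pop

if __name__ == "__main__":
    pop = tent_chaos_init(n_pop=30, dim=10, lb=-100.0, ub=100.0)
    print(pop.shape, pop.min(), pop.max())
```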
Algorithms
Journal year: 2025, Issue 18(3), pp. 160 - 160. Published: March 10, 2025.
This paper presents JADEDO, a hybrid optimization method that merges the dandelion optimizer's (DO) dispersal-inspired stages with JADE's (adaptive differential evolution) dynamic mutation and crossover operators. By integrating these complementary mechanisms, JADEDO effectively balances global exploration and local exploitation for both unimodal and multimodal search spaces. Extensive benchmarking against classical and cutting-edge metaheuristics on the IEEE CEC2022 functions, which encompass unimodal, multimodal, hybrid, and composition landscapes, demonstrates that JADEDO achieves highly competitive results in terms of solution accuracy, convergence speed, and robustness. Statistical analysis using Wilcoxon rank-sum tests further underscores JADEDO's consistent advantage over several established optimizers, reflecting its proficiency in navigating complex, high-dimensional problems. To validate real-world applicability, JADEDO was also evaluated on three engineering design problems (pressure vessel, spring, and speed reducer). Notably, it achieved top-tier or near-optimal designs in these constrained, high-stakes environments. Moreover, to demonstrate its suitability for security-oriented tasks, JADEDO was applied to an attack-response scenario, efficiently identifying cost-effective, low-risk countermeasures under stringent time constraints. These collective findings highlight JADEDO as a robust, flexible, and high-performing framework capable of tackling both benchmark-oriented and practical challenges.
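JADE's core operators are well documented; the sketch below shows the DE/current-to-pbest/1 mutation and binomial crossover that the abstract refers to, for a single target vector. The external archive and the adaptive updates of F and CR are omitted for brevity, and how these operators are woven into the dandelion optimizer's dispersal stages is specific to JADEDO and not shown here.

```python
import numpy as np

def jade_current_to_pbest(X, fitness, i, F=0.5, CR=0.9, p=0.1, rng=None):
    """One DE/current-to-pbest/1 mutation + binomial crossover step (JADE style).

    v = x_i + F * (x_pbest - x_i) + F * (x_r1 - x_r2),
    where x_pbest is drawn from the best p*100% individuals and r1, r2 are
    distinct random indices.  The external archive and the per-individual
    adaptation of F and CR used by full JADE are omitted here."""
    rng = rng or np.random.default_rng()
    n_pop, dim = X.shape
    top = np.argsort(fitness)[:max(1, int(np.ceil(p * n_pop)))]
    pbest = X[rng.choice(top)]
    candidates = [j for j in range(n_pop) if j != i]
    r1, r2 = rng.choice(candidates, size=2, replace=False)
    v = X[i] + F * (pbest - X[i]) + F * (X[r1] - X[r2])   # mutation
    cross = rng.random(dim) < CR
    cross[rng.integers(dim)] = True                        # guarantee one gene crosses
    return np.where(cross, v, X[i])                        # binomial crossover

if __name__ == "__main__":
    f = lambda x: float(np.sum(x ** 2))
    rng = np.random.default_rng(0)
    X = rng.uniform(-5, 5, (20, 8))
    fit = np.array([f(x) for x in X])
    trial = jade_current_to_pbest(X, fit, i=0, rng=rng)
    print(f(trial), fit[0])
```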