Journal of Computational Design and Engineering,
Journal Year: 2023
Volume and Issue: 11(1), P. 12 - 33
Published: Dec. 20, 2023
Abstract
As science and technology advance, the need for novel optimization techniques has increased. The recently proposed metaheuristic algorithm, the gradient-based optimizer (GBO), is rooted in the gradient-based Newton's method, which gives GBO a more concrete theoretical foundation. However, its gradient search rule (GSR) and local escaping operator (LEO) still have some shortcomings: an insufficient updating method and a simple selection process limit the performance of the algorithm. In this paper, an improved version, called RL-SDOGBO, is proposed to compensate for the above shortcomings. First, during the GSR phase, the Spearman rank correlation coefficient is used to determine the weak solutions on which to perform dynamic opposite learning. This operation assists the algorithm in escaping from local optima and enhances its exploration capability. Secondly, to optimize the exploitation capability, reinforcement learning is used to guide the solution update modes of the LEO operator. RL-SDOGBO is tested on 12 classical benchmark functions and the CEC2022 benchmark functions against seven representative metaheuristics, respectively.
The impact of the improvements, scalability, running time, and the exploration-exploitation balance are analyzed and discussed. Combining the experimental results and statistical results, RL-SDOGBO exhibits excellent numerical performance and provides high-quality solutions in most cases. In addition, RL-SDOGBO is also used to solve the anchor clustering problem in small target detection, making it a potential and competitive option.
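The Spearman-based screening of weak solutions followed by dynamic opposite learning can be illustrated with a short sketch. The Python snippet below is a minimal illustration, not the authors' implementation: it assumes weak solutions are those whose Spearman rank correlation with the current best solution falls below a hypothetical threshold, and it uses the commonly cited dynamic-opposite form X_do = X + w*r1*(r2*X_opp − X); the weighting factor `w`, the threshold, and the greedy replacement rule are assumptions introduced here for demonstration.

```python
import numpy as np
from scipy.stats import spearmanr

def dynamic_opposite_learning(pop, fitness, best, lb, ub, w=3.0, threshold=0.0):
    """Apply dynamic opposite learning to 'weak' members of the population
    (minimal sketch; thresholds and weights are illustrative assumptions)."""
    new_pop = pop.copy()
    for i, x in enumerate(pop):
        rho, _ = spearmanr(x, best)              # rank agreement with the best solution
        if rho < threshold:                      # low agreement -> treat as a weak solution
            x_opp = lb + ub - x                  # standard opposite point
            r1, r2 = np.random.rand(2)
            x_do = x + w * r1 * (r2 * x_opp - x) # dynamic opposite candidate
            x_do = np.clip(x_do, lb, ub)         # keep inside the search bounds
            if fitness(x_do) < fitness(x):       # greedy selection (minimization)
                new_pop[i] = x_do
    return new_pop
```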
Scientific Reports,
Journal Year: 2025
Volume and Issue: 15(1)
Published: Feb. 28, 2025
Abstract
Developments in object detection algorithms are critical for urban planning, environmental monitoring, surveillance, and many other applications. The primary objective of the article was to improve detection precision and model efficiency. The paper compared the performance of six different metaheuristic optimization algorithms, including the Gray Wolf Optimizer (GWO), Particle Swarm Optimization (PSO), the Genetic Algorithm (GA), the Remora Optimization Algorithm (ROA), the Aquila Optimizer (AO), and the Hybrid PSO–GWO (HPSGWO), combined with YOLOv7 and YOLOv8. The study included two distinct remote sensing datasets, RSOD and VHR-10. Many measures, such as precision, recall, and mean average precision (mAP), were used during the training, validation, and testing processes, as well as a fit score. The results show significant improvements in both YOLO variants following the use of these optimization strategies.
The GWO-optimized model achieved 0.96 mAP@50 and 0.69 mAP@50:95, and the HPSGWO-optimized YOLOv8 achieved 0.97 mAP@50 and 0.72 mAP@50:95, the best results on the RSOD dataset. Similarly, the optimized versions on the VHR-10 dataset achieved 0.87 mAP@50 and 0.58 mAP@50:95, and 0.99 mAP@50 for YOLOv8, indicating greater performance. The findings supported the usefulness of these strategies in increasing recall rates and demonstrated their major significance in improving recognition tasks in remote sensing imaging, opening up a viable route for applications in a variety of disciplines.
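In broad terms, such metaheuristics treat the detector's training hyperparameters (for example learning rate, momentum, and weight decay) as a search vector and use validation mAP as the fitness signal. The sketch below is a minimal PSO-style loop in Python under that assumption; `evaluate_yolo_map` is a hypothetical stand-in (a dummy surrogate so the sketch runs) for actually training and validating a YOLO model with the candidate hyperparameters, and the bounds and swarm settings are illustrative, not the study's configuration.

```python
import numpy as np

def evaluate_yolo_map(hp):
    """Hypothetical stand-in: in practice this would train/validate a YOLO model
    with hp = [lr0, momentum, weight_decay] and return its validation mAP@50.
    A dummy analytic surrogate is used here so the sketch is runnable."""
    lr0, momentum, weight_decay = hp
    return -((np.log10(lr0) + 2.0) ** 2) - (momentum - 0.9) ** 2 - weight_decay

def pso_tune(n_particles=8, iters=10,
             lb=np.array([1e-4, 0.6, 0.0]), ub=np.array([1e-1, 0.99, 1e-3])):
    dim = len(lb)
    x = lb + np.random.rand(n_particles, dim) * (ub - lb)   # particle positions
    v = np.zeros_like(x)                                    # particle velocities
    pbest, pbest_f = x.copy(), np.full(n_particles, -np.inf)
    gbest, gbest_f = x[0].copy(), -np.inf
    for _ in range(iters):
        for i in range(n_particles):
            f = evaluate_yolo_map(x[i])                      # maximize fitness (mAP)
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = x[i].copy(), f
            if f > gbest_f:
                gbest, gbest_f = x[i].copy(), f
        r1, r2 = np.random.rand(2)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lb, ub)
    return gbest, gbest_f

if __name__ == "__main__":
    print(pso_tune())
```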
Journal of Computational Design and Engineering,
Journal Year: 2023
Volume and Issue: 10(6), P. 2065 - 2093
Published: Oct. 5, 2023
Abstract
The beluga whale optimization (BWO) algorithm is a recently proposed metaheuristic that simulates three behaviors: whales interacting in pairs to perform mirror swimming, the population sharing information to cooperate in predation, and whale fall. However, the performance of BWO still needs to be improved to enhance its practicality. This paper proposes a modified beluga whale optimization (MBWO) with a multi-strategy. It was inspired by two group behaviors of beluga whales: gathering for foraging and searching for new habitats through long-distance migration, which give a group aggregation strategy (GAs) and a migration strategy (Ms). GAs can improve the local development ability and accelerate the overall rate of convergence through a fine search; Ms randomly moves individuals towards the periphery of the population, enhancing the ability to jump out of local optima. In order to verify the performance of MBWO, this article conducted comprehensive testing on MBWO using 23 benchmark functions, IEEE CEC2014, and CEC2021. The experimental results indicate that MBWO has a strong optimization ability. This paper also tests MBWO's ability to solve practical engineering problems using five engineering problems. The final results prove the effectiveness of MBWO in solving practical engineering problems.
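The migration idea, moving selected individuals toward the periphery of the current population so they can jump out of local optima, can be sketched generically. The Python snippet below is a minimal illustration under the assumption that "periphery" means stepping away from the population centroid; the selection fraction and step size are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def migrate_towards_periphery(pop, lb, ub, frac=0.2, step=0.5):
    """Move a random subset of individuals away from the population centroid
    (illustrative sketch of a migration-style diversification step)."""
    new_pop = pop.copy()
    centroid = pop.mean(axis=0)                        # population centre
    n_move = max(1, int(frac * len(pop)))
    idx = np.random.choice(len(pop), n_move, replace=False)
    for i in idx:
        direction = new_pop[i] - centroid              # points toward the periphery
        r = np.random.rand(*direction.shape)
        new_pop[i] = np.clip(new_pop[i] + step * r * direction, lb, ub)
    return new_pop
```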
Computer Modeling in Engineering & Sciences,
Journal Year: 2023
Volume and Issue: 139(3), P. 2557 - 2604
Published: Dec. 26, 2023
This research paper presents a novel optimization method called the Synergistic Swarm Optimization Algorithm (SSOA). The SSOA combines the principles of swarm intelligence and synergistic cooperation to search for optimal solutions efficiently. A cooperation mechanism is employed, where particles exchange information and learn from each other to improve their search behaviors. This enhances the exploitation of promising regions in the search space while maintaining exploration capabilities. Furthermore, adaptive mechanisms, such as dynamic parameter adjustment and diversification strategies, are incorporated to balance exploration and exploitation. By leveraging the collaborative nature of swarms and integrating synergistic cooperation, the SSOA aims to achieve superior convergence speed and solution quality compared to existing algorithms. The effectiveness of the proposed SSOA is investigated by solving 23 benchmark functions and various engineering design problems. The experimental results highlight the potential of the SSOA in addressing challenging optimization problems, making it a promising tool for a wide range of applications in engineering and beyond. Matlab codes are available at: https://www.mathworks.com/matlabcentral/fileexchange/153466-synergistic
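The dynamic parameter adjustment mentioned above can be illustrated with a generic swarm update step. The Python snippet below is a minimal sketch, not the SSOA implementation (the authors' Matlab code is at the link above): it assumes a linearly decreasing inertia weight so that early iterations favor exploration and later ones favor exploitation, with illustrative coefficient values.

```python
import numpy as np

def adaptive_swarm_step(x, v, pbest, gbest, t, t_max,
                        w_max=0.9, w_min=0.4, c1=2.0, c2=2.0):
    """One swarm update with a linearly decreasing inertia weight:
    large w early (exploration), small w late (exploitation)."""
    w = w_max - (w_max - w_min) * t / t_max            # dynamic parameter adjustment
    r1 = np.random.rand(*x.shape)
    r2 = np.random.rand(*x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```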
Journal of Computational Design and Engineering,
Journal Year: 2023
Volume and Issue: 10(6), P. 2223 - 2250
Published: Oct. 26, 2023
Abstract
The coati optimization algorithm (COA) is a meta-heuristic algorithm proposed in 2022. It creates mathematical models according to the habits and social behaviors of coatis: (i) in the group organization of coatis, half of the coatis climb trees to chase their prey away, while the other half wait beneath to catch it; (ii) coatis' behavior of avoiding predators, which gives the algorithm a strong global exploration ability. However, over the course of our experiments, we uncovered opportunities for enhancing the algorithm's performance. When confronted with intricate problems, certain limitations surfaced. Much like a long-nosed raccoon gradually narrowing its search range as it approaches the optimal solution, COA exhibited tendencies that could result in reduced convergence speed and the risk of becoming trapped in local optima.
In this paper, we propose an improved coati optimization algorithm (ICOA) to enhance the algorithm's efficiency. Through a sound-based envelopment strategy, coatis can capture prey more quickly and accurately, allowing the algorithm to converge more rapidly. By employing a physical exertion strategy, coatis have a greater variety of escape options when being chased, thereby strengthening the algorithm's exploratory capabilities and its ability to escape local optima. Finally, a lens opposition-based learning strategy is added to further improve the algorithm's performance. To validate the performance of ICOA, we conducted tests using the IEEE CEC2014 and CEC2017 benchmark functions, as well as six engineering problems.
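Lens opposition-based learning, the third strategy mentioned above, generates an opposite candidate by imaging the current solution through a "lens" placed at the midpoint of the search interval. The Python snippet below sketches the commonly used formulation x' = (lb+ub)/2 + (lb+ub)/(2k) − x/k; the scaling factor `k` and the greedy replacement shown in the usage comment are illustrative assumptions, not necessarily ICOA's exact settings.

```python
import numpy as np

def lens_obl(x, lb, ub, k=2.0):
    """Lens opposition-based learning: image the point x through a 'lens'
    placed at the midpoint of [lb, ub]; k > 1 shrinks the image toward it."""
    mid = (lb + ub) / 2.0
    x_lens = mid + mid / k - x / k        # common lens-imaging opposite point
    return np.clip(x_lens, lb, ub)

# Hypothetical usage with a greedy replacement (minimization):
#   x_new = lens_obl(x, lb, ub)
#   x = x_new if f(x_new) < f(x) else x
```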
Artificial Intelligence Review,
Journal Year: 2024
Volume and Issue: 58(1)
Published: Nov. 4, 2024
The sand cat swarm optimization algorithm (SCSO) is a metaheuristic proposed by Amir Seyyedabbasi et al. SCSO mimics the predatory behavior of sand cats, which gives it strong optimization performance. However, as the number of iterations increases, the moving efficiency of the sand cat decreases, resulting in a decline in search ability. The convergence speed gradually slows down, and it is easy to fall into a local optimum, making it difficult to find a better solution. In order to improve the movement efficiency of the sand cat, enhance the global search ability, and improve the performance of the algorithm, an improved Sand Cat Swarm Optimization (ISCSO) was proposed.
In ISCSO, we propose a low-frequency noise strategy and a spiral contraction walking strategy according to the habits of the sand cat, and add a random opposition-based learning strategy and a restart strategy. The frequency factor is used to control the direction in which hunting is carried out, which effectively increases the randomness of the population, expands the search range, enhances the global search ability, and accelerates the convergence of the algorithm.
We use 23 standard benchmark functions and IEEE CEC2014 to compare ISCSO with 10 algorithms and prove the effectiveness of ISCSO. Finally, ISCSO is evaluated using five constrained engineering design problems. In the results of these problems, ISCSO has a 3.08%, 0.23%, 0.37%, 22.34%, and 1.38% improvement compared with the original SCSO, respectively, which proves its practical application value. The source code website for ISCSO is https://github.com/Ruiruiz30/ISCSO-s-code.
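Two of the added components, random opposition-based learning and the restart strategy, can be sketched generically. The Python snippet below follows the common random-opposition formula x_ro = lb + ub − r·x and restarts an individual after a fixed number of stagnant iterations; the stagnation threshold and the greedy acceptance implied by the usage are assumptions made here for illustration, not ISCSO's exact rules (see the linked repository for the authors' code).

```python
import numpy as np

def random_opposition(x, lb, ub):
    """Random opposition-based learning: a randomly scaled opposite point."""
    return np.clip(lb + ub - np.random.rand(*x.shape) * x, lb, ub)

def restart_if_stagnant(x, stall_count, lb, ub, max_stall=20):
    """Restart strategy: reinitialize an individual that has not improved
    for max_stall consecutive iterations (threshold is an assumption)."""
    if stall_count >= max_stall:
        return lb + np.random.rand(*x.shape) * (ub - lb), 0
    return x, stall_count
```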