Applied Sciences,
Journal year: 2023,
Issue 13(18), pp. 10397 - 10397
Published: Sep. 17, 2023
Drones are widely used for wildlife monitoring. Deep learning algorithms are key to the success of monitoring with drones, although they face the problem of detecting small targets. To solve this problem, we have introduced the SE-YOLO model, which incorporates a channel self-attention mechanism into the advanced real-time object detection algorithm YOLOv7, enabling the model to perform effectively on small targets. However, there is another barrier: the lack of publicly available UAV aerial wildlife datasets hampers research on such algorithms. To fill this gap, we present a large-scale, multi-class, high-quality dataset called WAID (Wildlife Aerial Images from Drone), which contains 14,375 images captured under different environmental conditions, covering six wildlife species and multiple habitat types. We conducted a statistical analysis experiment, an algorithm comparison experiment, and a generalization experiment. The statistical analysis experiment demonstrated the dataset's characteristics both quantitatively and intuitively. The comparison and generalization experiments compared different types of algorithms, as well as the SE-YOLO method, from the perspective of the practical application of UAVs. The experimental results show that WAID is suitable for the study of wildlife monitoring with UAVs and that SE-YOLO is the most effective in this scenario, with an mAP of up to 0.983. This study brings new methods, data, and inspiration to the field of wildlife monitoring by UAVs.
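The abstract does not detail how the channel self-attention is realised inside YOLOv7. The following is only a minimal PyTorch sketch of a squeeze-and-excitation style channel attention block, one common form such a mechanism takes; the class name ChannelAttention and the reduction ratio of 16 are illustrative assumptions, not the paper's code.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Illustrative squeeze-and-excitation style channel attention block."""
    def __init__(self, channels: int, reduction: int = 16):  # reduction ratio is an assumption
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # squeeze: global average over spatial dims
        self.fc = nn.Sequential(              # excitation: learn one gate per channel
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights                    # reweight feature channels

# Example: reweighting a backbone feature map (shape chosen arbitrarily).
features = torch.randn(1, 256, 40, 40)
attended = ChannelAttention(256)(features)

A block of this kind can be dropped between backbone or neck stages without changing feature-map shapes, which is what makes channel attention easy to graft onto an existing detector.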
Machine Learning and Knowledge Extraction,
Journal year: 2023,
Issue 5(4), pp. 1680 - 1716
Published: Nov. 20, 2023
YOLO has become a central real-time object detection system for robotics, driverless cars, and video monitoring applications. We present a comprehensive analysis of YOLO's evolution, examining the innovations and contributions in each iteration from the original YOLO up to YOLOv8, YOLO-NAS, and YOLO with transformers. We start by describing the standard metrics and postprocessing; then, we discuss the major changes in network architecture and training tricks for each model. Finally, we summarize the essential lessons from YOLO's development and provide a perspective on its future, highlighting potential research directions to enhance real-time object detection systems.
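For readers unfamiliar with the "standard metrics and postprocessing" the review covers, below is a minimal NumPy sketch of IoU and greedy non-maximum suppression, the usual YOLO postprocessing step; the [x1, y1, x2, y2] box format and the 0.5 threshold are assumptions chosen for illustration.

import numpy as np

def iou(a, b):
    """Intersection over union of two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression; returns indices of the boxes to keep."""
    order = list(np.argsort(scores)[::-1])   # highest-scoring boxes first
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < iou_thresh]
    return keep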
IEEE Access,
Journal year: 2024,
Issue 12, pp. 42816 - 42833
Published: Jan. 1, 2024
This paper implements a systematic methodological approach to review the evolution of YOLO variants. Each variant is dissected by examining its internal architectural composition, providing a thorough understanding of its structural components. Subsequently, the review highlights the key innovations introduced in each variant, shedding light on its incremental refinements. The review also includes benchmarked performance metrics, offering a quantitative measure of each variant's capabilities. It further presents applications of the variants across a diverse range of domains, manifesting their real-world impact. This structured approach ensures a comprehensive examination of YOLO's journey, methodically communicating the advancements and refinements before delving into domain applications. It is envisioned that the incorporation of concepts such as federated learning can introduce a collaborative training paradigm, where models benefit from data on multiple edge devices, enhancing privacy, adaptability, and generalisation.
IEEE Access,
Journal year: 2024,
Issue 12, pp. 59782 - 59806
Published: Jan. 1, 2024
Deep learning has revolutionized object detection, with YOLO (You Only Look Once) leading in real-time accuracy. However, detecting moving objects in visual streams presents distinct challenges. This paper proposes a refined YOLOv8 detection model, emphasizing motion-specific detections in varied contexts. Through tailored preprocessing and architectural adjustments, we heighten the model's sensitivity to movements. Rigorous testing against the KITTI, LASIESTA, PESMOD, and MOCS benchmark datasets revealed that the modified model outperforms state-of-the-art models, especially in environments with significant movement. Specifically, our model achieved an accuracy of 90%, maintained its mean Average Precision (mAP) while processing at 30 frames per second (FPS), and reached an Intersection over Union (IoU) score of 80%. The model offers detailed insight into motion trajectories, proving invaluable in areas like security, traffic management, and film analysis, where motion understanding is critical. As the importance of dynamic scene interpretation grows in artificial intelligence and computer vision, the proposed enhanced model highlights the potential of specialized detectors and underscores the significance of these findings for the evolving field of object detection.
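The abstract does not specify which preprocessing steps were tailored for motion. Purely as an illustration of one common motion-emphasizing preprocessing technique, and not as the authors' pipeline, here is a frame-differencing sketch with OpenCV.

import cv2

def motion_mask(prev_frame, curr_frame, diff_thresh=25):
    """Binary mask of pixels that changed between two consecutive BGR frames."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(curr_gray, prev_gray)   # per-pixel absolute difference
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    return mask

A mask like this could, for example, be stacked onto the RGB input as an extra channel or used to weight detections toward moving regions; the specific adjustments used in the paper are not described in the abstract.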
Agronomy,
Journal year: 2023,
Issue 13(2), pp. 477 - 477
Published: Feb. 6, 2023
Soybeans (Glycine max (L.) Merr.), a popular food resource worldwide, have various uses throughout the industry, from everyday foods and health functional foods to cosmetics. Soybeans are vulnerable to pests such as stink bugs, beetles, mites, and moths, which reduce yields. Riptortus pedestris (R. pedestris) has been reported to cause damage to soybean pods and leaves during the growing season. In this study, an experiment was conducted to detect R. pedestris under three different environmental conditions (pod filling stage, maturity stage, and artificial cage) by developing a surveillance platform based on an unmanned ground vehicle (UGV) and a GoPro CAM. The deep learning models used in this study (MRCNN, YOLOv3, and Detectron2) can be quickly challenged (i.e., built with lightweight parameters) and used immediately through a web application. The image dataset was distributed by random selection for training, validation, and testing and then preprocessed through labeling and annotation. The deep learning models localized and classified R. pedestris individuals using bounding box and masking data. The models achieved high performances, at 0.952, 0.716, and 0.873, respectively, represented by the calculated mean of the average precision (mAP) value. The manufactured platform will enable the identification of R. pedestris in the field and serve as an effective tool for insect forecasting at the early stage of pest outbreaks in crop production.
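The three scores quoted above are means of per-class average precision (mAP). As a brief reminder of how such a value is aggregated, here is a NumPy sketch of the standard all-point interpolated AP followed by its mean over classes; the matching of predictions to ground truth that produces the precision-recall points is omitted.

import numpy as np

def average_precision(recall, precision):
    """All-point interpolated AP: area under the precision-recall curve."""
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([0.0], precision, [0.0]))
    p = np.maximum.accumulate(p[::-1])[::-1]    # make precision non-increasing
    changed = np.where(r[1:] != r[:-1])[0]      # points where recall changes
    return float(np.sum((r[changed + 1] - r[changed]) * p[changed + 1]))

def mean_average_precision(per_class_ap):
    """mAP is the arithmetic mean of per-class AP values."""
    return float(np.mean(per_class_ap))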
Information,
Journal year: 2023,
Issue 14(4), pp. 218 - 218
Published: April 3, 2023
Pedestrian tracking and detection have become critical aspects of advanced driver assistance systems (ADASs), due to their academic and commercial potential. Their objective is to locate various pedestrians in videos and assign them unique identities. The data association task is problematic, particularly when dealing with inter-pedestrian occlusion. This occurs when multiple pedestrians cross paths or move too close together, making it difficult for the system to identify and track individual pedestrians. Inaccurate tracking can lead to false alarms, missed detections, and incorrect decisions. To overcome this challenge, our paper focuses on improving the pedestrian tracking system's Deep-SORT algorithm, in which data association is solved as a linear optimization problem using a newly generated cost matrix. We introduce a set of new cost matrices that rely on metrics such as intersections, distances, and bounding boxes. To evaluate the trackers in real time, we use YOLOv5 to detect pedestrians in images. We also perform experimental evaluations on the Multiple Object Tracking 17 (MOT17) challenge dataset. The proposed cost matrices demonstrate promising results, showing an improvement in most MOT performance metrics compared to the default intersection over union (IOU) cost matrix.
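The abstract frames data association as a linear optimization over a cost matrix. A minimal sketch of that idea using an IoU-based cost and SciPy's Hungarian solver follows; the paper's own cost matrices, which also draw on distances and bounding-box metrics, are not reproduced here, and the 0.3 gating threshold is an assumption.

import numpy as np
from scipy.optimize import linear_sum_assignment

def pairwise_iou(tracks, dets):
    """Pairwise IoU between two sets of [x1, y1, x2, y2] boxes."""
    tracks, dets = np.asarray(tracks, float), np.asarray(dets, float)
    tl = np.maximum(tracks[:, None, :2], dets[None, :, :2])   # top-left of intersections
    br = np.minimum(tracks[:, None, 2:], dets[None, :, 2:])   # bottom-right of intersections
    inter = np.prod(np.clip(br - tl, 0, None), axis=2)
    area_t = np.prod(tracks[:, 2:] - tracks[:, :2], axis=1)
    area_d = np.prod(dets[:, 2:] - dets[:, :2], axis=1)
    return inter / (area_t[:, None] + area_d[None, :] - inter + 1e-9)

def associate(track_boxes, det_boxes, iou_min=0.3):
    """Match tracks to detections by minimizing a 1 - IoU cost matrix."""
    cost = 1.0 - pairwise_iou(track_boxes, det_boxes)
    rows, cols = linear_sum_assignment(cost)                  # Hungarian algorithm
    # discard matches whose overlap falls below the gating threshold
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= 1.0 - iou_min]

Replacing 1 - IoU with a distance-based or combined cost changes only how the matrix is filled; the assignment step stays the same, which is what makes cost-matrix design a natural place to improve association under occlusion.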