Enhancing Autonomous Orchard Navigation: A Real-Time Convolutional Neural Network-Based Obstacle Classification System for Distinguishing ‘Real’ and ‘Fake’ Obstacles in Agricultural Robotics
Agriculture, Journal Year: 2025, Volume and Issue: 15(8), P. 827. Published: April 10, 2025.
Autonomous navigation in agricultural environments requires precise obstacle classification to ensure collision-free movement. This study proposes a convolutional neural network (CNN)-based model designed to enhance obstacle classification for agricultural robots, particularly in orchards. Building upon a previously developed YOLOv8n-based real-time detection system, the model incorporates Ghost Modules and Squeeze-and-Excitation (SE) blocks to improve feature extraction while maintaining computational efficiency.
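For illustration, the sketch below shows one way the two named building blocks fit together in PyTorch; the channel counts, kernel sizes, and `reduction` parameter are assumptions for this example, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Ghost Module sketch: a small standard convolution produces 'primary'
    feature maps, and a cheap depthwise convolution derives the remaining
    'ghost' maps (out_ch is assumed even here)."""
    def __init__(self, in_ch, out_ch, kernel=1, dw_kernel=3):
        super().__init__()
        primary_ch = out_ch // 2
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, primary_ch, kernel, padding=kernel // 2, bias=False),
            nn.BatchNorm2d(primary_ch), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(
            nn.Conv2d(primary_ch, primary_ch, dw_kernel, padding=dw_kernel // 2,
                      groups=primary_ch, bias=False),
            nn.BatchNorm2d(primary_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        primary = self.primary(x)
        return torch.cat([primary, self.cheap(primary)], dim=1)

class SEBlock(nn.Module):
    """Squeeze-and-Excitation sketch: reweight channels with globally pooled statistics."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        weights = self.fc(x.mean(dim=(2, 3))).view(b, c, 1, 1)
        return x * weights

# Example: feats = SEBlock(32)(GhostModule(3, 32)(torch.randn(1, 3, 64, 64)))
```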
Obstacles are categorized as “Real” (those that physically impact navigation, such as tree trunks and persons) and “Fake” (those that do not, such as tall weeds and branches), allowing informed navigation decisions. The model was trained on separate orchard and campus datasets, fine-tuned using Hyperband optimization, and evaluated on an external test set to assess generalization to unseen obstacles.
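Hyperband allocates training budget by successive halving: many candidate configurations are trained briefly, only the best fraction survive, and the survivors receive a larger budget. The framework-agnostic sketch below illustrates that inner loop; `train_and_score` and the search space are hypothetical placeholders, not details from the study.

```python
import random

def sample_config():
    # Hypothetical search space, for illustration only.
    return {"lr": 10 ** random.uniform(-4, -2), "dropout": random.uniform(0.1, 0.5)}

def successive_halving(train_and_score, n_configs=27, min_epochs=1, eta=3):
    """Core loop used inside Hyperband: train each candidate for a small budget,
    keep the top 1/eta by validation score, then multiply the budget by eta."""
    configs = [sample_config() for _ in range(n_configs)]
    budget = min_epochs
    while len(configs) > 1:
        ranked = sorted(configs,
                        key=lambda cfg: train_and_score(cfg, epochs=budget),
                        reverse=True)  # higher validation accuracy first
        configs = ranked[:max(1, len(ranked) // eta)]
        budget *= eta
    return configs[0]
```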
The model’s robustness was tested under varied lighting conditions, including low-light scenarios, to confirm real-world applicability. Computational efficiency was analyzed based on inference speed, memory consumption, and hardware requirements. Comparative analysis against state-of-the-art models (VGG16, ResNet50, MobileNetV3, DenseNet121, EfficientNetB0, InceptionV3) confirmed the proposed model’s superior precision (p), recall (r), and F1-score in complex scenarios. The model maintained strong performance across diverse environmental conditions and varying illumination. Furthermore, the analysis revealed that the orchard-combined model achieved the highest inference speed at 2.31 FPS, offering a balance between accuracy and efficiency.
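Frames per second here is wall-clock inference throughput; a minimal way to estimate it for any classifier is sketched below, with the model and frame list as placeholders.

```python
import time
import torch

@torch.no_grad()
def measure_fps(model, frames, device="cpu"):
    """Average single-frame inference throughput (frames per second).
    `frames` is assumed to be a list of CHW image tensors."""
    model.eval().to(device)
    if device.startswith("cuda"):
        torch.cuda.synchronize()          # make GPU timing meaningful
    start = time.perf_counter()
    for frame in frames:
        model(frame.unsqueeze(0).to(device))
    if device.startswith("cuda"):
        torch.cuda.synchronize()
    return len(frames) / (time.perf_counter() - start)
```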
When deployed in real time, the model achieved 95.0% accuracy in orchards and 92.0% in campus environments. The system demonstrated a false positive rate of 8.0% in the campus environment and 2.0% in the orchard, with consistent false negative rates across both settings. These results validate the effectiveness of the model in differentiating Real and Fake obstacles in agricultural settings. Its generalization to unseen obstacles and its computational efficiency make it well-suited for deployment in agriculture. Future work will focus on enhancing robustness, improving performance under occlusion, and expanding dataset diversity to further strengthen generalization.
Language: English
Shaping the Future of Horticulture: Innovative Technologies, Artificial Intelligence, and Robotic Automation Through a Bibliometric Lens
Horticulturae, Journal Year: 2025, Volume and Issue: 11(5), P. 449. Published: April 22, 2025.
This study conducts a bibliometric and content analysis based on publications indexed in the Web of Science Core Collection, aiming to map the evolution of key themes in horticultural research in the context of technological innovation and sustainability.
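Keyword mapping of this kind can be reproduced from a Web of Science record export; the sketch below counts author-keyword co-occurrences, with the file name and column label assumed for illustration rather than taken from the study.

```python
import csv
from collections import Counter
from itertools import combinations

def keyword_cooccurrence(path="wos_export.csv", column="Author Keywords"):
    """Count how often pairs of author keywords appear in the same record."""
    pairs = Counter()
    with open(path, newline="", encoding="utf-8") as handle:
        for record in csv.DictReader(handle):
            keywords = sorted({kw.strip().lower()
                               for kw in record.get(column, "").split(";")
                               if kw.strip()})
            pairs.update(combinations(keywords, 2))
    return pairs

# The most frequent pairs, e.g. keyword_cooccurrence().most_common(10),
# form the densest edges of a keyword co-occurrence network.
```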
The results reveal a strong orientation toward digitalization and automation, particularly through the integration of artificial intelligence, mechatronic systems, and sensor-based monitoring in crop management. In the field of biotechnology, keywords such as gene expression, genetic diversity, and micropropagation reflect sustained interest in improving crop resilience, disease resistance, and in vitro propagation techniques. Furthermore, concepts such as environmental control, soilless culture, energy efficiency, and co-generation highlight a focus on optimizing growing conditions and integrating renewable energy sources into protected cultivation systems. The geographical distribution of studies highlights increased academic output from countries like India and regions such as sub-Saharan Africa, reflecting a global effort to transfer advanced technologies to vulnerable areas. Moreover, collaboration networks are dominated by leading institutions such as Wageningen University, which act as hubs for knowledge diffusion. The findings suggest that future research should prioritize the development of durable, energy-efficient technologies adapted to various agro-climatic zones. It is recommended that policymakers and stakeholders support interdisciplinary initiatives, promote technology transfer mechanisms, and ensure equitable access for smallholder farmers in emerging economies.
Language: English
CV-YOLOv10-AR-M: Foreign Object Detection in Pu-Erh Tea Based on Five-Fold Cross-Validation
Foods, Journal Year: 2025, Volume and Issue: 14(10), P. 1680. Published: May 9, 2025.
To address the problem of detecting foreign bodies in Pu-erh tea, this study proposes an intelligent detection method based on an improved YOLOv10 network. By introducing the MPDIoU loss function, the network is optimized to effectively enhance the positioning accuracy of the model against complex backgrounds and to improve the detection of small target objects.
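MPDIoU augments the standard IoU objective with the normalized squared distances between corresponding box corners. The sketch below follows the published MPDIoU formulation in PyTorch and is not necessarily the exact implementation used in CV-YOLOv10-AR-M.

```python
import torch

def mpdiou_loss(pred, target, img_w, img_h, eps=1e-7):
    """MPDIoU loss: 1 - (IoU - d1^2/(w^2+h^2) - d2^2/(w^2+h^2)), where d1 and d2
    are the distances between the top-left and bottom-right corners of the
    predicted and target boxes. Boxes are (N, 4) tensors in (x1, y1, x2, y2)."""
    ix1 = torch.max(pred[:, 0], target[:, 0])
    iy1 = torch.max(pred[:, 1], target[:, 1])
    ix2 = torch.min(pred[:, 2], target[:, 2])
    iy2 = torch.min(pred[:, 3], target[:, 3])
    inter = (ix2 - ix1).clamp(min=0) * (iy2 - iy1).clamp(min=0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)

    norm = img_w ** 2 + img_h ** 2              # normalizes corner distances
    d1 = (pred[:, 0] - target[:, 0]) ** 2 + (pred[:, 1] - target[:, 1]) ** 2
    d2 = (pred[:, 2] - target[:, 2]) ** 2 + (pred[:, 3] - target[:, 3]) ** 2
    return (1.0 - (iou - d1 / norm - d2 / norm)).mean()
```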
AssemFormer is used to optimize the network structure, improving the network’s ability to perceive small objects and its capacity to process global information. With the Rectangular Self-Calibrated Module, the prediction of the bounding box is optimized, further improving the classification and target-positioning abilities in complex scenes.
The results showed that the Box, Cls, and Dfl loss functions of CV-YOLOv10-AR-M in the One-to-Many Head task were, respectively, 14.60%, 19.74%, and 20.15% lower than those of the original network. In the One-to-One Head task, they decreased by 10.42%, 29.11%, and 20.15%, respectively. Compared with the original network, the accuracy, recall rate, and mAP were increased by 5.35%, 11.72%, and 8.32%, respectively. The improved model strengthens attention to targets of different sizes, complex backgrounds, and detailed information, providing effective technical support for quality control in the agricultural field.
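The five-fold cross-validation named in the title rotates the dataset through five train/validation partitions and averages the resulting scores; a generic scikit-learn sketch is shown below, with the dataset and the `train_and_eval` callable as placeholders.

```python
import numpy as np
from sklearn.model_selection import KFold

def five_fold_scores(samples, train_and_eval, seed=42):
    """Run five rotating train/validation splits and report mean and std of the
    per-fold score (e.g. mAP) returned by the user-supplied train_and_eval."""
    samples = np.asarray(samples)
    scores = []
    for train_idx, val_idx in KFold(n_splits=5, shuffle=True,
                                    random_state=seed).split(samples):
        scores.append(train_and_eval(samples[train_idx], samples[val_idx]))
    return float(np.mean(scores)), float(np.std(scores))
```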
Language: English