Remote Sensing, Journal Year: 2024, Volume and Issue: 16(23), P. 4385 - 4385, Published: Nov. 24, 2024
Phenotypic traits, such as plant height, internode length, and node count, are essential indicators of the growth status of tomato plants, carrying significant implications for research on genetic breeding and cultivation management. Deep learning algorithms for object detection and segmentation have been widely utilized to extract phenotypic parameters. However, segmentation-based methods are labor-intensive due to their requirement for extensive annotation during training, while object detection approaches exhibit limitations in capturing intricate structural features. To achieve real-time, efficient, and precise extraction of phenotypic traits of seedling tomatoes, a novel phenotyping approach based on 2D pose estimation was proposed. We enhanced a heatmap-free method, YOLOv8s-pose, by integrating the Convolutional Block Attention Module (CBAM) and Content-Aware ReAssembly of FEatures (CARAFE) to develop an improved YOLOv8s-pose (IYOLOv8s-pose) model, which efficiently focuses on salient image features with minimal parameter overhead while achieving superior recognition performance against complex backgrounds. IYOLOv8s-pose manifested a considerable enhancement in detecting bending points and stem nodes. In particular, it attained a detection Precision of 99.8%, an improvement of 2.9%, 5.4%, and 3.5% over RTMPose-s, YOLOv5s6-pose, and YOLOv7s-pose, respectively. Regarding plant height estimation, it achieved an RMSE of 0.48 cm and an rRMSE of 2%, a reduction of 65.1%, 68.1%, 65.6%, and 51.1% compared with RTMPose-s, YOLOv5s6-pose, YOLOv7s-pose, and YOLOv8s-pose, respectively. When confronted with more complex backgrounds, it also exhibited reductions of 15.5%, 23.9%, 27.2%, and 12.5% relative to RTMPose-s, YOLOv5s6-pose, YOLOv7s-pose, and YOLOv8s-pose. IYOLOv8s-pose achieves high precision while simultaneously enhancing efficiency and convenience, rendering it particularly well suited to extracting phenotypic parameters of tomato plants grown naturally within greenhouse environments. This innovative approach provides a new means for the rapid, intelligent, and real-time acquisition of phenotypic traits.
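As a rough illustration of the trait-extraction step described above, the sketch below derives plant height, internode lengths, and node count from 2D stem keypoints such as those a pose model might return; the keypoint ordering, the stem_traits helper, and the pixel-to-centimetre scale are illustrative assumptions, not code from the paper.

```python
# Hypothetical sketch: stem traits from 2D pose keypoints (base -> tip order).
import numpy as np

def stem_traits(keypoints_px: np.ndarray, cm_per_px: float):
    """keypoints_px: (N, 2) stem keypoints ordered from the base upward."""
    # Internode length: distance between consecutive stem keypoints.
    seg = np.diff(keypoints_px, axis=0)                    # (N-1, 2) vectors
    internode_cm = np.linalg.norm(seg, axis=1) * cm_per_px
    # Plant height approximated as the summed polyline length along the stem,
    # which tolerates bending better than a straight base-to-tip distance.
    height_cm = internode_cm.sum()
    return height_cm, internode_cm, len(keypoints_px)      # node count

# Example with made-up pixel coordinates and an assumed scale of 0.05 cm/px.
pts = np.array([[410, 900], [405, 760], [398, 615], [402, 470]], dtype=float)
height, internodes, nodes = stem_traits(pts, cm_per_px=0.05)
print(f"height ~ {height:.1f} cm, internodes ~ {np.round(internodes, 2)} cm, nodes = {nodes}")
```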
Agriculture, Journal Year: 2025, Volume and Issue: 15(3), P. 298 - 298, Published: Jan. 30, 2025
Phenotypic traits of fungi and their automated extraction are crucial for evaluating genetic diversity, breeding new varieties, and estimating yield. However, research on the high-throughput, rapid, and non-destructive extraction of fungal phenotypic traits using 3D point clouds remains limited. In this study, a smartphone is used to capture multi-view images of shiitake mushrooms (Lentinula edodes) from three different heights and angles, and the YOLOv8x model is employed to segment the primary image regions. The segmented images were reconstructed in 3D using Structure from Motion (SfM) and Multi-View Stereo (MVS). To automatically segment individual mushroom instances, we developed a CP-PointNet++ network integrated with clustering methods, achieving an overall accuracy (OA) of 97.45% for segmentation. The computed phenotype parameters correlated strongly with manual measurements, yielding R2 > 0.8 and nRMSE < 0.09 for pileus transverse and longitudinal diameters, R2 = 0.53 and RMSE = 3.26 mm for pileus height, R2 = 0.79 and RMSE = 0.12 for stipe diameter, and R2 = 0.65 and RMSE = 4.98 for stipe height. Using these parameters, yield estimation was performed with PLSR, SVR, RF, and GRNN machine learning models, the best of which demonstrated superior performance (R2 = 0.91). This approach is also adaptable to extracting phenotypic traits of other fungi, providing valuable support for breeding initiatives.
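For the yield-estimation step, the following hedged sketch compares PLSR, SVR, and RF regressors on placeholder phenotype features with scikit-learn; GRNN is omitted because scikit-learn ships no GRNN, and the feature layout and synthetic data are assumptions rather than the study's dataset.

```python
# Illustrative model comparison for yield regression from mushroom phenotypes.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# X columns (assumed): pileus transverse dia., pileus longitudinal dia.,
# pileus height, stipe diameter, stipe height.
X = rng.normal(size=(120, 5))
y = X @ np.array([0.6, 0.5, 0.2, 0.3, 0.1]) + rng.normal(scale=0.3, size=120)  # synthetic yield

models = {
    "PLSR": PLSRegression(n_components=3),
    "SVR": make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.1)),
    "RF": RandomForestRegressor(n_estimators=300, random_state=0),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean CV R2 = {r2:.3f}")
```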
Frontiers in Plant Science, Journal Year: 2024, Volume and Issue: 15, Published: Aug. 19, 2024
Wheat exhibits complex characteristics during its growth, such as extensive tillering, slender and soft leaves, and severe organ cross-obscuration, posing a considerable challenge for full-cycle phenotypic monitoring. To address this, this study presents a synthesized method based on SFM-MVS (Structure-from-Motion, Multi-View Stereo) processing for handling and segmenting wheat point clouds, covering the entire growth cycle from the seedling to the grain filling stages. First, a multi-view image acquisition platform was constructed to capture image sequences of wheat plants, and dense point clouds were generated using SFM-MVS technology. High-quality point clouds were produced by implementing improved Euclidean clustering combined with centroid, color filtering, and statistical filtering methods. Subsequently, segmentation of plant stems and leaves was performed using a region growing algorithm. Although performance was suboptimal during the jointing and booting stages due to extensive organ overlap, there was a salient improvement in leaf segmentation efficiency over the whole growth cycle. Finally, phenotypic parameters were analyzed across the different growth stages, comparing the automated measurements of plant height, leaf length, and leaf width with actual measurements. The results demonstrated coefficients of determination (R2) of 0.9979, 0.9977, and 0.995; root mean square errors (RMSE) of 1.0773 cm, 0.2612 cm, and 0.0335 cm; and relative RMSEs (RRMSE) of 2.1858%, 1.7483%, and 2.8462%, respectively. These results validate the reliability and accuracy of our proposed workflow for automatically extracting plant height, leaf length, and leaf width, indicating that the 3D reconstructed model achieves high precision and can quickly, accurately, and non-destructively extract phenotypic parameters. Additionally, convex hull volume, surface area, and crown area were extracted, providing a detailed analysis of dynamic changes throughout the growth cycle. ANOVA conducted across cultivars accurately revealed significant differences at various growth stages. This study proposes a convenient, rapid, and quantitative phenotyping method, offering crucial technical support for growth dynamics monitoring and applicable to the precise monitoring of wheat.
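A minimal sketch of the point-cloud clean-up stage, assuming Open3D and an input file named wheat_sfm_mvs.ply; DBSCAN is used here as a readily available stand-in for the improved Euclidean clustering, the thresholds are placeholders, and the region-growing stem/leaf segmentation is not reproduced.

```python
# Clean-up of a dense SfM-MVS wheat cloud: outlier, color, and cluster filtering.
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("wheat_sfm_mvs.ply")   # assumed path to the dense cloud

# 1) Statistical outlier filtering to drop sparse reconstruction noise.
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=30, std_ratio=2.0)

# 2) Color filtering: keep points whose green channel dominates (crude plant mask).
rgb = np.asarray(pcd.colors)
mask = (rgb[:, 1] > rgb[:, 0]) & (rgb[:, 1] > rgb[:, 2])
pcd = pcd.select_by_index(np.where(mask)[0])

# 3) Density-based clustering; retain the largest cluster as the plant.
labels = np.asarray(pcd.cluster_dbscan(eps=0.01, min_points=20))
largest = np.argmax(np.bincount(labels[labels >= 0]))
plant = pcd.select_by_index(np.where(labels == largest)[0])
o3d.io.write_point_cloud("wheat_clean.ply", plant)
```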
Plant Phenomics, Journal Year: 2024, Volume and Issue: 6, Published: Jan. 1, 2024
Rice panicle traits substantially influence grain yield, making them a primary target for phenotyping studies. However, most existing techniques are limited to controlled indoor environments and have difficulty capturing panicle traits under natural growth conditions. Here, we developed PanicleNeRF, a novel method that enables high-precision and low-cost reconstruction of three-dimensional (3D) models of rice panicles in the field based on video acquired with a smartphone. The proposed method combined the large model Segment Anything Model (SAM) and the small model You Only Look Once version 8 (YOLOv8) to achieve high-precision segmentation of rice panicle images. The neural radiance fields (NeRF) technique was then employed for 3D reconstruction using the images with 2D segmentation. Finally, the resulting point clouds were processed to successfully extract panicle traits. The results show that PanicleNeRF effectively addressed the 2D image segmentation task, achieving a mean F1 score of 86.9% and a mean Intersection over Union (IoU) of 79.8%, with nearly double the boundary overlap (BO) performance compared with YOLOv8. In terms of point cloud quality, PanicleNeRF significantly outperformed traditional SfM-MVS (structure-from-motion and multi-view stereo) methods, such as COLMAP and Metashape. Panicle length was accurately extracted, with an rRMSE of 2.94% for indica and 1.75% for japonica rice. The panicle volume estimated from the point clouds correlated strongly with grain number (R2 = 0.85 and 0.82) and grain mass (0.80 and 0.76). This method provides a low-cost solution for the high-throughput in-field phenotyping of rice panicles, accelerating the efficiency of rice breeding.
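The accuracy figures above rest on two standard metrics, rRMSE and R2; the short sketch below restates them in NumPy with placeholder measurements and is not part of PanicleNeRF itself.

```python
# Metric definitions used when comparing extracted and measured panicle traits.
import numpy as np

def rrmse(measured: np.ndarray, estimated: np.ndarray) -> float:
    """Relative RMSE: RMSE normalised by the mean of the measured values."""
    rmse = np.sqrt(np.mean((estimated - measured) ** 2))
    return rmse / measured.mean()

def r2(x: np.ndarray, y: np.ndarray) -> float:
    """Coefficient of determination of a simple linear fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1.0 - resid.var() / y.var()

# Placeholder data: manual vs. point-cloud panicle lengths (cm); volume vs. grain count.
length_manual = np.array([22.1, 24.5, 20.8, 23.3])
length_3d = np.array([21.7, 24.9, 21.2, 22.9])
print(f"panicle length rRMSE = {100 * rrmse(length_manual, length_3d):.2f}%")

volume = np.array([11.2, 14.8, 9.6, 13.1])   # cm^3, hypothetical
grains = np.array([128, 171, 104, 150])
print(f"volume vs. grain number R2 = {r2(volume, grains):.2f}")
```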
Agronomy, Journal Year: 2024, Volume and Issue: 14(9), P. 2016 - 2016, Published: Sept. 4, 2024
Traditional deep learning methods employing 2D images can only classify healthy and unhealthy seedlings; consequently, this study proposes a method by which to further classify healthy seedlings into primary and secondary grades and finally differentiate three classes of seedling through 3D point cloud analysis for the detection of useful eggplant transplants. Initially, RGB images of three types of substrate-cultivated seedlings (primary, secondary, and unhealthy) were collected and classified using ResNet50, VGG16, and MobileNetV2. Subsequently, a 3D point cloud was generated for the three seedling types, and a series of filtering processes (fast Euclidean clustering, color filtering, and voxel filtering) was employed to remove noise. Parameters (number of leaves, plant height, and stem diameter) extracted from the point clouds were found to be highly correlated with the manually measured values. Box plots show that the three classes were clearly differentiated by these parameters. The point clouds were ultimately used directly in the classification models PointNet++, dynamic graph convolutional neural network (DGCNN), and PointConv, in addition to a complementary operation for plants with missing leaves. The PointConv model demonstrated the best performance, with an average accuracy of 95.83%, an average precision of 95.88%, and a loss of 0.01. This study employs spatial feature information to analyse the different seedling categories more effectively than two-dimensional (2D) image and three-dimensional (3D) feature extraction methods. However, there is a paucity of studies applying 3D point clouds to predict seedling quality; consequently, the proposed method has the potential to identify seedling classes with high accuracy. Furthermore, it enables quality inspection during agricultural production.
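To make the parameter-extraction idea concrete, here is a hedged sketch of reading plant height and a rough stem diameter off a cleaned seedling point cloud; the slab-based diameter proxy and the synthetic cloud are simplifying assumptions, not the paper's procedure.

```python
# Rough trait readout from an (N, 3) seedling cloud in metres, z pointing up.
import numpy as np

def seedling_traits(points: np.ndarray, slab_m: float = 0.01):
    z = points[:, 2]
    height_m = z.max() - z.min()                       # plant height
    # Thin horizontal slab just above the substrate: the spread of its XY
    # footprint gives a crude stem-diameter estimate (no circle fitting).
    lo = z.min() + 0.005
    slab = points[(z > lo) & (z < lo + slab_m)]
    xy = slab[:, :2] - slab[:, :2].mean(axis=0)
    diameter_m = 2.0 * np.linalg.norm(xy, axis=1).mean()
    return height_m, diameter_m

# Synthetic stand-in cloud: a thin vertical "stem" plus a diffuse "leaf" blob.
rng = np.random.default_rng(1)
stem = np.column_stack([rng.normal(scale=0.002, size=(400, 2)),
                        rng.uniform(0.0, 0.12, size=400)])
leaves = rng.normal(scale=0.02, size=(600, 3)) + [0.0, 0.0, 0.10]
cloud = np.vstack([stem, leaves])
h, d = seedling_traits(cloud)
print(f"height ~ {h * 100:.1f} cm, stem diameter ~ {d * 1000:.1f} mm")
```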
Plants, Journal Year: 2024, Volume and Issue: 13(21), P. 3088 - 3088, Published: Nov. 2, 2024
Spectral imaging techniques have been widely applied in plant phenotype analysis to improve trait selection and genetic advantages. The latest developments and applications of various optical imaging techniques for plant phenotypes were reviewed, and their advantages and applicability were compared. X-ray computed tomography (X-ray CT) and light detection and ranging (LiDAR) are more suitable for the three-dimensional reconstruction of plant surfaces, tissues, and organs. Chlorophyll fluorescence imaging (ChlF) and thermal imaging (TI) can be used to measure the physiological characteristics of plants. Specific symptoms caused by nutrient deficiency can be detected by hyperspectral and multispectral imaging, LiDAR, and ChlF. Future research based on spectral imaging should be closely integrated with plant physiological processes. It can then effectively support related disciplines, such as metabolomics and genomics, and focus on micro-scale activities, such as oxygen transport and intercellular chlorophyll transmission.
Plant Methods, Journal Year: 2024, Volume and Issue: 20(1), Published: Aug. 20, 2024
Soybean seeds are susceptible to damage from Riptortus pedestris, which is a significant factor affecting the quality of soybean seeds. Currently, manual screening methods for soybean seeds are limited to visual inspection, making it difficult to identify seeds that are phenotypically defect-free but have been punctured by stink bugs below the surface. To facilitate the convenient and efficient identification of healthy soybean seeds, this paper proposes a seed pest-damage detection method based on spatial frequency domain imaging combined with RL-SVM. Firstly, the optical data of soybean seeds were obtained using the single integrating sphere technique, and the vigor index was obtained through germination experiments. Then, using the above two items and feature extraction algorithms (the successive projections algorithm and the competitive adaptive reweighted sampling algorithm), the characteristic wavelengths of the soybeans were identified. Subsequently, the spatial frequency domain imaging technique was used to obtain sub-surface images of the seeds in a forward manner, and optical coefficients such as the reduced scattering coefficient $\mu'_{s}$ and the absorption coefficient $\mu_{a}$ were inverted. Finally, RL-MLR, RL-GRNN, and RL-SVM prediction models were established, based on the ratio of the insect-damaged area to the area of the entire seed for different varieties at three characteristic wavelengths (502 nm, 813 nm, and 712 nm), for predicting and identifying the stinging and sucking damage levels of soybean seeds. The experimental results show that the method yields small errors of less than 15% and 10%. After parameter adjustment through reinforcement learning, the Macro-Recall metric of each model improved by 10%-15%, and RL-SVM achieves a high value of 0.9635 in classification.
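As a hedged illustration of the evaluation, the sketch below computes Macro-Recall for a three-level damage classifier and tunes an SVM by grid search; the grid search merely stands in for the paper's reinforcement-learning parameter adjustment, and the six-feature layout (mu_a and mu_s' at the three wavelengths) is an assumption.

```python
# Macro-Recall evaluation of an SVM damage-level classifier on placeholder data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))            # assumed: mu_a and mu_s' at 502/712/813 nm
y = rng.integers(0, 3, size=300)         # damage level: 0 = healthy, 1 = light, 2 = severe
X += y[:, None] * 0.8                    # inject class structure so the demo is learnable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
svm = make_pipeline(StandardScaler(), SVC())
grid = GridSearchCV(svm,
                    {"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.1, 1.0]},
                    scoring="recall_macro", cv=5)
grid.fit(X_tr, y_tr)
macro_recall = recall_score(y_te, grid.predict(X_te), average="macro")
print(f"best params: {grid.best_params_}, test Macro-Recall = {macro_recall:.4f}")
```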