PosE-Enhanced Point Transformer with Local Surface Features (LSF) for Wood–Leaf Separation
Forests, Journal Year: 2024, Volume and Issue: 15(12), P. 2244. Published: Dec. 20, 2024
Wood–leaf separation from forest LiDAR point clouds is a challenging task due to the complex and irregular structures of tree canopies. Traditional machine vision and deep learning methods often struggle to accurately distinguish between fine branches and leaves. This challenge arises primarily from the lack of suitable features and the limitations of existing position encodings in capturing the unique and intricate characteristics of point clouds.
In this work, we propose an innovative approach that integrates Local Surface Features (LSF) and a Position Encoding (PosE) module within the Point Transformer (PT) network to address these challenges. We began by preprocessing the data and applying a separation technique, supplemented by manual correction, to create wood–leaf-separated datasets for training.
Next, we introduced the Point Feature Histogram (PFH) to construct the LSF for each input, while utilizing the Fast PFH (FPFH) to enhance computational efficiency.
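As a rough illustration of how per-point FPFH descriptors can serve as local surface features, the sketch below uses Open3D's built-in FPFH routine; the radii, neighbour counts, and point-count values are placeholder assumptions, not the settings used in the paper.

```python
import numpy as np
import open3d as o3d

def compute_lsf_fpfh(xyz: np.ndarray, normal_radius: float = 0.05,
                     feature_radius: float = 0.10) -> np.ndarray:
    """Return a (N, 33) FPFH descriptor per point, usable as a local surface feature."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(xyz)

    # FPFH requires surface normals; estimate them from a local neighbourhood.
    pcd.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=normal_radius, max_nn=30))

    # Fast Point Feature Histograms: a cheaper approximation of the full PFH.
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        pcd,
        o3d.geometry.KDTreeSearchParamHybrid(radius=feature_radius, max_nn=100))
    return np.asarray(fpfh.data).T  # (N, 33)

# Example: attach the descriptors to raw coordinates before feeding a network.
points = np.random.rand(2048, 3)          # stand-in for a tree scan
lsf = compute_lsf_fpfh(points)
features = np.concatenate([points, lsf], axis=1)  # (N, 3 + 33)
```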
Subsequently, we designed a PosE module for the PT network, leveraging trigonometric dimensionality expansion and a Random Fourier Feature-based Transformation (RFFT) for more nuanced feature analysis. This design significantly enhances representational richness and precision.
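The paper's exact PosE design is not reproduced here, but the minimal PyTorch sketch below shows the general idea of a random-Fourier-feature position encoding that expands 3D coordinates into a higher-dimensional trigonometric embedding; the class name, frequency count, and bandwidth sigma are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RandomFourierPosE(nn.Module):
    """Random-Fourier-feature position encoding for 3D point coordinates.

    Maps each (x, y, z) to [sin(2*pi*Bx), cos(2*pi*Bx)] with a fixed random
    projection B, expanding 3 dims to 2 * num_frequencies dims.
    """
    def __init__(self, num_frequencies: int = 32, sigma: float = 1.0):
        super().__init__()
        b = torch.randn(3, num_frequencies) * sigma
        self.register_buffer("b", b)  # fixed projection, not trained

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (B, N, 3) -> (B, N, 2 * num_frequencies)
        proj = 2.0 * torch.pi * xyz @ self.b
        return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)

pose = RandomFourierPosE(num_frequencies=32)
coords = torch.rand(4, 2048, 3)   # a batch of normalized point clouds
encoding = pose(coords)           # (4, 2048, 64), added or concatenated to point features
```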
Afterward, the segmented branch point cloud was used to model tree skeletons automatically, and the leaves were incorporated to complete the digital twin.
Our enhanced network, tested on three different types of forests, achieved up to 96.23% accuracy and a 91.51% mean intersection over union (mIoU) in wood–leaf separation, outperforming the original PT by approximately 5%.
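For reference, mIoU over per-point wood/leaf labels is simply the class-wise intersection-over-union averaged across classes; a minimal sketch with made-up toy labels is given below.

```python
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int = 2) -> float:
    """Mean intersection-over-union for per-point labels (e.g. 0 = leaf, 1 = wood)."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                      # skip classes absent from both
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.array([1, 1, 0, 0, 1])
gt   = np.array([1, 0, 0, 0, 1])
print(mean_iou(pred, gt))  # ~0.667: each class has IoU 2/3 on these toy labels
```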
This study not only expands the limits of existing research but also demonstrates significant improvements in reconstruction results, particularly for twigs, which paves the way for more accurate forest resource surveys and advanced digital twin construction.
Language: English
Advancing a Vision Foundation Model for Ming-Style Furniture Image Segmentation: A New Dataset and Method
Yuehua Wan, W. Wang, Meng Zhang et al.
Sensors, Journal Year: 2024, Volume and Issue: 25(1), P. 96. Published: Dec. 27, 2024
This paper tackles the challenge of accurately segmenting images of Ming-style furniture, an important aspect of China’s cultural heritage, to aid in its preservation and analysis. Existing vision foundation models, like the segment anything model (SAM), struggle with the complex structures of Ming furniture due to the need for manual prompts and imprecise segmentation outputs.
To address these limitations, we introduce two key innovations: a material attribute prompter (MAP), which automatically generates prompts based on the furniture’s material properties, and a structure refinement module (SRM), which enhances segmentation by combining high- and low-level features.
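The SRM itself is not specified in this abstract, but the hedged PyTorch sketch below illustrates the generic pattern of fusing coarse, high-level features with fine, low-level ones to refine mask boundaries; the module name FeatureFusionRefiner and all layer sizes are assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureFusionRefiner(nn.Module):
    """Illustrative refinement head: fuse coarse high-level features with
    fine low-level features to sharpen mask boundaries."""
    def __init__(self, low_ch: int = 64, high_ch: int = 256, out_ch: int = 1):
        super().__init__()
        self.reduce_high = nn.Conv2d(high_ch, 64, kernel_size=1)
        self.fuse = nn.Sequential(
            nn.Conv2d(low_ch + 64, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, out_ch, kernel_size=1),
        )

    def forward(self, low_feat: torch.Tensor, high_feat: torch.Tensor) -> torch.Tensor:
        # Upsample the semantically rich but coarse features to the fine resolution.
        high = F.interpolate(self.reduce_high(high_feat),
                             size=low_feat.shape[-2:], mode="bilinear",
                             align_corners=False)
        return self.fuse(torch.cat([low_feat, high], dim=1))  # refined mask logits

refiner = FeatureFusionRefiner()
low = torch.rand(1, 64, 256, 256)    # fine, low-level backbone features
high = torch.rand(1, 256, 64, 64)    # coarse, high-level decoder features
mask_logits = refiner(low, high)     # (1, 1, 256, 256)
```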
Additionally, we present the MF2K dataset, which includes 2073 images annotated with pixel-level masks across eight materials and environments.
Our experiments demonstrate that the proposed method significantly improves segmentation accuracy, outperforming state-of-the-art models in terms of mean intersection over union (mIoU). Ablation studies highlight the contributions of the MAP and SRM to both performance and computational efficiency.
This work offers a powerful automated solution for segmenting intricate structures, facilitating digital preservation and in-depth analysis of Ming-style furniture.
Language: English