HI-CMAIM: Hybrid Intelligence-Based Multi-Source Unstructured Chinese Map Annotation Interpretation Model
Jiaxin Ren,
Wanzeng Liu,
Jun Chen
et al.
Remote Sensing, Journal Year: 2025, Volume and Issue: 17(2), P. 204 - 204. Published: Jan. 8, 2025
Map annotation interpretation is crucial for geographic information extraction and intelligent map analysis. This study addresses the challenges associated with interpreting Chinese map annotations, specifically visual complexity and data scarcity issues, by proposing a hybrid intelligence-based multi-source unstructured Chinese map annotation interpretation method (HI-CMAIM). Firstly, leveraging expert knowledge in an innovative way, we constructed a high-quality expert knowledge-based map annotation dataset (EKMAD), which significantly enhanced data diversity and accuracy. Furthermore, an improved annotation detection model (CMA-DB) and an improved annotation recognition model (CMA-CRNN) were designed based on the characteristics of Chinese map annotations, with both models incorporating expert knowledge.
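The abstract does not spell out the CMA-CRNN architecture; as a point of reference only, a minimal CRNN-style recognizer (CNN backbone, bidirectional LSTM, CTC head) in PyTorch could look like the sketch below. Layer widths, the input shape, and the character-set size are illustrative assumptions, not the authors' configuration.

```python
# Minimal CRNN-style text recognizer (CNN -> BiLSTM -> CTC head).
# Illustrative only: layer widths, charset size and input shape are assumptions,
# not the CMA-CRNN configuration from the paper.
import torch
import torch.nn as nn

class CRNN(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        # Convolutional feature extractor: downsample height, keep width fine-grained.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3, 1, 1), nn.ReLU(), nn.MaxPool2d(2, 2),    # H/2, W/2
            nn.Conv2d(64, 128, 3, 1, 1), nn.ReLU(), nn.MaxPool2d(2, 2),  # H/4, W/4
            nn.Conv2d(128, 256, 3, 1, 1), nn.ReLU(),
            nn.MaxPool2d((2, 1), (2, 1)),                                # H/8, W/4
            nn.Conv2d(256, 256, 3, 1, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, None)),                             # collapse height to 1
        )
        # Recurrent part: model the horizontal sequence of visual features.
        self.rnn = nn.LSTM(256, 256, num_layers=2, bidirectional=True, batch_first=True)
        # Per-timestep classifier over the character set (+1 for the CTC blank).
        self.fc = nn.Linear(512, num_classes + 1)

    def forward(self, x):                          # x: (B, 1, H, W) grayscale text-line image
        feats = self.cnn(x)                        # (B, C, 1, W')
        feats = feats.squeeze(2).permute(0, 2, 1)  # (B, W', C) sequence along width
        seq, _ = self.rnn(feats)                   # (B, W', 512)
        return self.fc(seq)                        # (B, W', num_classes + 1) logits for CTC

# Training would pair the logits with nn.CTCLoss, e.g.:
# logits = model(images).log_softmax(-1).permute(1, 0, 2)  # (T, B, C) for CTCLoss
# loss = nn.CTCLoss(blank=num_classes)(logits, targets, input_lengths, target_lengths)
```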
A two-stage transfer learning strategy was employed to tackle the issue of limited training samples.
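The abstract names the two-stage transfer learning strategy without giving its schedule; a generic sketch of such a strategy, assuming a model that exposes separate backbone and head modules, with placeholder epoch counts and learning rates, is shown below.

```python
# Generic two-stage transfer learning loop (sketch only).
# Stage boundaries, learning rates, epoch counts and the backbone/head split are
# illustrative assumptions; the paper's exact schedule is not given in the abstract.
import torch

def train_one_stage(model, loader, criterion, optimizer, epochs, device="cuda"):
    model.train()
    for _ in range(epochs):
        for images, targets in loader:
            images, targets = images.to(device), targets.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), targets)
            loss.backward()
            optimizer.step()

def two_stage_finetune(model, large_generic_loader, small_annotation_loader,
                       criterion, device="cuda"):
    model.to(device)

    # Stage 1: adapt to a broad source domain with the backbone frozen, so the
    # pretrained features are preserved while the task head is learned.
    for p in model.backbone.parameters():   # assumes model exposes .backbone / .head
        p.requires_grad = False
    opt1 = torch.optim.AdamW(model.head.parameters(), lr=1e-3)
    train_one_stage(model, large_generic_loader, criterion, opt1, epochs=10, device=device)

    # Stage 2: unfreeze everything and fine-tune on the small target dataset
    # (e.g. an EKMAD-style map-annotation set) with a lower learning rate.
    for p in model.parameters():
        p.requires_grad = True
    opt2 = torch.optim.AdamW(model.parameters(), lr=1e-4)
    train_one_stage(model, small_annotation_loader, criterion, opt2, epochs=30, device=device)
    return model
```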
Experimental results demonstrated the superiority of HI-CMAIM over existing algorithms. In the detection task, CMA-DB achieved an 8.54% improvement in Hmean (from 87.73% to 96.27%) compared with the DB algorithm. In the recognition task, CMA-CRNN achieved a 15.54% improvement in accuracy (from 79.77% to 95.31%) and a 4-fold reduction in NED (from 0.1026 to 0.0242), confirming the effectiveness and advancement of the proposed method. This research not only provides a novel approach to support map annotation interpretation but also fills the gap in high-quality, diverse map annotation datasets. It holds practical application value in fields such as geographic information systems and cartography, contributing to intelligent map annotation interpretation.
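NED is read here as the normalized edit distance between predicted and ground-truth strings (Levenshtein distance divided by the length of the longer string), a common convention in text recognition; a small self-contained implementation of that reading:

```python
# Normalized edit distance (NED) between a predicted and a ground-truth string,
# assuming the usual convention: Levenshtein distance / max(len(pred), len(gt)).
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance with a rolling row.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def ned(pred: str, gt: str) -> float:
    if not pred and not gt:
        return 0.0
    return levenshtein(pred, gt) / max(len(pred), len(gt))

# Example: one wrong character in a four-character annotation -> NED = 0.25.
print(ned("北京市区", "北京市局"))  # 0.25
```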
Language: English
Vehicle and Pedestrian Detection Based on Improved YOLOv7-Tiny
Zhen Liang,
Wei Wang,
Ruifeng Meng
et al.
Electronics, Journal Year: 2024, Volume and Issue: 13(20), P. 4010 - 4010. Published: Oct. 12, 2024
To improve the detection accuracy of vehicles and pedestrians in traffic scenes using object detection algorithms, this paper presents the modification, compression, and deployment of the typical single-stage algorithm YOLOv7-tiny. In the model improvement section: firstly, to address the problem of small objects being missed in detection, shallower feature layer information is incorporated into the original feature fusion branch, forming a four-scale detection head; secondly, a Multi-Stage Feature Fusion (MSFF) module is proposed to fully integrate shallow, middle, and deep feature information and extract more comprehensive feature information.
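The MSFF internals are not described in the abstract; one plausible reading, sketched below purely for illustration, resizes shallow, middle, and deep feature maps to a shared resolution, concatenates them, and fuses them with a 1×1 convolution. The channel counts and fusion operator are assumptions, not the paper's design.

```python
# Hypothetical multi-stage feature fusion block (illustration only).
# The actual MSFF design is not specified in the abstract; this sketch simply
# aligns shallow/middle/deep maps to one resolution and fuses them by 1x1 conv.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleMSFF(nn.Module):
    def __init__(self, c_shallow: int, c_middle: int, c_deep: int, c_out: int):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(c_shallow + c_middle + c_deep, c_out, kernel_size=1, bias=False),
            nn.BatchNorm2d(c_out),
            nn.SiLU(inplace=True),
        )

    def forward(self, shallow, middle, deep):
        # Bring the coarser maps up to the shallow map's spatial size.
        size = shallow.shape[-2:]
        middle = F.interpolate(middle, size=size, mode="nearest")
        deep = F.interpolate(deep, size=size, mode="nearest")
        return self.fuse(torch.cat([shallow, middle, deep], dim=1))

# Example shapes, e.g. P2/P3/P4-style maps from a 640x640 input:
# SimpleMSFF(64, 128, 256, 128)(torch.randn(1, 64, 160, 160),
#                               torch.randn(1, 128, 80, 80),
#                               torch.randn(1, 256, 40, 40)).shape  # (1, 128, 160, 160)
```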
In the model compression section, Layer-Adaptive Magnitude-based Pruning (LAMP) and the Torch-Pruning library are combined, setting different pruning rates for the improved model. Finally, the V7-tiny-P2-MSFF model, pruned by 45% with LAMP, is deployed on the NVIDIA Jetson AGX Xavier embedded platform.
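The paper applies LAMP through the Torch-Pruning library; without reproducing that library's API, the LAMP importance score itself and a global threshold at an example 45% sparsity can be sketched from scratch as below. Note that this shows the scoring rule on individual weights, whereas the paper uses it for structured channel pruning.

```python
# LAMP (Layer-Adaptive Magnitude-based Pruning) scores, computed from scratch.
# Sketch only: the paper combines LAMP with the Torch-Pruning library; here we
# show just the scoring rule and a single global threshold at 45% sparsity.
import torch

def lamp_scores(weight: torch.Tensor) -> torch.Tensor:
    """Score each weight by w^2 / (sum of w'^2 over weights with |w'| >= |w|)."""
    w2 = weight.detach().flatten() ** 2
    order = torch.argsort(w2)                       # ascending squared magnitude
    sorted_w2 = w2[order]
    # Trailing sums: for each position, the sum of all equal-or-larger squared weights.
    suffix = torch.flip(torch.cumsum(torch.flip(sorted_w2, [0]), 0), [0])
    scores_sorted = sorted_w2 / suffix.clamp_min(1e-12)
    scores = torch.empty_like(scores_sorted)
    scores[order] = scores_sorted                   # undo the sort
    return scores.view_as(weight)

def lamp_global_masks(weights: dict, sparsity: float = 0.45) -> dict:
    """One global score threshold yields layer-adaptive per-layer sparsity."""
    all_scores = torch.cat([lamp_scores(w).flatten() for w in weights.values()])
    k = max(int(sparsity * all_scores.numel()), 1)
    threshold = torch.kthvalue(all_scores, k).values
    return {name: (lamp_scores(w) > threshold).float() for name, w in weights.items()}

# Usage sketch (convolution weights only):
# masks = lamp_global_masks({n: m.weight for n, m in model.named_modules()
#                            if isinstance(m, torch.nn.Conv2d)}, sparsity=0.45)
```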
Experimental results show that the improved model achieves a 12.3% increase in [email protected] compared with the original model, while the parameter volume, computation amount, and model size are reduced by 76.74%, 7.57%, and 70.94%, respectively. Moreover, the inference speed for a single image with the quantized model on Xavier is 9.5 ms.
Language: English