Neuromorphic event-based recognition boosted by motion-aware learning
Neurocomputing
Journal Year: 2025
Volume and Issue: unknown, P. 129678 - 129678
Published: Feb. 1, 2025
Language: English
ASOD-YOLOX: a study on small object detection in aerial images based on YOLOX
H. Zhang,
Wentao Liu,
Enyao Chen
et al.
The Journal of Supercomputing
Journal Year: 2025
Volume and Issue: 81(5)
Published: April 16, 2025
Language: English
LCFF-Net: A lightweight cross-scale feature fusion network for tiny target detection in UAV aerial imagery
D. Tang,
Shuyun Tang,
Zhipeng Fan
et al.
PLoS ONE
Journal Year: 2024
Volume and Issue: 19(12), P. e0315267 - e0315267
Published: Dec. 19, 2024
In the field of UAV aerial image processing, accurate detection of tiny targets is essential. Current target detection algorithms struggle to satisfy the demands of low computational cost, high accuracy, and fast detection speed at the same time. To address these issues, we propose an improved, lightweight detection algorithm: LCFF-Net. First, the LFERELAN module is designed to enhance the extraction of tiny-target features and optimize the use of computational resources. Second, a lightweight cross-scale feature pyramid network (LC-FPN) is employed to further enrich feature information, integrate multi-level feature maps, and provide more comprehensive semantic information. Finally, to increase model training speed and achieve greater efficiency, a lightweight, detail-enhanced, shared-convolution detection head (LDSCD-Head) replaces the original detection head. Moreover, we present different scale versions of the LCFF-Net algorithm to suit various deployment environments. Empirical assessments conducted on the VisDrone dataset validate the efficacy of the proposed method. Compared with the baseline-s model, LCFF-Net-n achieves a 2.8% gain in the mAP50 metric and a 3.9% improvement in the mAP50-95 metric, while reducing parameters by 89.7%, FLOPs by 50.5%, and computation delay by 24.7%. Thus, LCFF-Net offers high accuracy and fast detection speeds for UAV aerial images, providing an effective solution.
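To make the cross-scale fusion idea concrete, the sketch below shows one common way to integrate multi-level feature maps in PyTorch. The class name CrossScaleFusion, the channel counts, and the upsample-and-add fusion rule are illustrative assumptions, not the paper's actual LC-FPN implementation.

# A minimal PyTorch sketch of cross-scale feature fusion in the spirit of LC-FPN.
# Channel counts, layer names, and the fusion rule (project + upsample + add) are
# assumptions for illustration; the paper's actual LC-FPN design may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossScaleFusion(nn.Module):
    def __init__(self, in_channels=(128, 256, 512), out_channels=128):
        super().__init__()
        # 1x1 convolutions project each pyramid level to a common channel width.
        self.lateral = nn.ModuleList(
            [nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels]
        )
        # A single shared 3x3 convolution refines every fused map (keeps it lightweight).
        self.refine = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)

    def forward(self, p3, p4, p5):
        # Project all levels to the same channel count.
        c3, c4, c5 = (lat(x) for lat, x in zip(self.lateral, (p3, p4, p5)))
        # Top-down pathway: upsample coarser maps and add them to finer ones,
        # so small-object features at P3 also receive high-level semantics.
        c4 = c4 + F.interpolate(c5, size=c4.shape[-2:], mode="nearest")
        c3 = c3 + F.interpolate(c4, size=c3.shape[-2:], mode="nearest")
        return [self.refine(x) for x in (c3, c4, c5)]

if __name__ == "__main__":
    fuse = CrossScaleFusion()
    p3 = torch.randn(1, 128, 80, 80)
    p4 = torch.randn(1, 256, 40, 40)
    p5 = torch.randn(1, 512, 20, 20)
    outs = fuse(p3, p4, p5)
    print([o.shape for o in outs])  # each level keeps its resolution with 128 channels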
Language: English
EventSegNet: Direct Sparse Semantic Segmentation from Event Data
Remote Sensing
Journal Year: 2024
Volume and Issue: 17(1), P. 84 - 84
Published: Dec. 29, 2024
Semantic segmentation tasks encompass various applications, such as autonomous driving, medical imaging, and robotics. Achieving accurate semantic information retrieval under conditions of high dynamic range and rapid scene changes remains a significant challenge for image-based algorithms. This is primarily attributable to the limitations of conventional image sensors, which can experience motion blur or exposure artifacts. In contrast, event-based vision sensors, which asynchronously report changes in pixel intensity, offer a compelling solution by acquiring visual information at the same rate as the scene dynamics, thereby mitigating these limitations. However, a key obstacle remains for segmentation tasks: the need to expend time converting event data into frame images to align with existing image-based techniques. This approach squanders the inherently high temporal resolution of event data, compromising the accuracy and real-time performance of segmentation tasks. To address these issues, this work explores a sparse network that operates directly on event data. We propose a network named EventSegNet that improves the ability to extract geometric features from events by combining feature enhancement operations with attention mechanisms. Based on this, a large-scale dataset is constructed that provides semantic labels for each event. Our approach achieves a new best F1 score of 84.2% on this dataset. In addition, a lightweight, edge-oriented AI inference deployment technique was implemented for the model. Compared with the baseline model, the optimized model reduces the F1 score by 1.1% but is more than twice as fast computationally, enabling deployment on the NVIDIA AGX Xavier.
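As a rough illustration of segmenting sparse event streams directly, the sketch below treats each event as an (x, y, t, polarity) tuple, encodes it with a shared per-event MLP, and uses a simple attention-weighted context vector before assigning a per-event label. The class name SparseEventSegmenter, the dimensions, and the attention scheme are assumptions for illustration only and do not reproduce EventSegNet's actual architecture.

# A minimal PyTorch sketch of per-event feature extraction with a simple attention
# weighting, illustrating how sparse event data (x, y, t, polarity) can be processed
# directly without conversion to frames. This is an illustrative assumption, not
# EventSegNet's actual design.
import torch
import torch.nn as nn

class SparseEventSegmenter(nn.Module):
    def __init__(self, num_classes=6, hidden=64):
        super().__init__()
        # Shared per-event MLP: each event is a 4-vector (x, y, t, polarity).
        self.encoder = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Scalar attention score per event: higher-scoring events contribute more
        # to the global context vector (a crude form of feature enhancement).
        self.attn = nn.Linear(hidden, 1)
        # Per-event classifier sees local features plus the attended global context.
        self.classifier = nn.Linear(hidden * 2, num_classes)

    def forward(self, events):
        # events: (N, 4) tensor of raw events for one sample.
        feats = self.encoder(events)                          # (N, hidden)
        weights = torch.softmax(self.attn(feats), dim=0)      # (N, 1)
        context = (weights * feats).sum(dim=0, keepdim=True)  # (1, hidden)
        context = context.expand(feats.shape[0], -1)          # broadcast to all events
        return self.classifier(torch.cat([feats, context], dim=1))  # (N, num_classes)

if __name__ == "__main__":
    model = SparseEventSegmenter()
    events = torch.rand(1000, 4)  # 1000 synthetic events
    logits = model(events)
    print(logits.shape)  # torch.Size([1000, 6]) -- one label per event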
Language: English