Hybrid quantum-classical 3D object detection using multi-channel quantum convolutional neural network
Emily Jimin Roh, Joo Yong Shim, Joongheon Kim, et al.
The Journal of Supercomputing, 2025, 81(3). Published: Feb. 1, 2025
A Novel Continual Learning and Adaptive Sensing State Response‐Based Target Recognition and Long‐Term Tracking Framework for Smart Industrial Applications
Lu Chen, Gun Li, Jie Tan, et al.
Expert Systems, 2025, 42(5). Published: April 7, 2025
ABSTRACT
Purpose
With the rapid development of artificial intelligence technology, highly intelligent and unmanned factories have become an important trend. In the complex environments of smart factories, long-term tracking and inspection of specified targets, such as operators and special products, as well as comprehensive visual recognition and decision-making capabilities throughout the whole production process, are critical components of automated factories. However, challenges such as target occlusion and disappearance frequently occur, complicating long-term tracking. Currently, there is limited research specifically focused on developing robust long-term tracking frameworks, particularly frameworks designed to integrate with embedded platforms and overcome these varied challenges.
Methods
We first construct three new benchmark datasets in the workshop environment of a factory (referred to as the SF-Complex3 data), which include challenging conditions such as complete and partial occlusion of targets. A brain memory-inspired approach is used to determine uncertainty estimation parameters, including the confidence, the peak-to-sidelobe ratio, and the average peak-to-correlation energy, in order to develop a continual learning-based adaptive model update method.
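The abstract does not reproduce the paper's exact formulas, but the peak-to-sidelobe ratio (PSR) and average peak-to-correlation energy (APCE) are standard tracking-confidence measures computed from a correlation response map. A minimal NumPy sketch of their usual definitions (the exclusion window size and the epsilon guards are illustrative choices, not values from the paper):

```python
import numpy as np

def psr(response: np.ndarray, exclude: int = 5) -> float:
    """Peak-to-sidelobe ratio of a 2D correlation response map.

    The sidelobe is the map with a small window around the peak
    masked out; a sharp, unimodal peak yields a high PSR.
    """
    r0, c0 = np.unravel_index(np.argmax(response), response.shape)
    peak = response[r0, c0]
    mask = np.ones(response.shape, dtype=bool)
    mask[max(0, r0 - exclude):r0 + exclude + 1,
         max(0, c0 - exclude):c0 + exclude + 1] = False
    sidelobe = response[mask]
    return float((peak - sidelobe.mean()) / (sidelobe.std() + 1e-12))

def apce(response: np.ndarray) -> float:
    """Average peak-to-correlation energy.

    APCE = (F_max - F_min)^2 / mean((F - F_min)^2); it drops sharply
    when the response map becomes flat or multi-modal, e.g., under
    occlusion, which makes it a common gate for model updates.
    """
    f_max, f_min = response.max(), response.min()
    energy = np.mean((response - f_min) ** 2) + 1e-12
    return float((f_max - f_min) ** 2 / energy)
```

In an adaptive update scheme of this kind, the tracker would typically refresh its appearance model only while such confidence measures stay above their running averages, and freeze updates (or trigger re-detection) when they collapse.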
Additionally, we design a lightweight detection module to automatically detect and locate targets in the initial frame and during re-detection. Finally, the algorithm is deployed on ground mobile robots and aerial-vehicle-based imaging and processing equipment to build the SFC-RT framework.
Results
We conducted extensive tests on the SF-Complex3 and UAV20L datasets. The proposed framework demonstrates a performance improvement of 6% when addressing key challenge attributes, compared with state-of-the-art methods. It was also capable of running efficiently on embedded platforms, such as UAVs, at a real-time speed of 36.4 frames per second.
Conclusions
The SFC-RT framework effectively addresses target occlusion and loss within smart factory environments. It meets the requirements of real-time performance, robustness, and lightweight design, making it well suited for practical deployment.
Language: English
A Novel Multi-Sensor Nonlinear Tightly-Coupled Framework for Composite Robot Localization and Mapping
Lu Chen, Amir Hussain, Yu Liu, et al.
Sensors, 2024, 24(22), P. 7381. Published: Nov. 19, 2024
Composite robots often encounter difficulties due to changes in illumination, external disturbances, reflective surface effects, and cumulative errors. These challenges significantly hinder their environmental perception capabilities as well as the accuracy and reliability of pose estimation. We propose a nonlinear optimization approach to overcome these issues and develop an integrated localization and navigation framework, IIVL-LM (IMU, Infrared, Vision, and LiDAR Fusion for Localization and Mapping). This framework achieves tightly coupled integration at the data level using inputs from an IMU (Inertial Measurement Unit), an infrared camera, an RGB (Red, Green and Blue) camera, and LiDAR. We propose a real-time luminance calculation model and verify its conversion accuracy.
Additionally, we designed a fast approximation method for the weighted fusion of features across frames based on luminance values.
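The abstract does not give the fusion rule itself; the sketch below shows one plausible form of luminance-weighted blending between RGB and infrared frame features, where a cheap Rec. 601 luma estimate gates the blend. The sigmoid gate and the low/scale constants are illustrative assumptions, not values from the paper.

```python
import numpy as np

def luminance(rgb: np.ndarray) -> np.ndarray:
    # Rec. 601 luma as a cheap real-time luminance estimate, (H, W, 3) -> (H, W).
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def fuse_features(feat_rgb, feat_ir, rgb_frame, low=40.0, scale=10.0):
    """Blend RGB and infrared features using mean scene luminance.

    Dark scenes (mean luma well below `low`) push the weight toward
    the infrared features; bright scenes favor RGB. The gating here
    is hypothetical, shown only to make the weighted-fusion idea
    concrete.
    """
    mean_luma = float(luminance(rgb_frame).mean())
    w_rgb = 1.0 / (1.0 + np.exp(-(mean_luma - low) / scale))
    return w_rgb * feat_rgb + (1.0 - w_rgb) * feat_ir
```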
Finally, we optimize the VIO (Visual-Inertial Odometry) module of R3LIVE++ (Robust, Real-time, Radiance Reconstruction with LiDAR-Inertial-Visual state Estimation) based on the camera's capability to acquire depth information.
In a controlled study on a simulated indoor rescue scenario dataset, the system demonstrated significant performance enhancements under challenging conditions, particularly in low-light environments. Specifically, the average RMSE of the ATE (Root Mean Square Error of the Absolute Trajectory Error) improved by 23% and 39%, corresponding to reductions of 0.006 and 0.013, respectively.
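For reference, the RMSE of the absolute trajectory error is the standard SLAM accuracy metric: the root mean square of the per-timestamp Euclidean distances between estimated and ground-truth positions. A minimal sketch, assuming the two trajectories are already time-associated and aligned (e.g., via a similarity transform):

```python
import numpy as np

def ate_rmse(est: np.ndarray, gt: np.ndarray) -> float:
    """RMSE of the Absolute Trajectory Error.

    est, gt: (N, 3) arrays of aligned estimated and ground-truth
    positions; the per-timestamp error is the Euclidean distance
    between corresponding points.
    """
    err = np.linalg.norm(est - gt, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))
```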
At the same time, we conducted comparative experiments on the publicly available TUM-VI (Technical University of Munich Visual-Inertial) Dataset without infrared image input. It was found that no leading results were achieved, which verifies the importance of infrared fusion. By maintaining active engagement of at least three sensors at all times, the framework boosts robustness in both unknown and expansive environments while ensuring high precision. This enhancement is critical for applications in complex environments, such as rescue operations.
Language: English