Remote Sensing, Journal Year: 2023, Volume and Issue: 15(11), P. 2784 - 2784. Published: May 26, 2023.
Traditional image fusion techniques generally use symmetrical methods to extract features from different sources of images. However, these conventional approaches do not resolve the information domain discrepancy across multiple sources, resulting in incomplete fusion. To solve this problem, we propose an asymmetric decomposition method. Firstly, an information abundance discrimination method is used to sort images into detailed and coarse categories. Then, different decomposition methods are applied at different scales. Next, different fusion strategies are adopted for the different scale features, including sum fusion, variance-based transformation, and integrated energy-based fusion. Finally, the fusion result is obtained through summation, retaining the vital features of both sources. Eight fusion metrics and two datasets containing registered visible, ISAR, and infrared images were adopted to evaluate the performance of the proposed method. The experimental results demonstrate that the asymmetric decomposition method could preserve more details than the symmetric one, and performed better in both objective and subjective evaluations compared with fifteen state-of-the-art fusion methods. These findings can inspire researchers to consider a new asymmetric fusion framework that adapts to differences in the information richness of images, and promote the development of fusion technology.
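The abstract names the per-scale fusion strategies but not their formulas. As a rough illustration only, two of the simpler strategies it mentions (sum fusion and an energy-based selection rule) might look like the NumPy sketch below; the function names and the window size are my own assumptions, not taken from the paper.

```python
import numpy as np

def sum_fusion(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Average two registered feature maps (a simple 'sum' strategy)."""
    return (a + b) / 2.0

def energy_based_fusion(a: np.ndarray, b: np.ndarray, win: int = 3) -> np.ndarray:
    """Per pixel, keep the source whose local energy (sum of squared
    values in a win x win neighbourhood) is larger."""
    pad = win // 2

    def local_energy(x: np.ndarray) -> np.ndarray:
        xp = np.pad(x.astype(np.float64) ** 2, pad, mode="reflect")
        e = np.zeros(x.shape, dtype=np.float64)
        # Sum the squared values over the sliding window.
        for dy in range(win):
            for dx in range(win):
                e += xp[dy:dy + x.shape[0], dx:dx + x.shape[1]]
        return e

    return np.where(local_energy(a) >= local_energy(b), a, b)
```

In practice such rules are applied to decomposed sub-bands rather than raw pixels, with the energy rule typically reserved for detail layers.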
Journal of Visual Communication and Image Representation, Journal Year: 2024, Volume and Issue: 101, P. 104179 - 104179. Published: May 1, 2024.
Infrared and visible image fusion represents a significant segment within the image fusion domain. The recent surge in processing hardware advancements, including GPUs, TPUs, and cloud computing platforms, has facilitated the collection of extensive datasets from multiple sensors. Given the remarkable proficiency of neural networks in feature extraction and fusion, their application to infrared and visible image fusion has emerged as a prominent research area in recent years. This article begins by providing an overview of the current mainstream algorithms for infrared and visible image fusion based on neural networks, detailing the principles of the various algorithms, their representative works, and their respective advantages and disadvantages. Subsequently, it introduces domain-relevant datasets, evaluation metrics, and some typical application scenarios. Finally, it conducts qualitative and quantitative evaluations of the results of state-of-the-art methods and offers future prospects based on the experimental results.
Applied Sciences, Journal Year: 2023, Volume and Issue: 13(19), P. 10891 - 10891. Published: Sept. 30, 2023.
Infrared and visible light image fusion combines infrared and visible images by extracting the main information from each and fusing it together to provide a more comprehensive image with the features of the two photos. It has gained popularity in recent years and is increasingly being employed in sectors such as target recognition and tracking, night vision, scene segmentation, and others. In order to give a concise overview of infrared and visible picture fusion, this paper first explores its historical context before outlining current domestic and international research efforts. Then, conventional approaches for image fusion, such as the multi-scale decomposition method and the sparse representation method, are thoroughly introduced. The advancement of deep learning has greatly aided the field of image fusion, and its outcomes have a wide range of potential applications due to neural networks' strong feature extraction and reconstruction skills. As a result, this paper also evaluates deep learning techniques. After that, some common objective evaluation indexes are provided, and the performance of common datasets in various areas is sorted out at the same time. Datasets play a significant role and are an essential component of fusion testing. The application of image fusion in many domains is then briefly studied through practical examples, particularly in developing fields, to show its application. Finally, the prospect of image fusion is presented, and the full text is summarized.
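As an illustration of the objective evaluation indexes such surveys cover, image entropy (EN) is one of the most common fusion quality measures: the higher the entropy of the fused image's histogram, the richer its information content is usually taken to be. The sketch below is the generic textbook definition, not code from any of the reviewed papers.

```python
import numpy as np

def image_entropy(img: np.ndarray, levels: int = 256) -> float:
    """Shannon entropy (in bits) of an 8-bit grayscale image's histogram."""
    hist = np.bincount(img.ravel().astype(np.int64), minlength=levels)
    p = hist / hist.sum()          # normalize counts to probabilities
    p = p[p > 0]                   # drop empty bins (0 * log 0 := 0)
    return float(-(p * np.log2(p)).sum())
```

A flat single-valued image scores 0 bits, while an image using all 256 gray levels equally would score 8 bits.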
Sensors, Journal Year: 2023, Volume and Issue: 23(5), P. 2434 - 2434. Published: Feb. 22, 2023.
A perception module is a vital component of a modern robotic system. Vision, radar, thermal, and LiDAR are the most common choices of sensors for environmental awareness. Relying on singular sources of information is prone to being affected by specific environmental conditions (e.g., visual cameras in glary or dark environments). Thus, relying on different sensors is an essential step to introduce robustness against varying conditions. Hence, a perception system with sensor fusion capabilities produces the desired redundant and reliable awareness critical for real-world systems. This paper proposes a novel early sensor fusion method that addresses individual cases of sensor failure when detecting an offshore maritime platform for UAV landing. The model explores the still-unexplored combination of visual, infrared, and LiDAR modalities. The contribution is described by suggesting a simple methodology that intends to facilitate the training and inference of a lightweight state-of-the-art object detector. The early-fusion-based detector achieves solid detection recalls, up to 99% in all extreme weather conditions such as glary, dark, and foggy scenarios, in fair real-time inference duration below 6 ms.
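The abstract does not spell out how the modalities are combined before the detector. A minimal sketch of channel-level early fusion, where registered RGB, infrared, and a rasterized LiDAR depth map are stacked into a single input tensor for an off-the-shelf detector, could look like the following; all names and the min-max normalization are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

def early_fuse(rgb: np.ndarray, ir: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Stack registered modalities channel-wise into one H x W x 5 input.

    rgb:   H x W x 3 visual image
    ir:    H x W thermal/infrared image
    depth: H x W rasterized LiDAR range map
    """
    def norm(x: np.ndarray) -> np.ndarray:
        # Min-max normalize each modality to [0, 1] so no channel dominates.
        x = x.astype(np.float32)
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)

    return np.dstack([norm(rgb), norm(ir), norm(depth)])
```

The resulting 5-channel tensor is then what a lightweight detector would consume in place of a plain 3-channel image, with only its first convolution layer widened accordingly.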
Sensors, Journal Year: 2023, Volume and Issue: 23(18), P. 7870 - 7870. Published: Sept. 13, 2023.
The infrared and visible image fusion task aims to generate a single fused image that preserves complementary features and reduces redundant information from the different modalities. Although convolutional neural networks (CNNs) can effectively extract local features and obtain better performance, the size of the receptive field limits their feature extraction ability. Thus, the Transformer architecture has gradually become mainstream for extracting global features. However, current Transformer-based fusion methods ignore the enhancement of details, which is important for image fusion tasks and other downstream vision tasks. To this end, a new super attention mechanism and a wavelet-guided pooling operation are applied to the fusion network to form a novel network, termed SFPFusion. Specifically, the super attention mechanism is able to establish long-range dependencies of images, and the multi-scale base features fully extracted and processed by the wavelet-guided pooling enhance the detail features. With this powerful representation ability, only simple fusion strategies are utilized to achieve better performance. The superiority of our method compared with state-of-the-art methods is demonstrated in qualitative and quantitative experiments on multiple benchmarks.
Remote Sensing, Journal Year: 2024, Volume and Issue: 16(17), P. 3246 - 3246. Published: Sept. 1, 2024.
Infrared and visible image fusion integrates complementary information from different modalities into a single image, providing sufficient imaging information for scene interpretation and downstream target recognition tasks. However, existing methods often focus only on highlighting salient targets or preserving scene details, failing to effectively combine the entire features of both modalities during the fusion process, resulting in underutilized features and poor overall fusion effects. To address these challenges, a global and local four-branch feature extraction network (GLFuse) is proposed. On one hand, a Super Token Transformer (STT) block, which is capable of rapidly sampling and predicting super tokens, is utilized to capture the global features of the scene. On the other hand, a Detail Extraction Block (DEB) is developed to extract the local features. Additionally, two fusion modules, namely the Attention-based Feature Selection Fusion Module (ASFM) and the Dual Attention Fusion Module (DAFM), are designed to facilitate the selective fusion of features from the two modalities. Of more importance, the various perceptual maps learned from the modality images at different network layers are investigated to design a loss function that better restores detail and highlights salient targets by treating them separately. Extensive experiments confirm that GLFuse exhibits excellent performance in both subjective and objective evaluations. It deserves note that GLFuse also improves downstream object detection performance on a unified benchmark.