Remote Sensing, Journal Year: 2023, Volume and Issue: 15(11), P. 2784 - 2784, Published: May 26, 2023
Traditional image fusion techniques generally use symmetrical methods to extract features from different sources of images. However, these conventional approaches do not resolve the information-domain discrepancy across the multiple sources, resulting in incomplete fusion. To solve this problem, we propose an asymmetric decomposition method. Firstly, an information abundance discrimination method is used to sort images into detailed and coarse categories. Then, asymmetric decomposition is applied to extract features at different scales. Next, different fusion strategies are adopted for the features at each scale, including sum fusion, variance-based transformation, and integrated energy-based fusion. Finally, the result is obtained through summation, retaining the vital features of both sources. Eight metrics on two datasets containing registered visible, ISAR, and infrared images were used to evaluate the performance of the proposed method. The experimental results demonstrate that the asymmetric decomposition could preserve more details than the symmetric one, and that it performed better in both objective and subjective evaluations compared with fifteen state-of-the-art methods. These findings can inspire researchers to consider a new fusion framework that adapts to differences in the information richness of images, and to promote the development of fusion technology.
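The scale-specific strategies named above (sum fusion and a variance-based transformation) can be illustrated with a minimal sketch. The function names and the variance-based weighting rule below are our own illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def sum_fusion(a, b):
    """Sum fusion: add the two feature maps directly (illustrative)."""
    return a + b

def variance_weighted_fusion(a, b, eps=1e-8):
    """Variance-based strategy (assumed form): weight each source by its
    variance, so the source carrying more detail dominates the result."""
    wa, wb = a.var(), b.var()
    return (wa * a + wb * b) / (wa + wb + eps)

# A high-variance "detailed" source and a flat "coarse" source:
detailed = np.array([[0., 4.], [8., 12.]])
coarse = np.array([[5., 5.], [5., 5.]])
fused = variance_weighted_fusion(detailed, coarse)
# With a zero-variance coarse image, the weighting collapses to the detailed source.
```

The point of the asymmetry is visible here: the two inputs are not treated interchangeably; their statistics decide how much each contributes.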
Electronics, Journal Year: 2023, Volume and Issue: 12(13), P. 2773 - 2773, Published: June 22, 2023
The fusion of infrared and visible images produces a complementary image that captures both radiation information and texture-structure details from the respective sensors. However, current deep-learning-based approaches mainly tend to prioritize visual quality and statistical metrics, leading to increased model complexity and weight-parameter sizes. To address these challenges, we propose a novel dual-light fusion approach: an adaptive DenseNet with knowledge distillation that learns from and compresses pre-existing models, achieving the goals of model compression through hyperparameters such as the width and depth of the network. The effectiveness of our proposed approach is evaluated on a new dataset comprising three public datasets (MSRS, M3FD, and LLVIP). Qualitative and quantitative experimental results show that the distilled model effectively matches the original models' performance with smaller parameter sizes and shorter inference times.
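A distillation objective of the kind described, in which a compact student is trained to match a larger teacher's fused output while still tracking the fusion target, might look like the following sketch. The weighting factor `alpha` and the plain mean-squared-error terms are illustrative assumptions, not the paper's actual loss:

```python
import numpy as np

def distillation_loss(student_out, teacher_out, fused_target, alpha=0.5):
    """Hypothetical distillation objective: blend a mimicry term (student
    matches the teacher's output) with a task term (student matches the
    reference fused image)."""
    mimic = np.mean((student_out - teacher_out) ** 2)   # follow the teacher
    task = np.mean((student_out - fused_target) ** 2)   # follow the target
    return alpha * mimic + (1 - alpha) * task
```

Compression then comes from the student architecture itself: a narrower, shallower DenseNet trained against this objective rather than from scratch.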
PLoS ONE, Journal Year: 2023, Volume and Issue: 18(9), P. e0290231 - e0290231, Published: Sept. 18, 2023
Infrared and visible image fusion can generate a fused image with clear texture and prominent targets under extreme conditions. This capability is important for all-day, all-climate detection and other tasks. However, most existing methods for extracting features from infrared and visible images are based on convolutional neural networks (CNNs). These methods often fail to make full use of the salient objects in the raw images, leading to problems such as insufficient details and low contrast in the fused images. To this end, we propose an unsupervised end-to-end Fusion Decomposition Network (FDNet) for infrared and visible image fusion. Firstly, we construct a fusion network that extracts gradient and intensity information from the source images, using multi-scale layers, depthwise separable convolution, and an improved convolutional block attention module (I-CBAM). Secondly, since FDNet is built around this feature extraction, the loss function is designed accordingly. The intensity loss adopts the Frobenius norm to adjust the weighting values between the two source images and select the more effective information. The gradient loss introduces an adaptive weight that determines the optimization objective from the gradient richness at the pixel scale, ultimately guiding the fused image toward more abundant detail. Finally, we design a single- and dual-channel layer decomposition network, which keeps the decomposed images as close as possible to the inputs, forcing the fused image to contain richer detail information. Compared with various representative methods, our proposed method not only delivers good subjective visual quality but also achieves advanced performance in objective evaluation.
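The intensity term described above can be sketched as a weighted sum of squared Frobenius norms between the fused image and each source. The fixed weights `w_ir` and `w_vis` are placeholders for the adaptive weighting the paper describes:

```python
import numpy as np

def intensity_loss(fused, ir, vis, w_ir=0.6, w_vis=0.4):
    """Hypothetical intensity loss: squared Frobenius norms measure how
    strongly the fused image tracks each source's intensities; the weights
    stand in for the paper's adaptive selection between the two sources."""
    f_ir = np.linalg.norm(fused - ir, ord="fro") ** 2
    f_vis = np.linalg.norm(fused - vis, ord="fro") ** 2
    return (w_ir * f_ir + w_vis * f_vis) / fused.size
```

A fused image identical to a source incurs zero penalty from that source's term, so raising `w_ir` biases the fusion toward the infrared intensities.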
IET Image Processing, Journal Year: 2024, Volume and Issue: 18(10), P. 2774 - 2787, Published: May 23, 2024
Abstract
To effectively enhance the ability to acquire information by making full use of the complementary features of infrared and visible images, widely used image fusion algorithms are faced with challenges such as information loss and blurring. In response to this issue, the authors propose a dual-branch deep hierarchical fusion network (ADF-Net) guided by an attention mechanism. Initially, a convolution module extracts the shallow features of the image. Subsequently, a decomposition feature extractor is introduced, in which the transformer encoder block (TEB) employs remote attention to process the low-frequency global features, while the CNN encoder block (CEB) extracts the high-frequency local information. Ultimately, fusion layers based on the TEB and CEB produce the fused image through the encoder. Multiple experiments demonstrate that ADF-Net excels in various aspects, utilizing two-stage training and an appropriate loss function for testing.
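The low-/high-frequency split that the TEB and CEB branches consume can be illustrated with a simple local-mean decomposition. The 3x3 mean filter here is our illustrative stand-in for the network's learned decomposition, not the actual extractor:

```python
import numpy as np

def decompose(img, k=3):
    """Split an image into a low-frequency part (local k-by-k mean, the kind
    of global structure a TEB-style branch would process) and a
    high-frequency residual (the local detail a CEB-style branch would take)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    low = np.zeros((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            low[i, j] = padded[i:i + k, j:j + k].mean()
    high = img - low  # residual carries edges and texture
    return low, high
```

By construction the two parts sum back to the input, which mirrors why a dual-branch design can process them separately without losing information.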
Applied Sciences, Journal Year: 2023, Volume and Issue: 13(9), P. 5640 - 5640, Published: May 3, 2023
To obtain fused images with excellent contrast, distinct target edges, and well-preserved details, we propose an adaptive image fusion network called the adjacent feature shuffle-fusion network (AFSFusion). The proposed network adopts a UNet-like architecture and incorporates key refinements to the architecture and loss functions. Regarding the architecture, a two-branch module, AFSF, expands the number of channels to fuse the feature channels of several adjacent convolutional layers in the first half of AFSFusion, enhancing its ability to extract, transmit, and modulate feature information. We replace the original rectified linear unit (ReLU) with leaky ReLU to alleviate the problem of gradient disappearance, and add a channel shuffling operation at the end of the AFSF modules to facilitate the information interaction between features. Concerning the loss functions, an adaptive weight adjustment (AWA) strategy assigns weight values to the corresponding pixels of the infrared (IR) and visible images according to the VGG16 response of the IR and visible images, which efficiently handles different scene contents. After normalization, the weight values are used as weighting coefficients for the two sets of images. The weighting coefficients are applied to three loss items simultaneously: mean square error (MSE), structural similarity (SSIM), and total variation (TV), resulting in clearer objects and richer texture detail in the fused images. We conducted a series of experiments on benchmark databases, and the results demonstrate the effectiveness and superiority of AFSFusion compared with other state-of-the-art methods. It ranks highly on the objective metrics, showing the best overall performance and exhibiting sharper edges of specific targets, which is more in line with human visual perception. The remarkable enhancement is ascribed to the AFSF module and the AWA strategy, which enable balanced extraction, fusion, and modulation of image features throughout the process.
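The channel shuffling operation at the end of the AFSF module can be sketched as the standard group-transpose trick (as popularized by ShuffleNet); whether AFSFusion uses exactly this formulation is an assumption on our part:

```python
import numpy as np

def channel_shuffle(x, groups):
    """Interleave channels across groups: reshape C -> (groups, C//groups),
    transpose the two group axes, and flatten back to C channels, so
    information mixes between the two branches' channel blocks."""
    c, h, w = x.shape
    return (x.reshape(groups, c // groups, h, w)
             .transpose(1, 0, 2, 3)
             .reshape(c, h, w))

# Four constant channels tagged 0..3; with groups=2 the order becomes 0,2,1,3,
# alternating channels from the first and second halves.
x = np.stack([np.full((2, 2), c) for c in range(4)]).astype(float)
y = channel_shuffle(x, groups=2)
```

Because the operation is a fixed permutation, it adds no parameters, which fits the stated goal of improving cross-feature interaction cheaply.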
Applied Sciences, Journal Year: 2024, Volume and Issue: 14(7), P. 2821 - 2821, Published: March 27, 2024
Image stitching is an important method in digital image processing, but it is often prone to the problem of irregular boundaries in the stitched images. Traditional cropping or completion methods usually lead to a large amount of information loss. Therefore, this paper proposes a rectification method based on a deformable mesh and a residual network. The method aims to minimize the information loss at the edges of the spliced images and inside the image. Specifically, it can select the most suitable mesh shape for network regression according to different images. Its loss function includes a global loss and a local loss, aiming to reduce distortion within the grid and at the target. The method not only greatly reduces the information loss caused by irregular shapes after stitching, but also adapts to images with various rigid structures. Meanwhile, its validation on the DIR-D dataset shows that it outperforms the state of the art in rectification.
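A global-plus-local objective of the kind described can be sketched as follows, on a simplified mesh of scalar vertex displacements. The split into a mean-squared global term and a neighbour-difference local term, and the weight `lam`, are our illustrative assumptions rather than the paper's exact loss:

```python
import numpy as np

def rectification_loss(pred_mesh, target_mesh, lam=0.1):
    """Hypothetical combined objective: the global term pulls the whole
    predicted mesh toward the target; the local term penalizes differences
    in spacing between neighbouring grid vertices (local distortion)."""
    global_loss = np.mean((pred_mesh - target_mesh) ** 2)
    # Local smoothness: compare vertex-to-vertex differences along each axis.
    dx = np.diff(pred_mesh, axis=1) - np.diff(target_mesh, axis=1)
    dy = np.diff(pred_mesh, axis=0) - np.diff(target_mesh, axis=0)
    local_loss = np.mean(dx ** 2) + np.mean(dy ** 2)
    return global_loss + lam * local_loss
```

Note that a uniform translation of the whole mesh leaves the local term at zero, so only the global term penalizes it; the local term reacts specifically to uneven, shape-distorting deformations.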