Multi-Focus Image Fusion Based on Fractal Dimension and Parameter Adaptive Unit-Linking Dual-Channel PCNN in Curvelet Transform Domain
Liangliang Li, Sensen Song, Ming Lv et al.
Fractal and Fractional, Journal Year: 2025, Volume and Issue: 9(3), P. 157 - 157
Published: March 3, 2025
Multi-focus image fusion is an important method for obtaining fully focused information. In this paper, a novel multi-focus image fusion method based on fractal dimension (FD) and the parameter adaptive unit-linking dual-channel pulse-coupled neural network (PAUDPCNN) in the curvelet transform (CVT) domain is proposed. The source images are decomposed into low-frequency and high-frequency sub-bands by CVT, respectively. The FD and PAUDPCNN models, along with consistency verification, are employed to fuse the high-frequency sub-bands, the average rule is used for the low-frequency sub-band, and the final fused image is generated by the inverse CVT. The experimental results demonstrate that the proposed method shows superior performance on the Lytro, MFFW, and MFI-WHU datasets.
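For readers who want a concrete feel for the fractal-dimension focus measure, below is a minimal NumPy sketch of a box-counting FD estimator and a block-wise choose-max fusion rule. The scale set, block size, and the simplified box count are illustrative assumptions; the paper's actual pipeline couples FD with the PAUDPCNN model and consistency verification, which this sketch omits.

```python
import numpy as np

def box_count_fd(block, sizes=(2, 4, 8, 16)):
    """Estimate the fractal dimension of a grayscale block via
    differential box counting: count the boxes N(s) needed to cover
    the intensity surface at scale s, then fit log N(s) ~ -D log s."""
    block = block.astype(np.float64)
    g = block.max() - block.min() + 1e-12      # intensity range of the block
    counts = []
    for s in sizes:
        h = s * g / block.shape[0]             # box height at this scale
        n = 0
        for i in range(0, block.shape[0] - s + 1, s):
            for j in range(0, block.shape[1] - s + 1, s):
                cell = block[i:i+s, j:j+s]
                # boxes of height h spanned by this s-by-s cell
                n += int(np.ceil((cell.max() - cell.min() + 1e-12) / h))
        counts.append(n)
    # slope of the log-log fit gives the fractal dimension estimate
    d, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -d

def fuse_by_fd(coef_a, coef_b, block=16):
    """Choose, block by block, the sub-band coefficients whose FD is
    larger (richer detail is taken as better focused). Assumed rule."""
    out = coef_a.copy()
    for i in range(0, coef_a.shape[0] - block + 1, block):
        for j in range(0, coef_a.shape[1] - block + 1, block):
            if box_count_fd(coef_b[i:i+block, j:j+block]) > \
               box_count_fd(coef_a[i:i+block, j:j+block]):
                out[i:i+block, j:j+block] = coef_b[i:i+block, j:j+block]
    return out
```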
Language: English
Multi-focus image fusion based on pulse coupled neural network and WSEML in DTCWT domain
Yuan Jia, Teng Ma
Frontiers in Physics, Journal Year: 2025, Volume and Issue: 13
Published: April 2, 2025
The goal of multi-focus image fusion is to merge near-focus and far-focus images of the same scene to obtain an all-focus image that accurately and comprehensively represents the focus information of the entire scene. Current algorithms can lead to issues such as the loss of details and edges, as well as local blurring in the resulting images. To solve these problems, a novel method based on the pulse coupled neural network (PCNN) and the weighted sum of eight-neighborhood-based modified Laplacian (WSEML) in the dual-tree complex wavelet transform (DTCWT) domain is proposed in this paper. The source images are decomposed by DTCWT into low- and high-frequency components, respectively; then the average gradient (AG)-motivated PCNN-based rule is used to process the low-frequency components, and the WSEML-based rule is used to process the high-frequency components. We conducted simulation experiments on the public Lytro dataset, demonstrating the superiority of the proposed algorithm.
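The WSEML activity measure can be sketched directly. In the following NumPy/SciPy snippet, the eight-neighborhood modified Laplacian scales the diagonal terms by 1/√2 and applies a 3×3 weighting kernel, both common choices assumed here; the choose-max rule stands in for the paper's full high-frequency fusion treatment.

```python
import numpy as np
from scipy.ndimage import convolve

def wseml(img):
    """Weighted sum of eight-neighborhood-based modified Laplacian.
    An activity/focus measure: larger values mean sharper detail."""
    I = img.astype(np.float64)
    p = np.pad(I, 1, mode="reflect")
    c = p[1:-1, 1:-1]
    # eight-neighborhood modified Laplacian; diagonals scaled by 1/sqrt(2)
    eml = (np.abs(2*c - p[1:-1, :-2] - p[1:-1, 2:])
         + np.abs(2*c - p[:-2, 1:-1] - p[2:, 1:-1])
         + (np.abs(2*c - p[:-2, :-2] - p[2:, 2:])
          + np.abs(2*c - p[:-2, 2:] - p[2:, :-2])) / np.sqrt(2))
    # 3x3 weighted window sum (this kernel is a common, assumed choice)
    w = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=np.float64) / 16
    return convolve(eml, w, mode="reflect")

def fuse_high(a, b):
    """Pick, per pixel, the high-frequency coefficient with larger WSEML."""
    return np.where(wseml(a) >= wseml(b), a, b)
```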
Language: English
Robust Infrared–Visible Fusion Imaging with Decoupled Semantic Segmentation Network
Sensors, Journal Year: 2025, Volume and Issue: 25(9), P. 2646 - 2646
Published: April 22, 2025
The fusion of infrared and visible images provides complementary information from both modalities and has been widely used in surveillance, military, and other fields. However, most of the available methods have only been evaluated with subjective metrics of the visual quality of the fused images, which are often independent of the subsequent relevant high-level tasks. Moreover, although fusion is an especially useful technique in low-light scenarios, the effect of low-light conditions on the fusion result has not been well addressed yet. To address these challenges, a decoupled semantic segmentation-driven image fusion network is proposed in this paper, which connects the downstream task to drive the fusion to be optimized. Firstly, a cross-modality transformer module is designed to learn rich hierarchical feature representations. Secondly, a semantic-driven module is developed to enhance the key features of prominent targets. Thirdly, a weighted fusion strategy is adopted to automatically adjust the weights of different modality features. This effectively merges the thermal characteristics of infrared images and the detailed textures of visible images. Additionally, we design a refined loss function that employs decoupling to constrain the pixel distributions and produce a more natural fusion result. To evaluate the robustness and generalization of the method in practically challenging applications, the Maritime Infrared and Visible (MIV) dataset is created and verified for maritime environmental perception, and it will be made available soon. The experimental results on public datasets and the practically collected MIV dataset highlight the notable strengths of the proposed method, which is best-ranking among its counterparts. Of more importance, it achieved over 96% target detection accuracy with a dominantly high mAP@[50:95] value that far surpasses all competitors.
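The weighted fusion strategy for modality features can be illustrated with a short PyTorch sketch: a small gating head predicts per-pixel softmax weights for the infrared and visible feature maps and blends them accordingly. The layer sizes and the softmax gate are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class WeightedModalityFusion(nn.Module):
    """Sketch of a weighted fusion strategy: predict per-pixel weights
    for infrared and visible feature maps and blend them. Layer sizes
    and the softmax gating are illustrative assumptions."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 2, 1),   # one logit per modality
        )

    def forward(self, feat_ir, feat_vis):
        logits = self.gate(torch.cat([feat_ir, feat_vis], dim=1))
        w = torch.softmax(logits, dim=1)             # weights sum to 1
        return w[:, :1] * feat_ir + w[:, 1:] * feat_vis

# usage: fuse 64-channel feature maps from the two branches
fuse = WeightedModalityFusion(64)
out = fuse(torch.randn(1, 64, 128, 128), torch.randn(1, 64, 128, 128))
```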
Language: English
Leveraging Land Cover Priors for Isoprene Emission Super-Resolution
Christopher Ummerle, Antonio Giganti, Sara Mandelli et al.
Remote Sensing, Journal Year: 2025, Volume and Issue: 17(10), P. 1715 - 1715
Published: May 14, 2025
Satellite remote sensing plays a crucial role in monitoring Earth’s ecosystems, yet satellite-derived data often suffer from limited spatial resolution, restricting the availability of accurate and precise data for atmospheric modeling and climate research. Errors and biases may also be introduced into downstream applications due to the use of data with insufficient spatial and temporal resolution. In this work, we propose a deep learning-based Super-Resolution (SR) framework that leverages land cover information to enhance the accuracy of Biogenic Volatile Organic Compound (BVOC) emissions, with a particular focus on isoprene. Our approach integrates land cover priors as emission drivers, capturing spatial patterns more effectively than traditional methods. We evaluate the model’s performance across various conditions and analyze the statistical correlations between isoprene emissions and key environmental factors such as cropland and tree cover data. Additionally, we assess the generalization capabilities of our SR model by applying it to unseen zones and geographical regions. Experimental results demonstrate that incorporating land cover priors significantly improves accuracy, particularly in heterogeneous landscapes. This study contributes to atmospheric chemistry by providing a cost-effective, data-driven approach for refining BVOC emission maps. The proposed method enhances the usability of satellite-based data, supporting air quality forecasting, climate impact assessments, and related studies.
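As a rough sketch of how land cover priors can condition an SR network, the PyTorch module below upsamples the coarse isoprene map, concatenates high-resolution land cover channels (e.g., cropland and tree cover fractions), and refines the result residually. The architecture, channel counts, and scale factor are illustrative assumptions, not the authors' model.

```python
import torch
import torch.nn as nn

class LandCoverConditionedSR(nn.Module):
    """Sketch of land-cover-conditioned emission super-resolution:
    inject land cover priors as extra input channels before a small
    residual refinement head. Everything here is assumed."""
    def __init__(self, lc_channels=2, scale=4, width=64):
        super().__init__()
        self.up = nn.Upsample(scale_factor=scale, mode="bilinear",
                              align_corners=False)
        self.body = nn.Sequential(
            nn.Conv2d(1 + lc_channels, width, 3, padding=1), nn.ReLU(True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(True),
            nn.Conv2d(width, 1, 3, padding=1),
        )

    def forward(self, emission_lr, land_cover_hr):
        up = self.up(emission_lr)                 # coarse map -> target grid
        x = torch.cat([up, land_cover_hr], dim=1) # inject land cover priors
        return up + self.body(x)                  # residual refinement

# usage: 4x SR of a 1-channel emission map with 2 land cover channels
model = LandCoverConditionedSR(lc_channels=2, scale=4)
hr = model(torch.randn(1, 1, 32, 32), torch.randn(1, 2, 128, 128))
```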
Language: English